Why do digital cables sound different?


I have been talking to a few e-mail buddies and have a question that hasn't been satisfactorily answered thus far. So...I'm asking the experts on the forum to pitch in. This has probably been asked before but I can't find any references for it. Can someone explain why one DIGITAL cable (coaxial, BNC, etc.) can sound different than another? There are also similar claims for Toslink. In my mind, we're just trying to move bits from one place to another. Doesn't the digital stream get reconstituted and re-clocked on the receiving end anyway? Please enlighten me and maybe send along some URLs for my edification. Thanks, Dan
danielho

Showing 19 responses by gmkowal

Redkiwi, even if jitter did cause a problem, I do not think it would manifest itself in harmonically related distortions. I can't see it causing either higher or lower order distortions.
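To put something behind that: if you simulate a tone sampled with a sinusoidally jittered clock, the error shows up as sidebands at the tone frequency plus and minus the jitter frequency, not as harmonics. Here is a rough sketch, not from my bench test; all the numbers are made up for illustration:

```python
# What sampling jitter actually does to a tone: sinusoidal jitter on the
# sample clock produces sidebands at f0 +/- fj, not harmonics of f0.
# All values here are made-up illustration numbers.
import numpy as np

fs = 44_100        # sample rate, Hz
f0 = 1_000         # test tone, Hz
fj = 200           # jitter modulation frequency, Hz
aj = 5e-9          # jitter amplitude, 5 ns peak

n = np.arange(2 ** 16)
t = n / fs
t_jittered = t + aj * np.sin(2 * np.pi * fj * t)

x = np.sin(2 * np.pi * f0 * t_jittered)   # tone sampled at jittered instants
spec = 20 * np.log10(np.abs(np.fft.rfft(x * np.hanning(len(x)))) + 1e-12)
freqs = np.fft.rfftfreq(len(x), 1 / fs)

# The energy shows up at 800, 1000 and 1200 Hz -- sidebands, not harmonics.
for f in (f0 - fj, f0, f0 + fj):
    k = int(np.argmin(np.abs(freqs - f)))
    print(f"{freqs[k]:7.1f} Hz: {spec[k]:7.1f} dB")
```

The sidebands sit roughly 96 dB below the carrier for these values, and they move with the jitter frequency, which is exactly why they are not harmonically related to the signal.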
Bruce, cut the crap! We are not talking about frequencies in the GHz region. These are simple logic levels moving at less than a MHz. I have been a digital designer for 20 years and have never heard such garbage. If there is any effect at all, it would result from delay caused by capacitance in the cable, not by excessive standing waves. If I knew what the impedance of the DAC was, I bet we would see very little reflected energy if we swept the cable on a network analyzer, even up to 50 MHz. Sorry to burst your bubble, but you are way off base.
One more thing, the logic analyzer used actually had 1 meg of memory, not 16 meg. Someone borrowed my 16 meg acquisition module.
Don't worry about starting a religious war. The reason for these forums is to get the big picture. There are a million opinions out there and you should consider the ones that make sense to you. Part of the fun of these forums is to learn a little and give a little in return. I read a reply to one of my posts the other day that mentioned that audio was a "passionate undertaking". I subscribe to this wholeheartedly, because if the music we listen to and the equipment that provides that music does not empower emotion and passion, why listen at all? If some of the responses seem a bit emotional, just understand that it is only because of the passion that we have for the music.
Frogman, rather than challenge the fact that you hear the difference, I will simply state what I know to be facts. While I most certainly do not have the trained ears of a musician, I think (maybe wrongly) that God has blessed me with a decent set of ears. While I have not listened to a set of fiber optic cables, I cannot discern any real sonic differences between good quality copper (or silver, etc.) cables. I also submit that the physics involved may not support sonic differences that are discernible to my ears. I believe you when you say you can hear the differences, but maybe the differences are so subtle that one would need a musician's ears to hear them. To quote Dennis Miller: "that's my opinion, I could be wrong".
Actually, Redkiwi, I captured a megasample. I had a meg of acquisition memory and I filled it. After I filled the 1 meg I ceased taking data. The test took 80 minutes for all the data to be captured (106 bits every .5 sec [5 16-bit patterns plus the placeholder pattern]). I captured 16,960 patterns, or words if you will, each 16 bits long.
All I did was set up a burst of each pattern plus a signature pattern (to be used in the little VBasic program I wrote) and had the pattern generator resend the sequence over and over until I filled up the acquisition memory of the analyzer (it took some time, so I did some real work also). However many samples it took to fill up the acquisition memory is how many samples I got. I did not count how many patterns I captured, but it was a lot! I used a Tektronix TLA704 analyzer, so I stored the results on the analyzer's hard disk. Tek's analyzer also runs Windows, so I wrote the VBasic program right on the analyzer. I did a simple compare of the 5 bursts I sent (16 bits of all 1's, all 0's, etc.) until I hit the signature burst, then started the compare over again, and repeated the process until the software could not find the signature burst. If I got a situation where it did not compare, I incremented a counter in my program. The counter never incremented, so there were no errors. To test my software I buggered up my captured data file a bit and the program caught it. It was really an easy experiment, but I probably spent more time on it than I should have at work; I trust nobody will tell my boss. As a designer of these types of systems I am sure you have run similar tests. While this is not my forte (I do DSP design), it sounded like a valid test and you challenged me to the task.
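For anyone curious, the compare logic was about this simple. My original was VBasic running on the analyzer; this Python version is just a reconstruction, and the pattern values, signature word and file name are placeholders:

```python
# Rough reconstruction of the compare program (original was VBasic on the
# analyzer). Patterns, signature and file name are placeholders.
PATTERNS = [
    "1010101010101010",  # alternating 0's and 1's
    "0000000000000000",  # all 0's
    "1111111111111111",  # all 1's
    "0110111011001111",  # the arbitrary pattern
    "1001000100110000",  # its complement
]
SIGNATURE = "1111000011110000"  # placeholder signature burst

errors = 0
expected = 0  # index of the next pattern we expect to see

with open("capture.txt") as f:       # ASCII dump saved from the analyzer
    for line in f:
        word = line.strip()
        if not word:
            continue
        if word == SIGNATURE:
            expected = 0             # signature burst restarts the compare
            continue
        if word != PATTERNS[expected]:
            errors += 1              # a dropped or flipped bit
        expected = (expected + 1) % len(PATTERNS)

print(f"miscompares: {errors}")
```

Same idea as my counter: every captured word either matches the pattern it should be, or the error count goes up.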
1439bhr, I've got a question I am sure you can answer. Does the low pass filter on the output of the DAC also handle any aliasing that may be present? Also, how much phase noise is acceptable in the system before there is any degradation, and does phase noise closer to the carrier or farther away from the carrier have the most effect? Thanks in advance for your response.
I have read your post and cannot dispute your claims, with the exception of one. Jitter is not very difficult to measure accurately if you understand the measurement concept. Jitter can be measured simply with a spectrum analyzer and a calculation: all that is needed to convert phase noise to jitter is a simple calculation. A spectrum analyzer can be used to measure phase noise if the device under test has little or no AM, because the analyzer cannot tell the difference between AM and phase noise.
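The calculation itself is nothing exotic. Roughly: integrate the single-sideband phase noise L(f) over the offset range, double it to account for both sidebands, take the square root to get RMS phase error in radians, and divide by 2*pi*fc. A sketch with made-up numbers:

```python
# Phase-noise-to-jitter conversion sketch. The carrier frequency and the
# L(f) points below are invented for illustration, not real measurements.
import math

fc = 10e6  # carrier frequency in Hz (placeholder value)

# (offset Hz, L(f) in dBc/Hz) pairs as read off a spectrum analyzer
phase_noise = [(100, -90.0), (1e3, -110.0), (10e3, -125.0), (100e3, -135.0)]

# Integrate 10^(L/10) over offset frequency (trapezoid rule on the pairs)
area = 0.0
for (f1, l1), (f2, l2) in zip(phase_noise, phase_noise[1:]):
    s1, s2 = 10 ** (l1 / 10), 10 ** (l2 / 10)
    area += 0.5 * (s1 + s2) * (f2 - f1)

rms_phase_rad = math.sqrt(2.0 * area)        # integrated phase error, radians
rms_jitter_s = rms_phase_rad / (2 * math.pi * fc)

print(f"RMS jitter: {rms_jitter_s * 1e12:.2f} ps")
```

For these made-up numbers it works out to a few tens of picoseconds, which is the kind of figure the telecom folks have been quoting with this method for years.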
One more thing. The rep rate of the bits in each burst was 10 kHz. OK, I should have set the rate to 40 kHz, but I still doubt I would have lost any data, as this is a pretty low frequency for these cables, or at least 2 of the 3 cables.
Redkiwi, the overshoot and undershoot you speak of is caused by capacitance in the cable. While overshoot and undershoot themselves may not necessarily affect the DAC output signal, the capacitance in the cable may affect the data pulses' risetime. This effect, I would assume, may be audible. While jitter is a degrading factor in a system of this type, the transmission cable is not likely to increase or add jitter to the bitstream. I feel capacitance is the real culprit here. The less capacitance in the cable the better.
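To put a rough number on it: treat the source impedance and the cable capacitance as a simple RC low-pass, where the 10-90% risetime is about 2.2*R*C. The values below are assumptions (75 ohm source, typical coax capacitance), not measurements from my test:

```python
# Back-of-the-envelope risetime from cable capacitance, modeled as a simple
# RC low-pass (10-90% risetime ~ 2.2 * R * C). All values are assumptions.
# The lumped-RC view is only fair when the cable is short compared to the
# signal wavelength, which it is at these bit rates.
R = 75.0            # source/termination impedance in ohms (S/PDIF is 75 ohm)
C_per_m = 67e-12    # capacitance per meter, typical for 75 ohm coax
length_m = 1.0      # a one-meter digital cable

C = C_per_m * length_m
t_rise = 2.2 * R * C
print(f"added risetime: {t_rise * 1e9:.2f} ns")   # about 11 ns here
```

Roughly 11 ns for these assumed values, which is small against a bit period in the tens of microseconds but not nothing at the receiver's slicing threshold.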
Ehider, I am still the skeptic, but I appreciate your viewpoint and candor. BTW, is your employer Burr Brown? Thanks for the reply.
I would like to reply to your question. No, No, No, No, No, No!!!!! There is simply no way that any digital cable can color or change the sound of a system. While there are physical reasons why analog cables (interconnects, speaker cables, etc.) can have an effect on sound quality, it is beyond the laws of physical possibility for a digital cable to have an effect. The D/A in your equipment couldn't care less whether the 0111001111 bit stream came from a piece of copper or fiberoptic material. I understand that audio is very subjective, but one must be careful when opening one's mouth. Consider the ramifications of someone taking what you are saying as gospel and purchasing an expensive cable when there is no physical possibility the cable will affect the sound one bit.
I will be honest and say that I do not use fiber optic cables, so I cannot say definitively whether there is a sonic difference between digital cables. You have heard differences, so I will take your word for it. As for the reason.... I am an EE, not a physicist. If I were to take a stab at it, with my limited knowledge of optics and lightwave, I would guess the differences would be due to propagation delay, loss or refraction differences in the cables. Maybe someone else with a better understanding of optics in this forum could enlighten us both.
I understand that the transmission is real time. I just do not believe a cable that is in good working order will cause a bit or two to be dropped. I agree with Blues_Man that the problem is probably in the transmitter or receiver if bits are being lost.

I ran a quick test on my 3 digital cables at work today using a logic analyzer (a scope is not the right tool for a real time test) with a pattern generator and deep memory (16 meg). I simply output a 16 bit burst every .5 seconds. The rep rate within the burst was set to 10 kHz. I tied the pattern generator's clock to the logic analyzer's clock so that every time the pat gen's clock went high I captured the data, until I filled up the memory and saved the data. I tried this with alternating 0's and 1's, all 0's, all 1's, a pattern of 0110111011001111 and its complement. Once I had captured the data I saved it as ASCII files and wrote a small Visual Basic program to look for missing bits in the patterns, and found none.

I also fed a repeating pattern of 0's and 1's into the cables and terminated the cable with what I approximated was the impedance of my D/A. I looked at the waveforms with a scope and looked for any droop, overshoot and undershoot. The risetime of the pulses appeared to be close on all 3 cables, but I did notice some other differences. One cable in particular did cause more overshoot than the rest, but when I varied the impedance used to terminate the cable I could minimize the overshoot (probably more capacitance in this cable causing an impedance mismatch). I marked this cable and gave all 3 cables to another engineer, who has a separate DAC and transport, to take home and see if the cables sound any different from one another. I am sorry, but I did not hear very much of a difference between the cables to begin with; I thought this would be a more subjective test.

As for real time and losing bits, the logic analyzer does not lie. I will let you know what he thinks. I cannot think of a better way to test the real time data transmission characteristics of a cable. I burned up today's lunch time trying this; tomorrow I think I will have something to eat :-) Thanks for the post.
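For what it's worth, the overshoot behaves just as a termination mismatch would predict. The reflected fraction of each edge is gamma = (ZL - Z0)/(ZL + Z0); a quick sketch with assumed impedance values:

```python
# How a termination mismatch produces the overshoot I saw: the reflected
# fraction of the incident edge is gamma = (ZL - Z0) / (ZL + Z0).
# Z0 and the trial loads below are assumed values for illustration.
import math

Z0 = 75.0                        # nominal coax characteristic impedance
for ZL in (75.0, 60.0, 90.0):    # matched and mismatched terminations
    gamma = (ZL - Z0) / (ZL + Z0)
    if gamma == 0:
        print(f"ZL={ZL:5.1f} ohm: gamma=+0.000, no reflection")
    else:
        rl_db = -20 * math.log10(abs(gamma))
        print(f"ZL={ZL:5.1f} ohm: gamma={gamma:+.3f}, "
              f"return loss={rl_db:.1f} dB")
```

A 20% impedance error only reflects about a tenth of the edge, which squares with the overshoot disappearing as I dialed the termination in.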
Redkiwi, I am not a designer of digital audio playback devices, but I thought that the data sent to the DAC is asynchronous. If that is true, where are the timing errors? Is the data clocked from the transport to the DAC, or is it really asynchronous? I do not know, but if it is async, then the statement about arriving with perfect timing does not make sense to me. Is it the timing between bits that you are talking about? If that is the case, how can the cable change the timing between the bits or even cause jitter?
Nice explanation, 1439bhr! You seem to be the first poster who really seems to know completely what they are talking about (I include myself among the uninformed). Very well put and easily understandable. I did catch the 1/4 cycle, but I knew what you meant. Great job!
Blues, you are absolutely correct! The purpose of my test was not to simulate the real time data transmission of a digital audio playback system, but merely to prove to a poster that the digital cable has little to do with making bits fall out. I chose to do an experiment so I would have physical evidence for what I was claiming, and it was fun to do. I do not have the time nor the equipment to simulate a real life situation. I also agree that jitter is not a major contributor. The only thing I do take exception with is your claim that a clock is needed for jitter measurement. A spectrum analyzer can be used to measure phase noise, and a calculation can be made to get jitter from the phase noise measurement. The calculation is well documented and the Ma Bells of the world have been using this method for years. Thanks for your post and happy listening.
Sorry, I guess we got a little carried away with the technical aspects, but I learned a lot in this thread and thank those who enlightened me. The music quickens my pulse, but I also enjoy understanding how it all comes together.