Why do digital cables sound different?


I have been talking to a few e-mail buddies and have a question that hasn't been satisfactorily answered thus far. So... I'm asking the experts on the forum to pitch in. This has probably been asked before, but I can't find any references for it. Can someone explain why one DIGITAL cable (coaxial, BNC, etc.) can sound different from another? There are also similar claims for Toslink. In my mind, we're just trying to move bits from one place to another. Doesn't the digital stream get reconstituted and re-clocked on the receiving end anyway? Please enlighten me, and maybe send along some URLs for my edification. Thanks, Dan
danielho
Ehider, I am still the skeptic, but I appreciate your viewpoint and candor. BTW, is your employer Burr Brown? Thanks for the reply.
I am not sure we are disagreeing - overshoot and undershoot being a capacitance issue is the same thing as saying it is a bandwidth issue (but not necessarily the reverse). So far as I know there is no capacitance issue wrt glass cables, but they sound different from one another.
I will be honest and say that I do not use fiber optic cables, so I cannot say definitively that there is a sonic difference between digital cables. You have heard differences, so I will take your word for it. As for the reason... I am an EE, not a physicist. If I were to take a stab at it, with my limited knowledge of optics and lightwave, I would guess the differences would be due to propagation delay, loss, or refraction differences in the cables. Maybe someone else in this forum with a better understanding of optics could enlighten us both.
Don't worry about starting a religious war. The reason for these forums is to get the big picture. There are a million opinions out there, and you should consider the ones that make sense to you. Part of the fun of these forums is to learn a little and give a little in return. I read a reply to one of my posts the other day that mentioned that audio was a "passionate undertaking". I subscribe to this wholeheartedly, because if the music we listen to and the equipment that provides that music do not empower emotion and passion, why listen at all? If some of the responses seem a bit emotional, just understand that it is only because of the passion that we have for the music.
Frogman, rather than challenge the fact that you hear the difference, I will simply state what I know to be facts. While I most certainly do not have the trained ears of a musician, I think (maybe wrongly so) that God has blessed me with a decent set of ears. While I have not listened to a set of fiber optic cables, I cannot discern any real sonic differences between good quality copper (or silver, etc.) cables. I also submit that the physics involved may not support sonic differences that are discernible to my ears. I believe you when you say you can hear the differences, but maybe the differences are so subtle that one would need a musician's ears to hear them. To quote Dennis Miller: "that's my opinion, I could be wrong".
Gmkowal, I don't believe that you understand the signal transfer in CD playback. Most other digital transactions people have discussed are NOT real-time. CD playback is real-time. The protocol is one-way, with 44,100 transmissions per second. Signal is lost. Bits are dropped. In other real-time digital applications, say telephony, there is both a retry mechanism (if the protocol allows) and data recovery. The CD protocol has no such provisions. Again, remember this is technology that is almost 20 years old. The improvement in technology from the CD standard to the present is greater than the improvement from Mr. Bell and Edison to the start of CD technology. I suggest that your claimed knowledge of physics is flawed by your lack of understanding of this signal transfer. Before you go blasting a lot of other people, you'd better know what you're talking about. As for some background, I've been working with digital audio on and off for over 30 years. I have spent over 8 years designing commercial digital audio, video, and data transfer interfaces. So I'm not someone who has just read a few articles in StereoShill.
Gmkowal, just a follow-up to my previous response. If you don't believe me, prove it yourself. Attach a scope to the output of your DAC. Play some test tones on your transport (use frequencies above 8k; that's where the effect starts to be more noticeable). Get some different cables and measure the signals that come out of the DAC. I have measured some pretty significant differences. I've always believed that if you can measure the difference, it exists.
Blues_man - Given the inadequacies of the CD protocol and your experience in the area of digital audio data transfer interfaces, what are the benefits brought to the environment by using a high-end digital interconnect over a basic, well-built digital interconnect? That was really the question at the beginning of this post, and I think we're all still curious. -Kirk
The problem is really in the transmitter and receiver. Even though there is a "standard", there are always differences in the implementation of the hardware. I started out believing that there was no difference in digital cables. That was before I was familiar with the CD standard, which isn't very robust. After a while I started measuring lots of cables and interfaces just out of curiosity. It's not easy to determine which cable sounds best with a particular transport/DAC combo. You have to rely on what other people have tried. First, definitely go with an AES/EBU interface over RCA cable. It's not that much more. The one I liked best was the MIT reference. It's very expensive, $800; I believe it's not worth the money, but if cost is no object... I'll be putting the one I tested for sale here soon (at least half off). As I posted in another thread, the best thing is to get a transport/DAC with a custom interface. I got the Spectral because I thought it sounded best. If you have access to a scope, use the method I suggested above with a high frequency tone. Whichever cable gives the most accurate signal is probably the best.
I understand that the transmission is real-time. I just do not believe a cable that is in good working order will cause a bit or two to be dropped. I agree with Blues_Man that the problem is probably in the transmitter or receiver if bits are being lost. I ran a quick test on my 3 digital cables at work today using a logic analyzer (a scope is not the right tool for a real-time test) with a pattern generator and deep memory (16 meg). I simply output a 16-bit burst every .5 seconds. The rep rate within the burst was set to 10KHz. I tied the pattern generator's clock to the logic analyzer's clock so that every time the pat gen's clock went high I captured the data, until I filled up the memory and saved the data. I tried this with alternating 0's and 1's, all 0's, all 1's, a pattern of 0110111011001111, and its complement. Once I had captured the data I saved it as ASCII files, and I wrote a small Visual Basic program to look for missing bits in the patterns and found none.

I also fed a repeating pattern of 0's and 1's into the cables and terminated each cable with what I approximated was the impedance of my D/A. I looked at the waveforms with a scope and checked for any droop, overshoot, and undershoot. The risetime of the pulses appeared to be close on all 3 cables, but I did notice some other differences. One cable in particular caused more overshoot than the rest, but when I varied the impedance used to terminate the cable I could minimize the overshoot (probably more capacitance in this cable causing an impedance mismatch). I marked this cable and gave all 3 cables to another engineer, who has a separate DAC and transport, to take home and see if the cables sound any different from one another. I am sorry, but I did not hear very much of a difference between the cables to begin with, but I thought this would be a more subjective test.

As for real-time and losing bits, the logic analyzer does not lie. I will let you know what he thinks. I cannot think of a better way to test the real-time data transmission characteristics of a cable. I burned up today's lunch time trying this; tomorrow I think I will have something to eat :-) Thanks for the post
One more thing: the logic analyzer used actually had 1 meg of memory, not 16 meg. Someone borrowed my 16 meg acquisition module.
I no longer have an analyzer, but when I did, it was hooked up to CD players and transport/DACs. You just can't send bit patterns with your equipment, because it's probably far more sophisticated than what's in a CD playback system. Remember, you also have to send 44,100 samples per second. A scope works fine at the output of the DAC to determine differences in the analog output of test signals. If you use the same transport, same DAC, and same source, any differences must be from the cable. Also, disconnect the destination end of the cable and put a voltmeter on it, and notice the large differences.
Oops, almost forgot: in your test, how did you verify that all samples sent were received and stored? If you sent 500,000 samples, did 500,000 samples get stored?
All I did was set up a burst of each pattern plus a signature pattern (to be used in the little VB program I wrote) and had the pattern generator resend the sequence over and over until I filled up the acquisition memory of the analyzer (it took some time, so I did some real work also). However many samples it took to fill up the acquisition memory is how many samples I got. I did not count how many patterns I captured, but it was a lot! I used a Tektronix TLA704 analyzer, so I stored the results on the analyzer's hard disk. Tek's analyzer also runs Windows, so I wrote the VB program right on the analyzer. I did a simple compare of the 5 bursts I sent (16 bits of all 1's, all 0's, etc.) until I hit the signature burst, then started the compare over again, and repeated the process until the software could not find the signature burst. If I got a situation where the data did not compare, I incremented a counter in my program. The counter never incremented, so there were no errors. To test my software, I buggered up my captured data file a bit and the program caught it. It was really an easy experiment, but I probably spent more time on it than I should have at work; I trust nobody will tell my boss. As a designer of these types of systems, I am sure you have run similar tests. While this is not my forte (I do DSP design), it sounded like a valid test and you challenged me to the task.
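If it helps anyone follow along, the compare logic amounted to roughly this (sketched here in Python rather than the VB I actually used; the signature word shown is just a stand-in, not my actual value):

# Rough sketch of the compare. Pattern values are as described above;
# the signature word is a stand-in, not the one actually used.
PATTERNS = [
    0b0101010101010101,  # alternating 0's and 1's
    0b0000000000000000,  # all 0's
    0b1111111111111111,  # all 1's
    0b0110111011001111,  # the arbitrary pattern
    0b1001000100110000,  # its complement
]
SIGNATURE = 0xA5A5  # stand-in end-of-sequence marker

def count_errors(captured_words):
    """Walk the captured 16-bit words, comparing each repetition of the
    five-burst sequence against what the pattern generator sent."""
    errors, i = 0, 0
    while i + len(PATTERNS) < len(captured_words):
        for expected in PATTERNS:
            if captured_words[i] != expected:
                errors += 1
            i += 1
        if captured_words[i] != SIGNATURE:  # lost the signature: stop, as the VB program did
            break
        i += 1
    return errors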
One more thing: the rep rate of the bits in each burst was 10KHz. OK, I should have set the rate to 40KHz, but I still doubt I would have lost any data, as this is a pretty low frequency for these cables, or at least for 2 of the 3 cables.
Multiply that by 16 bits per word, Gmkowal. I guess for the fourth time: the issue is not dropping bits, but that timing errors (not bit errors) cause harmonic distortion at the output of the DAC chip. Your investigations should focus on this phenomenon - i.e., varying the jitter at the input (while leaving the bits the same) and measuring the change in harmonic distortion at the output.
I'm still missing something, but I've been known to be dense before - my take on timing errors would be that they cause bits to be misinterpreted - since sender and receiver have subtly different experiences with their independent clocking mechanisms, the receiver interprets things slightly askew and gets a different "answer" than the sender sent. The result would be different bits fed into the DAC than if there was a perfect transfer. Is there something to "timing errors" beyond this that I'm missing?
Now you have got me, Kthomas - I don't know, and I have not done much work on it, since I cannot change how DACs work. I have only got as far as observing how transmission interfaces are subject to jitter and how jitter results in harmonic distortion. I suspect you are not quite right about the DAC getting bits wrong; it is more about how the DAC converts the digits to an analogue waveform and how this is upset by timing errors in the arrival of the digits. I surmise that the DAC cannot construct an accurate analogue waveform from a datastream containing jitter errors without a perfect buffering system. One can imagine how, with no buffering at all, a DAC would find it difficult to create a perfect analogue waveform with digits arriving with imperfect timing. When I add to this the fact that I have never heard a buffering system that eliminates upstream jitter (they just reduce it or change it), I can intuitively imagine how the problem arises. If I understood more about how buffering systems fail to work perfectly, I might have a better answer.
On someone's recommendation I purchased a Radio Shack digital RCA cable. They claimed it was very good; it was so bad there was audible static. I worked with an engineer once who screwed up the implementation of my DAC driver so that it was always 2 bits off on each word. This causes the static. In most protocols I've worked with, if they don't sync up they just drop the data. I assume the thought here is that dropping a sample here or there is no big deal. I'm pretty sure they didn't realize how audible the inadequacies were going to be. If they really wanted to do this right, they should have gone with laser pickup on analog discs.
Redkiwi, I am not a designer of digital audio playback devices, but I thought that the data sent to the DAC is asynchronous. If that is true, where are the timing errors? Is the data clocked from the transport to the DAC, or is it really asynchronous? I do not know, but if it is async, then the statement about arriving with perfect timing does not make sense to me. Is it the timing between bits that you are talking about? If that is the case, how can the cable change the timing between the bits, or even cause jitter?
Redkiwi, even if jitter did cause a problem, I do not think it would manifest itself in harmonically related distortions. I can't see it causing either higher or lower order distortions.
Actually, Redkiwi, I captured a megasample. I had a meg of acquisition memory and I filled it. After I filled the 1 meg I ceased taking data. The test took 80 minutes for all the data to be captured (96 bits every .5 sec [5 16-bit patterns plus the placeholder pattern]). I captured 16960 patterns, or words if you will, each 16 bits long.
WOW, this is a MUCH better response than I'd anticipated when I asked the first question. Thanks to everyone for the feedback. The wide spectrum of expertise in this forum is astounding! Now to stir the pot a little... Through this discussion, we have some idea of what may be going on within the cable and in the interaction between the cable and the sending/receiving mechanisms. However, what is the difference in the sound that is perceived? This is my thinking... If the stream is just 1's and 0's, what happens when there is degradation? How does it manifest itself? There seem to be a few phenomena that could occur: scrambled bits, missing bits, signal too low (or too high), timing errors, etc. To me, errors in the data stream would sound like dropouts, static, even wrong notes, etc., and not the same type of subtle differences as with analogue cable. However, there are reports of more bass, the highs being too bright, etc. There are even reports of digital cables with the "family" sound of the company's analogue cables. How can this be possible? As well, if no bits are lost, what difference do timing errors make? Isn't part of the reconstitution of sound at the receiving end reclocking the signal? Sorry for so many questions, but this thread has been very interesting and educational! Thanks everyone!
It is asynchronous, and it's fixed rate. In a "normal" setup, sender and receiver each have their own clocks that are supposed to work identically. I've never studied the CD interface, so somebody who has can correct me if I mis-state something, but in typical asynchronous communications, each byte has a "start" bit and a "stop" bit surrounding the eight bits of data, so timing errors would have to be severe enough to cause problems from the time the start bit occurs to the time the stop bit arrives. Since it's a fixed data rate, it takes just as many bits to transmit high frequencies as low frequencies, so the chance of error should be the same. In any case, I can't see any way that the cable would make a difference in the ultimate delivery of the bits based on timing errors as long as it's the usual "good working condition".

Or, I have no idea of what I'm talking about and would love to understand it with you, Danielho. I definitely don't have any idea how a DAC works electrically, certainly not on the analog side - to me, the problem is broken down into "chunks" where getting the samples delivered to the input of the DAC is separate from what the DAC does with that sample - I'm assuming that with a known stream of samples, the DAC will produce a known output. -Kirk

Hi, I've been away for a while, but have been watching the pot bubble on this topic. Let's talk about jitter. Jitter is not a harmonic distortion. It is a clock timing error that introduces an effect called phase noise, either when a signal is sampled (at the A/D) or when a signal is reconstructed (at the D/A), or both. Think of it this way: a sine wave goes through 360 degrees of phase over a single cycle. Suppose we were to sample a sine wave whose frequency was exactly 11.025 kHz. This means that with a 44.1 kHz sample rate we would take exactly four samples of the sine wave every cycle. The digital samples would each represent an advance in the sine wave's phase by 45 degrees (1/4 of a cycle). The DAC clock is also supposed to run at 44.1 kHz; think of this as a "strobe" that occurs every 22.676 microseconds (millionths of a second) that tells the DAC when to create an analog voltage corresponding to the digital word currently in the DAC's input register. In the case of our sine wave, this creates a stairstep approximation to the sine wave, four steps per cycle. Shannon's theorem says that by applying a perfect low pass filter to the stairsteps, we can recover the original sine wave (let's set aside quantization error for the moment... that's a different issue).

Jitter means that these strobes don't come exactly when scheduled, but a little early or late, in random fashion. We still have a stairstep approximation to the sine wave, and the levels of the stairsteps are right, but the "risers" between steps are a little early or late -- they aren't exactly 22.676 microseconds apart. When this stairstep is lowpass filtered, you get something that looks like a sine wave, but if you look very closely at segments of it, you will discover that they don't correspond to a sine wave at exactly 11.025 kHz, but sometimes to a sine wave at a tiny bit higher frequency, and sometimes to a sine wave at a tiny bit lower frequency. Frequency is a measure of how fast phase changes. When a stairstep riser, which corresponds to 45 degrees of phase of the sine wave in our example, comes a little early, we get an analog signal that looks like a bit of a sine wave slightly above 11.025 kHz. Conversely, if the stairstep riser is a bit late, it's as if our sine wave took a bit longer to go through 1/4 of a cycle, as if it had a frequency slightly less than 11.025 kHz. You can think of this as a sort of unwanted frequency modulation, introducing a broadband noise in the audio. If the jitter is uncorrelated with the signal, most of the energy is centered around the true tone frequency, falling off at lower and higher frequencies. If the jitter is correlated with the signal, peaks in the noise spectrum can occur at discrete frequencies. Of the two effects, I'd bet the latter is more noticeable and objectionable.

Where does jitter come from? It can arise if one tries to construct the DAC clock from the SPDIF signal itself. The data rate of the SPDIF signal is 2.8224 Mb/sec = 64 bits x 44,100 samples/sec (the extra bits are used for header info). The waveforms used to represent ones and zeroes are designed so that there is always a transition from high to low or low to high from bit to bit, with a "zero" having a constant level and a "one" having within it a transition from high to low or low to high (depending on whether the previous symbol ended with a "high" or a "low").
Writing down an analysis of this situation requires advanced mathematics, so suffice it to say that if one does a spectrum analysis of this signal (comprising a sequence of square pulses), there will be a very strong peak at 5.6448 MHz (= 128 x 44.1 kHz). A phase locked loop can be used to lock onto this spectral peak in an attempt to recover a 5.6448 MHz clock signal, and squaring up that sine wave and running it through a simple 128:1 countdown divider would produce a 44.1 kHz clock. Simple, but the devil is in the details. The problem is that the bit stream is not a steady pattern of ones and zeroes; instead it's an unpredictable mix of ones and zeros. So if we look closely at the spectrum of the SPDIF waveform we don't find a perfect tone at 5.6448 MHz, but a very high peak that falls off rapidly with frequency. It has the spectrum of a jittered sine wave! This means the clock recovered from the SPDIF data stream is jittered. The jitter is there due to the fundamental randomness of the data stream, not because of imperfections in transmitting the data from transport to DAC, or cable mismatch, or dropped bits, or anything else. In other words, even if you assume PERFECT data, PERFECT cable, PERFECT transport, and PERFECT DAC, you still get jitter IF you recover the clock from the SPDIF data stream. (You won't do better using IMPERFECT components, by the way.)

The way out of the problem is not to recover the DAC clock from the data stream. Use other means. For example, instead of direct clock recovery, use indirect clock recovery. That is, stuff the data into a FIFO buffer, and reclock it out at 44.1 kHz, USING YOUR OWN VERY STABLE (low-jitter) CLOCK -- not one derived from the SPDIF bitstream. Watch the buffer, and if it's starting to fill up, bump up the DAC clock rate a bit and start emptying the buffer faster. If the FIFO buffer is emptying out, back off the clock rate a bit. If the transport is doing its job right, data will be coming in at a constant rate, and ideally that rate is exactly 44,100 samples per second (per channel). In reality, it may be a bit off the ideal and wander around a bit (this partly explains why different transports can "sound different" -- these errors mean the pitch may be a bit off, or wander around a tiny bit). Note that recovering the DAC clock from the SPDIF data stream allows the DAC clock to follow these errors in the transport data clock rate -- an advantage of direct clock recovery. But use a big enough buffer so that the changes to the DAC clock rate don't have to happen very often or be very big, and even these errors are overcome. Thus indirect clock recovery avoids jitter, and overcomes transport-induced data rate errors (instead of just repeating them).

That's exactly what a small Discman-type portable CD player with skip-free circuitry does. Shock and vibration halt data coming from the transport, so for audio to continue, there must be a large enough backlog built up in the FIFO to carry on until the mechanical servos can move the optical pickup back to the proper place. Better audio DACs, such as the Levinson 360S, use this FIFO buffering and reclocking idea to avoid jitter as well. In principle, a DAC that uses this kind of indirect clock recovery will be impervious to the electrical nuances of different digital cables meeting SPDIF interface specifications. And that's as it should be.
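If it helps, here is a toy sketch (in Python) of that indirect clock recovery loop. The buffer depth and trim step are invented for illustration; a real design would size them from the transport's worst-case rate error:

from collections import deque

FIFO_DEPTH = 8192      # samples of buffering (invented figure)
TRIM = 4e-6            # fractional clock trim per adjustment (invented figure)

fifo = deque()
dac_rate = 44100.0     # driven by the local low-jitter oscillator

def transport_sample_arrived(sample):
    """Transport side: data arrives at roughly 44,100 samples/sec,
    with whatever rate wander the transport has. Just queue it."""
    fifo.append(sample)

def dac_clock_tick():
    """DAC side: runs off our own stable clock. Trim the rate so the
    FIFO hovers near half full, instead of tracking the SPDIF clock."""
    global dac_rate
    fill = len(fifo) / FIFO_DEPTH
    if fill > 0.6:                 # backlog building: empty the buffer faster
        dac_rate *= 1 + TRIM
    elif fill < 0.4:               # buffer draining: back off a bit
        dac_rate *= 1 - TRIM
    return fifo.popleft() if fifo else 0   # 0 = underrun; buffer was sized too small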
Geez... sorry for the brain fart in my posting above. 1/4 of a cycle is 90 degrees of phase, not 45. Try visualizing a sine wave that's sampled 8 times per cycle and rebuilt using 8 stair steps. Now suppose that some of the stairstep risers come a teeny bit early or a teeny bit late, and you'll get the picture.
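If anyone wants to see it on screen, here's a minimal numpy sketch of that picture. The tone, sample count, and jitter size are arbitrary choices, and the jitter is wildly exaggerated so the effect is visible:

import numpy as np

FS = 8                                  # 8 samples per cycle of the test tone
t_fine = np.linspace(0, 4, 4000)        # 4 tone cycles, finely sampled
edges = np.arange(0, 4, 1 / FS)         # ideal riser times
jittered = edges + (0.05 / FS) * np.random.randn(len(edges))  # risers a bit early/late
levels = np.sin(2 * np.pi * edges)      # the step LEVELS are still exactly right

# Zero-order hold: each point takes the level of the most recent riser.
idx = np.clip(np.searchsorted(jittered, t_fine) - 1, 0, None)
stairstep = levels[idx]
# Low-pass filter `stairstep` and the result looks like the sine wave, but
# its local frequency wobbles above and below the true tone -- the
# unwanted FM described above.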
Nice explanation, 1439bhr! You seem to be the first poster who completely knows what they are talking about (I include myself among the uninformed). Very well put and easily understandable. I did catch the 1/4 cycle, but I knew what you meant. Great job!
1439bhr, I've got a question I am sure you can answer. Does the low-pass filter on the output of the DAC also handle any aliasing that may be present? Also, how much phase noise is acceptable in the system before there is any degradation, and does phase noise closer to the carrier or farther from the carrier have the most effect? Thanks in advance for your response.
Using a FIFO and reclocking is not the end-all, and it is not so simple. When the Genesis Digital Lens came out, Bob Harley assumed that all transports would sound exactly the same. He found that he was wrong. Rather than reclock, the best solution, given the current CD methodology, is to have a custom data transfer mechanism like Levinson and Spectral. I believe that the SPDIF interface is inadequate for high quality playback. I did some testing about 7-8 years ago to prove that there was no difference in transports or digital cables, and published that data on the net. At the time I had only worked with proprietary digital record and playback protocols and was not that familiar with the CD playback mechanism. My tests proved just the opposite of what I thought. Not only that, but subjectively the tests seemed to show that jitter was not the most important of the digital flaws; both read errors from discs and signal loss of high frequencies seemed to be more of a problem.

While Discman buffering is universal and basically required for a device that bounces around, every one I've ever owned has said to turn off the buffering when not needed, because the sound was not as good as the direct signal. I still believe that the fewer components between the transport and the DAC, the better. Buffering of CD data has all sorts of other issues that need to be handled, like skipping forward/back, etc., that make it impractical, since you always have to handle the case when the buffer is empty. All the systems I have designed, and most commercial systems with proprietary protocols in general, send raw data to the DAC; the DAC interface has the only clock, and jitter is at an absolute minimum.

Jitter is a fact of life in any digital medium. The use of the clock has been there since digital technologies in the 60s. I always get a kick out of manufacturers who claim jitter reduction devices that produce 0 (zero) jitter. It's physically impossible to accurately measure jitter anyway. We can improve the accuracy, but there is always some small error in the measurement. This error will decrease as technology improves. By the way, jitter errors have a much more detrimental effect than just causing pitch errors. Lack of perfect integrity of the audio signal affects soundstage and imaging, and if the jitter is so bad that the DAC syncs incorrectly on the signal, severe static can be produced. See my previous postings.
I have read your post and cannot dispute your claims, with the exception of one. Jitter is not very difficult to measure accurately if you understand the measurement concept. Jitter measurements can be made simply with a spectrum analyzer and a calculation: all that is needed to convert phase noise to jitter is a simple calculation. A spectrum analyzer can be used to measure phase noise if the device under test has little or no AM, because the analyzer cannot tell the difference between AM and phase noise.
The Levinson DAC provides SPDIF, AES/EBU, and Toslink input (and ATT, I believe) interfaces. It does not support other "custom" interfaces such as I2ES, which attempt to deliver accurate, jitter-free clock signals to the DAC. Problems with the transport, such as dropped data bits, etc., are a separate issue from the effect of digital cables. A back-of-the-envelope calculation reveals that clock jitter needs to be less than 100 picoseconds to ensure that all jitter artifacts are at least 96 dB down from a signal at 20 kHz (the most stressing case). Bob Harley notes in his book "Guide to High End Audio" that several popular clock recovery chips, based on PLLs, produce about 3-4 nanoseconds of jitter (about 16 dB worse than the 100 ps reference point). Delivering a 44.1 kHz clock signal to a DAC while assuring jitter less than 100 ps implies a stability of about 4 parts per million, which is achievable with competent design. Devices such as the Digital Time Lens, which provide an SPDIF output, remain at the mercy of clock recovery jitter at the DAC. The best they can hope to do is to stabilize the transmit-end clocking as seen by the clock recovery circuit.
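For the curious, the back-of-the-envelope math runs like this (a sketch of the standard rule of thumb, assuming a full-scale sine):

import math

f = 20e3            # worst-case signal frequency, Hz
sigma = 100e-12     # RMS clock jitter, seconds

# A full-scale sine sin(2*pi*f*t) slews at most 2*pi*f per unit amplitude,
# so a timing error of sigma produces an amplitude error of about 2*pi*f*sigma.
print(20 * math.log10(2 * math.pi * f * sigma))   # ~ -98 dB, i.e. artifacts ~96+ dB down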
Gmkowal, 2 points here. It ought to be obvious that your test is inadequate; simply use a CD source. It should take about 4 seconds, if memory serves, to fill a 1 meg buffer with SPDIF data. That's real-time. When I say jitter cannot be measured accurately, I'm simply pointing out that any device that measures jitter has to have a clock; that clock has jitter, so your measurement has to be off by some amount. That amount may be small, but it always exists. My point was that many claims of jitter reduction devices are total BS. 1439bhr, I thought the Levinson also had a proprietary data transfer interface. My point was that some component combinations do, and they seem to do the best job. My points were simply that jitter reduction devices are a poor substitute for a good implementation between a DAC and transport. They may even increase jitter. My second point is that the jitter on most "good" systems is low enough not to be a major factor in the sonic degradation. I believe jitter is third behind errors in the transport and signal loss of high kHz signals.
Blues, you are absolutely correct! The purpose of my test was not to simulate the real-time data transmission of a digital audio playback system, but merely to prove to a poster that the digital cable has little to do with making bits fall out. I chose to do an experiment so I would have physical evidence for what I was claiming, and it was fun to do. I have neither the time nor the equipment to simulate a real-life situation. I also agree that jitter is not a major contributor. The only thing I take exception with is your claim that a clock is needed for jitter measurement. A spectrum analyzer can be used to measure phase noise, and a calculation can be made to get jitter from the phase noise measurement. The calculation is well documented, and the Ma Bells of the world have been using this method for years. Thanks for your post and happy listening.
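For those interested, the phase-noise-to-jitter calculation goes roughly like this (sketched in Python; the carrier and phase noise numbers are invented for illustration, not a measurement):

import math

carrier_hz = 11.2896e6   # e.g. a 256 x 44.1 kHz master clock (example choice)

# (offset Hz, L(f) in dBc/Hz) pairs as read off a spectrum analyzer -- fake data
ssb_phase_noise = [(100, -90), (1e3, -110), (1e4, -125), (1e5, -135)]

# Integrate 10^(L/10) over the offset band (crude trapezoid), then scale.
area = 0.0
for (f1, l1), (f2, l2) in zip(ssb_phase_noise, ssb_phase_noise[1:]):
    area += 0.5 * (10**(l1 / 10) + 10**(l2 / 10)) * (f2 - f1)

rms_jitter = math.sqrt(2 * area) / (2 * math.pi * carrier_hz)
print(f"RMS jitter ~ {rms_jitter * 1e12:.1f} ps")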
Thank you Bluesman, 1439, Gmkowal, Redkiwi, and all of the above. The technical data strained my patience, but it was well worth following. I am sure that some mysteries will always remain, but with things being this complicated, I will have to continue using my ears as my guide. I wonder if the sound that I prefer would show a smaller amount of the problems pointed out here? Very interesting how small pitch shifts affect the soundstage, Bluesman.
Sorry, I guess we got a little carried away with the technical aspects, but I learned a lot in this thread and I also thank those who enlightened me. The music quickens my pulse, but I also enjoy understanding how it all comes together.
Daniel,
I've just posted a reply saying that if you cannot hear a difference between a Cardas Highspeed and a Cardas Lightning 15 digital cable, you have a lo-fi system. I would like to retract this statement.
What I really meant was: if you cannot hear any difference between these two digital cables, then the question of whether different digital cables sound different should not concern you, because you are one of those few in the CANNOT HEAR A DIFFERENCE camp. In that case, the question of why digital cables would sound different should not concern you either.
Go try out these cables and see if they sound different to you, and then pursue the balance of your question as to what makes them sound different.
Whether different digital cables sound different is very dependent on the DAC design. Digital cables will typically NOT drop bits. Any degradation has to do with jitter that is introduced due to cable losses. DACs that completely re-clock the data stream will be much less sensitive to the quality of the digital cable and any jitter the cable causes. However, most DACs are not like this and therefore have some sensitivity to jitter.

Digital interconnects that exhibit low capacitance, low dielectric absorption, and generally low losses will introduce the least jitter into the digital bit-stream.

This is why they sound different.
Audioengr ( hi !! ),

What part do you think VSWR and impedance mismatches between source, cable, and load play when it comes to transferring signals from transport to DAC? Since we are dealing with an RF-based signal, transmission line theory DOES directly apply here.

Your statements about low dielectric absorption intrigue me. Shouldn't a cable that has a higher velocity factor be less prone to signal deterioration / absorption, due to the signal spending less time in the cable? If that is correct, then a cable that is "slower", such as those that use a Teflon dielectric in a standard coaxial design, would be a relatively poor performer. If that is the case, cheaper "foam" insulated cables "should" perform approximately 13% better than more costly Teflon versions, given equivalent conductor materials. Any thoughts or comments on this? Sean
>
Due to their different configurations, they act as tone controls. And I DOOOOO hear quite a difference in some. Most, to me, sound way too bright. Some interconnects produce quite a difference in the sound of a system. Price does not seem to me to be a big factor; rather, it's about finding one that shapes the sound to my ears' preference. I rewired a system for a friend who thought he had good sound (very harsh). The results were jaw-dropping, even to him (tin ears). His ears, at least, are better than Julian H.'s. He cost me a lot of money buying on his recommendations. YIKES!!!!
Hi,

Digital cables can make a HUGE difference. But all those fancy multi-megabuck cables with exotic materials won't buy you much unless they also solve the key technical problems. At high frequencies, these cables MUST be impedance matched, i.e., coax connections should be 75 ohm. If not, reflections occur, inducing jitter-related distortions, and this IS what differentiates these cables, along with dielectric absorption and improper shielding. It is my feeling that cables that sound sharper, cleaner, and more resolved are simply reducing the jitter component compared to others. If your DAC reclocks, then these differences are less apparent. If it does not, as my EVS doesn't, you will be blown away.

When I went from a rather expensive "fancy" cable to an RG6U (quad-shielded, 75 ohm) cheapo cable (email me if you're interested in what this was), the differences were as big as the differences between various DACs - not insignificant. I love it when something cheap does the same job as something much more expensive, and sometimes better.

Good luck
Jt, rather than clog up your inbox and have you make several individual replies, how about posting your comments here? I for one would like to know what you have gotten such good results with, especially since both I and my brother, along with several others here, are running EVS DACs.

Before you comment, you should be aware that there is NO such thing as a 75 ohm RCA connection. This is true regardless of what Canare states. Nor is the nominal impedance of the wiring from the digital input to the circuit board inside the EVS 75 ohms. Nonetheless, you may have stumbled onto something in terms of increased power transfer / minimized loss / reduced standing waves. The last part (reduced standing waves) is directly related to a reduction in jitter. Sean
>

Zilla - it is impossible for a digital cable to act as a "tone control". What happens is that the jitter caused by the cable losses results in frequency modulation of the analog signal. This can cause a lack of clarity and focus. In general, the better the clarity and focus, the better the digital cable.
Jt25741 - you are dead-on when you say that jitter is the main issue with digital cables. However, with some cables, you get what you pay for. Every cable manufacturer has a digital cable, and many of them are not even close to 75 ohms characteristic impedance. They need to be 75 ohms. As for the connector, the best you can do with an RCA is to get the impedance right up to the entry point of the jack. After that there will be a discontinuity; jacks are never 75 ohms. Some manufacturers get the impedance right up to the jack. I do.
Sean - Transmission-line effects are the main concern with digital cables. Characteristic impedance matching is a big part of this. However, dispersion of the digital signal is also caused by dielectric absorption, which can cause jitter, so just matching to 75 ohms is not sufficient to minimize jitter.

"Shouldn't a cable that has a higher velocity factor be less prone to signal deterioration / absorption due to the signal spending less time in the cable ?"

There will generally be less absorption in a high-velocity cable, because in order to get high velocity you need a low dielectric constant, and a low dielectric constant results in lower capacitance and lower dielectric absorption. The time that the signal takes to transit the cable (propagation time) is really of little consequence in itself. This will obviously change depending on the length of the cable. The rise-time of an SP/DIF signal is on the order of 20 nsec, so you would have to have 100 feet of cable for the propagation time to equal the risetime. Technically, this makes impedance a non-issue for a 6-foot SP/DIF cable. However, in practice, impedance discontinuities do impact the sound, particularly the image focus and detail, by adding to the jitter.

As for dielectrics, PVC is at the bottom, getting progressively better with foamed poly, solid Teflon, foamed Teflon, expanded Teflon and finally air. I use Expanded Teflon in my Digital cable. It is hard to put a percentage on the improvement without measuring it. I have plans to purchase a Tek CSA803 communications analyzer, which will measure jitter accurately to a few picoseconds, so I will eventually be able to measure this.
I would have to agree with the impedance argument: if a digital connection says 75 ohm, you need a true 75 ohm cable that is purpose-designed to sound good doing what you want to use it for. I would suggest looking at Canare's RCAP true 75 ohm RCA connectors ($3-$5 each) and some of the Canare LV-77S cable (I think it is $2 a foot or so). That combination will outperform $60-$100 digital interconnects. Canare takes cables very, very seriously, and they are highly regarded in the professional audio world.
Oops, I was commenting on speaker cables and interconnects and missed the word digital.
Frogman's insistence on believing that Kimber's digital cables exhibit the same sort of characteristics as their analog ones may be true, or it may just be that he associates those characteristics with the Kimber brand, and his brain tells him what to expect.

We can't overlook the effect of our preconceptions on what we hear. I would bet that many of the "golden eared" of the world would be shocked to learn their conclusions in true double-blind tests (especially if not told WHAT they're testing!).

As a reviewer in one of the high-end mags wrote (about 20 years ago) "The amplifier delivers 300 WPC into an 8 ohm load and is housed in rich persimmon wood." He then went on to describe the amplifier's sound as "warm" and added, tongue in cheek, "which is typical for amplifiers housed in rich persimmon wood..."

Or, as I responded to my friend recently who asked, "Can you really hear the difference between a stock power cable and one costing $2000?" I replied, "If you just spent $2000 on a power cable, you'll hear a difference."

No hate mail please - I have upgraded power cables, interconnects, etc. My point is simply that our brains can convince us that we hear almost anything. Heck, that's why our systems sound good at all - because our brain fills in what's missing. Therefore, be sure YOU hear it - don't take someone else's word for it.
John: I don't think you'll find anyone here who would deny that brands / prices / cosmetics may have more influence over what we hear than many of us would like to admit. I try to forget about all of this stuff and just listen. Obviously, I try to start off with products that are at least well designed to begin with, but one sometimes does not even have that much info to work with when auditioning specific items. Sean
>
This is a long and old thread. Read 1439bhr and audioengr for good answers. I disagree with 1439bhr that FIFO buffering is the best solution, though it is a pretty good one.

There are many references on the web about this. Understanding in the audio community has come a long way since the thread started.

It is clear that digital cables make a huge difference on most, but not all, DACs. Yes, it's because of jitter, not data errors. The cables carry not only the data but also the clock information. For SPDIF the clock is recovered from the biphase-mark encoding of the data, and I'm not sure about the Toslink mechanism (square or sine?). This is a terrible way to get the clock.

Imagine: a nice clock signal controls a transport with a FIFO. There is some jitter coming out of the FIFO, but at least the timing is controlled by the master clock right there at the transport. Then the data is encoded, sent to a transmitter that converts it to the biphase-mark waveform, sent through a lousy digital cable into a DAC receiver, and converted back to logic levels, where the clock and data are recovered. We then send the data and clock straight into a DAC or upsampler.

Now, how good is that clock? An SPDIF receiver uses a phase-locked loop (PLL) to recover the data and timing. It typically has a time constant that passes all jitter below 10 kHz directly on to the DAC. Yup, your music is at the mercy of the transport clock, transport jitter, cable quality, and the SPDIF transmitter/receiver circuitry. It's a wonder we get decent sound at all. Well, in fact, in a lot of cases we don't. The SPDIF standard was made at a time when 2 ns of jitter was considered to be inaudible. This is totally not the case. I have read that "inaudible" is more like 25 ps of jitter.

The problem is that the DAC needs an accurate clock to prevent distortions in the sound. There are 3 solutions that I know about:

1) Use a huge FIFO buffer to buffer up about 1 second of data. You can then reclock with any jitter being around 1Hz or lower in frequency. This can work, but I think there can be jitter problems with FIFOs themselves.
2) Throw away SPDIF since it is junk, and go for a bidirectional communication standard like firewire. This allows the master clock of the system to be in the DAC, not in the transport. The DAC tells the transport to go faster or slower, and only a small FIFO is needed in the DAC.
3) Reclock the data using an upsampler.

Number 3) is most commonly used now, with the proliferation of 24/192 upsampling DACs and converters. Now, you can do oversampling without reclocking (4x, 8x for instance), but since these new "upsamplers" are not tied to a multiple of 44.1 kHz in their timebase, they are called asynchronous upsamplers. They take samples of the data on their own clock. Yes, this reclocks the data, but unless it is done right, you just resample the jitter into the data stream instead of the clock.

In order for an upsampler to reclock without jitter getting into the datastream, the upsampler must "track" the incoming jitter. My Bel Canto DAC2 has an upsampler chip that acts like a digital PLL: it digitally tracks the incoming jitter and behaves like a PLL with a base frequency of 3 Hz. Audio-band jitter is greatly reduced.
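To give a feel for what "tracking" means, it amounts to something like this toy one-pole version (my sketch, not Bel Canto's actual circuit; real asynchronous upsamplers do this inside a polyphase resampler):

import math

FS = 44100.0
LOOP_BW = 3.0                                       # loop bandwidth, Hz
alpha = 1 - math.exp(-2 * math.pi * LOOP_BW / FS)   # one-pole smoothing coefficient

period_estimate = 1 / FS

def on_recovered_sample(raw_interval):
    """Feed in the raw (jittery) interval between recovered SPDIF samples;
    returns the smoothed period the resampler actually uses. Jitter
    components above ~3 Hz are averaged away instead of reaching the DAC."""
    global period_estimate
    period_estimate += alpha * (raw_interval - period_estimate)
    return period_estimate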

You know what? Cables don't make much of a difference on my Bel Canto DAC. But on my Denon AVR-5800 receiver, the differences are quite clear. Toslink sounds very different from SPDIF as well on the Denon (I haven't tested this on the Bel Canto -- I just love the SPDIF on it).

So, I hope this cable debate goes away in a few years, because frankly I think it's nuts that I spent a few hundred on a Harmonic Tech Cyber-Link Platinum. And I'll be darned if I'll spend thousands on a transport. The clock must be accurate right at the DAC chip, and a good cable is only part of the battle. With newer DACs I hope to be able to happily agree that "digital cables make no difference". Right now, though, on most DACs, it just isn't true. Cables make a difference, as does everything else in the digital chain (yup, even power cords... but I won't go there in this response).