Why do digital cables sound different?


I have been talking to a few e-mail buddies and have a question that isn't being satisfactorily answered thus far. So... I'm asking the experts on the forum to pitch in. This has probably been asked before, but I can't find any references for it. Can someone explain why one DIGITAL cable (coaxial, BNC, etc.) can sound different from another? There are also similar claims for Toslink. In my mind, we're just trying to move bits from one place to another. Doesn't the digital stream get reconstituted and re-clocked on the receiving end anyway? Please enlighten me and maybe send along some URLs for my edification. Thanks, Dan
danielho

Showing 7 responses by redkiwi

I am not sure we are disagreeing - over- and undershoot being a capacitance issue is the same thing as saying it is a bandwidth issue (but not necessarily the reverse). So far as I know there is no capacitance issue wrt glass cables, but they sound different from one another.
First - the digital datastream carried on a coax connection is an analogue signal, albeit one used to represent 1s and 0s. Second - I have never seen a signal coming out of one of these cables on a scope that is a perfect square wave. Third - lack of perfection in the square wave means jitter. Fourth - jitter produces harmonic distortion in the output of the DAC, with different forms of jitter producing different harmonic signatures - some sounding soft, some sounding harsh. Fifth - I have never heard or measured a reclocking device (including the Genesis Digital Lens) that does not reveal some of the jitter distortion created by upstream cables and components.

And what is more important is that digital cables do sound different, provided of course you have a high-resolution system and sensitive ears.

I am intrigued, however, by the observed phenomenon of a cable's sonic signature when used as an analogue interconnect also being present when it is used as a digital cable. I have heard this too, and with cables other than Kimber, and I reject the placebo argument in the context of how I test components. I find this one harder to explain and can only surmise that we cannot look at interfaces between components as separate systems, and that each interface may leak artifacts of itself into other parts of the total system. The active devices that buffer interfaces are meant to deal with this, but perhaps no real-world electronic part works exactly as it is designed to?
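To put a number on the fourth point, here is a toy sketch in Python. It is an illustration only, not a measurement from any real gear: the test tone, the jitter frequency and the 10 ns timing error are all invented. The data is a bit-perfect sine; only the timing of each sample wobbles, and periodic versus random wobble leave different fingerprints in the output spectrum:

# Toy model: the data is a perfect sine, only the arrival time of each
# sample wobbles. Two "flavours" of jitter give two different signatures.
import numpy as np

fs, N = 44_100, 2**16
f0, fj = 10_000, 3_000        # test tone and jitter-modulation frequency, Hz (invented)
tj = 10e-9                    # 10 ns peak timing error, purely illustrative
t = np.arange(N) / fs

def spectrum_db(x):
    # Windowed amplitude spectrum, normalised so the biggest peak sits at 0 dB
    s = np.abs(np.fft.rfft(x * np.hanning(N)))
    return 20 * np.log10(s / s.max() + 1e-12)

# (a) periodic jitter: discrete sidebands appear at f0 +/- fj
periodic = np.sin(2 * np.pi * f0 * (t + tj * np.sin(2 * np.pi * fj * t)))
# (b) random jitter: no new tones, just a raised noise floor
rng = np.random.default_rng(0)
random_j = np.sin(2 * np.pi * f0 * (t + rng.uniform(-tj, tj, N)))

freqs = np.fft.rfftfreq(N, 1 / fs)
side = np.argmin(np.abs(freqs - (f0 + fj)))
print("periodic jitter, level at f0+fj:", round(spectrum_db(periodic)[side], 1), "dB")
print("random jitter,   level at f0+fj:", round(spectrum_db(random_j)[side], 1), "dB (just noise floor)")

The bits never change in either case, yet the two outputs have different spectra - one grows discrete sidebands, the other a raised floor - which is the sense in which different jitter produces different sonic signatures.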
You make a mistake if you think that just transmitting the bits accurately is all that is required. Jitter (or time-based distortion) is irrelevant when there is no need for time-synchronous transmission - i.e. computer communications. But in audio or video you must deal with, or live with, time-based distortion. Don't fall for the marketing BS that says a Levinson DAC or a Discman eliminates time-based issues through buffering. And by the way, don't believe that doing away with cables eliminates the problem either - otherwise we would have stuck with the three-in-one music centres of the '60s - Bmpnyc, your dream came true forty years ago.
Multiply that by 16 bits per word, Gmkowal. I guess for the fourth time: the issue is not dropping bits, but that timing errors (not bit errors) cause harmonic distortion at the output of the DAC chip. Your investigations should focus on this phenomenon - i.e. varying the jitter at the input (while leaving the bits the same) and measuring the change in harmonic distortion at the output.
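That experiment is easy to sketch numerically. The following toy Python model (illustrative numbers only, not a measurement) feeds the same bit-perfect sine to an idealised DAC while only the peak clock error is varied; the distortion-plus-noise at the output tracks the jitter even though not a single bit changes:

# Sketch of the experiment suggested above: hold the data constant,
# vary only the clock error, and watch the output distortion grow.
import numpy as np

rng = np.random.default_rng(0)
fs, N = 44_100, 2**16
k = 1486                       # FFT bin of the test tone
f0 = k * fs / N                # ~1 kHz, chosen to land exactly on a bin (no leakage)
t = np.arange(N) / fs

def distortion_dbc(peak_clock_error_s):
    """Power outside the fundamental, in dB relative to the fundamental."""
    t_err = rng.uniform(-peak_clock_error_s, peak_clock_error_s, N)  # same bits, wobbly clock
    out = np.sin(2 * np.pi * f0 * (t + t_err))                       # idealised DAC output
    spec = np.abs(np.fft.rfft(out)) ** 2
    fund = spec[k]
    return 10 * np.log10((spec.sum() - fund + 1e-30) / fund)

for tj in (0.0, 1e-10, 1e-9, 1e-8, 1e-7):
    print(f"peak clock error {tj * 1e9:7.2f} ns -> distortion+noise {distortion_dbc(tj):7.1f} dBc")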
I think I am repeating myself, but I continue to get replies which imply that the only reason digital cables could sound different is bit errors. This is not the reason at all - the reason is jitter: noise-based and time-based distortion. The problem is not about distortion causing a DAC to read a 0 as a 1, or a 1 as a 0. It is about the fact that we are talking about real-time transmission, and that a DAC produces harmonic distortion at its output when the arrival times of the 0s and 1s are not perfectly regularly spaced. I really am having trouble saying this in as many different ways as I can. It is not about redundancy so that when an error occurs the data can be resent - we are not talking about data packet transmission here.

Bandwidth capability is in fact an issue here. Even though the bandwidth needed for the data is low by most standards, if the cable were only just able to transfer the data accurately then the square waves would be very rounded indeed and jitter errors at the DAC would be enormous. Higher-bandwidth cables allow sharper corners on the square wave with less undershoot or overshoot. Optical cables are also free from earth noise adding to the signal. It is not about bit errors; it is about timing-based distortion.

I work with loads of PhD telecommunications engineers, but their grasp of these concepts is slight at best, because they are irrelevant to the audio fidelity needs of telephony and irrelevant to data packet transmission. But the best of them acknowledge that their training is insufficient for high-quality audio.
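One way to picture why the rounded corners matter (a minimal sketch with invented numbers, not a model of any particular cable): the receiver decides where a bit transition is by watching the waveform cross a threshold, and the slower the edge, the more a given amount of noise shifts that crossing in time:

# A bandwidth-limited edge turns the same voltage noise into more timing error.
import numpy as np

def crossing_time_spread_ns(rise_time_ns, noise_mv, threshold_v=0.5, trials=20000, seed=1):
    """Std dev (ns) of the moment a noisy 0-to-1 V RC edge crosses the logic threshold."""
    rng = np.random.default_rng(seed)
    tau = rise_time_ns / 2.2                 # 10-90% rise time of an RC edge is about 2.2*tau
    # Edge shape v(t) = 1 - exp(-t/tau). Added noise shifts the level that must be
    # reached, which shifts the crossing time.
    level = threshold_v - rng.normal(0.0, noise_mv / 1000.0, trials)
    t_cross = -tau * np.log(np.clip(1.0 - level, 1e-6, None))
    return t_cross.std()

for rise in (1.0, 5.0, 25.0):                # ns: a fast edge vs increasingly rounded ones
    spread_ps = crossing_time_spread_ns(rise, noise_mv=5.0) * 1000.0
    print(f"rise time {rise:4.1f} ns -> crossing-time spread {spread_ps:6.1f} ps")

The absolute figures mean nothing; the point is simply that a slower edge turns the same few millivolts of noise into a larger error in when the receiver believes the transition happened - a timing error, not a bit error.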
Now you have got me, Kthomas - I don't know, and have not done much work on it since I cannot change how DACs work. I have only got as far as observing how transmission interfaces are subject to jitter and how jitter results in harmonic distortion. I suspect you are not quite right about the DAC getting bits wrong; it is more about how the DAC converts the digits to an analogue waveform and how this is upset by timing errors in the arrival of the digits. I surmise that the DAC cannot construct an accurate analogue waveform from a datastream containing jitter errors without a perfect buffering system. One can imagine how, with no buffering at all, a DAC would find it difficult to create a perfect analogue waveform from digits arriving with imperfect timing. When I add to this the fact that I have never heard a buffering system that eliminates upstream jitter (they just reduce it or change it), I can intuitively imagine how the problem arises. If I understood more about how buffering systems fail to work perfectly then I might have a better answer.
I have a question for the engineers here. When playing around with cables I found something odd. Let me give you an example. If I used a particular Belden cable design, albeit in different gauges, using all-Teflon insulation and silver-coated copper, I found a consistent flavour to these cables regardless of their application. You will have to trust me that I verified this with some blind testing, but once I recognised the flavour I could always hear when one was inserted into the system. This happened whether a fat one was used as a power cable or a speaker cable, or a thinner one was used as an audio interconnect or a digital interconnect. I am not saying I didn't hear other things about these cables, but I am saying that there was a consistent additive quality, perhaps some form of ringing, regardless of the application of the particular type of cable.

I recently added such a cable to power a CD player that was connected to the system I was listening to, but was not the source I was listening to, and I immediately heard its sonic signature. I was not expecting to hear it, and was stunned to hear it, so I don't think the effect was psychologically induced.

So, other than me being mad (which is a whole 'nother question), is it possible that any cable connected to a system acts as an antenna of some sort and introduces audible effects even when used as a digital cable, provided there is some analogue component electrically connected in some way to that cable?