Why do digital cables sound different?


I have been talking to a few e-mail buddies and have a question that isn't being satisfactorily answered thus far. So... I'm asking the experts on the forum to pitch in. This has probably been asked before, but I can't find any references for it. Can someone explain why one DIGITAL cable (coaxial, BNC, etc.) can sound different from another? There are also similar claims for Toslink. In my mind, we're just trying to move bits from one place to another. Doesn't the digital stream get reconstituted and re-clocked on the receiving end anyway? Please enlighten me and maybe send along some URLs for my edification. Thanks, Dan
danielho

Showing 9 responses by 1439bhr

Another point: it could be that different cables have differing degrees of shielding effectiveness, and therefore radiate different levels of RFI, which is coupled back into the analog electronics. P.S. Don't knock science and engineering. That's what gave you something to listen to. Empiricism has its place, but progress depends on interpreting those results within the framework of physical laws and analytical principles -- or else we'd still be riding ox carts with wooden disc wheels over stone bridges.
Drubin: No. This is a case where "bits is bits" -- while the bitstream obviously carries the musical information, one cannot point to particular bits and say, "here, these are the ones carrying the high frequency content". A role that digital cable may play, as has been pointed out earlier, is in the area of transmission line theory. Impedance mismatches may distort the waveforms representing the bits (!), causing errors in clock recovery, hence DAC jitter, hence harshness in the sound. These problems can be overcome by using a proper 75 ohm coax cable, sufficient bandwidth in the driver and receiver circuits, and DACs that use well-designed buffer logic to accept the data off the SPDIF interface and reclock it back out without jitter. This technology, found in $49 Discman players, provides "skip-free" operation so you can use the player while walking, running, bumping into things, etc. As for Toslink, the issue is the bandwidth of the electronics driving the optical transmitters and receivers. If the bandwidth is too narrow, distortion of the digital waveform ensues, leading to jitter if proper buffering and reclocking circuitry is not used. I agree with previous posters: once you have a reasonably well made optical or 75 ohm coax cable, there is NO value in multi-hundred dollar esoteric audiophile digital or optical cables.
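To put a number on the impedance-mismatch point, here's a quick Python sketch. The 50 ohm figure is just an assumption for illustration (a common analog/RF cable impedance), not a measurement of any particular cable:

```python
# Voltage reflection coefficient at an impedance discontinuity:
#   rho = (Z_load - Z_line) / (Z_load + Z_line)
def reflection_coefficient(z_line: float, z_load: float) -> float:
    return (z_load - z_line) / (z_load + z_line)

# SPDIF is a 75 ohm system; suppose a 50 ohm cable sneaks into the chain:
rho = reflection_coefficient(z_line=50.0, z_load=75.0)
print(f"reflected fraction of each edge: {rho:.2f}")   # 0.20 -> 20% bounces back
```

Those reflections smear the edges the clock recovery circuit depends on, which is the mechanism behind the "impedance mismatch -> jitter" chain above.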
Megasam: yes, all cables have some effect (i.e., error) on the signal waveforms passed through them. However, the beauty of digital communications is that, with proper system design, these analog errors can be ignored or removed at the receiving end. That is, errors are NOT necessarily additive in a chain of digital components, as they are in analog communications (in analog, the SNR must get worse with each additional component in the signal chain). Examples of how errors may be removed in the digital domain include error control coding and the reclocking of buffered data. A favorite question of mine is "how can this be so?". When faced with claims of audible differences between digital cables, some possible explanations that come to mind are a) placebo effect, b) a substandard cable has been replaced with one of proper bandwidth and impedance (and once it's "right" it can't get "better" in a digital system), or c) some sort of weird equalization of the digital waveform is going on, where the distortions of a particular cable compensate for distortions in poorly designed digital data transmitters and receivers in the transport and DAC. In digital communications, these kinds of effects are sometimes called ISI, or intersymbol interference. To me, case c) is not an acceptable state of affairs, and cable and audio equipment manufacturers should be serving us better.
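To make the "errors are not additive" point concrete, here's a toy simulation in Python. The noise level and number of hops are arbitrary assumptions; the point is that re-slicing to clean bits at each stage stops noise from accumulating, while a purely analog chain lets it pile up:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
bits = rng.integers(0, 2, size=100_000)   # the payload bits
noise_rms = 0.1                           # assumed per-hop analog noise
hops = 5                                  # assumed number of components in the chain

# Digital chain: each hop adds noise, but the receiver re-slices to clean bits.
digital = bits.astype(float)
for _ in range(hops):
    digital = ((digital + rng.normal(0, noise_rms, bits.size)) > 0.5).astype(float)

# Analog chain: the same noise simply accumulates, hop after hop.
analog = bits.astype(float)
for _ in range(hops):
    analog += rng.normal(0, noise_rms, bits.size)

print("digital chain bit errors:", np.count_nonzero(digital != bits))   # almost surely 0
print("analog chain bit errors: ", np.count_nonzero((analog > 0.5) != bits))  # ~1% of the bits
```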
Even with perfect square waves there will be jitter in a clock recovery circuit, due to the stochastic nature of the bit stream. Buffering the data and reclocking with a nice stable clock avoids the jitter problem, at the expense of some relatively long-term drift in the average sample rate to accommodate changes in the data transmission clock rate. If done properly (i.e., a large data buffer and low loop bandwidths), the time constant would be on the order of seconds. A $49 Discman CD player with "skip free" circuitry does this. So does my Levinson 360S. Now we just have to get audio manufacturers to work on the price points in between. :-)
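For a feel for the numbers, a back-of-envelope sketch in Python. The half-second buffer and 100 ppm transport error are assumptions of mine, not the specs of any actual player:

```python
fs = 44_100            # samples/sec per channel
buffer_seconds = 0.5   # assumed FIFO depth: half a second, kept about half full
ppm_error = 100e-6     # assumed transport clock error: 100 ppm

buffer_samples = fs * buffer_seconds
fifo_bytes = buffer_samples * 2 * 2                   # 2 channels x 2 bytes/sample
net_drift = fs * ppm_error                            # samples/sec of net fill or drain
slack = (buffer_samples / 2) / net_drift              # time before the loop must react

print(f"FIFO memory: {fifo_bytes / 1024:.0f} kB")     # ~86 kB
print(f"slack before over/underflow: {slack:.0f} s")  # ~2500 s
```

Even a modest buffer leaves the control loop minutes of slack, which is why the loop bandwidth can be made so low.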
I'm not denying reality, just looking for plausible explanations of it, and pointing out that the existence of a plausible explanation (other than the placebo effect) means that our beloved audio equipment has design deficiencies. Consider the case of digital cable interconnects between computer equipment: either they work right or they don't. One does not swap printer cables, for example, hoping to increase the resolution of the printed page! If it is indeed the case that digital cables have an effect on the sound, why is this an acceptable situation? The whole point of digital technology is to avoid these degradations entirely. I expect more for my dollars, and want to spend more on source material and less on "work-arounds" (e.g., esoteric cables needed to compensate for performance shortfalls in the equipment design).
bmpnyc... no offense taken. If there's a physical effect, then there's a physical cause. If there's no physical cause, there can be no physical effect. Those of us with the scientific mindset seek to comprehend the linkage. Understanding the linkage is every bit as enjoyable to me as other intangibles such as "beautiful design" and "great build quality".
Geez... sorry for the brain fart in my posting above: 1/4 of a cycle is 90 degrees of phase, not 45. Try visualizing a sine wave that's sampled 8 times per cycle and rebuilt using 8 stair steps. Now suppose that some of the stairstep risers come a teeny bit early or a teeny bit late, and you'll get the picture.
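If you'd rather compute the picture than visualize it, here's a numpy sketch of that thought experiment. The tone frequency, step count, and jitter size are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(0)
f0, fs = 1_000.0, 8_000          # a 1 kHz tone sampled 8 times per cycle
n = 4_096                        # number of stairsteps to simulate
oversample = 256                 # fine grid standing in for "analog" time
jitter_rms = 0.02 / fs           # risers early/late by 2% of a step, rms (assumed)

levels = np.sin(2 * np.pi * f0 * np.arange(n) / fs)          # correct step heights
risers = np.arange(n) / fs + rng.normal(0, jitter_rms, n)    # jittered step times
risers[0] = 0.0                  # anchor the first riser; jitter << step, so order holds

t = np.arange(n * oversample) / (fs * oversample)            # fine time grid
jittered = levels[np.searchsorted(risers, t, side="right") - 1]
ideal = levels[np.minimum((t * fs).astype(int), n - 1)]

err = jittered - ideal           # right levels, wrong riser times
print(f"jitter error sits {10 * np.log10(np.mean(ideal**2) / np.mean(err**2)):.0f} dB "
      "below the tone")
```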
The Levinson DAC provides SPDIF, AES/EBU, and Toslink inputs (and ATT, I believe). It does not support other "custom" interfaces such as I2ES, which attempt to deliver accurate, jitter-free clock signals to the DAC. Problems with the transport, such as dropped data bits, are a separate issue from the effect of digital cables. A back-of-the-envelope calculation reveals that clock jitter needs to be less than about 100 picoseconds to ensure that all jitter artifacts are at least 96 dB down from a signal at 20 kHz (the most stressing case). Bob Harley notes in his book "Guide to High End Audio" that several popular clock recovery chips, based on PLLs, produce about 3-4 nanoseconds of jitter (roughly 30 dB worse than the 100 ps reference point). Delivering a 44.1 kHz clock signal to a DAC with less than 100 ps of jitter corresponds to a stability of about 4 parts per million, which is achievable with competent design. Devices such as a Digital Time Lens, which provide an SPDIF output, remain at the mercy of clock-recovery jitter at the DAC; the best they can hope to do is stabilize the transmit-end clocking as seen by the clock recovery circuit.
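That back-of-the-envelope arithmetic is easy to reproduce in Python (the 3.5 ns value below just stands in for the 3-4 ns PLL range quoted above):

```python
import math

f = 20_000.0        # worst-case audio frequency, Hz
fs = 44_100.0       # sample rate, Hz

# Peak error from sampling a full-scale sine with timing error t_j is roughly
# the slew rate times the timing error: 2*pi*f*t_j (relative to the tone).
def artifact_db(t_j: float) -> float:
    """Jitter artifact level in dB relative to the tone itself."""
    return 20 * math.log10(2 * math.pi * f * t_j)

# Jitter that keeps artifacts 96 dB down (the 16-bit noise floor):
t_req = 10 ** (-96 / 20) / (2 * math.pi * f)
print(f"required jitter: {t_req * 1e12:.0f} ps")      # ~126 ps

print(f"3.5 ns PLL: {artifact_db(3.5e-9):.0f} dB")    # ~-67 dB, ~29 dB short of -96

# 100 ps expressed as a fraction of one 44.1 kHz clock period:
print(f"stability: {100e-12 * fs * 1e6:.1f} ppm")     # ~4.4 ppm
```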
Hi, I've been away for a while, but have been watching the pot bubble on this topic. Let's talk about jitter. Jitter is not a harmonic distortion. It is a clock timing error that introduces an effect called phase noise, either when a signal is sampled (at the A/D) or when it is reconstructed (at the D/A), or both.

Think of it this way: a sine wave goes through 360 degrees of phase over a single cycle. Suppose we were to sample a sine wave whose frequency was exactly 11.025 kHz. With a 44.1 kHz sample rate we would take exactly four samples of the sine wave every cycle, so each digital sample would represent an advance in the sine wave's phase by 90 degrees (1/4 of a cycle). The DAC clock is also supposed to run at 44.1 kHz; think of it as a "strobe" that fires every 22.676 microseconds (millionths of a second) and tells the DAC when to create an analog voltage corresponding to the digital word currently in the DAC's input register. In the case of our sine wave, this creates a stairstep approximation to the sine wave, four steps per cycle. Shannon's theorem says that by applying a perfect low-pass filter to the stairsteps, we can recover the original sine wave (let's set aside quantization error for the moment... that's a different issue).

Jitter means that these strobes don't come exactly when scheduled, but a little early or late, in random fashion. We still have a stairstep approximation to the sine wave, and the levels of the steps are right, but the "risers" between steps are a little early or late -- they aren't exactly 22.676 microseconds apart. When this stairstep is low-pass filtered, you get something that looks like a sine wave, but if you look very closely at segments of it, you will discover that they don't correspond to a sine wave at exactly 11.025 kHz, but sometimes to a sine wave at a tiny bit higher frequency, and sometimes to one at a tiny bit lower frequency. Frequency is a measure of how fast phase changes. When a stairstep riser, which corresponds to 90 degrees of phase of the sine wave in our example, comes a little early, we get an analog signal that looks like a bit of a sine wave slightly above 11.025 kHz. Conversely, if the riser is a bit late, it's as if our sine wave took a bit longer to go through 1/4 of a cycle, as if its frequency were slightly less than 11.025 kHz. You can think of this as a sort of unwanted frequency modulation, introducing broadband noise into the audio. If the jitter is uncorrelated with the signal, most of the energy is centered around the true tone frequency, falling off at lower and higher frequencies. If the jitter is correlated with the signal, peaks in the noise spectrum can occur at discrete frequencies. Of the two effects, I'd bet the latter is more noticeable and objectionable.

Where does jitter come from? It can arise when one tries to construct the DAC clock from the SPDIF signal itself. The data rate of the SPDIF signal is 2.8224 Mb/sec = 64 bits x 44,100 samples/sec (the extra bits are used for header info). The waveforms used to represent ones and zeroes are designed so that there is always a transition from high to low or low to high between bits, with a "zero" holding a constant level within its cell and a "one" containing an extra transition within it (high to low or low to high, depending on whether the previous symbol ended high or low).
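The coding scheme just described is biphase-mark; a minimal encoder sketch in Python (a toy of my own, not anyone's production code) makes the transition rules concrete:

```python
def biphase_mark(bits, level=0):
    """Two half-cells per bit: the output always flips at a bit boundary,
    and flips again mid-cell only when the bit is a one."""
    out = []
    for b in bits:
        level ^= 1            # boundary transition: present for every bit
        out.append(level)
        if b:
            level ^= 1        # extra mid-cell transition marks a '1'
        out.append(level)
    return out

print(biphase_mark([1, 0, 1, 1, 0]))
# -> [1, 0, 1, 1, 0, 1, 0, 1, 0, 0]: no run of equal half-cells longer than two,
#    so the receiver sees a guaranteed edge at least once per bit cell.
```

Two half-cells per bit at 2.8224 Mb/sec gives a 5.6448 MHz half-cell rate, which is where the spectral peak discussed next comes from.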
Writing down an analysis of this situation requires advanced mathematics, so suffice it to say that if one does a spectrum analysis of this signal (a sequence of square pulses), there will be a very strong peak at 5.6448 MHz (= 128 x 44.1 kHz). A phase-locked loop can be used to lock onto this spectral peak in an attempt to recover a 5.6448 MHz clock signal; square up the sine wave, run it through a simple 128:1 countdown divider, and out comes a 44.1 kHz clock. Simple, but the devil is in the details. The problem is that the bit stream is not a steady pattern of ones and zeroes; it's an unpredictable mix of ones and zeroes. So if we look closely at the spectrum of the SPDIF waveform we don't find a perfect tone at 5.6448 MHz, but a very high peak that falls off rapidly on either side. It has the spectrum of a jittered sine wave! This means the clock recovered from the SPDIF data stream is jittered. The jitter is there due to the fundamental randomness of the data stream, not because of imperfections in transmitting the data from transport to DAC, or cable mismatch, or dropped bits, or anything else. In other words, even if you assume PERFECT data, PERFECT cable, PERFECT transport, and PERFECT DAC, you still get jitter IF you recover the clock from the SPDIF data stream. (You won't do better using IMPERFECT components, by the way.)

The way out of the problem is not to recover the DAC clock from the data stream. Use other means. Instead of direct clock recovery, use indirect clock recovery: stuff the data into a FIFO buffer and reclock it out at 44.1 kHz USING YOUR OWN VERY STABLE (low-jitter) CLOCK -- not one derived from the SPDIF bitstream. Watch the buffer, and if it's starting to fill up, bump the DAC clock rate up a bit and start emptying the buffer faster; if the FIFO is emptying out, back the clock rate off a bit. If the transport is doing its job right, data will be coming in at a constant rate, and ideally that rate is exactly 44,100 samples per second (per channel). In reality it may be a bit off the ideal and wander around (this partly explains why different transports can "sound different" -- these errors can make the pitch slightly off, or let it wander a tiny bit). Note that recovering the DAC clock from the SPDIF data stream lets the DAC clock follow these errors in the transport's data clock rate -- the one advantage of direct clock recovery, since the DAC can never outrun or fall behind the transport. But use a big enough buffer so that the changes to the DAC clock rate don't have to happen often or be large, and even these errors are overcome. Thus indirect clock recovery avoids jitter AND overcomes transport-induced data rate errors (instead of just repeating them).

That's exactly what a small Discman-type portable CD player with skip-free circuitry does. Shock and vibration halt the data coming from the transport, so for the audio to continue there must be a large enough backlog built up in the FIFO to carry on until the mechanical servos can move the optical pickup back to the proper place. Better audio DACs, such as the Levinson 360S, use this FIFO buffering and reclocking idea to avoid jitter as well. In principle, a DAC that uses this kind of indirect clock recovery will be impervious to the electrical nuances of different digital cables meeting SPDIF interface specifications. And that's as it should be.
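To round this out, here's a control-loop sketch of that indirect clock recovery idea in Python. The buffer depth, loop gain, and 30 ppm transport error are all invented for illustration, not taken from any actual DAC:

```python
import random

fs = 44_100.0                  # nominal sample rate
depth = 32_768                 # FIFO capacity in samples (assumed)
target = depth // 2            # keep the buffer about half full
k = 0.1                        # loop gain: small => slow, gentle rate corrections

fill = float(target)
transport_rate = fs * (1 + 30e-6)          # transport clock 30 ppm fast (assumed)
dac_rate = fs

for second in range(60):                   # one adjustment per second
    transport_rate += random.uniform(-0.02, 0.02)   # transport wanders slightly
    # The DAC clock stays a clean local oscillator; only its *average* rate
    # is nudged toward whatever keeps the FIFO near half full.
    dac_rate = fs * (1 + k * (fill - target) / depth)
    fill += transport_rate - dac_rate      # net samples gained this second

print(f"fill {fill:.0f}/{depth} samples, "
      f"DAC rate {(dac_rate / fs - 1) * 1e6:+.1f} ppm off nominal")
```

The loop settles with the DAC running about 30 ppm fast, matching the transport's average rate, while the conversion clock itself never sees the SPDIF edge jitter.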