Why do digital cables sound different?


I have been talking to a few e-mail buddies and have a question that isn't being satisfactorily answered thus far. So...I'm asking the experts on the forum to pitch in. This has probably been asked before but I can't find any references for it. Can someone explain why one DIGITAL cable (coaxial, BNC, etc.) can sound different than another? There are also similar claims for Toslink. In my mind, we're just trying to move bits from one place to another. Doesn't the digital stream get reconstituted and re-clocked on the receiving end anyway? Please enlighten me and maybe send along some URLs for my edification. Thanks, Dan
danielho
Redkiwi, even if jitter did cause a problem, I do not think it would manifest itself in harmonically related distortions. I can't see it causing either higher or lower order distortions.
Actually, Redkiwi, I captured a megasample. I had a meg of acquisition memory and I filled it. After I filled the 1 meg I ceased taking data. The test took 80 minutes for all the data to be captured (106 bits every .5 sec [5 16-bit patterns plus the placeholder pattern]). I captured 16960 patterns, or words if you will, each 16 bits long.
WOW, this is a MUCH better response than I'd anticipated when I asked the first question. Thanks to everyone for the feedback. The wide spectrum of expertise in this forum is astounding! Now to stir the pot a little... Through this discussion, we have some idea of what may be going on within the cable and the interaction of the cable and sending/receiving mechanisms. However, what is the difference in the sound that is perceived? This is my thinking...If the stream is just 1's and 0's...what happens when there is degradation? How does it manifest itself? There seem to be a few phenomena that could occur...scrambled bits, missing bits, signal too low (or too high), timing errors, etc. To me, errors in the data stream would sound like dropouts/static, even wrong notes, etc., and not the same type of subtle differences as in analogue cable. However, there are reports of more bass, the highs being too bright, etc. There are even reports of digital cables with the "family" sound of the company's analogue cables. How can this be possible? As well, if no bits are lost, what difference do timing errors make? Isn't reclocking the signal part of reconstituting the sound at the receiving end? Sorry for so many questions, however this thread has been very interesting and educational! Thanks everyone!
It is asynchronous, and it's a fixed rate. In a "normal" setup, sender and receiver each have their own clocks that are supposed to work identically. I've never studied the CD interface, so somebody who has can correct me if I mis-state something, but in typical asynchronous communications, each byte has a "start" bit and a "stop" bit surrounding the eight bits of data, so timing errors would have to be severe enough to cause problems from the time the start bit occurs to the time the stop bit arrives. Since it's a fixed data rate, it takes just as many bits to transmit high frequencies as low frequencies, so the chance of error should be the same. In any case, I can't see any way that the cable would make a difference in the ultimate delivery of the bits based on timing errors as long as the cable is in the usual "good working condition".
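To make the start/stop-bit idea concrete, here is a small Python sketch of generic UART-style framing. As the post itself notes, this is typical asynchronous serial practice and may not match the actual CD interface; the function names are mine, for illustration only:

```python
def frame_byte(byte):
    """Frame one data byte for asynchronous transmission:
    a start bit (0), eight data bits LSB-first, and a stop bit (1).
    This mirrors generic UART-style framing, not necessarily S/PDIF."""
    bits = [0]                                   # start bit
    bits += [(byte >> i) & 1 for i in range(8)]  # data bits, LSB first
    bits.append(1)                               # stop bit
    return bits

def deframe(bits):
    """Recover the byte. The receiver's clock only has to stay accurate
    over these 10 bit periods; it resynchronizes on each start bit."""
    assert bits[0] == 0 and bits[9] == 1, "framing error"
    return sum(b << i for i, b in enumerate(bits[1:9]))
```

The point of the sketch is the resynchronization: because the receiver realigns its timing on every start bit, clock drift only has to stay small over ten bit periods, which is why modest timing errors don't corrupt the data.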

Or, I have no idea of what I'm talking about and would love to understand it with you, Danielho. I definitely don't have any idea how a DAC works electrically, certainly not on the analog side - to me, the problem is broken down into "chunks" where getting the samples delivered to the input of the DAC is separate from what the DAC does with that sample - I'm assuming that with a known stream of samples, the DAC will produce a known output. -Kirk

Hi, I've been away for a while, but have been watching the pot bubble on this topic. Let's talk about jitter. Jitter is not a harmonic distortion. It is a clock timing error that introduces an effect called phase noise either when a signal is sampled (at the A/D) or when a signal is reconstructed (at the D/A), or both. Think of it this way: a sine wave goes through 360 degrees of phase over a single cycle. Suppose we were to sample a sine wave whose frequency was exactly 11.025 kHz. This means that with a 44.1 kHz sample rate we would take exactly four samples of the sine wave every cycle. The digital samples would each represent an advance in the sine wave's phase by 90 degrees (1/4 of a cycle). The DAC clock is also supposed to run at 44.1 kHz; think of this as a "strobe" that occurs every 22.676 microseconds (millionths of a second) that tells the DAC when to create an analog voltage corresponding to the digital word currently in the DAC's input register. In the case of our sine wave, this creates a stairstep approximation to the sine wave, four steps per cycle. Shannon's theorem says that by applying a perfect low-pass filter to the stairsteps, we can recover the original sine wave (let's set aside quantization error for the moment... that's a different issue).

Jitter means that these strobes don't come exactly when scheduled, but a little early or late, in random fashion. We still have a stairstep approximation to the sine wave, and the levels of the stairstep are right, but the "risers" between steps are a little early or late -- they aren't exactly 22.676 microseconds apart. When this stairstep is low-pass filtered, you get something that looks like a sine wave, but if you look very closely at segments of it, you will discover that they don't correspond to a sine wave at exactly 11.025 kHz but sometimes to a sine wave at a tiny bit higher frequency, and sometimes to a sine wave at a tiny bit lower frequency.
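The "a tiny bit higher, a tiny bit lower" effect can be simulated in a few lines. This Python sketch is illustrative only (the jitter magnitude is a made-up number, not a measurement of any real DAC): it perturbs the DAC strobe instants with random timing error and computes the apparent frequency implied by each quarter-cycle segment of the 11.025 kHz tone:

```python
import random

FS = 44_100.0        # nominal DAC clock rate, Hz
F_TONE = 11_025.0    # test tone: exactly 4 samples per cycle
T = 1.0 / FS         # ideal strobe interval, ~22.676 microseconds

def strobe_times(n, jitter_rms):
    """Ideal strobe instants perturbed by Gaussian timing jitter (seconds)."""
    return [k * T + random.gauss(0.0, jitter_rms) for k in range(n)]

def apparent_freqs(times):
    """Each strobe interval carries 1/4 cycle of the tone, so each
    segment 'looks like' a sine of frequency 0.25 / interval."""
    return [0.25 / (times[k + 1] - times[k]) for k in range(len(times) - 1)]

random.seed(1)
freqs = apparent_freqs(strobe_times(1000, jitter_rms=200e-12))  # 200 ps RMS (illustrative)
# The apparent frequency wanders a little above and below 11,025 Hz,
# exactly the unwanted frequency-modulation effect described above.
print(min(freqs), max(freqs))
```

With the jitter set to zero, every segment comes out at exactly 11,025 Hz; with any nonzero jitter, the segment frequencies scatter above and below it.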
Frequency is a measure of how fast phase changes. When a stairstep riser, which in our example corresponds to 90 degrees of phase of the sine wave, comes a little early, we get an analog signal that looks like a bit of a sine wave at slightly above 11.025 kHz. Conversely, if the stairstep riser is a bit late, it's as if our sine wave took a bit longer to go through 1/4 of a cycle, as if it has a frequency slightly less than 11.025 kHz. You can think of this as a sort of unwanted frequency modulation, introducing a broadband noise in the audio. If the jitter is uncorrelated with the signal, most of the energy is centered around the true tone frequency, falling off at lower and higher frequencies. If the jitter is correlated with the signal, peaks in the noise spectrum can occur at discrete frequencies. Of the two effects, I'd bet the latter is more noticeable and objectionable. Where does jitter come from? It can come if one tries to construct the DAC clock from the SPDIF signal itself. The data rate of the SPDIF signal is 2.8224 Mb/sec = 64 bits x 44,100 samples/sec (the extra bits are used for header info). The waveforms used to represent ones and zeroes are designed so that there is always a transition from high to low or low to high from bit to bit, with a "zero" having a constant level and a "one" having within it a transition from high to low or low to high (depending on whether the previous symbol ended with a "high" or a "low"). Writing down an analysis of this situation requires advanced mathematics, so suffice it to say that if one does a spectrum analysis of this signal (comprising a sequence of square pulses), there will be a very strong peak at 5.6448 MHz (=128 x 44.1 kHz). A phase locked loop can be used to lock onto this spectrum peak in an attempt to recover a 5.6448 MHz clock signal, and if we square up the sine wave, a simple 128:1 countdown divider will produce a 44.1 kHz clock. Simple, but the devil is in the details.
The problem is that the bit stream is not a steady pattern of ones and zeroes; instead it's an unpredictable mix of ones and zeros. So if we look closely at the spectrum of the SPDIF waveform we don't find a perfect tone at 5.6448 MHz, but a very high peak that falls off rapidly with frequency. It has the spectrum of a jittered sine wave! This means the clock recovered from the SPDIF data stream is jittered. The jitter is there due to the fundamental randomness of the data stream, not because of imperfections in transmitting the data from transport to DAC, or cable mismatch, or dropped bits or anything else. In other words, even if you assume PERFECT data, PERFECT cable, PERFECT transport, and PERFECT DAC, you still get jitter IF you recover the clock from the SPDIF data stream. (You won't do better using IMPERFECT components, by the way.)

The way out of the problem is not to recover the DAC clock from the data stream. Use other means. For example, instead of direct clock recovery, use indirect clock recovery. That is, stuff the data into a FIFO buffer, and reclock it out at 44.1 kHz, USING YOUR OWN VERY STABLE (low-jitter) CLOCK -- not one derived from the SPDIF bitstream. Watch the buffer, and if it's starting to fill up, bump up the DAC clock rate a bit and start emptying the buffer faster. If the FIFO buffer is emptying out, back off the clock rate a bit. If the transport is doing its job right, data will be coming in at a constant rate, and ideally, that rate is exactly 44,100 samples per second (per channel). In reality, it may be a bit off the ideal and wander around a bit (this partly explains why different transports can "sound different" -- with these errors, the pitch may be a bit off, or wander around a tiny bit). Note that recovering the DAC clock from the SPDIF data stream allows the DAC clock to follow these errors in the transport data clock rate -- an advantage of direct clock recovery.
But use a big enough buffer so that the changes to DAC clock rate don't have to happen very often or be very big, and even these errors are overcome. Thus indirect clock recovery avoids jitter, and overcomes transport-induced data rate errors (instead of just repeating them). That's exactly what a small Discman-type portable CD player with skip-free circuitry does. Shock and vibration halt data coming from the transport, so for audio to continue, there must be a large enough backlog built up in the FIFO to carry on until the mechanical servos can move the optical pickup back to the proper place. Better audio DACs, such as the Levinson 360S, use this FIFO buffering and reclocking idea to avoid jitter, as well. In principle, a DAC that uses this kind of indirect clock recovery will be impervious to the electrical nuances of different digital cables meeting SPDIF interface specifications. And that's as it should be.
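The FIFO-and-reclock scheme described above can be sketched in a few lines of Python. Everything here (the class name, the fill thresholds, the step size) is illustrative and not taken from any actual DAC design; the point is just the servo loop: drain with your own stable clock, and nudge its rate gently to keep the buffer near half full:

```python
from collections import deque

class ReclockingBuffer:
    """Toy sketch of indirect clock recovery: samples arrive at the
    transport's (slightly wrong) rate; a local low-jitter clock drains
    the FIFO, nudging its rate to hold the fill near half full.
    Thresholds and step size are illustrative, not from a real DAC."""

    def __init__(self, capacity=1024, nominal_rate=44_100.0, step=1.0):
        self.fifo = deque()
        self.capacity = capacity
        self.rate = nominal_rate   # local DAC clock rate, Hz
        self.step = step           # how hard we nudge the clock, Hz

    def push(self, sample):
        self.fifo.append(sample)   # data in, at the transport's pace

    def pop(self):
        sample = self.fifo.popleft()
        # Servo the local clock gently toward half-full occupancy;
        # a big buffer means these corrections are rare and tiny.
        fill = len(self.fifo) / self.capacity
        if fill > 0.75:
            self.rate += self.step   # buffer filling up: drain faster
        elif fill < 0.25:
            self.rate -= self.step   # buffer running dry: drain slower
        return sample
```

A larger `capacity` relative to `step` is exactly the "big enough buffer" point above: the fill level crosses the thresholds rarely, so the DAC clock stays essentially constant and jitter-free while still tracking the transport's long-term rate.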