Does the quality of a digital signal matter?


I recently heard a demonstration in which a CD player was played with and without being supported by three Nordost Sort Kones. The difference was audible to me, but it didn't blow me away.

I was discussing the Sort Kones with a friend of mine who is an electrical engineer and also a music and audio guy. It was his opinion that these items could certainly make an improvement in an analogue signal, but shouldn't do anything for a digital signal. He said that as long as the component receiving the digital signal can recognize a 1 or a 0, the signal is successful. It's a pass/fail situation and doesn't rely on levels of quality.

An example he gave me was that we think nothing of using a cheap CD-RW drive to duplicate a CD, with no worry about the quality being reduced. If the data isn't read in full, an error is reported, so we know that the entire signal has been sent.
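
For what it's worth, here is a minimal sketch (in Python, with made-up file names) of what that pass/fail behavior looks like in practice: a copied file either hashes identically to the source or it doesn't, and there is no in-between "slightly worse" result.

```python
# Minimal sketch: a digital copy either matches the source bit for bit or it doesn't.
# The file names are hypothetical placeholders.
import hashlib

def file_hash(path, chunk_size=1 << 20):
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

original = file_hash("track01_original.wav")
copy = file_hash("track01_copy.wav")

# There is no "slightly degraded" outcome -- the comparison is pass/fail.
print("identical" if original == copy else "different")
```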

I believe he said that it's possible to show that a more expensive digital cable measures better than a cheaper one, but the end product doesn't change.

There was a test done with HDMI cables comparing cables at different prices. The only difference in picture quality was noted when a cable was defective and there was an obvious problem on the display.

I realize that most people use the analogue outputs, but for those of us who use a receiver for our D/A conversion, does the CD player's quality matter? Any thoughts?
mceljo
"I've never owned a player with one, so I assume that error handling is not an issue."

I would not assume that all devices or software programs are designed to deliver optimal sound and hence all source bits in real time.

Some may take short cuts and have less robust error correction if assuring optimal sound quality is not a primary goal.

Digital devices that are designed to enable optimal sound quality should be able to accomplish that goal by assuring that all source bits available are in fact transmitted and utilized, but there is nothing that guarantees all devices or software programs in play do this.
The points Kijanki made about timing, jitter, and reflections on impedance boundaries merit added emphasis and explanation, imo.

The S/PDIF and AES/EBU interfaces, which are most commonly used to transmit data from transport to dac, are inherently prone to jitter, meaning short-term random fluctuations in the amount of time between each of the 44,100 samples that are converted by the dac for each channel in each second (for redbook cd data).

As Kijanki stated, "Jitter creates sidebands at very low level (in order of <-60dB) but audible since not harmonically related to root frequency. With music (many frequencies) it means noise. This noise is difficult to detect because it is present only when signal is present thus manifest itself as a lack of clarity."
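
To make the sideband idea a bit more concrete, here is a rough numerical sketch. It is not a model of any real transport or dac; the tone frequency, jitter amount, and jitter modulation frequency are arbitrary illustrative values, chosen so that the resulting sidebands happen to land in the general region Kijanki mentioned.

```python
# Rough numerical sketch of sample-clock jitter producing sidebands; all numbers are illustrative.
import numpy as np

fs = 44100.0        # sample rate, Hz (redbook)
f_tone = 10000.0    # test tone, Hz
f_jit = 300.0       # jitter modulation frequency, Hz (arbitrary)
jit_amp = 20e-9     # peak timing error, seconds (20 ns, arbitrary)
n = 1 << 16

t_ideal = np.arange(n) / fs
# Each conversion happens slightly early or late instead of exactly on the ideal instant.
t_actual = t_ideal + jit_amp * np.sin(2 * np.pi * f_jit * t_ideal)
x = np.sin(2 * np.pi * f_tone * t_actual)

spectrum = np.abs(np.fft.rfft(x * np.hanning(n)))
spectrum_db = 20 * np.log10(spectrum / spectrum.max() + 1e-15)
freqs = np.fft.rfftfreq(n, 1.0 / fs)

# Levels at the tone and at the expected sidebands (f_tone +/- f_jit), roughly -64 dB here.
for f in (f_tone - f_jit, f_tone, f_tone + f_jit):
    i = int(np.argmin(np.abs(freqs - f)))
    print(f"{freqs[i]:8.1f} Hz : {spectrum_db[i]:6.1f} dB")
```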

One major contributor to jitter is electrical noise riding on the digital signal. Another is what are called vswr (voltage standing wave ratio) effects, which come into play at high frequencies (such as the frequency components of digital audio signals), and which result in some of the signal energy being reflected back toward the source whenever an impedance match (between connectors, cables, output circuits, and input circuits) is less than perfect.

Some fraction of the signal energy that is reflected back from the dac input toward the transport output will be re-reflected from the transport output or other impedance discontinuity, and arrive at the dac input at a later time than the originally incident waveform, causing distortion of the waveform. Whether or not that distortion will result in audibly significant jitter, besides being dependent on the amplitude of the re-reflections, is very much dependent on what point on the original waveform their arrival coincides with.

Therefore the LENGTH of the connecting cable can assume major importance, conceivably much more so than the quality of the cable. And in this case, shorter is not necessarily better. See this paper, which as an EE strikes me as technically plausible, and which is also supported by experimental evidence from at least one member here whose opinions I respect:

http://www.positive-feedback.com/Issue14/spdif.htm
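
As a back-of-the-envelope illustration of the timing argument, the sketch below simply compares the round-trip delay of a few cable lengths against an assumed edge risetime. The velocity factor and risetime figures are assumptions for illustration, not measurements of any particular cable or transport; whether a given length helps or hurts depends on the actual risetime and the impedance mismatches involved, which is what the linked article works through in detail.

```python
# Back-of-the-envelope timing of a re-reflection on an S/PDIF cable.
# The velocity factor and edge risetime are assumptions for illustration only.

c = 3.0e8               # speed of light, m/s
velocity_factor = 0.66  # assumed propagation velocity relative to c for a coaxial cable
risetime = 10e-9        # assumed transition time of the transport's output edge, seconds

for length_m in (0.5, 1.0, 1.5, 2.0):
    # The reflected energy travels down the cable and back before re-arriving at the dac input.
    round_trip = 2 * length_m / (c * velocity_factor)
    timing = "during" if round_trip < risetime else "after"
    print(f"{length_m:.1f} m cable: re-reflection returns {round_trip * 1e9:5.1f} ns later "
          f"({timing} an assumed {risetime * 1e9:.0f} ns edge)")
```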

Factors which determine the significance of these effects, besides cable length and quality, include the risetime and falltime of the output signal of the particular transport, the jitter rejection capabilities of the dac, the amount of electrical noise that may be generated by and picked up from other components in the system, ground offsets between the two components, the value of the logic threshold of the digital receiver chip at the input of the dac, the clock rate of the data (redbook or high rez), the degree of the impedance mismatches that are present, and many other factors.

Also, keep in mind that what we are dealing with is an audio SYSTEM, the implication being that components can interact in ways that are non-obvious and that do not directly relate to the signal path that is being considered.

For instance, physical placement of a digital component relative to analog components and cables, as well as the ac power distribution arrangement, can affect coupling of digital noise into analog circuit points, with unpredictable effects. Digital signals have substantial radio frequency content, which can couple to other parts of the system through cables, power wiring, and the air.

All of which adds up to the fact that differences can be expected, but does NOT necessarily mean that more expensive = better.

Regards,
-- Al

P.S.: I am also an EE, in my case having considerable experience designing high-speed a/d and d/a converter circuits for non-audio applications.

I may be oversimplifying this a bit, but it sounds like the proximity of the components that "read" the CD can have an effect on the analog signal created in the DAC. Would this be justification for a completely separate DAC?

How does this relate to a Toslink cable, which is optical?
Jitter is not a problem with the "digital" part of digital (the robust part). Jitter is part of the analog problem with digital and can be regarded as a D-to-A problem (or, in the studio, an A-to-D problem). It is an analog timing problem whereby distortion can be introduced at the DAC/ADC stage because of drift in the clock. To accurately convert digital to analog, or analog to digital, requires an extremely accurate clock.
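
To put rough numbers on why the conversion clock has to be so accurate (illustrative figures only, not a claim about any particular converter): for a full-scale sine at frequency f, a timing error of dt shifts the converted value by roughly 2*pi*f*dt of full scale at the steepest part of the waveform.

```python
# Rough worst-case amplitude error from a timing error at the conversion step.
# Illustrative numbers only, not a claim about any particular converter.
import math

def jitter_error_dbfs(freq_hz, timing_error_s):
    """Peak error of a full-scale sine, in dB relative to full scale, for a given timing error."""
    # The slope of sin(2*pi*f*t) peaks at 2*pi*f, so a timing slip of dt moves the
    # converted value by roughly 2*pi*f*dt of full scale near the zero crossings.
    return 20 * math.log10(2 * math.pi * freq_hz * timing_error_s)

for timing_error in (1e-9, 100e-12, 10e-12):  # 1 ns, 100 ps, 10 ps
    print(f"{timing_error * 1e12:6.0f} ps of clock error on a 20 kHz tone -> "
          f"error around {jitter_error_dbfs(20000, timing_error):6.1f} dBFS")
```

Measured against a 16-bit noise floor of roughly -98 dBFS, that is one way to see why timing accuracy in the low hundreds of picoseconds or better matters at the conversion step.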

I stand by my statement that you can copy a digital signal, then copy each subsequent copy, thousands of times over, with no degradation.

You cannot do this with any analog medium: within ten to twenty generations of copying, the degradation becomes extremely audible (or visible, in the case of a VHS cassette).
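
A small sketch of the generational-copy point, purely illustrative (the "analog" side is modeled here as nothing more than a little added noise on every pass, not as tape or VHS specifically):

```python
# Illustrative sketch of generation loss: a digital copy is exact, an "analog" copy
# (modeled simply as adding a little noise on every pass) degrades cumulatively.
import numpy as np

rng = np.random.default_rng(0)
master = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)  # one second of a 440 Hz tone

digital = master.copy()
analog = master.copy()
for _ in range(1000):
    digital = digital.copy()                               # bit-exact duplicate each generation
    analog = analog + rng.normal(0.0, 0.001, analog.size)  # each pass adds a small amount of noise

def snr_db(reference, signal):
    noise = signal - reference
    return 10 * np.log10(np.mean(reference ** 2) / np.mean(noise ** 2))

print("digital after 1000 generations is identical:", np.array_equal(digital, master))
print(f"analog after 1000 generations: SNR about {snr_db(master, analog):.0f} dB")
```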

The evidence is that digital signals are extremely robust compared to analog.