Doesn't DAC Do Heavy Lifting vs. Transport?


As I am no expert, I was wondering just what one looks for in a CD transport if one has a separate DAC. Maximum stability and minimum jitter, I assume, but beyond that? Doesn't the DAC do the heavy lifting? Does the transport impart its own influence on the sound? How? What transports are recommended under $500?
soundbit
They do make a considerable difference; unfortunately, I don't know of any good cheap ones. Last summer I took a transport and several DACs over to a friend who uses a universal player. He is an experienced audiophile and didn't expect much, if any, difference. He was surprised that he could hear a considerable difference, and after listening to several combinations he thought the transport made more difference than the DAC.
Like you, I assumed that the DAC, not the transport, did the "heavy lifting." That is, until I improved my transport with the addition of a reclocker, which lowered the jitter. The improvements were far from subtle: greater perceived resolution, less high-frequency shrillness, and better image focus.

My conclusion: The transport is very important. Is it more important than the DAC? I don't know. It probably depends on which DAC and which transport.
If the DAC is playing music from a buffer, then aren't the only requirements placed on the transport that it 1) keeps the buffer full and 2) doesn't pass incorrect data? So then, why the heck does jitter matter?
Bigbucks - I assume your question about using a buffer is for me. The point I was trying to make is that the amount of jitter in a transport is an important consideration for achieving good sound. This was an attempt to address the OP's question, "Does the transport impart its own influence on the sound?" I only mentioned the reclocker as an illustration of how jitter was audible in my system. But you are correct in pointing out that, when a reclocker is used, the jitter characteristics of the transport are less important, or perhaps irrelevant, depending upon the method of reclocking. The same is true if the DAC itself reclocks the incoming signal.
I guess I have always assumed (there it is) that an external DAC doesn't ever rely on the transport for data timing. Instead, it empties its buffer at the right time intervals according to its own clock. Therefore, the only jitter is the DAC's jitter, and the transport must be totally irrelevant. So what does your experience mean? That some DACs don't use their own clock and instead rely on the transport's clock?

I just don't understand (unfortunately).
I have owned 5 or 6 (probably more) dacs and thought each had a unique sound. In my experience the transport, which was the digital out from a cdp or dvd player, was irrelevant. At least I could hear no difference when subbing in a different transport.
I'll take it one step further and say it is the analogue output section of the dac that is most important. What I mean is that the number crunching done in the dac to produce the analogue signal is not as important as how that analogue signal is delivered. Of course YMMV.
Bigbucks5: I guess I have always assumed (there it is) that an external DAC doesn't ever rely on the transport for data timing. Instead, it empties its buffer at the right time intervals according to its own clock. Therefore, the only jitter is the DAC's jitter, and the transport must be totally irrelevant. So what does your experience mean? That some DACs don't use their own clock and instead rely on the transport's clock?
Your questions are excellent, Bigbucks, as are the OP's. The answer is that, yes, run-of-the-mill transport + dac combinations rely on the transport's clock. And because a conventional SPDIF or AES/EBU interface provides no means of slaving the timing of the transport to the timing of the dac, avoiding reliance on the transport's clock creates considerable complication and added expense.

What you are envisioning is a FIFO (First-In, First-Out) buffer in the dac component, with data being clocked into the buffer synchronously to the transport's clock (as extracted from the SPDIF or AES/EBU data stream), and data being clocked out of the buffer by a completely independent (asynchronous) clock, generated internally within the dac component.

That approach has been used in a number of high-end dacs, with some success. However, with no synchronization between the dac clock and the transport clock, unless the buffer is very large, realistic tolerances in the accuracy of those two clocks WILL cause the buffer to either empty or overflow during the hour or so of music that a cd may contain.
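
To put rough numbers on that, here is a back-of-the-envelope Python sketch. The one-second buffer depth is purely an illustrative assumption, and the mismatch figures of 1000 ppm and 50 ppm are the SPDIF tolerances discussed further below; the point is just that a modest buffer drifts empty or full well within a disc's playing time at the looser tolerance.

    # Rough sketch: how long before a FIFO between transport and dac
    # underruns or overflows when the two clocks are slightly mismatched.
    # The buffer depth and mismatch values are illustrative assumptions only.

    def minutes_until_failure(buffer_seconds, ppm_mismatch):
        # The buffer starts half full, so the fill level can drift by half
        # the depth in either direction before it runs dry or overflows.
        half_depth = buffer_seconds / 2.0
        # A mismatch of N ppm means N microseconds of audio gained or lost
        # per second of playback.
        drift_per_second = ppm_mismatch * 1e-6
        return (half_depth / drift_per_second) / 60.0

    print(minutes_until_failure(1.0, 1000))   # ~8.3 minutes at the loosest SPDIF tolerance
    print(minutes_until_failure(1.0, 50))     # ~167 minutes at the tighter 50 ppm tolerance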

Avoiding that possibility requires a large buffer (at least several seconds' worth, I believe), which increases cost and also creates a corresponding delay in the start of playback, and/or some sophisticated means of periodically re-synchronizing the two clocks to eliminate accumulated timing drift without affecting the music.

A one-box player avoids all of those difficulties: although data retrieval from the disc and clocking of the dac chip occur at completely different rates, with a buffer memory between the transport section and the dac section, all of the clocks are ultimately derived from a single internal clock generator, so there is no long-term drift between them.

The fundamental problem with two-box approaches is that the SPDIF and AES/EBU interfaces provide for clock transmission in only one direction, from transport to dac. And the fact that clock and data are combined into a single signal, requiring the clock to be extracted from that signal in the dac component, only makes matters worse in terms of providing a jitter-free clock to the dac chip.
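
To illustrate the "clock and data combined into a single signal" point: SPDIF carries its payload in biphase-mark code, in which the line toggles at every bit boundary regardless of the data, and toggles again mid-bit only for a "1", so the receiver has to recover the bit clock from those transitions. Here is a toy Python sketch of just that coding rule; it ignores SPDIF's preambles, subframe structure, and channel-status bits entirely.

    def biphase_mark_encode(bits, level=0):
        # Toy biphase-mark coder: emits two half-cells per data bit.
        # There is a transition at the start of every bit (this is what
        # carries the clock); a '1' adds a second transition at mid-bit.
        # Real SPDIF preambles deliberately violate this rule for framing
        # and are not modeled here.
        out = []
        for bit in bits:
            level ^= 1        # boundary transition, present for every bit
            out.append(level)
            if bit:
                level ^= 1    # mid-bit transition encodes a '1'
            out.append(level)
        return out

    # The receiver sees only this transition pattern and must extract both
    # the timing (clock) and the data from it.
    print(biphase_mark_encode([1, 0, 1, 1, 0]))   # -> [1, 0, 1, 1, 0, 1, 0, 1, 0, 0]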

Re asynchronous sample rate converters, and other approaches that have been used to work around these fundamental limitations, fwiw here are some words from Charles Hansen of Ayre, in this excellent white paper:
Over the years many schemes have been implemented by various manufacturers in attempts to improve the jitter performance of the S/PDIF connection, including dual PLL's, VCXO's (Voltage-Controlled Crystal Oscillators), frequency synthesizers, FIFO (First-In, First-Out) buffers for the audio data, external re-clocking ("jitter reduction") devices, and so forth. While all of these methods are able to reduce the jitter levels, they cannot eliminate the jitter that is inherently added by the S/PDIF connection. Another approach to reduce jitter that has become increasingly popular in recent years is to use an ASRC (Asynchronous Sample Rate Converter) chip. The idea is that the original audio data is replaced with newly calculated data that represents what the audio data would have been if the incoming signal had most of the jitter filtered out. The technical theory behind this method is sound, as demonstrated by the measured performance, which is generally quite good. However the audible performance of these devices is controversial, and Ayre has avoided this approach as it completely discards the original audio data.
On another note, happy 2010 to all!

-- Al
Following up on my previous post, the specification for consumer SPDIF, IEC 60958-3, defines a "normal" tolerance for long-term timing accuracy of 1000 parts per million, or 0.1%. That corresponds to a maximum error across a 74-minute cd of 4.44 seconds. Blech! A $10 Timex watch does vastly better than that.
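
If it helps, the arithmetic behind that 4.44 second figure is just the tolerance multiplied by the playing time (a quick Python check):

    tolerance = 1000e-6       # 1000 parts per million = 0.1%
    playing_time = 74 * 60    # a 74-minute cd, in seconds
    print(playing_time * tolerance)   # -> 4.44 seconds of worst-case accumulated error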

There is also a spec for a "high accuracy" mode of 50 ppm. I don't know how widely that is used, but in any event, for a dac design not to be locked into use with only certain transports, it would have to be compatible with the 1000 ppm tolerance.

So a FIFO-based approach in the DAC would have to provide a FIFO large enough to store 4.44 seconds' worth of data (at 44,100 samples per second x 2 channels x 16 bits/channel for redbook; more for hi-res formats), plus an additional allowance for its own time-base inaccuracy, plus some margin. That means the minimum FIFO size would probably have to approach something like 10 seconds' worth of storage. That would introduce a corresponding delay in playback, measured from when the transport has moved to the proper track and started to send data. And that delay would be repeatedly incurred each time a new track is selected.
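
For a feel for the amount of memory that implies, here is a quick Python calculation using the redbook figures above; the 10-second figure is, again, just my rough estimate.

    SAMPLE_RATE = 44100        # samples per second, per channel (redbook)
    CHANNELS = 2
    BYTES_PER_SAMPLE = 2       # 16 bits per channel

    def fifo_bytes(seconds):
        return int(seconds * SAMPLE_RATE * CHANNELS * BYTES_PER_SAMPLE)

    print(fifo_bytes(4.44))    # -> 783216 bytes just to cover the SPDIF drift
    print(fifo_bytes(10))      # -> 1764000 bytes (about 1.7 MB) for the 10-second estimate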

So you can see that there are definite drawbacks to that approach, resulting from fundamental limitations in the SPDIF specifications, which were most likely driven by cost considerations within the constraints of early 1980s technology.

Regards,
-- Al
Slight correction to my previous post: With a FIFO-based approach, if 10 seconds of storage were nominally required, the FIFO itself would probably be sized to hold 20 seconds' worth of data, and playback would commence when it filled up halfway. That would allow accommodation of timing differences in either direction (i.e., the clock to the dac chip being either faster or slower than the clock from the transport).
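
Put differently, the total depth works out to roughly twice the one-sided allowance, since the fill level has to be able to drift either way from the half-full starting point. A trivial Python sketch of that sizing rule, using the same illustrative 10-second allowance as before:

    def size_fifo(one_sided_allowance_seconds):
        # Size the FIFO at twice the worst-case one-sided drift and start
        # playback once it is half full, so the fill level can wander in
        # either direction without underrunning or overflowing.
        total_depth = 2 * one_sided_allowance_seconds
        start_threshold = total_depth / 2
        return total_depth, start_threshold

    # ~10 s allowance (SPDIF drift + the dac's own tolerance + margin)
    print(size_fifo(10))   # -> (20, 10.0): a 20-second FIFO, playback starting at 10 s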

Regards,
-- Al