Jt, rather than clog up your inbox and make several individual replies, how about posting your comments here? I for one would like to know what you have gotten such good results with, especially since both my brother and I, along with several others here, are running EVS DACs.
Before you comment, you should be aware that there is NO such thing as a 75 ohm RCA connection. This is true regardless of what Canare states. Nor is the nominal impedance of the wiring from the digital input to the circuit board inside the EVS 75 ohms. Nonetheless, you may have stumbled onto something in terms of increased power transfer / minimized loss / reduced standing waves. The last part (reduced standing waves) is directly related to a reduction in jitter. Sean >
|
Hi,
Digital cables can make a HUGE difference. But all those fancy multi-megabuck cables with exotic materials won't buy you much unless they also solve the key technical problems. At high frequencies, these cables MUST be impedance matched, i.e., coax connections should be 75 ohm. If not, reflections will occur and induce jitter-related distortions, and this IS what differentiates these cables, along with dielectric absorption and improper shielding. It is my feeling that cables that sound sharper, cleaner, more resolved are simply reducing the jitter component compared to others. If your DAC reclocks, then these differences are less apparent. If it does not, as my EVS doesn't, you will be blown away.
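To put a rough number on the impedance-matching point, here is a minimal Python sketch; the 75 ohm line is the SPDIF spec, while the example load impedances are just illustrative assumptions, not measurements of any real input.

```python
# Rough sketch: fraction of an SPDIF edge reflected at an impedance mismatch.
# Z0 = 75 ohms per the SPDIF spec; the load values below are made-up examples.
def reflection_coefficient(z_load, z0=75.0):
    """Voltage reflection coefficient at the cable/receiver boundary."""
    return (z_load - z0) / (z_load + z0)

for z_load in (75.0, 50.0, 110.0):   # matched, RCA-ish, AES/EBU-ish guesses
    gamma = reflection_coefficient(z_load)
    print(f"Z_load = {z_load:5.1f} ohm -> {abs(gamma) * 100:4.1f}% of the edge reflected")
```

A reflected edge that arrives back at the receiver while a later transition is being sliced can shift the apparent crossing time, which is one plausible mismatch-to-jitter mechanism.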
When I went from a rather expensive "fancy" cable to a cheapo RG6U (quad-shielded, 75 ohm) cable (email me if you're interested in what this was), the differences were as big as the difference between various DACs... not insignificant. I love it when something cheap does the same job as something much more expensive, and sometimes better.
Good luck |
Due to different configurations they act as tone controls. And I DOOOOO hear quite a difference in some. Most to me sound way too bright. Some interconnects produce quite a difference in the sound of a system. Price does not seem to me to be a big factor; rather, it is finding one that shapes the sound to my ears' preference. I rewired a system for a friend who thought he had good sound (very harsh). The results were jaw-dropping even to him (tin ears). His ears at least are better than Julian H.'s. He cost me a lot of money buying on his recommendations. YIKES!!!! |
Audioengr (hi!!),
What part do you think VSWR and impedance mismatches between source, cable and load play when it comes to transferring signals from transport to DAC? Since we are dealing with an RF-based signal, transmission line theory DOES directly apply here.
Your statements about low dielectric absorption intrigue me. Shouldn't a cable that has a higher velocity factor be less prone to signal deterioration / absorption due to the signal spending less time in the cable? If that is correct, then a cable that is "slower", such as those that make use of a Teflon dielectric configured in a standard coaxial design, would be a relatively poor performer. If that is the case, cheaper "foam" insulated cables "should" perform approximately 13% better than more costly Teflon versions given equivalent conductor materials. Any thoughts or comments on this? Sean > |
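For what it's worth, the velocity-factor arithmetic Sean is describing works out like this; the velocity factors below are typical-ish datasheet values assumed for illustration, not figures for any specific cable.

```python
# Propagation delay per metre for two assumed velocity factors.
C = 299_792_458.0   # speed of light in vacuum, m/s

def delay_ns_per_m(velocity_factor):
    return 1e9 / (C * velocity_factor)

foam = delay_ns_per_m(0.80)   # foamed polyethylene, assumed VF
ptfe = delay_ns_per_m(0.70)   # solid PTFE coax, assumed VF
print(f"foam PE: {foam:.2f} ns/m")
print(f"PTFE   : {ptfe:.2f} ns/m")
print(f"the 'slower' cable holds the signal about {(ptfe / foam - 1) * 100:.0f}% longer")
```

Whether that extra dwell time in the dielectric actually translates into audible loss is exactly the open question in the post above.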
Whether different digital cables sound different is very dependent on the DAC design. Digital cables will typically NOT drop bits. Any degradation has to do with jitter that is introduced due to the cable losses. DACs that completely re-clock the data stream will be much less sensitive to the quality of the digital cable and any jitter that the cable causes. However, most DACs are not like this and therefore have some sensitivity to jitter.
Digital interconnects that exhibit low capacitance, low dielectric absorption and generally low losses will introduce the least jitter into the digital bit-stream.
This is why they sound different. |
Daniel, I've just posted a reply saying that if you cannot hear a difference between a Cardas Highspeed and a Cardas Lightning 15 digital cable, you have a lo-fi system. I would like to retract this statement. What I really meant was, if you cannot hear any difference between these two digital cables, then the question of whether different digital cables sound different should not concern you, because you are one of those few in the CANNOT HEAR DIFFERENCE CAMP. In that case, the question of why digital cables would sound different should not concern you either. Go try out these cables and see if they sound different to you, and then pursue the balance of your question as to what makes them sound different. |
Sorry, I guess we got a little carried away with the technical aspects, but I learned a lot in this thread and also thank those who enlightened me. The music quickens my pulse, but I also enjoy the understanding of how it all comes together. |
Thank you Bluesman, 1439, Gmkowal, Redkiwi and all of the above. The technical data strained my patience, but was well worth following. I am sure that some mysteries will always remain, but with things being this complicated, I will have to continue using my ears as my guide. I wonder if the sound that I prefer would show a smaller amount of the problems pointed out here? Very interesting how small pitch shifts affect the soundstage, Bluesman. |
Blues, You are absolutely correct! The purpose of my test was not to simulate the real-time data transmission of a digital audio playback system but merely to prove to a poster that the digital cable has little to do with making bits fall out. I chose to do an experiment so I would have physical evidence for what I was claiming, and it was fun to do. I do not have the time nor the equipment to simulate a real-life situation. I also agree that jitter is not a major contributor. The only thing I do take exception with is your claim that a clock is needed for jitter measurement. A spectrum analyzer can be used to measure phase noise, and a calculation can be made to get jitter from the phase noise measurement. The calculation is well documented and the Ma Bells of the world have been using this method for years. Thanks for your post and happy listening. |
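For the curious, the phase-noise-to-jitter conversion being referred to is roughly the following; this is only a sketch, and the phase-noise readings in it are invented placeholders meant to show the arithmetic, not a measurement of anything.

```python
# Sketch: converting single-sideband phase noise L(f), in dBc/Hz, to RMS jitter.
import math

f0 = 44_100.0   # carrier (word clock) frequency, Hz

# (offset_Hz, L(f) in dBc/Hz) pairs -- hypothetical spectrum-analyzer readings
phase_noise = [(10, -90.0), (100, -110.0), (1_000, -125.0), (10_000, -135.0)]

# Crude trapezoidal integration of L(f) between the measured offsets.
area = 0.0
for (f1, l1), (f2, l2) in zip(phase_noise, phase_noise[1:]):
    s1, s2 = 10 ** (l1 / 10), 10 ** (l2 / 10)   # dBc/Hz -> linear ratio per Hz
    area += 0.5 * (s1 + s2) * (f2 - f1)         # rad^2, single sideband

rms_phase = math.sqrt(2.0 * area)               # include both sidebands
rms_jitter = rms_phase / (2 * math.pi * f0)     # radians -> seconds
print(f"RMS jitter ~ {rms_jitter * 1e12:.0f} ps")
```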
Gmkowal, two points here. It ought to be obvious that your test is inadequate; simply use a CD source. It should take about 4 seconds, if memory serves me, to fill a 1 meg buffer with SPDIF data. That's real time. When I say jitter cannot be measured accurately, I'm simply pointing out that any device that measures jitter has to have a clock, that clock has jitter, so your measurement has to be off by some amount. That amount may be small but it always exists. My point was that many claims of jitter reduction devices are total BS. 1439bhr, I thought the Levinson also had a proprietary data transfer interface. My point was that some component combinations do, and they seem to do the best job. My points were simply that jitter reduction devices are a poor substitute for a good implementation between a DAC and transport. They may even increase jitter. My second point is that the jitter on most "good" systems is low enough not to be a major factor in the sonic degradation. I believe jitter is third behind errors in the transport and signal loss at high frequencies. |
The Levinson DAC provides SPDIF, AES/EBU, and Toslink input (and ATT, I believe) interfaces. It does not support other "custom" interfaces such as I2ES, which attempt to deliver accurate, jitter-free clock signals to the DAC. Problems with the transport such as dropped data bits, etc., are a separate issue from the effect of digital cables. A back-of-the-envelope calculation reveals that clock jitter needs to be less than 100 picoseconds to ensure that all jitter artifacts are at least 96 dB down from a signal at 20 kHz (the most stressing case). Bob Harley notes in his book "Guide to High End Audio" that several popular clock recovery chips, based on PLLs, produce about 3-4 nanoseconds of jitter (roughly 30 dB worse than the 100 ps reference point). Delivering a 44.1 kHz clock signal to a DAC while assuring jitter less than 100 ps requires a clock stability of about 4 parts per million, which is achievable with competent design. Devices such as a Digital Time Lens, which provide an SPDIF output, remain at the mercy of clock recovery jitter at the DAC. The best they can hope to do is to stabilize the transmit-end clocking as seen by the clock recovery circuit. |
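A quick check of those two back-of-the-envelope numbers in Python, under the same assumptions as the post (full-scale 20 kHz tone, 100 ps of jitter, 44.1 kHz word clock, and the usual approximation that the error from a timing slip is about slew rate times the slip):

```python
import math

f_sig = 20_000.0    # worst-case audio frequency, Hz
t_j   = 100e-12     # clock jitter, seconds
fs    = 44_100.0    # word clock, Hz

# Peak error relative to a full-scale sine: about 2*pi*f*t_j
error_rel = 2 * math.pi * f_sig * t_j
print(f"jitter artifacts ~ {-20 * math.log10(error_rel):.0f} dB below a 20 kHz tone")
print(f"100 ps on a {1e6 / fs:.2f} us clock period = {t_j * fs * 1e6:.1f} ppm")
```

That comes out to roughly 98 dB down and about 4.4 ppm, consistent with the figures quoted above.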
I have read your post and cannot dispute your claims, with the exception of one. Jitter is not very difficult to measure accurately if you understand the measurement concept. Jitter measurements can be made simply with a spectrum analyzer and a calculation. All that is needed to convert phase noise to jitter is a simple calculation. A spectrum analyzer can be used to measure phase noise if the device to be tested has little or no AM, because the analyzer cannot tell the difference between AM and phase noise.
Using a FIFO and reclocking is not the end-all and is not so simple. When the Genesis Digital Lens came out, Bob Harley assumed that all transports would sound exactly the same. He found that he was wrong. Rather than reclock, the best solution, given the current CD methodology, is to have a custom data transfer mechanism like Levinson and Spectral. I believe that the SPDIF interface is inadequate for high quality playback. I did some testing about 7-8 years ago to prove that there was no difference in transports or digital cables and published that data on the net. At the time I had only worked with proprietary digital record and playback protocols and was not that familiar with the CD playback mechanism. My tests proved just the opposite of what I thought. Not only that, but subjectively the tests seemed to show that jitter was not the most important of the digital flaws; both read errors from discs and signal loss of high frequencies seemed to be more of a problem. While Discman buffering is universal and basically required for a device that bounces around, every one I've ever owned always said to turn off the buffering when not needed because the sound was not as good as the direct signal. I still believe that the fewer components in between the transport and the DAC, the better. Buffering of CD data has all sorts of other issues that need to be handled, like skipping forward / back etc., that make it impractical, since you always have to handle the case when the buffer is empty. All the systems I have designed, and most commercial systems with proprietary protocols in general, send raw data to the DAC; the DAC interface has the only clock, and jitter is at an absolute minimum. Jitter is a fact of life in any digital medium. The use of the clock has been there since digital technologies in the 60s. I always get a kick out of manufacturers who claim jitter reduction devices that produce 0 (zero) jitter. It's physically impossible to accurately measure jitter anyway. We can improve the accuracy, but there is always some small error in the measurement. This error will decrease as technology improves. By the way, jitter errors have a much more detrimental effect than just causing pitch errors. Lack of perfect integrity of the audio signal affects soundstage and imaging, and if jitter is so bad that the DAC syncs incorrectly on the signal, severe static can be produced. See my previous postings. |
1439bhr, I've got a question I am sure you can answer. Does the low-pass filter on the output of the DAC also handle any aliasing that may be present? Also, how much phase noise is acceptable in the system before there is any degradation, and does phase noise closer to the carrier or farther away from the carrier seem to have the most effect? Thanks in advance for your response. |
Nice explanation, 1439bhr! You seem to be the first poster who really knows completely what they are talking about (I include myself in the uninformed). Very well put and easily understandable. I did catch the 1/4 cycle, but I knew what you meant. Great job! |
Geez... sorry for the brain fart in my posting above. 1/4 of a cycle is 90 degrees of phase, not 45. Try visualizing a sine wave that's sampled 8 times per cycle and rebuilt using 8 stair steps. Now suppose that some of the stairstep risers come a teeny bit early or a teeny bit late, and you'll get the picture.
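If anyone wants to put numbers on that mental picture, here is a rough Python sketch of a stairstep rebuilt with slightly early/late risers; the tone, sample rate and jitter amount are arbitrary, and the jitter is grossly exaggerated so the effect shows up in a short run.

```python
import math, random

random.seed(0)
f_sig, fs = 1_000.0, 8_000.0   # 8 samples per cycle, as in the example above
sigma = 5e-6                   # RMS riser timing error, seconds (exaggerated)
period = 1.0 / fs
n = 800                        # 100 cycles of the tone

levels = [math.sin(2 * math.pi * f_sig * k * period) for k in range(n)]
on_time = [k * period for k in range(n)]                     # ideal risers
wobbly = [t + random.gauss(0.0, sigma) for t in on_time]     # jittered risers

# Walk a fine time grid and compare which step each stairstep is sitting on.
err2 = sig2 = 0.0
i = j = 0
steps = 500                    # grid points per sample period
for m in range(1, (n - 1) * steps):
    t = m * period / steps
    while i + 1 < n and on_time[i + 1] <= t:
        i += 1
    while j + 1 < n and wobbly[j + 1] <= t:
        j += 1
    err2 += (levels[i] - levels[j]) ** 2
    sig2 += levels[i] ** 2

print(f"raw stairstep error is {10 * math.log10(err2 / sig2):.0f} dB relative to the signal")
```

The raw number before any low-pass filtering isn't meant to be taken literally; the point is only that timing errors alone, with every level exactly right, still leave an error signal behind.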
Hi, I've been away for a while, but have been watching the pot bubble on this topic. Let's talk about jitter. Jitter is not a harmonic distortion. It is a clock timing error that introduces an effect called phase noise either when a signal is sampled (at the A/D) or when a signal is reconstructed (at the D/A), or both. Think of it this way: a sine wave goes through 360 degrees of phase over a single cycle. Suppose we were to sample a sine wave whose frequency was exactly 11.025 kHz. This means that with a 44.1 kHz sample rate we would take exactly four samples of the sine wave every cycle. The digital samples would each represent an advance in the sine wave's phase by 45 degrees (1/4 of a cycle). The DAC clock is also supposed to run at 44.1 kHz; think of this as a "strobe" that occurs every 22.676 microseconds (millionths of a second) that tells the DAC when to create an analog voltage corresponding to the digital word currently in the DAC's input register. In the case of our sine wave, this creates a stairstep approximation to the sine wave, four steps per cycle. Shannon's theorem says that by applying a perfect low pass filter to the stairsteps, we can recover the original sine wave (let's set aside quantization error for the moment... that's a different issue).

Jitter means that these strobes don't come exactly when scheduled, but a little early or late, in random fashion. We still have a stairstep approximation to the sine wave, and the levels of the stair steps are right, but the "risers" between steps are a little early or late -- they aren't exactly 22.676 microseconds apart. When this stairstep is lowpass filtered, you get something that looks like a sine wave, but if you look very closely at segments of the sine wave, you will discover that they don't correspond to a sine wave at exactly 11.025 kHz but sometimes to a sine wave at a tiny bit higher frequency, and sometimes to a sine wave at a tiny bit lower frequency. Frequency is a measure of how fast phase changes. When a stairstep riser, which corresponds to 45 degrees of phase of the sine wave in our example, comes a little early, we get an analog signal that looks like a bit of a sine wave at slightly above 11.025 kHz. Conversely, if the stairstep riser is a bit late, it's as if our sine wave took a bit longer to go through 1/4 of a cycle, as if it has a frequency slightly less than 11.025 kHz. You can think of this as a sort of unwanted frequency modulation, introducing a broadband noise in the audio. If the jitter is uncorrelated with the signal, most of the energy is centered around the true tone frequency, falling off at lower and higher frequencies. If the jitter is correlated with the signal, peaks in the noise spectrum can occur at discrete frequencies. Of the two effects, I'd bet the latter is more noticeable and objectionable.

Where does jitter come from? It can come if one tries to construct the DAC clock from the SPDIF signal itself. The data rate of the SPDIF signal is 2.8224 Mb/sec = 64 bits x 44,100 samples/sec (the extra bits are used for header info). The waveforms used to represent ones and zeroes are designed so that there is always a transition from high to low or low to high from bit to bit, with a "zero" having a constant level and a "one" having within it a transition from high to low or low to high (depending on whether the previous symbol ended with a "high" or a "low").
Writing down an analysis of this situation requires advanced mathematics, so suffice it to say that if one does a spectrum analysis of this signal (comprising a sequence of square pulses), there will be a very strong peak at 5.6448 MHz (=128 x 44.1 kHz). A phase locked loop can be used to lock onto this spectral peak in an attempt to recover a 5.6448 MHz clock signal, and squaring up the sine wave and using a simple 128:1 countdown divider would produce a 44.1 kHz clock. Simple, but the devil is in the details. The problem is that the bit stream is not a steady pattern of ones and zeroes; instead it's an unpredictable mix of ones and zeros. So if we look closely at the spectrum of the SPDIF waveform we don't find a perfect tone at 5.6448 MHz, but a very high peak that falls off rapidly with frequency. It has the spectrum of a jittered sine wave! This means the clock recovered from the SPDIF data stream is jittered. The jitter is there due to the fundamental randomness of the data stream, not because of imperfections in transmitting the data from transport to DAC, or cable mismatch, or dropped bits or anything else. In other words, even if you assume PERFECT data, PERFECT cable, PERFECT transport, and PERFECT DAC, you still get jitter IF you recover the clock from the SPDIF data stream. (You won't do better using IMPERFECT components, by the way.)

The way out of the problem is not to recover the DAC clock from the data stream. Use other means. For example, instead of direct clock recovery, use indirect clock recovery. That is, stuff the data into a FIFO buffer, and reclock it out at 44.1 kHz, USING YOUR OWN VERY STABLE (low-jitter) CLOCK -- not one derived from the SPDIF bitstream. Watch the buffer, and if it's starting to fill up, bump up the DAC clock rate a bit and start emptying the buffer faster. If the FIFO buffer is emptying out, back off the clock rate a bit. If the transport is doing its job right, data will be coming in at a constant rate, and ideally, that rate is exactly 44,100 samples per second (per channel). In reality, it may be a bit off the ideal and wander around a bit (this partly explains why different transports can "sound different" -- these errors mean the pitch may be a bit off, or wander around a tiny bit). Note that recovering the DAC clock from the SPDIF data stream allows the DAC clock to follow these errors in the transport data clock rate -- an advantage of direct clock recovery. But use a big enough buffer so that the changes to DAC clock rate don't have to happen very often or be very big, and even these errors are overcome. Thus indirect clock recovery avoids jitter, and overcomes transport-induced data rate errors (instead of just repeating them). That's exactly what a small Discman-type portable CD player with skip-free circuitry does. Shock and vibration halt data coming from the transport, so for audio to continue, there must be a large enough backlog built up in the FIFO to carry on until the mechanical servos can move the optical pickup back to the proper place. Better audio DACs, such as the Levinson 360S, use this FIFO buffering and reclocking idea to avoid jitter as well. In principle, a DAC that uses this kind of indirect clock recovery will be impervious to the electrical nuances of different digital cables meeting SPDIF interface specifications. And that's as it should be. |
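Here is a toy Python sketch of that indirect clock recovery loop, just to show the control idea; the buffer depth, fill thresholds and trim step are invented for illustration and are not how any particular DAC actually does it.

```python
from collections import deque

class ReclockingFifo:
    """Toy model: incoming words pile into a FIFO; a local clock is trimmed
    by tiny amounts to keep the FIFO near half full, so output timing is set
    by the stable local clock rather than by the incoming SPDIF stream."""

    def __init__(self, depth=8192, nominal_fs=44_100.0, trim_ppm=50):
        self.buf = deque()
        self.depth = depth
        self.fs = nominal_fs                      # local DAC clock, Hz
        self.trim = nominal_fs * trim_ppm * 1e-6  # one adjustment step, Hz

    def push(self, sample):
        """Words arriving from the transport, at whatever rate it produces."""
        self.buf.append(sample)

    def pop(self):
        """Called once per local clock tick; also nudges the clock rate."""
        fill = len(self.buf) / self.depth
        if fill > 0.75:          # backlog growing -> run a hair faster
            self.fs += self.trim
        elif fill < 0.25:        # backlog shrinking -> back off a hair
            self.fs -= self.trim
        return self.buf.popleft() if self.buf else 0   # underrun -> silence
```

The bigger the buffer, the smaller and rarer the trims can be, which is the point the post makes about overcoming transport rate wander without passing its timing errors on to the DAC.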
It is asynchronous, and it's fixed rate. In a "normal" setup, sender and receiver each have their own clocks that are supposed to work identically. I've never studied the CD interface, so somebody who has can correct me if I misstate something, but in typical asynchronous communications, each byte has a "start" bit and a "stop" bit surrounding the eight bits of data, so timing errors would have to be severe enough to cause problems from the time the start bit occurs to the time the stop bit arrives. Since it's a fixed data rate, it takes just as many bits to transmit high frequencies as low frequencies, so the chance of error should be the same. In any case, I can't see any way that the cable would make a difference in the ultimate delivery of the bits based on timing errors as long as it's in the usual "good working condition". Or, I have no idea of what I'm talking about and would love to understand it with you, Danielho. I definitely don't have any idea how a DAC works electrically, certainly not on the analog side - to me, the problem is broken down into "chunks" where getting the samples delivered to the input of the DAC is separate from what the DAC does with that sample - I'm assuming that with a known stream of samples, the DAC will produce a known output. -Kirk |
WOW, this is a MUCH better response than I'd anticipated when I asked the first question. Thanks to everyone for the feedback. The wide spectrum of expertise in this forum is astounding! Now to stir the pot a little... Through this discussion, we have some idea of what may be going on within the cable and the interaction of the cable and sending/receiving mechanisms. However, what is the difference in the sound that is perceived? This is my thinking... If the stream is just 1's and 0's, what happens when there is degradation? How does it manifest itself? There seem to be a few phenomena that could occur: scrambled bits, missing bits, signal too low (or too high), timing errors, etc. To me, errors in the data stream would sound like dropouts/static, even wrong notes, etc., and not the same type of subtle differences as with analogue cable. However, there are reports of more bass, the highs being too bright, etc. There are even reports of digital cables with the "family" sound of the company's analogue cables. How can this be possible? As well, if no bits are lost, what difference do timing errors make? Isn't part of the reconstitution of the sound at the receiving end reclocking the signal? Sorry for so many questions; this thread has been very interesting and educational! Thanks everyone! |
Actually, Redkiwi, I captured a megasample. I had a meg of acquisition memory and I filled it. After I filled the 1 meg I ceased taking data. The test took 80 minutes for all the data to be captured (106 bits every 0.5 sec [five 16-bit patterns plus the placeholder pattern]). I captured 16,960 patterns, or words if you will, each 16 bits long. |
Redkiwi, even if jitter did cause a problem, I do not think it would manifest itself in harmonically related distortions. I can't see it causing either higher or lower order distortions. |
Redkiwi, I am not a designer of digital audio playback devices, but I thought that the data sent to the DAC is asynchronous. If that is true, where are the timing errors? Is the data clocked from the transport to the DAC or is it really asynchronous? I do not know, but if it is async, then the statement about arriving with perfect timing does not make sense to me. Is it the timing between bits that you are talking about? If that is the case, how can the cable change the timing between the bits or even cause jitter? |
On someone's recommendation I purchased a RadShack digital RCA cable. They claimed it was very good; it was so bad there was audible static. I worked with an engineer once who screwed up the implementation of my DAC driver so that it was always 2 bits off on each word. This causes the static. On most protocols I've worked with, if they don't sync up they just drop the data. I assume that the thought here is that dropping a sample here or there is no big deal. I'm pretty sure they didn't realize how audible the inadequacies were going to be. If they really wanted to do this right, they should have gone with laser pickup on analog discs. |
Now you have got me, Kthomas - I don't know, and have not done much work on it since I cannot change how DACs work. I have only got so far as observing how transmission interfaces are subject to jitter and how jitter results in harmonic distortion. I suspect you are not quite right about the DAC getting bits wrong, and it is more about how the DAC converts the digits to an analogue waveform and how this is upset by timing errors in the arrival of the digits. I surmise that the DAC cannot construct an accurate analogue waveform from a datastream containing jitter errors without a perfect buffering system. One can imagine how, with no buffering at all, a DAC would find it difficult to create a perfect analogue waveform with digits arriving with imperfect timing. When I add to this the fact that I have never heard a buffering system that eliminates upstream jitter (they just reduce it or change it), then I can intuitively imagine how the problem arises. If I understood more about how buffering systems fail to work perfectly then I might have a better answer. |
I'm still missing something, but I've been known to be dense before - my take on timing errors would be that they cause bits to be misinterpreted - since sender and receiver have subtly different experiences with their independent clocking mechanisms, the receiver interprets things slightly askew and gets a different "answer" than the sender sent. The result would be different bits fed into the DAC than if there was a perfect transfer. Is there something to "timing errors" beyond this that I'm missing? |
Multiply that by 16 bits per word, Gmkowal. I guess for the fourth time, the issue is not dropping bits, but that timing errors (not bit errors) cause harmonic distortion at the output of the DAC chip. Your investigations should focus on this phenomenon - i.e. varying the jitter at the input (while leaving the bits the same) and measuring the change in harmonic distortion at the output.
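A small Python sketch of the kind of experiment being suggested: keep the sample values exactly the same, wobble only their timing in sympathy with the signal, and look at the spectrum. The jitter amount and the FFT setup are arbitrary assumptions for illustration.

```python
import cmath, math

fs, n = 44_100.0, 4410        # FFT length chosen so the tone lands on a bin
f_sig = fs / n * 100          # 1000 Hz, exactly 100 FFT bins
a_jit = 2e-9                  # 2 ns of signal-correlated timing error (assumed)

# Model the reconstructed output as the signal evaluated at wobbled instants:
# same sample values, wrong times.
x = [math.sin(2 * math.pi * f_sig *
              (k / fs + a_jit * math.sin(2 * math.pi * f_sig * k / fs)))
     for k in range(n)]

def bin_db(m):
    """Level of DFT bin m in dB relative to a full-scale sine."""
    s = sum(x[k] * cmath.exp(-2j * math.pi * m * k / n) for k in range(n))
    return 20 * math.log10(abs(s) / (n / 2) + 1e-30)

fund = bin_db(100)
print(f"fundamental : {fund:7.1f} dB")
print(f"2nd harmonic: {bin_db(200) - fund:7.1f} dBc")
```

With jitter correlated to the signal like this, the distortion shows up as discrete harmonics; uncorrelated jitter would instead raise a broadband noise floor, as described elsewhere in the thread.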
One more thing. The rep rate of the bits in each burst was 10 kHz. OK, I should have set the rate to 40 kHz, but I still doubt I would have lost any data, as this is a pretty low frequency for these cables, or at least 2 of the 3 cables. |
All I did was set up a burst of each pattern plus a signature pattern (to be used in the little Visual Basic program I wrote) and had the pattern generator resend the sequence over and over until I filled up the acquisition memory of the analyzer (it took some time so I did some real work also). However many samples it took to fill up the acquisition memory is how many samples I got. I did not count how many patterns I captured, but it was a lot! I used a Tektronix TLA704 analyzer, so I stored the results on the analyzer's hard disk. Tek's analyzer also runs Windows, so I wrote the Visual Basic program right on the analyzer. I did a simple compare of the 5 bursts I sent (16 bits of all 1's, all 0's, etc.) until I hit the signature burst, then I started the compare over again and repeated the process until the software could not find the signature burst. If I got a situation where it did not compare, I incremented a counter in my program. The counter never incremented, so there were no errors. To test my software I buggered up my captured data file a bit and the program caught it. It was really an easy experiment, but I probably spent more time on it than I should have at work, though I trust nobody will tell my boss. As a designer of these types of systems I am sure you have run similar tests. While this is not my forte (I do DSP design), it sounded like a valid test and you challenged me to the task.
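For anyone who wants to repeat the comparison on their own capture, here is a rough Python analogue of the compare program described above. The five patterns come from the test description in this thread, but their exact bit ordering, the signature word, the file name and the one-word-per-line file format are all assumptions, so adjust to taste.

```python
# Count words in a captured ASCII dump that don't match the transmitted
# sequence of five 16-bit test patterns.
EXPECTED = [
    "1010101010101010",   # alternating 1s and 0s
    "0000000000000000",   # all 0s
    "1111111111111111",   # all 1s
    "0110111011001111",   # the arbitrary pattern
    "1001000100110000",   # ...and its complement
]
SIGNATURE = "1111000011110000"   # hypothetical marker word between sequences

def count_errors(path):
    errors, idx = 0, 0
    with open(path) as f:
        for line in f:
            word = line.strip()
            if not word:
                continue
            if word == SIGNATURE:        # end of one five-word sequence
                idx = 0
                continue
            if word != EXPECTED[idx]:
                errors += 1
            idx = (idx + 1) % len(EXPECTED)
    return errors

print(count_errors("capture.txt"), "mismatched words")   # file name assumed
```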
Oops, almost forgot: in your test how did you verify that all samples sent were received and stored? If you sent 500,000 samples, did 500,000 samples get stored? |
I no longer have an analyzer, but when I did, it was hooked up to CD players and transport / DACs. You just can't send bit patterns with your equipment, because it's probably far more sophisticated than what's in a CD playback system. Remember you also have to send 44,100 samples per second. A scope works fine at the output of the DAC to determine differences in the analog output of test signals. If you use the same transport, same DAC and same source, any differences must be from the cable. Also, disconnect the destination end of the cable and put a volt meter on it, and notice the large differences. |
One more thing: the logic analyzer used actually had 1 meg of memory, not 16 meg. Someone borrowed my 16 meg acquisition module. |
I understand that the transmission is real time. I just do not believe a cable that is in good working order will cause a bit or two to be dropped. I agree with Blues_Man that the problem is probably in the transmitter or receiver if bits are being lost. I ran a quick test on my 3 digital cables at work today using a logic analyzer (a scope is not the right tool for a real-time test) with a pattern generator and deep memory (16 meg). I simply output a 16-bit burst every 0.5 seconds. The rep rate within the burst was set to 10 kHz. I simply tied the pattern generator's clock to the logic analyzer's clock so that every time the pattern generator's clock went high I captured the data, until I filled up the memory and saved the data. I tried this with alternating 0's and 1's, all 0's, all 1's, a pattern of 0110111011001111 and its complement. Once I had captured the data I saved it as ASCII files, and I wrote a small Visual Basic program to look for missing bits in the patterns and found none. I also fed a repeating pattern of 0's and 1's into the cables and terminated the cable with what I approximated was the impedance of my D/A. I looked at the waveforms with a scope and looked for any droop, overshoot and undershoot. The risetime of the pulses appeared to be close on all 3 cables, but I did notice some other differences. I noticed one cable in particular did cause more overshoot than the rest, but when I varied the impedance used to terminate the cable I could minimize the overshoot (probably more capacitance in this cable causing an impedance mismatch). I marked this cable and gave all 3 cables to another engineer, who has a separate DAC and transport, to take home to see if the cables sound any different from one another. I am sorry, but I did not hear very much of a difference between the cables to begin with, but I thought this would be a more subjective test. As for real time and losing bits, the logic analyzer does not lie. I will let you know what he thinks. I cannot think of a better way to test the real-time data transmission characteristics of a cable. I burned up today's lunch time trying this; tomorrow I think I will have something to eat :-) Thanks for the post |
The problem is really in the transmitter and receiver. Even though there is a "standard", there are always differences in implementation of the hardware. I started out believing that there was no difference in digital cables. That was before I was familiar with the CD standard, which isn't very robust. After a while I started measuring lots of cables and interfaces just out of curiosity. It's not easy to determine which cable sounds best with a particular transport / DAC combo. You have to rely on what other people have tried. First, definitely go with the AES/EBU interface over an RCA cable. It's not that much more. The one I liked best was the MIT reference. It's very expensive, $800; I believe it's not worth the money, but if cost is no object... I'll be putting the one I tested up for sale here soon (at least half off). As I posted in another thread, the best thing is to get a transport / DAC with a custom interface. I got the Spectral because I thought it sounded best. If you have access to a scope, use the method I suggested above with a high frequency tone. Whichever cable gives the most accurate signal is probably the best. |
Blues_man - Given the inadequacies of the CD protocol and your experience in the area of digital audio data transfer interfaces, what are the benefits brought to the environment by using a high-end digital interconnect over a basic, well-built digital interconnect? That was really the question at the beginning of this post, and I think we're all still curious. -Kirk |
Gmkowal, just a follow-up to my previous response. If you don't believe me, prove it yourself. Attach a scope to the output of your DAC. Play some test tones on your transport (use frequencies above 8 kHz; that's where the effect starts to be more noticeable). Get some different cables and measure the signals that come out of the DAC. I have measured some pretty significant differences. I've always believed if you can measure the difference, it exists. |
Gmkowal, I don't believe that you understand the signal transfer in CD playback. Most other digital transactions people have discussed are NOT real-time. CD playback is real-time. The protocol is one-way, with 44,100 transmissions per second. Signal is lost. Bits are dropped. In other real-time digital applications, say telephony, there is both a retry mechanism, if the protocol allows, and data recovery. The CD protocol has no such provisions. Again, remember this is technology that is almost 20 years old. The improvement in technology since the CD standard to the present is greater than the improvement in technology from Mr. Bell and Edison to the start of CD technology. I suggest that your claimed knowledge of physics is flawed by your lack of understanding of this signal transfer. Before you go blasting a lot of other people, you'd better know what you're talking about. As for some background, I've been working with digital audio on and off for over 30 years. I have spent over 8 years designing commercial digital audio and video and data transfer interfaces. So I'm not someone who has just read a few articles in StereoShill. |
Frogman, rather than challenge the fact that you hear the difference, I will simply state what I know to be facts. While I most certainly do not have the trained ears of a musician, I think (maybe wrongfully so) that God has blessed me with a decent set of ears. While I have not listened to a set of fiber optic cables, I cannot discern any real sonic differences between good quality copper (or silver, etc.) cables. I also submit that the physics involved may not support sonic differences that are discernible to my ears. I believe you when you say you can hear the differences, but maybe the differences are so subtle that one would need a musician's ears to hear them. To quote Dennis Miller: "that's my opinion, I could be wrong". |
Don't worry about starting a religious war. The reason for these forums is to get the big picture. There are a million opinions out there and you should consider the ones that make sense to you. Part of the fun of these forums is to learn a little and give a little in return. I read a reply to one of my posts the other day that mentioned that audio was a "passionate undertaking". I subscribe to this wholeheartedly, because if the music we listen to and the equipment that provides that music do not empower emotion and passion, why listen at all? If some of the responses seem a bit emotional, just understand that it is only because of the passion that we have for the music. |
I will be honest and say that I do not use fiber optic cables, so I cannot say definitively there is a sonic difference between digital cables. You have heard differences, so I will take your word for it. As for the reason... I am an EE, not a physicist. If I were to take a stab at it, with my limited knowledge of optics and lightwave, I would guess the differences would be due to propagation delay, loss or refraction differences in the cables. Maybe someone else with a better understanding of optics in this forum could enlighten us both. |
I am not sure we are disagreeing - overshoot and undershoot being a capacitance issue is the same thing as saying it is a bandwidth issue (but not necessarily the reverse). So far as I know there is no capacitance issue with respect to glass cables, but they sound different from one another. |
Ehider, I am still the skeptic, but I appreciate your viewpoint and candor. BTW, Is your employer Burr Brown? Thanks for the reply. |
Redkiwi, the overshoot and undershoot you speak of is caused by capacitance in the cable. While overshoot and undershoot themselves may not necessarily affect the DAC output signal, the capacitance in the cable may affect the data pulses' risetime. This effect, I would assume, may be audible. While jitter is a degrading factor in a system of this type, the transmission cable is not likely to increase or add jitter to the bitstream. I feel capacitance is the real culprit here. The less capacitance in the cable the better. |
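To give a feel for the scale of that effect, here is a minimal Python sketch treating the cable as a lumped capacitor driven from a 75 ohm source; that lumped-RC view is only fair for short, poorly terminated runs, and the capacitance-per-metre figures are typical-ish guesses rather than measurements of any particular cable.

```python
R_SOURCE = 75.0   # ohms, assumed driver source impedance

def risetime_ns(c_per_m_pf, length_m):
    """10-90% risetime of a single-pole RC formed by the source and cable."""
    c = c_per_m_pf * 1e-12 * length_m
    return 2.2 * R_SOURCE * c * 1e9

for pf_per_m in (55, 70, 100):   # assumed cable capacitance per metre
    print(f"{pf_per_m:3d} pF/m over 1.5 m -> ~{risetime_ns(pf_per_m, 1.5):.0f} ns edge")
```

A slower edge doesn't flip bits by itself, but it does make the moment the receiver's slicer trips more sensitive to noise and reflections, which is where this ties back into the jitter argument.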
Redkiwi: I get it. It is not only that you say yes and no, but "how" you say it as well. As far as I know nothing is perfect and there are always variances to be had. The atomic clock in Denver is pretty accurate, but I would assume that DACs and digital transmission lines are not even in the same ballpark. Perhaps when we get organic-based DACs all cables will sound the same? |
I think I am repeating myself, but continue to get replies which imply that the only reason why digital cables could sound different is bit errors. This is not the reason at all - the reason is jitter - noise-based and time-based distortion. The problem is not about distortion causing a DAC to read a 0 as a 1, or a 1 as a 0. It is about the fact that we are talking about real-time transmission and that a DAC produces harmonic distortion at its output when the arrival times of the 0s and the 1s are not perfectly, regularly spaced. I really am having trouble saying this in as many different ways as I can. It is not about redundancy so that when an error occurs the data can be resent - we are not talking about data packet transmission here. Bandwidth capability is in fact an issue here. Even though the bandwidth for data transmission is low by most standards, if the cable was only just able to transfer the data accurately then the square waves would be very rounded indeed and jitter errors at the DAC would be enormous. Higher bandwidth cables allow sharper corners to the square wave with less undershoot or overshoot. Optical cables are also free from earth noise adding to the signal. It is not about bit errors, it is about timing based distortions. I work with loads of PhD telecommunications engineers but their grasp of these concepts is slight at best, because it is irrelevant for the audio fidelity needs of telephony and irrelevant for data packet transmission. But the best of them acknowledge that their training is insufficient for high quality audio. |
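One way to put rough numbers on the bandwidth point above: if the receiver slices the edge at mid-level, a noise voltage on a slow edge moves the crossing time by roughly the noise divided by the edge's slew rate. This little Python sketch assumes made-up values for the signal swing, the noise at the slicer, and the edge speeds, purely for illustration.

```python
SWING_V = 0.5    # SPDIF-ish swing at the receiver, volts (assumed)
NOISE_V = 0.01   # RMS noise/interference seen by the slicer, volts (assumed)

def crossing_jitter_ps(risetime_ns):
    """Timing shift of the threshold crossing: noise / slew rate."""
    slew = 0.8 * SWING_V / (risetime_ns * 1e-9)   # V/s across the 10-90% window
    return NOISE_V / slew * 1e12

for tr in (5, 25, 75):   # ns: a sharp edge, a soft edge, a very soft edge
    print(f"{tr:2d} ns edge -> ~{crossing_jitter_ps(tr):5.0f} ps of crossing jitter")
```

Same bits either way; the only thing the rounded corners change is how precisely the receiver can tell when each transition happened, which is Redkiwi's distinction between bit errors and timing errors.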
Response to Kthomas about not offering up a hypothesis. I refuse to offer up a purely speculative hypothesis, about an audio issue, that simply has no presently known logical electrical engineering basis. I don't know how long you have been involved with high end audio, but let me tell you what happens with far-reaching hypotheses regarding electronic issues that don't seem to make any sense: THEY BECOME THE BASIS FOR MANY SNAKE OIL COMPANIES, THAT PURPORT TO BE HIGH END AUDIO MANUFACTURERS, TO INCORPORATE INTO THEIR PRODUCT LINES! I have seen this happen again and again. I remember when Class A amplifiers were considered the "best" sounding amplifiers, so therefore "Class A amplification is the best amplifier design". The same thing happened with zero negative feedback designs, and more recently, SET amplification. Unfortunately, these so-called "facts" are very misunderstood, and EXTREME generalizations at best. Each "fact" has set the high end electronic industry back more than you can imagine. As audiophiles buy into these facts, they encourage manufacturers to build products that incorporate these topologies, EVEN IF THIS IS NOT THE BEST WAY TO GET GREAT SOUND! The best example that I can think of regards speaker cable design. It has been hypothesized that a great sounding cable needs a ton of current carrying capability. So in turn, the majority of the high end cable companies have produced these huge thick cables for us audiophiles. Indeed, many manufacturers don't even consider that thinner gauge designs might produce a much better sound. After all, the more current carrying capability, the better bass and dynamics, right? This is no longer a hypothesis, but a fact, right? Well, this is just not the case. I personally know three speaker cable manufacturers that are pulling their hair out, trying to design cables that represent what the audiophile community thinks are better (read: big and heavy)... BUT... this is just not true with many better sounding, properly engineered configurations. Other cable companies (that I personally know... read: big, well known names) have figured out how to make each of their larger cable offerings sound better, as they incorporate more conductors and make each cable more expensive in the process. More importantly, they can then charge an ear, an arm and a leg for these huge cables. These guys (that know the real truth) are laughing all the way to the bank! The other cable manufacturers, that design huge cables, are not necessarily trying to rip us off. They have just not discovered the ultimate truth about cable configurations and conductor size. (It amazes me how a "fact" can be created overnight in the audio engineering community.) This is why I refuse to offer up speculative, potentially nonsensical hypotheses that have no engineering basis. When I have a legitimate hypothesis, one that has some fundamental electronic explanation, I will always offer it up as a hypothesis (with hope that it will not become a "fact" overnight). |
I have a small confession.. I enjoyed choosing the cables in my system. This enabled me to "customize" an otherwise mediocre system, and approach the high end without spending too much money. I know "too much" is relative. My cabling is valued around 35% of the cost of hardware, but thanks to Audiogon I was able to spend around 20%. |
Why is it that when refuting the argument that there is no technical reason for a digital cable to impart a sonic difference, those who hear a difference resort to the, "well, your system must not be resolving enough and/or your hearing not good enough or well-trained enough" line? Why not, "you're claiming I'm pre-disposed to hear a difference, while I think you're pre-disposed to NOT hear a difference". I would guess that if you took all the systems owned by those who say that they don't hear a difference and compared them to the systems of those that do, the quality and resolving capabilities would be quite similar. In any case, that point of view always detracts from the discussion in general, especially when it's layered on top of, "I don't know why it sounds better, it just does". If the system is indeed more resolving, offer up a hypothesis on how a system on which such differences can be heard effects this improvement so we can all learn from it. |
Hey Gmkowal, digital cables do sound different! From a theoretical standpoint, there does not seem to be a basis for "different sounding digital cables". As an Electrical Engineer myself, who represents a high end manufacturer of A/D and D/A converters, I absolutely agree with your statements. I, unfortunately, like so many, cannot come up with a good explanation for why different cables yield different sounds. Perhaps it is not the transmission media per se, but the interaction at either or both ends, or a combination of both. I wish I could find the reason, because it exists! Sometimes the sonic differences can be as large as changing interconnects, really! Each D/A converter differs in how large the change may be. (I am currently on my ninth digital front end, and my sixth transport, so I have quite a bit of experience here.) If you have already tried to hear the differences (for giggles, since it can't possibly be true), and you don't hear any, then the resolution of your speakers, amplifiers or other components is not allowing you to experience the differences. Some of the major sonic differences between cables involve the harmonic structure of the music, the soundstage width or depth. Unfortunately, this is also the smallest detail to preserve all the way to the speakers. Please try to open your mind on this one; it took me 3 years of preaching "there is no way there can be a difference, I do this for a living!", then (for giggles) I tried to prove my point. Boy, was that an embarrassing moment. As a digital designer, you should consider that maybe there is something we are not considering as trained and schooled "experts". In the end, maybe you can become the hero that comes up with a logical answer to "why do digital cables sound different?". I stopped trying, and just listen to music through my best sounding digital interconnect. |
With all due respect, and I respect your 20 years' experience as a digital designer, my 25 years as a professional musician tell me very loudly and clearly that I hear these differences. What's more, they aren't all that subtle in many cases. My musical colleagues (those that care about these things) also hear them. Is it not more productive and potentially enlightening to consider as plausible what the ears of those who use them for a living hear? I hate to break your bubble, but I assure you that the subtleties (subtle variations in timbre, pitch, time, etc.) that a musician has to be sensitive to playing in, say, a Brahms clarinet trio are far more subtle in absolute terms than the oftentimes obvious differences that are heard between cables, including digital. By the way, digitally recorded music to many colleagues of mine still doesn't "swing" the way it should, and certainly not as well as good analogue. The groove or "fun factor" is diminished; not catastrophically, but diminished none the less. I would like to respectfully encourage all of us to do more listening without focusing on the technical aspects of the sound. "Hearing" is not only what takes place in our ears, but letting that go on to touch us emotionally. Then that in turn frees us to "hear" more, and the cycle continues. There is infinitely more to hear/experience in most good music than most think. I remember that years ago when I first started reading the mags, a couple of reviewers were fond of pointing out in their descriptions of the prowess (or lack thereof) of various very expensive components that these components were somehow to be praised for allowing the listener to "hear that the instrument being played was an English horn and not an oboe". This is almost laughable; I assure you that the difference in timbre between these two instruments is so obvious that it can be easily heard over the lamest grocery store sound system. Then why bother? Because there is so much more than most imagine. I point all of this out only to encourage the cynics to consider the possibility that they are missing out on a whole lot of fun in their listening by letting technical issues dictate what and how much they should be able to hear. Happy listening. |
Bruce, cut the crap! We are not talking about frequencies in the GHz region. These are simple logic levels moving at less than a MHz. I have been a digital designer for 20 years and have never heard such garbage. If there is any effect at all, it would result from delay caused by capacitance in the cable, not by excessive standing waves. If I knew what the impedance of the DAC was, I bet we would see very little return loss if we swept the cable on a network analyzer even up to 50 MHz. Sorry to burst your bubble, but you are way off base.