Could somebody please explain this to me? Why does a $100 cable sound better than a $50 cable? Why is silver coax better than copper coax? Why does the quality of connectors matter in the digital realm?
I'm currently needing a cable for a Stello U3. Some people claim there are no discernible differences between different levels of coax cables. They say the only things that matter are impedance, cable length, and adequate shielding.
I stream TIDAL hard-wired over CAT6 from my 5G modem to my 16-bit-capable Apple TV, connected via an Audioquest Coffee HDMI cable, and I still think my system sounds great. However, I have recently decided to purchase a Bluesound so I can take advantage of TIDAL's hi-res subscription. I just hope I can hear the difference between 16 and 24 bit.
The retailer suggests I connect the Bluesound to my ARCAM AVR550 using Audioquest DC-powered Coffee digital coax at a cost of $429 for just a 1 meter cable. Since the Bluesound costs $499, this digital cable would cost about the same as the Bluesound. I got cold feet over the weekend and told the retailer to order the Audioquest Carbon digital coax at a price of $192. I'd be curious how many feel I am making a big mistake, or is my assumption correct that downgrading the cable from Coffee to Carbon won't make any difference in sound quality?
Also, I have looked at music streamers ranging in price from $499 to $1,600, including Bluesound, Roon and Bel Canto music streamers. Is there a significant difference in sound between them? From what I can determine, the DAC inside my ARCAM is better than the DACs inside all three music streamers. I think the real difference will be to add a DAC that is capable of unfolding MQA completely. Unless these players can fully unfold the MQA data, they can really only unfold 1x, which then plays down to 16/44, the same as a 16-bit CD. Also, while we are at it, can you really hear a difference between MQA played through an MQA-certified DAC vs 24/192? If not, then I am questioning whether I should switch from TIDAL to Qobuz, which streams at 24/192. Seems to me like 24/192 sounds better than MQA anyway.
The more things change the more they stay the same. This whole discussion is a rehash of what was said in the early days of this thread five years ago. In fact, it’s a rehash of the cable controversy that started what, 40 years ago? All you need is zip cord or something like that. And technical types were claiming the laws of physics were being broken back then, too. "There are only 3 things that can affect cable performance, resistance, capacitance and inductance." Yeah, right.
Mr_m, yes, inductance and most other cable parameters are proportional to length, including resistance, capacitance, the resistance rise at high frequencies caused by skin effect, the effects of dielectric absorption, and propagation delay.
In the case of analog cables, inductance is most likely to be audibly significant in speaker cable applications, especially if the impedance of the speaker is low at high frequencies (as it generally is in the case of electrostatic speakers), and/or if the cable length is long, and/or if the particular cable has relatively high inductance per unit length. The impedance presented by an inductance is proportional to frequency, and therefore in the upper treble region may become an audibly significant fraction of the impedance of the speaker in those situations.
Inductance (as well as resistance) will usually be unimportant in the case of line-level analog interconnects, since the corresponding impedances will be vastly smaller than the input impedance of the component receiving the signal. In that application capacitance will often be important, particularly if the output impedance of the component providing the signal is high.
In the case of digital cables, inductance is one of the key determinants of what is called "characteristic impedance," which for coaxial S/PDIF is nominally the 75 ohm figure you are probably used to seeing mentioned, and for AES/EBU is nominally the 110 ohm figure you are probably used to seeing mentioned. The characteristic impedance of a cable is NOT proportional to length. However at the RF frequencies which comprise digital audio signals the less than perfect impedance match that will inevitably exist to some degree between the cable’s characteristic impedance and the nominally 75 or 110 ohm impedances of the connected components will result in some fraction of the signal energy reflecting and re-reflecting back and forth along the cable. The arrival of those reflections and re-reflections at the DAC will result in some degree of distortion of the signal waveform as received by the DAC, which may or may not ultimately affect timing jitter at the point of D/A conversion, depending on the arrival times and also on the design of the particular DAC. And those arrival times will be dependent on the length of the cable as well as on the propagation velocity of the particular cable. That is explained well in the paper by Steve N. of Empirical Audio that was linked to earlier.
I won't pretend to know much of the science about cables, and I may be oversimplifying this, but doesn't cable inductance have a large effect on a cable's sound? Inductance is affected by length, isn't it?
Sorry Al, don’t buy it. You get an E for Effort. I won’t point out your name dropping. You apparently haven’t had the opportunity to try the Shun Mook Original Cable Jacket or the Highwire Cable Wrap, both of which improve the sound of *analog* cables by, you guessed it, addressing the cable reflections. I know what you're thinking, "But, but that disobeys the laws of science!" 😀
cheers
People would be much better off if they believed in too much rather than too little. - PT Barnum
The difference is the frequencies that are involved, Geoff, which bring completely different effects into play. Such as the one that is explained in the paper by Steve N. of Empirical Audio that was referenced earlier in the thread, which explains the rationale for 1.5 meter digital cables. That rationale having no relevance whatsoever to analog audio signals, since it involves the effects of signal reflections that result from impedance mismatches at RF frequencies on timing jitter at the point of D/A conversion. Surely you realize that cable effects can be dependent on frequency, especially when both the frequencies and the application are vastly different? If you don’t, any further debate would be pointless, and I’m not going to engage in any.
Also, your last sentence represents a complete misreading of what I have said. I absolutely did not say, and have never said in this thread or any other, that length differences in analog cables won’t be audible. In fact in many other threads, such as this one in which you’ve participated very recently, I’ve said that they certainly can be audible. I have said, however, that in general the shorter an analog cable is the better, assuming the goal is for the signal to be conveyed in as accurate a manner as possible (i.e., for the sonic effects of the cable to be minimized).
Feel free to have the last word, even if it involves asserting that something I have neither said nor implied is incorrect. To others who may be interested in the subject, I would commend the excellent posts earlier in the thread by Kijanki, AudioEngr (Steve N. of Empirical Audio), and others, as well as my own posts.
"The 1.5 meter length recommendation that is often seen for digital cables has no relevance whatsoever to cables conducting analog audio signals. In general, in the case of analog audio cables the shorter the better, if it makes any difference at all."
>>>>>That’s weird. Most high end cable manufacturers with good hearing like Bob Crump recommend 1.5 m as the optimum length for interconnects, maybe for speaker cables, too. I suspect the same is true for digital cables since they’re all conducting electromagnetic waves, so what’s the diff? Certainly the particular metal, copper or silver or gold infused silver, in a digital cable, like interconnects, is audible so why not the length? My personal favorite reason is that 1.5 m minimizes reflections. Al, I'm confident you will come up with a perfectly valid technical reason why length cannot possibly be an issue for analog cables, and why audiophiles must be imagining things.
Mrblackcrow 8-26-2017 I read somewhere else that all interconnects can benefit from being 1.5m. Somehow that doesn’t sound right to me. So, is it true- will 1.5m RCA interconnects going from my DAC to my pre amp sound better than the 2 ft length I am currently using?
You are correct about that not sounding right. The 1.5 meter length recommendation that is often seen for digital cables has no relevance whatsoever to cables conducting analog audio signals. In general, in the case of analog audio cables the shorter the better, if it makes any difference at all.
Also, regarding S/PDIF and AES/EBU digital cables, as you can see in some of the earlier posts in this thread the optimal length is dependent on a great many component and cable dependent variables, some of which are not usually specified (e.g., risetimes and falltimes of the output of the signal source). So 1.5 meters should be considered as having the best odds of being optimal for a digital cable (unless a very short length is practicable, such as 6 or 8 inches), but other lengths may be better in some cases, and there may be some cases where it won’t matter very much if at all.
Was just reading about how 1.5m is the best length for a digital cable. Something to do w/ reflections. I read somewhere else that all interconnects can benefit from being 1.5m. Somehow that doesn't sound right to me. So, is it true- will 1.5m RCA interconnects going from my DAC to my pre amp sound better than the 2 ft length I am currently using? Thanks
'Jitter measurements are a rat-hole IMO. Jitter has never been effectively correlated with SQ anyway, and based on my experience, it is very dependent on the spectral signature of the jitter. Single jitter measurements are useless to say the least.'
Bingo - we have a winner.
I have mucked around with all sorts of sources and that's it exactly.
'Fortunately, HDMI 1.3+ provides for this and is generally superior to any SPDIF interface, (even with million dollar cables), for this reason. Buy an Oppo BDP-95 and a DAC with an HDMI 1.3 input (Meridian HD621), and you won't have to worry about jitter.'
You must listen to different HDMI stuff than me. On my NAD M51, S/PDIF and USB easily best HDMI. And when fed with an Off-Ramp, which has jitter below 10ps, you can easily hear the improvement over any other method. HDMI is far from jitter-free - not by a long shot.
I hasten to add it probably has nothing to do with the interface per se, but with what's feeding it via the interface.
Also, anyone who thinks there is any method available today where you don't have to worry about jitter is whistling Dixie. Even on sources with jitter below 10ps, like the Audiophilleo 2 and Off-Ramp, you can easily hear the difference.
I have a Playback Designs DAC that they advertise as jitter-immune. Even on that DAC you can hear the difference when fed with a low-jitter source like an Off-Ramp - although the difference is not as great as with other DACs, it is still there.
Rower - if you are getting bit errors, the eye pattern is so bad that jitter is the least of your concerns.
"reclocking the data at the DAC is the key to superior reproduction"
You would think so, but unless you can synchronize the source to the DAC using word-clock, it is not the best solution. Very few sources have this capability.
You will achieve much lower jitter making the source jitter low and feeding it to a DAC without internal reclocking. This is because all of the techniques for asynchronous reclocking for jitter reduction in DACs are inferior, both PLL and ASRC.
Rower30: what you have outlined is my understanding as well, based on RF transmission theory. Impedance match at the connectors and loss tangent (dissipation factor) in the cable are the two key parameters.....everything else is secondary. That is why cheap CAT5 cable is far superior to most analog audio cable for digital transmission.
Also reclocking the data at the DAC is the key to superior reproduction. Putting the clock in the data stream was poor engineering from the start. Fortunately, HDMI 1.3+ provides for this and is generally superior to any SPDIF interface, (even with million dollar cables), for this reason. Buy an Oppo BDP-95 and a DAC with an HDMI 1.3 input (Meridian HD621), and you won't have to worry about jitter.
...True, but we are not talking about bit errors here, we are talking about psecs of jitter. The cable matters, as does practically everything else...
Jitter is directly tied to the Eye pattern tests, so the concept is valid for digital audio. The percent jitter is actually calculated off the eye pattern. The jitter in turn can be used to predict BER rate.
"Some manufacturers like Sonic Frontiers implemented an I2Se interface to avoid these issues."
I2S is available on Empirical Audio, PSAudio, Wired for Sound and other gear. Some SE and some differential.
Even I2S requires a good cable. Actually more so than S/PDIF, because the frequencies are a lot higher on I2S.
"I guess my message is, if you supposedly need a $500+ coax cable to get the job done then maybe you need to choose a better interface. Stack the deck in your favor at least, don't be a victim!"
Huge thanks for that eloquent explanation Steve. I was hoping you would see this thread. It's making much more sense to me now. One of the best threads on these Audiogon forums I have read in fact. Thanks again.
"Perhaps you didn't test the right cables, or your preamp creates enough distortion, noise and compression that you dont hear the benefits because they are masked. This is fairly common when using an active preamp. I dont use a preamp, so I dont experience this masking anymore. It's a system after all, so every component and cable matters."
Maybe. But I do hear differences in most everything else I tweak besides digital sources and ICs. So I think I have the relative magnitudes right, at a minimum. I hear a lot of systems and live music and I do not hear any distortion or dynamics issues of significance, but of course we know such things are always in play to some extent in home audio.
Surprised nobody has mentioned the SPDIF clock which is an analog RF waveform (if I'm not mistaken) that must stay in sync with the data.
Not sure if AES/EBU is any better in that regard, but it is supposed to be easier to guarantee the impedance match (110 ohms in this case).
Some manufacturers like Sonic Frontiers implemented an I2Se interface to avoid these issues.
I guess my message is, if you supposedly need a $500+ coax cable to get the job done then maybe you need to choose a better interface. Stack the deck in your favor at least, don't be a victim!
"How user can possibly know that wire or system meets the specs?"
You can't. The best option for consumers is to read the reviews from a reputable reviewer and then try one.
"In noisy system (external or internal noise) it is better to get fast switching transport getting more of reflections but in very quiet system it might be better to get slower switching transport to minimize reflections."
Have you tried it? It sounds nice in theory, but usually does not work well.
"Are you saying that, assuming some impedance mismatch, 1.5m cable will be always better than 6" cable (that I used not long ago)?"
No, 6" cables are generally not commercially available. I'm saying that a 0.5m or 1m cable will not be as good as a 1.5m cable. In order to actually get 6" total, you would need probably a 3" cable since there is cable in the transmitting and receiving device.
Jitter measurements are a rat-hole IMO. Jitter has never been effectively correlated with SQ anyway, and based on my experience, it is very dependent on the spectral signature of the jitter. Single jitter measurements are useless to say the least.
Rower wrote: "For digital, you just have to pass the eye-pattern level for proper digital signal retention. The more open the "eye" the fewer error bits, the lower the jitter. This should be the full channel, too. Cable and connectors both. You don't need expensive cables to virtually remove bit errors."
True, but we are not talking about bit errors here, we are talking about psecs of jitter. The cable matters, as does practically everything else.
Sure, but I would sell that system and get one that meets the specs so I don't have to try to find a whacked-out cable that matches it.
How can a user possibly know whether the wire or system meets the specs? I prefer to choose the cable for the system and not the system for the cable.
This reference is usually noisy due to the system voltages and ground-bounce. Very difficult to make it noise free.
Yes, that's part of the noise I'm talking about. There is no perfectly quiet system and there is no perfectly impedance-matched cable. It is always a compromise. In a noisy system (external or internal noise) it is better to use a fast-switching transport, accepting more reflections, but in a very quiet system it might be better to use a slower-switching transport to minimize reflections.
Even at 25nsec, the cable length helps, however. The A/BX testing proves it.
Are you saying that, assuming some impedance mismatch, a 1.5m cable will always be better than the 6" cable (that I used not long ago)? It doesn't make sense. There will be no reflections in a 6" mismatched cable, assuming an average transport (with 25ns transitions), but a lot of reflections in a 1.5m cable. Even if the first reflection misses the originating edge, there will be consecutive reflections. There are techniques to predict the effect of multiple reflections on the signal (Bergeron diagrams), but it is a very complicated task.
As for measuring the jitter: effects can be measured, but I agree with Al that it will be useless, since it will depend on all the factors he mentioned. Measuring jitter effects at a particular frequency in a particular system in a particular home etc. has no value to anybody.
For digital, you just have to pass the eye-pattern level for proper digital signal retention. The more open the "eye" the fewer error bits, the lower the jitter. This should be the full channel, too. Cable and connectors both. You don't need expensive cables to virtually remove bit errors. Example: Ethernet can transmit a signal 328 feet at 10 MB/sec with virtually zero dropped bits over 100-ohm 4PR24 23 AWG wire (250 Meg per wire pair times 4). Once you go digital, and with sufficient eye-pattern margin, the "sound" is the A/D converter, not the bits... which have no sound at all. I would look more at your A/D converter than the cables, and such short cables seldom have issues with data-rate transmission. What's a few feet (with proper impedance matching) at best compared to 328 feet / 100 meters?
Shielding is only as important as the environment is poor. Most twisted-pair digital cable has plenty of CMRR to passively remove noise. But a shield will attenuate the artifacts, so that the remainder of the noise that common-mode rejection doesn't remove is 20-30dB smaller in magnitude. The shield doesn't improve the CMRR of the system; it just makes the noise the system has to deal with SMALLER. The rejection is still the same "something times X" of the original noise; the noise is just smaller to begin with.
Don't be fooled by shields, though. A shield concentrates the ground plane tightly around the cable, which also magnifies a cable's defects. Poor-quality UTP cable that works can be a disaster with an added shield. Once you BEND the cable, the issues keep getting worse, too. So shielding isn't to be taken lightly as an improvement unless you are seriously in the know about the cable under that shield. Most will have better performance with unshielded cables that are twisted pairs. Coaxial cables do better with a shield (and have to have one to work anyway), since the internal construction is easier to manage: the round dielectric holds the shield at a constant distance from the internal unbalanced signal wire, even while the cable is bent, compared to balanced-pair cables. Issues with noise are much less of a problem with balanced cables, which remove the noise through CMRR (RF) and the pair twists (low-frequency magnetic interference); good coaxial cables can put RF noise 100 dB below the signal but are more helpless against magnetic interference when used at audio frequencies below 1 MHz or so.
The typical wires that meet digital Ethernet eye-pattern and attenuation-to-crosstalk ratio (ACR) margins are sufficiently small to make "skin effect" not much of an issue. The 0 dB ACR point (where the signal is larger than the noise) extends past 500 MHz on good cables. Common electronics can see a signal as much as 6 dB below the 0 dB ACR frequency, too. So "finding" the digital stream isn't too tough; what's tough is the D/A conversion at each end. All the advances in Ethernet technology trickle down into other digital media as they become affordable.
Kijanki et al. are right that the "system" length is important as you get to shorter lengths, since the reflected energy caused by impedance mismatch at S/PDIF frequencies can be pretty large. On longer cables, reflected energy is attenuated out. Most digital systems have a much harder time passing "short length" channels than long ones for that reason. RL (return loss) is not so good with RCA connectors, as they aren't 75 ohms. So digital audio gets a deserved bad rap on the quality of the interface, far in excess of the cable between the connectors.
It is also true that to have a "transmission line" in the true sense, the cable needs to be at least 1/4 of a wavelength long, relative to the velocity of propagation of the cable. Shorter than that and you don't get reflections (remember the open and closed manometer physics experiments?).
Audioengr's point number two is somewhat confused. A dielectric is called a di-electric because it opposes electric flow. This is true because it sits between two conductors and makes a capacitor, which opposes signal change or "flow". The capacitance is a fixed figure once you decide a cable's impedance, which is 101670 / (capacitance x velocity of propagation of the dielectric), with capacitance in pF/ft and velocity as a percentage. There is no "absorption" at all based on the capacitance, just the storage and release of energy. So if you want a faster cable at a given impedance, lower the capacitance by changing the dielectric.
101670 / (20.5 x 66) = 75 ohms
101670 / (17.3 x 78) = 75 ohms
Now, what does grab away the energy is the dissipation factor, or loss tangent (same thing), of the material. The transverse electromagnetic wave energy transmitted between the two wires (view it as a wave between two plates), traveling in the dielectric medium, is a reactive vector, and the "real" part versus the "imaginary" part creates inefficiencies that cause lost energy. The imaginary (the loss "tangent") part is lost.
He's 100% correct that a vacuum is the best dielectric (or air as the poor man's vacuum) to lessen losses. Teflon does NOT need to be used, however. PP or PE is cheaper, and better than Teflon at normal room temperatures. Teflon is just expensive, and you can listen to your stereo in a 200F room if you like! The cable won't melt. Oh, Teflon is pretty, though.
Be wary of mixed conductors, though. At such short distances the advantages are not significant. Yes, the higher frequencies attenuate more, so higher-order digital frequencies should arrive less attenuated through silver... but at what length? And at what skin depth? I'd worry a whole lot more about the connectors and the D/A converter. Those pretty silver cables may look like you care more, but the hidden stuff no one sees matters more.
Steve, thanks very much for your comments and insights.
My one comment in response is in regard to:
10-06-12: Audioengr "Characteristic impedance different from 75 ohms can be very good, as Al mentioned, if it is a better match for the given system."
Sure, but I would sell that system and get one that meets the specs so I don't have to try to find a whacked-out cable that matches it.
The problem as I see it, in at least most cases, is that there is no practical way for the consumer to know what the transport's output impedance or the DAC's input impedance is. Even JA's measurements don't address those parameters, at least in the reviews I've looked at. And, if I recall correctly, the tolerance defined by the S/PDIF standard is a very loose one, something like +/- 20 ohms or +/- 20%.
I have never seen impedance control on any Transport or DAC circuit board. Occasionally, the wiring from the circuit board to the connector is impedance-controlled, but this is the exception, not the rule.
It all seems to me to add up to a very hit or miss situation, and even more so given that another key parameter, the risetime and falltime of the transport's output signal, is also usually unspecified, and widely variable (e.g., 25 ns or so in many cases, per your paper; 3 ns or less in some cases, per your statement above).
10-06-12: Dura ... and then there is impedance matching, which I frankly do not quite understand.
See Steve's paper, linked to above, which explains it all nicely.
I use Ethernet CAT-5e wire with quality RCA jacks. It sounded as good as, if not slightly better than, a $500 Nordost digital cable. I figured that a cable that can pass a 350 MHz computer digital signal would work for audio digital connections.
Because the S/PDIF digital signal is NOT a matter of the simple presence or absence of a voltage level (standard binary). The binary data is carried in the timing of the transitions between levels (biphase-mark coding), so it is the edges themselves, from one level change to the next, that comprise the data. Additionally it is sent out of sequence to give error correction a fighting chance to keep things straight. You can only see this on an oscilloscope rated out to 100 MHz. The rise and fall times are in the nanosecond range, and cable quality and correct impedance are key to good sound reproduction. Maybe more so than with analog interconnects.
Ones and zeros do not exist in real life; it is all analog signals, interpreted as numbers. You would like these signals to be the only ones on the cable, and you want them unaltered, so interpretation is flawless. If the cable picks up (or fails to reject) other signals, i.e. noise, it could upset, for instance, feedback loops in amp stages, giving distortion. If the signals that represent the numbers themselves are distorted, interpretation and timing can suffer; this is jitter. So a cable could do a few things wrong, and then there is impedance matching, which I frankly do not quite understand.
"Will a cable of some determinate length not add some measurable, repeatable, non-arbitrary amount of jitter within a particular range of measurement, regardless of any jitter coming from the source component?"
Yes, assuming the signal is repeatable.
"Are there any cable manufacturers that measure and publish jitter specifications for each of their different cable products and cable lengths?"
I cannot think of any cable manufacturers that can afford the $150K Agilent analyzer it takes to measure this. Even JA of Stereophile, with his latest and greatest AP system, cannot measure it.
The other thing you must understand is that a lot of the jitter problem in cables is caused by imperfect receivers in the DAC, not the cable itself. If you put an analyzer at the end of the cable instead of the DAC receiver, everything changes. You lose half of the effects.
"Characteristic impedance different than 75 ohm can be very good, as Al mentioned, if it is better match for given system."
Sure, but I would sell that system and get one that meets the specs so I don't have to try to find a whacked-out cable that matches it.
"Same for slowing down the edges. Uncertainty of threshold is not caused by long transitions but by the noise."
Noise will certainly cause jitter (signal integrity or ground-bounce), but slow edges by themselves will also cause jitter, and usually worse, based on my experience. The problem is the voltage reference which sets the switching threshold at the receiver. This reference is usually noisy due to the system voltages and ground-bounce. Very difficult to make it noise free.
"With very little noise present longer edges might reduce impedance mismatch caused reflections, reducing jitter in effect. "
It sounds like common sense, but it doesn't work in practice. Fast edges and precise matching work a LOT better.
"Making cable "at least certain length" is not precise since cable is not even considered transmission line when propagation time (one way) is shorter than 1/8 of transition time being about 0.6m for typical 25ns transitions (assuming 5ns/m)"
I know this "rule of thumb", but really low-jitter systems have risetimes of 3ns or less, not 25nsec. Even at 25nsec, the cable length helps, however. The A/BX testing proves it.
Mapman wrote: "on the several occasions where I have compared different digital cables going into my DAC(s), if there was a difference, it was not enough for me to take clear notice or even care. I know that in theory different levels of jitter are the result and that jitter level matters. But does it really in practice?"
Perhaps you didn't test the right cables, or your preamp creates enough distortion, noise and compression that you don't hear the benefits because they are masked. This is fairly common when using an active preamp. I don't use a preamp, so I don't experience this masking anymore. It's a system after all, so every component and cable matters.
Almarg wrote: "Do you feel that the following may also be significant contributors to sonic differences between S/PDIF interconnects, at least in some systems?
6)Differences in noise-induced jitter, due to ground loop effects and/or RFI/EMI pickup, both of which may be sensitive to cable differences.
7)Differences in radiated RFI, that may affect circuitry in the system that is not directly related to the S/PDIF interface."
These are both potential contributors to jitter, although #6 is not directly related to cable quality, and # 7 is mostly a function of the receiving device IMO.
As for cable pricing, I have found that in general cables below the $500 mark for 1.5m length sound about the same. Significant improvements are not realized until one spends more than $500. This is when you start to get the more exotic constructions, conductors and dielectrics, as well as better shielding. Just my experience.
They do matter. Just listen to your ears and trust them. I have tested many, many digital coax cables, and I found the more money I spent, the better they sounded to me. In my opinion this is down to the design, materials and connectors used. The best digital coax cable I've ever tested, and still own, is the Audioquest "Eagle Eye" at £700 per metre! It is just the business: solid silver conductor, multi-shielding, and a DBS battery pack. It's not hogwash, it works, period!
Digital cables are fundamentally different than analog cables.....the cable is an electrical transmission line at the megahertz frequencies of the S/PDIF bitstream (not the 44.1 kHz sample rate). The 75-ohm impedance match at each end is critical for jitter, minimizing reflections, etc. BNC connectors are made for this, but Canare makes one of the only 75-ohm RCA connectors available......I've tried 'em and they sound better. Any good video cable in between works well, but the cable pedigree is much less important than with analog cables.
We need to look at the whole system. Cable impedance is designed to 75 ohms to match everything else in the signal path. However, everyone knows nothing will be perfect, so it comes down to how close the actual impedance is to 75 ohms. The same goes for the connectors, the traces on the PC board, etc. What matters is the point where the clock information is recovered from the signal. If it is not exactly 44.1kHz, there is jitter, and it causes distortion in the audio signal. Therefore, cable quality matters if everything else in the system closely matches 75 ohms. If by accident everything matches at 70 ohms, it will be good too, but the chance of that happening is very small.
That is why professional studios use a master clock, such that only the 1s and 0s are decoded from the incoming signal and not the clock information. In that case, jitter can be minimized to just between the DAC and the master clock generator. Then most of the effect of the cable can be eliminated. A few high-end decoder manufacturers, like Esoteric and dCS, adopt this setup for home use. It is a much more expensive solution, but it works.
Will a cable of some determinate length not add some measurable, repeatable, non-arbitrary amount of jitter within a particular range of measurement, regardless of any jitter coming from the source component?
No, absolutely not. As implied in some of the preceding posts, the amount of jitter that will result with a given cable in a given system, at the point where D/A conversion is performed within the DAC (which is where it matters) depends on a complex set of relationships and interactions between the parameters of the cable, including length, impedance accuracy, shielding effectiveness, shield resistance, propagation velocity, bandwidth, etc., and the technical characteristics of the components it is connecting, including signal risetimes and falltimes, impedance accuracy, jitter rejection capability, ground loop susceptibility, etc.
Many of the relevant component parameters are usually unspecified, and even if they were specified in great detail predictability of the net result of those interactions would still be limited at best.
Seasoned got it right, but I still wonder, given the current state of digital technology, whether in practice it is really that much of a problem with most modern gear. Digital technology in this regard has come a long way since the CD's outset around 30 years ago. That makes a big difference.
Of course to whatever extent it may still be a problem in practice, audiophiles will care more about it than most normal people.
In my case, to date, I would have to say that digital cable tweaks have made the least difference of almost any tweak I have tried. With most others (digital- and analog-related) I hear a difference. With digital cables, I am still waiting. I have mostly compared optical versus coax to date, specifically. These are each significantly different, so I expected to hear something, but have not so far. I have also yet to hear a practical difference through the same DAC from various digital sources. I have compared several digital sources, including a Marantz DVD player, a Denon CD player, a Roku Soundbridge and a Logitech Squeezebox. They all tend to sound similar and essentially equally good, to the point where I determined it did not matter to me.
Will a cable of some determinate length not add some measurable, repeatable, non-arbitrary amount of jitter within a particular range of measurement, regardless of any jitter coming from the source component?
Are there any cable manufacturers that measure and publish jitter specifications for each of their different cable products and cable lengths?
Steve, what you describe is the general quality of the cable and not the performance of the cable in a particular system. Characteristic impedance different from 75 ohms can be very good, as Al mentioned, if it is a better match for the given system. Same for slowing down the edges: uncertainty of the threshold is not caused by long transitions but by the noise. Long transitions only make it more susceptible to noise-induced jitter. With very little noise present, longer edges might reduce impedance-mismatch-caused reflections, reducing jitter in effect. Making the cable 'at least a certain length' is not precise, since a cable is not even considered a transmission line when the propagation time (one way) is shorter than 1/8 of the transition time, which is about 0.6m for typical 25ns transitions (assuming 5ns/m).
Yes, I'm also an EE, with 34 years of design engineering experience, involved in data acquisition design for the last 25 years, since you asked; otherwise I don't feel it would be appropriate for me to fortify my posts with it.
My brain tells me no two cables that are physically different conduct electricity (or light, for optical) exactly the same way. So there have to be differences to some degree. The question for me is then: how much, and are the differences significant enough to matter in practice?
I wonder about digital ICs in general in this regard, more so than analog ones. No two analog ICs usually sound the same to me. But on the several occasions where I have compared different digital cables going into my DAC(s), if there was a difference, it was not enough for me to take clear notice or even care. I know that in theory different levels of jitter are the result and that jitter level matters. But does it really in practice? It's something I have not been able to discern with my own ears so far.
Steve, thanks for your inputs. Do you feel that the following may also be significant contributors to sonic differences between S/PDIF interconnects, at least in some systems?
6)Differences in noise-induced jitter, due to ground loop effects and/or RFI/EMI pickup, both of which may be sensitive to cable differences.
7)Differences in radiated RFI, that may affect circuitry in the system that is not directly related to the S/PDIF interface.
Concerning your no. 3, impedance mismatches, and with respect specifically to the impedance match to the components that are being connected (as opposed to mismatches between cable and connector, or impedance discontinuities within the cable) I would add the thought that what is important is not how accurately the impedance of the cable and connectors match the 75 ohm standard, but how closely they correspond to the actual output impedance of the component driving the cable, and to the actual input impedance of the component that is at the receiving end. Everything else being equal, a cable that is less accurate relative to the 75 ohm standard may therefore outperform a more accurate cable in some systems, if it happens to be a closer match to the component impedances.
Finally, I would be interested in your take on what degree of correlation can generally be expected between cable performance and cable price, for S/PDIF interconnects, given the many variables and system dependencies that are involved in the effects that have been mentioned.
Regards, -- Al
P.S: Re your first question, I am an EE with an extensive background in digital signal transmission (not for audio).
Are there any engineers or physicists in the posts above?
Why is it that everyone thinks they are an expert?
Well, I'm an engineer, and I used to manufacture excellent digital and analog cables, so here are the reasons:
1) Losses that slow the risetime of the signals on the cable - this causes the receiving component to detect the edges with less certainty, resulting in more jitter
2) Dielectric Absorption - this is also called "soakage" and is analogous to a sponge absorbing water. The dielectrics absorb some of the charge, and then it is not discharged at a constant rate. Some cables eliminate this effect by putting a DC charge on the cable with a battery. Others minimize the effect by using air dielectrics or air-filled Teflon, etc. The effect is that the energy required in the signal to make a rising or falling edge is not the same for each edge, because of the charge in the dielectric. The signal must overcome this charge and it cannot, so some edges are displaced in time, causing jitter.
3) Impedance mismatches - The nominal impedance of a S/PDIF coax cable should be 75 ohms, but this varies all over the map with different cables and the connectors on the ends also affect this. Impedance discontinuities cause reflections on the cable when the signal is launched into it. These reflections can bounce from end to end until they finally dissipate with the cable losses. If they happen to hit the receiving end when it is detecting the signal edge, the edge may be pushed in time, creating jitter.
4) Metallurgical defects in the conductors - Low-jitter S/PDIF signals can have risetimes in the 1nsec range. When signals this fast are launched into a cable, the conductor metallurgy affects the signal propagation down the cable. If there are a lot of faults in the crystal lattice of the metal conductors, this causes small reflections. They are like small impedance discontinuities. These reflections can appear at the receiver at the time it is detecting the edge and cause the edge to be displaced in time, causing jitter. You can look at TDR plots of this effect on real conductors here:
5) Length of the cable - All S/PDIF coax cables are imperfect and therefore cause some level of reflections, which can result in jitter if the timing of these reflections is unfortunate. By making the cable at least a certain length, one can avoid the effects of these unavoidable reflections, thereby avoiding the added jitter. This has been proven in double-blind tests by the magazine UHF in Canada. Here is a white-paper on the effect:
Many of us simply think that, "hey, digital is nothing more than ones and zeros, so just as long as those ones and zeros get to their destination without changing state or being corrupted, a perfect transmission will occur and no sound difference can possibly be heard, end of story."
As explained by a couple of responses above, there is one aspect of digital transmission that many audiophiles don't get. It is called [drum roll please]...timing.
Those ones and zeros must enter the DAC chip at exactly the right time to be converted into the proper analog waveform shape. If the timing is off (by mere picoseconds) the constructed waveform will be there, but not exactly the correct shape it should be. And that is where much of the sound differences of different cables come into play (and CD transports, etc., for that matter.)
I suspect that, for various reasons, all digital cables have slightly different timing characteristics. Whether your DAC chip and associated circuitry is compatible with a particular timing characteristic determines the sound outcome. The cable's timing, and whether the DAC "likes" it, is not dependent on the cost of the cable.
Stop thinking so simplistically, digital audio is not just about the ones and zeros - that is only part of the story.
There are differences to me. In a word, it may come down to jitter. Some cable designs adversely influence jitter more than others, and all the physical elements of the cable design are in play. There are at least 3 kinds of jitter induction: optical, electrical and mechanical. Coax cables may affect both mechanical and electrical jitter. Insulation (less is more) may influence the sound (openness) as well. Lots of people prefer silver in this application (I use copper and am very satisfied with it), and it may be due to the higher frequencies than with analog, but I'm leaving it to those with more of an EE background to expound on that. My cables are not shielded, as I believe shielding also constrains the openness a bit (at least with analog ICs, IME), and I believe it should ordinarily be avoided unless you have a discernible problem with interference. But Mlsstl is right - no matter which way you go with it, there will always be someone to tell you you're wrong. Regards.
Robertsong, a typical CDP used as a transport outputs a digital S/PDIF signal with about 25ns edge transitions. The speed of the signal in the wire depends on the dielectric, but we can assume about 5ns/m. The very start of the transition (the knee) travels through the cable and reflects at any characteristic impedance boundary (impedance change). That is always the case, since there is no perfect match; only the degree of mismatch (and therefore reflection) changes. It will take 10ns for the signal to travel forth and back along a 1m cable. The reflection will add to the transition in progress, changing its shape. This will affect the moment in time when the level change is recognized by the DAC (the threshold). Pick a 1.5m cable and we're dealing with 15ns until the reflection comes back. It will still add, but to the second half of the transition, when the level most likely will already be recognized. In this application a 1.5m cable will be better than a 1m cable, but, as I said, it might depend on the slew rate of the transport (expensive dedicated transports are often faster), plus the dielectric and metal used.
This variation of edges in time is called jitter. Jitter is basically noise in the time domain. In the frequency domain it shows as small sidebands not harmonically related to the root frequency (the music tone). Imagine playing a 1kHz tone while the digital cable is in close proximity to a strong source of 60Hz noise (a power cable). This might produce 60Hz jitter on the digital signal, creating two sidebands (sum and difference) around the tone being played. One sideband will have a frequency of 1060Hz while the other will be 940Hz (-50 to -60dB typical). So instead of just a single frequency you'll get three. Now play thousands of frequencies (music) and you'll get 3x more. Replace the 60Hz noise with a combination of many frequencies (radio stations, 60Hz, etc.) plus the effects of reflections in the cable, and you'll get a total mess. This mess is noise that is proportional to the amplitude of the music signal. The only way to detect it is to notice its effects: lack of clarity, less precise imaging, a less "black" background, etc. Without signal there will be silence, since there will be nothing to modulate.
If we don't take into consideration other effects, like ground loops created by the cable or noise collected and injected into the analog section, then the only difference between digital cables is jitter. The character of this jitter is affected by the character of the noise: is it random (uncorrelated) or caused by an offending frequency like 60Hz (correlated)? Is the shield better at suppressing high or low frequencies? Are there any reflections in the cable?
One cable might be a better characteristic impedance match to your system, while another might have a shield that works better against the particular noise present in your room. More expensive cables tend to have better shielding but might not be the best impedance match to your system. An expensive cable doesn't have to be better. Coax is usually better than Toslink, having wider bandwidth (hundreds of MHz vs tens of MHz), but when the transport's transitions are slow, and therefore susceptible to noise, and electrical noise is present, then Toslink might sound better. There is no right or wrong, because it is system dependent. Toslink might even be a blessing if you have ground loops.
Now let's get back to our 940Hz, 1000Hz, 1060Hz example. These sidebands are close to the 1kHz tone and thus masked a little more than frequencies further apart. It is a sonic signature of sorts, related to the type of electrical interference in your room, system noise, impedance matching, transport slew rate, cable propagation speed, shield effectiveness at different frequencies, and perhaps a few others I cannot think of now.
The average person using electricity has an "idea" about it, but very few understand it. The same should be true for digital cables. We have an idea, but we have to try it, since there is no way anybody can predict or measure how it will sound in a particular system.
Assuming the cable is properly constructed and terminated, then in the digital domain it is "bits in, bits out" as far as I'm concerned... or at least until blind or ABX testing proves otherwise to me.
The conclusion from my previous post: it pays to experiment with different digital cables. They don't have to be uber-bucks; some just sound "better" than others. PS: I've never used RCA-terminated digital cables, only balanced, BNC and glass AT&T.