Toward the end of this thread a member having a very high end system reported that, to his amazement, inserting a cheap network switch between his router and his Bricasti M1 Special Edition DAC (ca. $10K) + optional network interface resulted in a substantial improvement in sonics. Another member, who had suggested that he do so based on a number of experiences reported by others at a different forum, had experienced similar benefits in a different high quality system.

Why would that be? Certainly not because adding the switch would improve the accuracy with which 1s and 0s are communicated, which was undoubtedly perfect to begin with. As I stated in that thread:

Almarg 12-25-2017
... the benefit that might result, if any, [would] depend on the particular router and perhaps also on the ethernet cabling that is being used, as well as on the particular switch and DAC. Presumably any sonic difference that might occur would result from differences in the waveform characteristics (e.g., risetimes, falltimes, and distortion) and also the noise content of the signal received by the DAC. Which in turn may affect the degree to which the RF content of that signal may find its way around the ethernet interface in the DAC and affect DAC circuitry that is further downstream.
Almarg 12-25-2017
My point is that no matter how good a job the DAC does in cleaning up the signal it receives, and no matter how good the design of the DAC may be, signals and noise don't necessarily affect only, or entirely follow, their intended pathway. And the waveform characteristics and the noise characteristics of the signal that enters the DAC will affect how, and whether, RF energy present in that signal may, to at least a small degree, find its way via unintended pathways to unintended circuit points "downstream" of the ethernet interface and the internal reclocker you referred to.

Steve N. of Empirical Audio (member "Audioengr"), who of course is a respected designer of high end DACs and other digital audio components, subsequently seconded my comment, while suggesting different terminology:

Audioengr 1-5-2018
Very true, however I would avoid the term "RF". It's mostly what is referred to as "conducted" interference. In the case of Ethernet, it is leakage across the transformer interface.

I don't see any reason why differences among cables might not have similar effects on the signal, albeit probably to a lesser degree.

Regards, -- Al
Jinjuku 4-27-2018
Question is, do "Audiophile" Ethernet cables make any difference/improvement in sound quality?

I stated as follows in a post dated 4-24-2018: As I've said in a number of past threads, the existence of differences does not necessarily mean that more expensive = better results.
I have never said or implied anything suggesting a high degree of correlation between cable performance and cable price when it comes to ethernet cables. And in fact several of the improvements I and a number of others have referred to in this thread involved "upgrading" to inexpensive cables.

Regards, -- Al
Jinjuku 4-27-2018
What about all the other operations going on? CPU caching operations, interrupts, DMA transfers, memory paging, SMPS? This is why I don't buy into Al's argument. With all this going on, whatever variation a cable introduces is going to be swamped by the system-wide operations going on continuously.
Note that I said as follows in a post dated 4-23-2018:

Regarding the OP's specific question, though, I would expect that an Ethernet cable that is upstream of his PC would have less chance of making a difference than one that is directly connected to an audio component, where it would presumably be more likely to couple RF noise into sensitive circuit points within the audio system.

Note that I also said as follows in a post dated 4-24-2018:

Member Bryoncunningham, who IMO is an especially astute and perceptive listener, and is very thorough in his evaluations, described realizing a substantial sonic improvement by changing from a garden variety unshielded ethernet cable to an **inexpensive** shielded type.
...It should be noted, though, that Bryon's experience involved an Ethernet cable that was connected directly to one of his audio components, not to a computer that was in turn connected to the audio system.

Regards, -- Al
Jinjuku, to clarify a key point in several of my previous posts: the "RF noise" I have been referring to, which might bypass the ethernet interface and buffer memory in the receiving device, affect the timing of D/A conversion, and perhaps also affect analog circuitry further downstream, is NOT primarily noise that is picked up by the cable due to antenna effects. And for that matter it is NOT primarily noise that might be present in the cable due to ground loop effects. As I said in one of my posts dated 4-25-2018:

In addition to the effects of shielding on radiated emissions, shielding would presumably also affect the bandwidth, capacitance, and other characteristics of the cable, in turn affecting signal risetimes and falltimes (the amount of time it takes for the signals in the cable to transition between their two voltage states), in turn affecting the spectral composition of RF noise that may find its way past the ethernet interface in the receiving device. Also, small differences in waveform distortion that may occur on the rising and falling edges of the signals, as a result of less than perfect impedance matches, will affect the spectral composition of that noise while not affecting communication of the data.

In other words, the RF noise I have been referring to results primarily from the energy of the SIGNAL ITSELF! Perhaps "crosstalk" or "coupling" of some of the signal energy would have been better ways to refer to it. And the amplitude and spectral characteristics of that noise/crosstalk/coupling can be expected to vary as a function of various characteristics of the cable: its bandwidth, which directly affects signal risetimes and falltimes; its impedance, which directly affects signal reflections and hence waveform distortion and hence the spectral composition of the signal; its capacitance; the twisting of its conductors; and its other physical characteristics.

And as I also said earlier:

Putting it all very basically, responses by those claiming ethernet cables won't make a difference nearly always focus just on the intended/nominal signal path. The basic point of my earlier posts is that in real world circuitry parasitic/unintended signal paths also exist (via grounds, power supplies, parasitic capacitances, the air, etc.), which may allow RF noise to bypass the intended signal path to some extent, and therefore may account for some or many of the reported differences.

Real world circuits do not necessarily perform in the kind of idealized manner that is almost invariably assumed by those who assert that ethernet cables cannot make a difference. And while I am certainly one who recognizes that in general anecdotal evidence should be taken with multiple grains of salt, IMO there is a more than sufficient body of anecdotal evidence, provided by audiophiles whom I consider to be highly credible, to conclude that ethernet interface circuits commonly deviate from that idealized model to an audibly significant degree.

Regards, -- Al
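To make the edge-rate point concrete, here is a small, purely illustrative Python sketch. It is not a model of any actual cable or DAC (real 100BASE-TX signaling uses MLT-3 line coding, and every number below is an assumption chosen only for illustration); it simply shows that slowing a digital signal's rise/fall times reduces the fraction of its energy residing at high RF frequencies:

```python
# Purely illustrative, with assumed numbers: an idealized trapezoidal
# "digital" waveform, comparing how much of its energy sits at high RF
# frequencies for fast vs. slow (band-limited) rise/fall times.
import numpy as np

def trapezoid_wave(t, period, t_edge):
    """50% duty-cycle trapezoid with linear edges of duration t_edge."""
    tau = t % period
    rising = np.clip(tau / t_edge, 0.0, 1.0)                  # edge at t = 0
    falling = np.clip((tau - period / 2) / t_edge, 0.0, 1.0)  # edge at t = T/2
    return rising - falling

fs = 100e9                        # analysis sample rate: 100 GS/s
t = np.arange(0, 2e-6, 1 / fs)    # 2 microseconds of signal
period = 8e-9                     # 125 MHz fundamental (assumed)

for t_edge in (0.5e-9, 2.0e-9):   # fast edges vs. band-limited edges
    x = trapezoid_wave(t, period, t_edge)
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(t), 1 / fs)
    hf_fraction = power[freqs > 500e6].sum() / power.sum()
    print(f"{t_edge * 1e9:.1f} ns edges: fraction of energy "
          f"above 500 MHz = {hf_fraction:.2e}")
```

The absolute numbers are meaningless; the direction of the effect, faster edges leaving more high-frequency energy available to couple through parasitic paths, is the point.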
Acepilot71 4-27-2018
May I ask the guys with The Ear to describe or qualify the difference you notice depending on the Ethernet cable?
I suggest two categories:
- Quality of the music
- Quality of the sound

The first relates to the purity of the instruments, voices, etc.; the second to the presence/absence of parasitic noises, distortions, etc.
IMHO you should notice only the second one.

That's an excellent question, IMO, Acepilot. However, I have my doubts that we will be able to infer much in the way of a conclusion from the answers that may be provided.

As I stated in my earlier posts in this thread, what seems to me to be a plausible technical explanation for some or many of the reported cable differences is that RF noise whose amplitude and spectral characteristics are cable dependent is, to an audibly significant extent, finding its way around the ethernet interface and buffer memories in the receiving device, thereby potentially affecting timing jitter at the point of D/A conversion. And perhaps affecting analog circuitry further downstream as well. If that explanation does in fact account for some or many of the cable differences that have been reported, it seems to me that the audible consequences of those effects would be just as likely to subjectively manifest themselves in the first of your two categories as in the second.

For example, regarding the audible consequences of jitter, the following statement appears in this paper by Professor Malcolm Hawksford, a noted academician and researcher in this and other audio-related areas:

One of the major difficulties in quantifying and explaining the consequences of jitter is that there are many sources of jitter. Also, jitter can be classed into three basic forms (all can coexist) where there can be periodic, correlated to audio and uncorrelated artifacts. Periodic jitter-related artifacts are further complicated as they can be linked, for example, to mains hum as well as the various clock signals present within equipment. Also, there can be correlated elements with the actual digital signals carrying the audio information. All these inter-related dependencies complicate the interpretation of jitter making it difficult for a simple jitter estimate or spectrum to be interpreted in terms of its subjective consequences. As well as the numerous sources of error, the system architecture itself can influence the way jitter affects the resultant audio signal. For example, the use of noise shaping and up-sampling [10] with linear pulse code modulation (LPCM) alters the spectrum of the jitter induced distortion. Whilst, as suggested in an earlier paper [11], the use of a multiplying digital-to-analogue converter (DAC) with a raised cosine reference signal can in certain circumstances reduce distortion and augment interpolation between samples prior to the low-pass reconstruction filter. There are also analogue amplifiers which when processing a sampled-data signal can produce distortion akin to correlated distortion [12]. Finally, the choice of 1-bit sigma-delta modulation (SDM) code [13], pulse-width modulation (PWM) code [14] or multi-level LPCM code [15] changes the nature of jitter distortion.

And as stated in the paper's conclusion:

Jitter is an important aspect of digital audio system design and as suggested by the simulations described, it can result in distortion that has a relatively complicated form. As stated, there are several mechanisms that give rise to jitter where in practice it is the relationship between jitter and signal that is critical.
Finally, regarding the potential effects of RF noise on analog circuitry: it seems to me that whatever audible consequences may result from effects such as intermodulation of that noise with the audio signal, or demodulation of AM (amplitude modulation) spectral components of the noise, could very well manifest themselves in either or both of the two categories you defined. As you aptly said in an earlier post, there are "too many variables in this equation." :-)

Best regards, -- Al
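For readers who want a feel for the magnitudes involved, here is a minimal back-of-envelope in Python using the widely published formula for the SNR ceiling that random sampling-clock jitter imposes on a full-scale sine wave; the tone frequency and jitter values are assumptions chosen only for illustration:

```python
# Back-of-envelope: SNR limit from random sampling-clock jitter on a
# full-scale sine, SNR = -20 * log10(2 * pi * f * t_jitter_rms).
import math

def jitter_limited_snr_db(freq_hz, jitter_rms_s):
    return -20.0 * math.log10(2.0 * math.pi * freq_hz * jitter_rms_s)

for jitter in (10e-12, 100e-12, 1e-9):         # 10 ps, 100 ps, 1 ns RMS
    snr = jitter_limited_snr_db(10e3, jitter)  # 10 kHz test tone
    print(f"{jitter * 1e12:6.0f} ps RMS jitter -> SNR limit {snr:.1f} dB")
```

By that formula, 1 ns of RMS jitter limits a 10 kHz tone to roughly 84 dB of SNR, below 16-bit resolution, which is why sub-nanosecond figures are the ones discussed in the literature.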
Benzman 4-25-2018
I don't understand why folks that choose not to believe in differences in cabling actually debate with those of us that can detect differences in sound quality.

That seems to be a perennial question in debates such as this. And often parties on both sides of such debates tend to impute a variety of nefarious motivations to those on the other side. My own belief on this question, which I haven't previously stated, is as follows:

I believe that the great majority of those on both sides of such debates are sincere in their beliefs and in their statements. What I believe is usually the motivation of the so-called naysayers can be illustrated with a hypothetical situation: Let's say that in a music forum various classical music buffs were discussing the music of Johann Sebastian Bach. And let's say that someone submitted a post asserting in no uncertain terms that Bach was a third rate hack as a composer. Certainly the other participants would feel a natural urge to set the record straight.

And it is my belief that so-called naysayers believe, rightly or wrongly (and perhaps in many situations the truth is sufficiently nuanced to lie somewhere in between), that what is being asserted by those on the other side of debates such as this is comparably ridiculous, as well as impossible. And consequently I believe their motivation in such situations is likely to be similar to that of the Bach connoisseurs in the hypothetical situation I described.

Regards, -- Al
Where in the playback system are you referring to this jitter? The entire point, and I will keep with Tidal as the example, is that once the local buffer is filled up, and buffers are indeed static storage, any timing variance ceases to exist. It's why I can watch Netflix 4K streamed with no issues. The fact of the matter, and it is indeed FACT, is that if I pull the network cable for 1 second I've introduced 1,000,000,000 ns of jitter, but somehow the playback system has managed to deal with this, and deal with it to the point that if you were blinded you couldn't tell me if your life depended on it. While this timing difference may be in the DA converter circuits, that's not the same as Ethernet, which is bursty in nature and asynchronous.

I am referring to timing jitter at the point of D/A conversion. And I am referring to the possibility that cable differences may affect the characteristics of RF noise that may bypass (i.e., find its way **around**) the ethernet interface, buffers, etc. and **to** the circuitry that performs D/A conversion.

Regarding disconnection of the cable: putting aside the possible significance of airborne RFI, doing so would of course work in the direction of reducing noise that may be coupled from the input circuit to the point of D/A conversion. The 1,000,000,000 ns of jitter you referred to has no relevance to that.

Putting it all very basically, responses by those claiming ethernet cables won't make a difference nearly always focus just on the intended/nominal signal path. The basic point of my earlier posts is that in real world circuitry parasitic/unintended signal paths also exist (via grounds, power supplies, parasitic capacitances, the air, etc.), which may allow RF noise to bypass the intended signal path to some extent, and therefore may account for some or many of the reported differences.

Regards, -- Al
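To be clear about what is not in dispute here, the following toy Python sketch (purely schematic, with arbitrary assumed numbers; it is not any real player's code) illustrates the intended data path Jinjuku describes: a playout buffer turns bursty, asynchronous network arrivals into a fixed-cadence output, so data timing at the DAC is indeed independent of network timing. What such a simulation cannot represent, and what my posts concern, is noise that couples around that path:

```python
# Schematic playout-buffer sketch with assumed numbers (not any real
# streaming player's code). Data arrives in irregular bursts, yet once
# prebuffered, exactly one sample per tick is consumed: the intended
# data path is insensitive to network timing.
from collections import deque
import random

random.seed(0)
buf = deque()
produced = consumed = underruns = 0
playing = False

for tick in range(200_000):                # 1 tick = one sample period
    if random.random() < 0.01:             # bursty, asynchronous network
        for _ in range(150):               # ~1.5 samples/tick on average
            buf.append(produced)
            produced += 1
    if not playing and len(buf) >= 1_000:  # prebuffer before starting
        playing = True
    if playing:                            # fixed-cadence "DAC-side" drain
        if buf:
            buf.popleft()
            consumed += 1
        else:
            underruns += 1                 # would be an audible dropout

print(f"produced={produced}, consumed={consumed}, underruns={underruns}")
```

With delivery outpacing consumption the underrun count stays at zero however irregular the bursts are, so the debate is not about the data arriving intact; it is about analog side effects of the signal that carries it.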
Homes don't have these challenges. So the shielded designs don't help, but they could hurt if the shield ties endpoints to chassis and creates a ground loop. A floated shield would be fine, however, and the costs are minimal if it makes the audiophile feel better.

I agree that shielding is not needed in a home environment to assure that communications on the ethernet link are robust and reliable. However, I wouldn't rule out the possibility that it could make a difference with respect to RF noise that may be coupled **from** the cable **or** from the input circuit of the receiving device to circuit points that are downstream of the ethernet interface in the receiving device. Such as to D/A converter circuits, where timing jitter amounting to far less than one nanosecond is recognized as being audibly significant. (See the section entitled "Jitter Correlation to Audibility" near the end of this paper.)

In addition to the effects of shielding on radiated emissions, shielding would presumably also affect the bandwidth, capacitance, and other characteristics of the cable, in turn affecting signal risetimes and falltimes (the amount of time it takes for the signals in the cable to transition between their two voltage states), in turn affecting the spectral composition of RF noise that may find its way past the ethernet interface in the receiving device. Also, small differences in waveform distortion that may occur on the rising and falling edges of the signals, as a result of less than perfect impedance matches, will affect the spectral composition of that noise while not affecting communication of the data.

When someone tees up a track in Tidal on their 100Mb/s cable modem and they pull the Ethernet cable and the song still plays, what is actually happening from a cable perspective at that point?

Obviously, noise that may find its way to circuitry of the receiving device that is downstream of its ethernet interface, as a consequence of the signal it is receiving, will be eliminated. On the other hand, airborne RFI may increase, since the cable would no longer be connected to a termination that would absorb the signal energy. Which of those effects may have audible consequences, if in fact either of them does in some applications, figures (as I indicated in my previous posts) to be highly component and system dependent and to have little if any predictability.

I've personally had Ethernet cabling from $27 a foot to $233 a foot and compared it directly to 315 feet of BerkTek CAT5e. No difference.

I don't doubt your experience. However, I also don't doubt the experiences that have been reported by members such as DGarretson, Bryoncunningham, Grannyring, and others here who are similarly thorough when assessing a change.

Regards, -- Al
As a follow-up to the hypothesis I stated in my previous post, here is a quote from another post I had made in the "Most Important Unloved Cable" thread:
Almarg 3-19-2017
Those reading this thread may wish to also read a series of posts beginning around 2-16-2012 in the following thread:
https://forum.audiogon.com/discussions/shielding-components-from-emi-rfi-help-please
Member Bryoncunningham, who IMO is an especially astute and perceptive listener, and is very thorough in his evaluations, described realizing a substantial sonic improvement by changing from a garden variety unshielded ethernet cable to an **inexpensive** shielded type. I described some technical effects which may have accounted for that.

Also, this thread will be of interest:

https://forum.audiogon.com/discussions/are-my-cat5-and-router-my-weak-link

A comment Bryon made on 8-7-2012 in the latter thread:

I can confirm what Al has reported about my experiences when I replaced an unshielded Cat 5 cable with a shielded Cat 6 cable. The result was more resolution. A lot more. The $7 shielded Cat 6 cable resulted in a bigger improvement in SQ than several $1,000 power cords and several $2,000 interconnects. Yes, I know that sounds crazy. I can't explain it. I'm not saying that other systems will benefit similarly. In fact, I doubt it. But it's certainly an affordable experiment.

As I've said in a number of past threads, the existence of differences does not necessarily mean that more expensive = better results.
It should be noted, though, that Bryon's experience involved an Ethernet cable that was connected directly to one of his audio components, not to a computer that was in turn connected to the audio system.

Regards, -- Al
It seems to me that what has been largely overlooked in this discussion (with the exception of the brief post by Markalarsen) is the fact that 100% of the energy of an electrical signal, especially one that, as in the case of Ethernet, contains spectral components at very high RF frequencies, does not necessarily go only where it is supposed to go. Experienced designers of high speed digital circuits (of which I happen to be one) will recognize that.

And given that a number of members here who are highly respected and highly experienced audiophiles have reported finding that the choice of an Ethernet cable can have significant sonic consequences, I offered the following hypothesis in the "Most Important Unloved Cable" thread that David_Ten linked to in his post early in this thread:

Almarg 3-27-2017
Most likely what is happening is that differences in the characteristics of the cables, such as bandwidth, shielding, and even how the pairs of conductors that carry the differential signals are twisted, are affecting the amplitude and spectral characteristics of electrical noise and/or RFI that finds its way via unintended pathways to unintended circuit points "downstream" of the ethernet interface in the receiving device.

"Unintended circuit points" may include the D/A circuit itself, resulting in jitter, and/or analog circuit points further downstream in the component or system, where audible frequencies may be affected by noise that is at RF frequencies via effects such as intermodulation or AM demodulation.
"Unintended pathways" may include, among other possibilities, grounds within the receiving device, parasitic capacitances, coupling that may occur into AC power wiring, and the air.
What can be expected regarding such effects, however, is that they will be highly system dependent, and will not have a great deal of predictability.

Regarding the OP's specific question, though, I would expect that an Ethernet cable that is upstream of his PC would have less chance of making a difference than one that is directly connected to an audio component, where it would presumably be more likely to couple RF noise into sensitive circuit points within the audio system. Personally I don't have an Ethernet connection in my audio system, but that's my take on it.

Regards, -- Al
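Finally, to put rough numbers on one of those unintended pathways, here is a back-of-envelope in Python for the displacement current a digital edge can drive through a stray capacitance, i = C * dV/dt. Every value is a hypothetical assumption chosen for illustration, not a measurement of any actual product:

```python
# Back-of-envelope: displacement current through a stray (parasitic)
# capacitance, i = C * dV/dt. All values are hypothetical assumptions.
SWING_V = 2.0                              # assumed signal swing, volts

for c_stray in (0.1e-12, 1.0e-12):         # 0.1 pF and 1 pF stray paths
    for t_edge in (1e-9, 5e-9):            # 1 ns vs. 5 ns edge times
        i_amps = c_stray * SWING_V / t_edge
        print(f"C = {c_stray * 1e12:.1f} pF, edge = {t_edge * 1e9:.0f} ns"
              f" -> ~{i_amps * 1e6:.0f} uA coupled")
```

Note that the same fraction-of-a-picofarad path passes five times less current when the edges are five times slower, which is the mechanism by which cable characteristics could in principle matter without a single bit being in error.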