@uberwaltz, I don’t see any technical flaws in what was said in the review you linked to. But of course the measurements and listening tests the reviewer performed involved one specific DAC, which for example is a very different animal than the Lyngdorf TDAI 3400 used by @discopants with the same switch. Also, I see that the DAC (which, btw, is incorrectly referred to in several places in the review as the "Matrix i"; it is correctly referred to as the "Element i" in one place near the end) does not provide gigabit Ethernet capability. I’m not sure how many audio components with Ethernet connectivity can operate at gigabit speeds (as opposed to 10/100), but I suppose that any such components might tend to be more susceptible to coupling of the very high frequency noise (at hundreds of MHz or more) associated with the risetimes and falltimes of the Ethernet signal, which I referred to a few posts back. As @atdavid aptly said a few posts back: The question is, how good were the circuit designers at either end in ensuring noise didn’t get onto or coupled from the Ethernet "signal"? Best regards, -- Al |
... losses in sending an actual signal, are quite low, 1-2db, even at 100KHz. ... and I would emphasize that losses are even a good deal less than 1-2 dB at 100 MHz, as can be seen in the graphs. The question is, how good were the circuit designers at either end in ensuring noise didn’t get onto or coupled from the Ethernet "signal"? +1. Thanks for providing the link. Best regards, -- Al |
@brotw, thanks for the mention, and for your comments. However, while I of course agree that spectral components at very high frequencies are present in Ethernet signals when audio data is being conveyed, certainly extending up to hundreds of MHz and probably to a significant degree into the GHz region, I would have to disagree with your analysis. The factor of 2^16 (or 65,536) in your analysis, which of course corresponds to the number of possible signal levels that can be defined by Redbook data, is not being used correctly. What is being conveyed for each sample (for each channel) is simply 16 bits, not 65,536. So to be precise, given also that the 4B/5B line coding used by 100 mbps Ethernet (like the 8b/10b coding used on some gigabit links) increases the number of bits by 25%, your equation should be: 16 bits/sample x 44,100 samples per second x 2 channels x 1.25 = 1.764 Mb/s. The reason spectral components can be present at hundreds of MHz or more can be analyzed approximately as follows: My understanding is that both 100 mbps and 1000 mbps Ethernet transmit packets of data at symbol rates of 125 MHz, which corresponds to a clock interval of 8 ns (nanoseconds). The risetimes and falltimes of the signal must therefore be significantly less than 8 ns. Let’s say 2 ns. If we assume first order rolloff, risetimes and falltimes of 2 ns correspond to 3 dB of rolloff at 0.35/2 ns = 175 MHz. First order rolloff corresponds to 20 dB/decade, so the spectral content of those risetimes and falltimes would only be down 20 dB at 175 MHz x 10 = 1.75 GHz! Thanks again. Regards, --Al |
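The arithmetic in the post above can be checked in a few lines. This is a minimal sketch, assuming Redbook audio (16-bit, 44.1 kHz, stereo), a 25% line-coding overhead, and the common 0.35/t_r rule of thumb relating a 10-90% risetime to the -3 dB bandwidth of a first-order system; the variable names are illustrative, not any real API.

```python
# Back-of-the-envelope numbers from the post above.
BITS_PER_SAMPLE = 16
SAMPLE_RATE_HZ = 44_100
CHANNELS = 2
LINE_CODE_OVERHEAD = 1.25  # 4B/5B: 5 line bits per 4 data bits

line_rate_bps = BITS_PER_SAMPLE * SAMPLE_RATE_HZ * CHANNELS * LINE_CODE_OVERHEAD
print(f"Line rate for Redbook audio: {line_rate_bps / 1e6:.3f} Mb/s")  # 1.764 Mb/s

# Spectral content implied by an assumed 2 ns risetime:
RISETIME_S = 2e-9
f_3db_hz = 0.35 / RISETIME_S
print(f"-3 dB point: {f_3db_hz / 1e6:.0f} MHz")  # 175 MHz

# First-order rolloff is 20 dB/decade, so the content is only
# ~20 dB down one full decade higher in frequency:
print(f"~ -20 dB point: {f_3db_hz * 10 / 1e9:.2f} GHz")  # 1.75 GHz
```

Note how little margin there is: even a modest 2 ns edge puts appreciable energy well into the hundreds of MHz.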
Jason_k2017 11-4-2019 You must try to understand that almost everything can introduce EM/RF noise. Every single device can: the audio server, every device my audio passes through on the internet, and then my exchange, my router, and my DAC. Dozens and possibly hundreds of devices. But none of those sites think it necessary to use a magic cable or a magic switch.
I had said as follows in one of my posts dated 10-29-2019: ... the risetimes, falltimes, noise characteristics, and distortion characteristics of the signal received by the audio component from the network switch or router that is immediately upstream of that component are almost exclusively a function of the network switch or router that is immediately upstream of that component.
Regards, -- Al |
and ’for the umpteenth time’ almarg Can a digital switch, as the manufacturers and ’reviewers’ say it can, improve audio and video streams passing through it? If you don’t know just say so It depends on what is meant by "improve." It will not improve (or change) the data that is being conveyed. It may affect the manner and the degree to which the characteristics of the signal affect downstream circuitry. And consequently it may improve the overall performance of the system and the sonics heard by the listener. If that distinction is too complex for some to understand, I don't know what else to say. Regards, -- Al |
This is why I asked my original question, because I could not see how a switch working at level 2 or 3 can affect the encoded audio (or video, they also claim !) in level 7. Are you saying that they can or they can’t. It can NOT be both, and bear in mind that you keep repeating that the digital signal is not degraded by a switch
As I just said in my previous post I have never used the word "degraded" in this thread, except to say that I have not used it and that its use can be misleading. And for the umpteenth time the encoded audio is not affected in the sense of being received inaccurately. The subsequent D/A conversion circuitry and/or analog circuitry is what may be affected by differences in the characteristics of the Ethernet signal. Every single manufacturer and supplier of these special ‘audiophile’ switches and every single ‘reviewer’ claims that these switches will "improve" the digital audio being passed through them. In my previous post I suggested that you re-read my (and Atdavid’s) posts in this thread, beginning with the first of my posts dated 10-29-2019. In the very first paragraph of that post I said as follows: As someone having extensive experience in digital (and analog) design, although not for audio, it is very conceivable to me that a network switch can make a difference sonically. Not because it affects the accuracy with which 1s and 0s are received; not because it affects the timing with which those bits are received; ***and probably not because of most of the reasons that are likely to be offered in the marketing literature of makers of audiophile-oriented switches.*** [Emphasis added].
Regards, -- Al |
Why, if the digital signal is not being degraded and is therefore reaching its destination intact (which is all that matters), do I need a ’special’ switch I believe the word "degraded" is being used in different senses by different people in this discussion. I have not used that word at all in my posts because it can be ambiguous and misleading in the context of digital signal transmission. In a properly functioning Ethernet link all of the data (the 1s and 0s) will be received accurately. Yet at the same time various characteristics of the digital waveform will vary to some extent, depending on whether a switch is immediately upstream of the receiving component, and on the particular switch if one is present. I mentioned some of those characteristics in my first post in this thread, on 10-29-2019, and Atdavid elaborated further on my comments and added to them. Those characteristics may affect D/A conversion circuitry and/or analog circuitry downstream of the Ethernet interface in the component receiving that waveform. I can only suggest that you take the time to re-read all of my posts in this thread and the posts by Atdavid. After doing so, if what we have said still isn’t clear I don’t know how to say it any more clearly. Also, regarding... Why, ... do I need a ’special’ switch? Please note that I haven’t said that you or anyone else necessarily needs a "special switch." In fact my initial post in this thread referred to another thread in which two members reported that significant sonic benefits resulted from insertion of a switch costing less than $20 into the signal path, in their very high quality systems.
My point has simply been that sonic differences that have been reported by long-time highly respected members to have resulted from insertion of a switch or between different switches are technically plausible and explainable, in my opinion as an experienced designer of high speed analog, digital, and D/A converter circuits (not for audio). Regards, -- Al |
jason_k2017 11-3-2019 Why would I install something which will change the audio that the studio took so much care to create, and my DAC has taken so much care to faithfully reproduce? It just doesn’t make any sense to me. Have I misunderstood what the switch manufacturers, and some of the posters on here, have stated a switch will do to my digital audio? I thought a switch was a device that simply connected point a to point b. Understandably you may not have read all of the posts in this lengthy thread. Please see the various posts in the thread by me and by Atdavid, which have offered technically-based explanations consistent with the many experience-based anecdotal reports that have been provided in recent years by highly experienced and very highly respected long-time members (such as David_Ten, Grannyring, and the two members I referred to in my initial post in this thread, among others), to the effect that network switches can significantly affect sonics. Begin with the first of my posts in this thread dated 10-29-2019. As you’ll see, the reasons have nothing whatsoever to do with "changing the audio," and have nothing whatsoever to do with improving the accuracy with which bits are conveyed to the DAC (assuming the Ethernet link is functioning properly). They have everything to do with interactions involving ostensibly unrelated signals and circuitry, including interactions involving circuitry that is downstream of the Ethernet interface in the DAC or other receiving component. Those interactions are dependent on the spectral composition of the signal waveforms on the Ethernet link, which in turn can be presumed to vary significantly as a function of the characteristics of the particular switch and its power supply. Putting it all very simply, real-world circuits and systems do not necessarily behave in accordance with idealized conceptions of how they should behave.
I’ll add that while various "naysayers" who have posted in the thread have either completely ignored those explanations or have dismissed them as being "silly" and/or ignorant I feel safe in presuming that those members do not have extensive background performing detailed design of high speed electronic circuits comprising a mix of digital, analog, and D/A converter circuitry. If indeed they have any circuit design experience at all. Regards, -- Al |
... should I run a single short Ethernet cable from my router to the switch and then connect the various TV devices, plus my Antipodes DX server, to the switch....is it that simple?
@mitch2 What I suspect would be best is to leave the TV-related devices connected as they presently are, directly to the router, and to try (a) inserting the switch between the Antipodes and the Metrum, and then (b) inserting the switch between the router and the Antipodes. And comparing results between (a) and (b) and what you have now.
Assuming it sounds better, am I to understand the next step that would further the sonic improvement would be to purchase an "audiophile" switch ....?
Perhaps. But I don't think anyone can predict with a great deal of confidence that there would be further improvement, given the many component, cable, and system dependencies that are inherent in the explanations I and Atdavid have stated. Best regards, -- Al
|
Mitch2 10-31-2019 I would like to know why I would need a switch as discussed here, where I would use it, and what it would do for me. My knowledge of switches is basically non-existent, hence the dumb question.
@mitch2, in simple terms I would put it that a network switch of the type typically used on an Ethernet network in a home environment can be thought of as a port expander. The Ethernet ports of multiple devices can be connected to it, and it provides a path for communications between any two of them. Typically it determines the port to which to forward the data "frames" it receives based on the hardware (MAC) address of the intended destination device, which it learns by observing the traffic arriving at each of its ports. (The local IP addresses that are assigned to each device by a router operate at a higher protocol layer.) The router is itself one of the devices connected to the switch, although routers commonly include switch provisions themselves, supporting several ports.
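The "port expander" behavior described above can be sketched in a few lines: a learning switch notes which port each source MAC address arrives on, then forwards frames for known destinations out only that port, flooding to all other ports when the destination is still unknown. The class and method names here are illustrative, not any real switch firmware or API.

```python
class LearningSwitch:
    """Toy model of a layer-2 learning switch (illustrative only)."""

    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}  # source MAC address -> port it was seen on

    def handle_frame(self, in_port, src_mac, dst_mac):
        """Return the list of ports the frame is forwarded out of."""
        self.mac_table[src_mac] = in_port  # learn/refresh the source
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]  # known destination: one port
        # Unknown destination: flood everywhere except the ingress port.
        return [p for p in range(self.num_ports) if p != in_port]

sw = LearningSwitch(num_ports=5)
print(sw.handle_frame(0, "router", "dac"))   # dac not yet learned: flood
print(sw.handle_frame(1, "dac", "router"))   # router already learned: [0]
print(sw.handle_frame(0, "router", "dac"))   # dac now learned: [1]
```

Note that none of this logic touches the payload, which is why the data itself arrives intact regardless of the switch; the discussion in this thread is about the analog characteristics of the signal carrying it.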
Obviously you don’t need that port expansion functionality in the application you’ve described. But as you’ve seen, Atdavid and I have proposed explanations for why some audiophiles have reported finding that inserting a network switch into the path between their router and their audio system’s Ethernet port has been sonically beneficial.
In your case my guess, and it’s just a guess, is that since your DAC communicates with the upstream device it is connected to via I2S, chances are that inserting a network switch further upstream won’t be worthwhile. But as a very inexpensive experiment you might consider purchasing a metal-enclosed network switch, such as the Netgear GS305, and inserting it into either of the two upstream Ethernet connection paths you described. A similar predecessor of that model was reported by two members in the thread I linked to in my initial post in this thread to have provided significant sonic benefit when inserted between their router and their Bricasti DAC. Best regards, --Al
|
@atdavid, thanks for your thoughtful and knowledgeable comments on my post.
Almost exclusively, the claims are that Cat-6/7/etc. "sounds better". While that claim may not be accurate, Cat 6/7 will allow much higher signal edge speeds, which would lead to more noise injection by your proposed method.
Very true, of course. But I would expect that once the spectral components corresponding to those edge speeds reach high enough frequencies, whatever “high enough” may be in a specific case, increased amplitude of coupled “noise” would be outweighed by decreased ability of the circuitry to which it may couple to respond to those frequencies.
Even for these custom designs, they would use off the shelf ethernet drivers to ensure compatibility and they are forced into a specific impedance. I would expect most use off the shelf ethernet transformers as well.
Putting my response simply, none of this stuff is perfect :-)
Putting it less simply, I have no specific knowledge of the differences in impedance (and also bandwidth) that may exist between various off-the-shelf Ethernet drivers and transformers, e.g., what the +/- tolerances on those parameters usually are. But I would assume it likely that the +/- tolerances on impedance are wide enough to potentially affect the spectral characteristics of VSWR-related waveform distortion, with the length and impedance tolerance of the cable connecting the network switch to the audio system probably also factoring into those characteristics. And consequently the spectral characteristics of “noise” corresponding to that distortion that may couple into susceptible circuit points may vary as a function of the particular network switch, the cable, and the receiving transformer and its surrounding circuitry. With variations in the internal physical layouts of different designs conceivably also having significant consequences.
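The impedance-tolerance point above can be made concrete with the standard transmission-line formulas for reflection coefficient and VSWR. Ethernet magnetics and cabling are nominally 100 ohms differential; the +/-15% spread used below is an assumed figure for illustration, not a measured tolerance for any particular part.

```python
# Sketch: how a tolerance on termination impedance translates into
# reflections (and hence VSWR-related waveform distortion).
def reflection_coefficient(z_load, z0=100.0):
    """Gamma = (Z_L - Z0) / (Z_L + Z0) for a resistive mismatch."""
    return (z_load - z0) / (z_load + z0)

def vswr(z_load, z0=100.0):
    """VSWR = (1 + |Gamma|) / (1 - |Gamma|)."""
    gamma = abs(reflection_coefficient(z_load, z0))
    return (1 + gamma) / (1 - gamma)

for z in (85.0, 100.0, 115.0):  # nominal 100 ohms, assumed +/- 15%
    g = reflection_coefficient(z)
    print(f"Z_load = {z:5.1f} ohm: Gamma = {g:+.3f}, VSWR = {vswr(z):.2f}")
```

Even this modest assumed mismatch reflects several percent of the incident wave, and the phase and timing of the reflections depend on the cable length, which is consistent with the switch/cable/transformer dependencies suggested above.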
Outside of the high frequencies, which can get in, but are also the most likely to be filtered at some point, the subharmonics which could be in the audio band or modulated down are going to be mainly a function of the data itself.
If I understand your point correctly you are implying that coupling of data-dependent “noise” into susceptible circuitry has a greater likelihood of being audibly significant than the contributors I mentioned, namely spectral components corresponding to risetimes/falltimes, waveform distortion, and noise per se. And if so the likelihood of there being audibly significant differences between network switches is lessened (or perhaps even eliminated) since the data would be the same regardless of what switch is being used.
That’s an interesting point. In typical circumstances, though, eight-bit bytes are being communicated in a matter of just a few nanoseconds, and most or all data bits are presumably toggling much of the time. So if, as I would presume, the edge speeds of those toggles, and the susceptibility of downstream circuitry to the injected “noise” corresponding to those edge speeds, as well as waveform distortion resulting from less than perfect impedance matches, as well as noise introduced by the network switch and its power supply, are all likely to vary significantly among different systems, cables, and network switches, it’s probably anyone’s guess as to which of the four contributors we have mentioned is likely to be most significant in a given application.
The bottom line, IMO, is simply that the reported anecdotal evidence supporting the notion that network switches can affect sonics to an audibly significant degree (examples being the two cases I described in my initial post in this thread, which were provided by members for whom I have developed considerable respect over the years) does not seem to me to be beyond the bounds of technical plausibility.
In any event, welcome to the forum, and thanks again for your thoughtful and well stated inputs.
Regards, -- Al |
The packet either arrives there or it doesn’t or a resend is tried. A switch is not going to modify or enhance the data residing in the application layer of a packet.
Agreed, of course. But that has no relevance to what I have said in my previous posts. What the switch will modify are the spectral characteristics of the signal that is provided to the audio system, which may result in differing effects on ostensibly unrelated circuitry that is downstream of the system’s Ethernet interface. I don’t know how to say that any more clearly than I already have, and I’m not sure why those who contend that a network switch cannot affect sonics keep focusing only on delivery of the data. Regards, -- Al |
@jnorris2005, I’m not sure if you read my previous post before submitting your post just above. But the risetimes, falltimes, noise characteristics, and distortion characteristics of the signal received by the audio component from the network switch or router that is immediately upstream of that component are almost exclusively a function of the network switch or router that is immediately upstream of that component. The "hundreds of regular old switches" you referred to have nothing to do with those characteristics. Also, the explanation I stated has nothing to do with "mangled bits." For example, differences in risetimes and falltimes do not constitute "mangling," or lack thereof. They are just differences, that may or may not have different effects on downstream circuitry. As I said in the first paragraph of my post: ... it is very conceivable to me that a network switch can make a difference sonically. Not because it affects the accuracy with which 1s and 0s are received; not because it affects the timing with which those bits are received; and probably not because of most of the reasons that are likely to be offered in the marketing literature of makers of audiophile-oriented switches. Regards, -- Al |
As someone having extensive experience in digital (and analog) design, although not for audio, it is very conceivable to me that a network switch can make a difference sonically. Not because it affects the accuracy with which 1s and 0s are received; not because it affects the timing with which those bits are received; and probably not because of most of the reasons that are likely to be offered in the marketing literature of makers of audiophile-oriented switches. The likely reason relates to differences in waveform characteristics such as signal risetimes and falltimes (i.e., the amount of time it takes for the signal to transition from its lower voltage state to its higher voltage state and vice versa); differences in noise that may be riding on the signal; and differences in distortion of the waveform that may be present. In other words, things that affect the spectral composition of the waveform. Those differences in waveform characteristics in turn may, IMO, affect the degree to which some of the RF energy present in the signal may bypass, i.e., may find its way around, the Ethernet interface circuitry in the receiving component and affect circuitry that is further downstream. Perhaps affecting timing jitter at the point of D/A conversion, and perhaps affecting analog circuitry further downstream via effects such as intermodulation or AM demodulation. One thing that became abundantly clear to me in my experience as an electrical engineer is that signals and noise don’t necessarily just affect or entirely follow only their intended pathway. And the waveform and noise characteristics of the signal that enters a circuit can affect the degree to which RF energy present in that signal may find its way via unintended pathways to unintended circuit points "downstream" of the intended circuit.
"Unintended pathways" may include things like grounds within the receiving device, parasitic capacitances, power supply circuitry, or even radiation through the air within the component. For example, in the following thread ... https://forum.audiogon.com/discussions/bricasti-m1-dac-vs-ps-audio-direct-stream-dac?page=9 ... two members reported that inserting an inexpensive Netgear switch between their router and the ethernet interface in their audio system resulted in significant sonic improvement. One of those members, whose system is of exceptionally high calibre, was extremely skeptical initially, but ended up saying "I can’t believe it." None of this is to say, though, that a given switch will provide benefits that are consistent from system to system, or that there will necessarily be much if any correlation between the cost of a switch and the benefits it may provide. Regards, -- Al |