Do I really need an "Audio Grade Network Switch"?


I think this has been quite a controversial topic for years: what's the difference between a normal network switch and an audio-grade network switch? The price difference is certainly obvious though...
Anyway, I've done some research, and most audio reviewers say that in this "new digital streaming era" a network switch is a must for an audio system, which is understandable to me. I mean, if I wanna play TIDAL or Qobuz or Spotify, I gotta use the network to stream these services, so yeah, I get that if the network quality is good enough, it can possibly level up the music performance.

But anyhow, I'm new to this area, so I don't want to spend big bucks on my first purchase hahaha... There's a very wide range of prices though; the top one is the Ansuz Power Switch I think, the inner circuitry and design look pretty sharp, and it's surely over my budget lol

So I'm choosing between the Bonn N8 and the SW-8. Both got good reviews, and the prices seem much friendlier to me since I'm looking for an entry-level switch right now. Do any of you have insights to share?
Or should I just go for the higher-level ones?

Best,

preston8452

Showing 8 responses by astolfor

I do not believe you do, and not even special network cables, as long as they are the appropriate CAT. I have looked inside a couple of AURALiC, Mark Levinson, and Aurender units; the NICs I saw and cross-referenced are nothing but basic NICs, and that is all you need. What is the most throughput you will ever see on any of these devices, 1-2 Gbps, if that? Your packets arrive at the NIC, and the NIC sends them on to the next component up the hardware stack. What does an "audiophile switch" do differently from a regular network switch? Does it add barbeque sauce to the Ethernet frame that regular switches don't?
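If anyone wants to check this at home, here is a quick toy sketch in Python (my own example, plain loopback sockets, nothing from any vendor) showing that the transport either hands the application the exact bytes that were sent or nothing at all; there is no "better" copy of the payload for a switch to create:

```python
# Minimal sketch: TCP delivers a bit-identical payload or nothing at all.
# Any switch in the path either forwards a frame whose FCS checks out, or
# drops it and TCP retransmits; there is no "better sounding" copy of bits.
import hashlib
import socket
import threading

payload = bytes(range(256)) * 4096  # ~1 MiB of test data

def server(listener: socket.socket) -> None:
    conn, _ = listener.accept()
    with conn:
        conn.sendall(payload)

listener = socket.create_server(("127.0.0.1", 0))
threading.Thread(target=server, args=(listener,), daemon=True).start()

with socket.create_connection(listener.getsockname()) as client:
    received = b""
    while len(received) < len(payload):
        chunk = client.recv(65536)
        if not chunk:
            break
        received += chunk

# The hashes always match: the transport either delivers the exact bytes
# that were sent or the transfer fails -- nothing in between.
print(hashlib.sha256(payload).hexdigest() == hashlib.sha256(received).hexdigest())
```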

What does an audiophile network cable do differently from a regular, compliant cable? Does it add any magic sauce to the payload?

Unlike many other components in HiFi, there is an international Ethernet standard with which NICs, switches, and cables must comply to meet the different "CAT" ratings.

The only place where you might notice any difference in cables is when you are at or very near the maximum cable length, or bending the cables beyond their bend radius, where cheaper cables might have a little more dB loss.

If my memory is correct, Cat5e, Cat6, and Cat6a have a maximum run of 100 meters. Cat7 does too, but Cat7 gets advertised for its 100 Gbps speed, which only works for distances up to 15 meters; beyond that, it drops to the same 10 Gbps speed as Cat6 and Cat6a (although it still retains its superior 850 MHz bandwidth).

Over 1 or 2 hops through 1 or 2 10 Gbps switches, you will not be introducing any jitter or packet loss unless there is something wrong with a cable or some other hardware problem.
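And you do not have to take my word for it; you can measure the jitter on your own switch. A rough do-it-yourself sketch below sends paced UDP packets from one machine to another and prints the inter-arrival jitter. The port, count, and interval are arbitrary values I picked for the test, not anything from a standard:

```python
# Rough DIY jitter test between two machines on your LAN. Port, packet
# count, and interval are arbitrary test values, not from any standard.
# Usage: "python jitter.py receiver" on one box,
#        "python jitter.py sender <receiver-ip>" on the other.
import socket, statistics, sys, time

PORT, COUNT, INTERVAL = 50007, 500, 0.01

if sys.argv[1] == "sender":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for i in range(COUNT):
        sock.sendto(i.to_bytes(4, "big"), (sys.argv[2], PORT))
        time.sleep(INTERVAL)
else:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", PORT))
    sock.settimeout(5.0)  # give up if the sender stops
    arrivals = []
    try:
        for _ in range(COUNT):
            sock.recvfrom(64)
            arrivals.append(time.perf_counter())
    except socket.timeout:
        pass
    deltas = [b - a for a, b in zip(arrivals, arrivals[1:])]
    if len(deltas) >= 2:
        print(f"packets: {len(arrivals)}, "
              f"mean gap: {statistics.mean(deltas) * 1e3:.3f} ms, "
              f"jitter (stdev): {statistics.stdev(deltas) * 1e3:.3f} ms")
```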

Now, if your network math does not work out, because of oversubscription for example, then there isn't any switch or cable that can do anything about it.

I would love to hear the electrical explanation, and supporting documentation, for anything else.

Furthermore, do you think that banks and other financial institutions, where trillions of dollars are made in milliseconds, or armed forces and other critical, life-sustaining services, would not be using "special cables or switches" if they provided a performance advantage?

Yes, there are switches that can process more switching and routing functions faster than others, but that is processing more complex functions that are not required for a basic home network, unless you are telling me that you will be doing something like packet inspection on your network switches; and even then, it would not do anything to improve the "quality" of the frame or payload.

The fact is that, for the most part, barring environmental and security requirements, CAT cables are CAT cables. Either they meet the desired CAT spec or they don't.

Just my 2 cents.

Spend the money on music, speaker cables, or different tubes; but network switches and network cables? I would not.

 

I completely agree that we all listen differently, and I'm glad you improved your music by investing in a network switch.

I was not trying to convince you or suggest that you don't hear a difference. I was just trying to explain the technical reasons for those who, in addition to making decisions based on their ears, want to understand where it technically makes sense to invest.

 

@antigrunge2 are you saying that the clock in a switch makes a difference?

In a home network, the clocks in a switch will make absolutely no difference. Even if one of the switches' clocks has drifted, there is no implication for frame ordering, because synchronization happens at the physical layer by means of the Ethernet Synchronization Messaging Channel, more specifically in the SSM field. Once this is established and negotiated, the adaptive clock adjusts based on the receive buffers.

If you want more information about how timing affects networks, ITU-T G.8261 and IEEE 802.3ay are a good place to start.

In no way, shape, or form will a switch running as a switch, and not performing some higher-layer operations, touch the data.

If you are running a home network with hardware that meets the Ethernet specs, there isn't anything you can do to improve timing, especially if it is wired. In the case of wireless, you could do a number of things with the signal, but that is also regulated and specified.

Even if a switch could provide higher-precision timing, the additional timestamp information would be dropped, not even rounded, because there isn't space in the SSM field to accommodate higher precision; and the reason there is no accommodation for higher precision is that it has not been needed in any use case to date.

As @ghdprentice correctly points out, it is the streamer's job, after the NIC has forwarded the data up the stack, to fill its receive buffers and then dispatch them to the processor in a precise and timely manner so the music plays in a perfectly timed way. I have not studied the hardware stack of a streamer, but I would be hard pressed to believe that a streamer needs to be more time-accurate than direct memory transfers among cluster nodes. If streamers and DACs needed more timing accuracy than Ethernet can provide, they would be using a completely different protocol and different network adapters. Maybe InfiniBand (IB), but then, since they are receiving the data from an Ethernet network, there would be no additional timing precision.

Maybe if the streamer had an IB network adapter and was connected to the streaming server via IB, you could benefit from a more precise reference clock; and even then, it is not the network's job to time how data gets processed in the receiving hardware.

Although I know I am oversimplifying: all a network needs is a common, negotiated clock good enough to deliver packets in as orderly a manner as possible; it is the NIC's job to reassemble, if needed, the frames and packets it receives and move them up the stack.
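Since I keep mentioning receive buffers, here is a toy model of why arrival jitter stops at the buffer: each packet arrives at its ideal time plus a random network delay, but its playback slot is paced purely by the local clock once the buffer is primed. All the numbers are invented for illustration, and the model ignores long-term clock drift between sender and receiver, which real streamers absorb with deeper buffers or rate adaptation:

```python
# Toy model: per-packet network delay (jitter) vs. locally clocked playback.
import random

random.seed(0)
PERIOD_MS = 10.0     # sender paces one packet every 10 ms
MAX_DELAY_MS = 3.0   # each packet gets an independent network delay
PRIME_DEPTH = 4      # packets buffered before playback starts
N = 2000

# Arrival time = ideal send time + independent per-packet delay.
arrivals = [k * PERIOD_MS + random.uniform(0.0, MAX_DELAY_MS) for k in range(N)]

# Playback starts once PRIME_DEPTH packets are in the buffer, then every
# slot ticks on the LOCAL clock -- arrival times play no further role.
start = arrivals[PRIME_DEPTH - 1]
underruns = sum(
    1 for k, arr in enumerate(arrivals)
    if arr > start + k * PERIOD_MS  # packet k missed its playback slot
)

# With a few packets of headroom, jitter far smaller than the buffer depth
# never reaches the output; only an empty buffer (underrun) is audible.
print(f"underruns: {underruns} of {N}")  # prints 0
```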

I hope my English is good enough to explain this.

@djones51 I am honestly trying to understand your post.

Is this jitter measured on the Ethernet link? How could jitter on the link affect the sound if there is no "Ethernet jitter" to speak of once you go from the NIC to the PCIe bus/refclk?

Ethernet jitter ceases to exist at the PCIe layer; it simply cannot cross over, because of the electrical design.

I am honestly trying to understand, because once the data is "out" of the Ethernet stack in the NIC and hits the PCIe bus, the only jitter left is PCIe jitter from the different REFCLK clock architectures, and now we are talking 100 MHz for PCIe 3 and 4. PCIe 5 has a different refclk architecture, and there we are talking phase jitter over the ranges 12 kHz to 20 MHz and 10 kHz to 50 MHz.
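For anyone curious what "phase jitter over 12 kHz to 20 MHz" means as a number, this is roughly how an RMS jitter figure is derived from a phase-noise plot: integrate the single-sideband phase noise over the offset band and convert to seconds. The phase-noise points below are invented for illustration, not from any real refclk datasheet:

```python
# How "phase jitter over a band" turns into a number: integrate the
# single-sideband phase noise L(f) over the offset band and convert to
# seconds. Band edges follow the 12 kHz - 20 MHz example above; the
# phase-noise values are invented for illustration, not a real datasheet.
import math

F_CARRIER = 100e6  # 100 MHz refclk, as used by PCIe 3/4

# (offset frequency in Hz, SSB phase noise in dBc/Hz) -- hypothetical
profile = [(12e3, -115.0), (100e3, -130.0), (1e6, -140.0), (20e6, -150.0)]

# Crude trapezoidal integration of 10^(L/10) between the profile points.
noise_power = 0.0
for (f1, l1), (f2, l2) in zip(profile, profile[1:]):
    p1, p2 = 10.0 ** (l1 / 10.0), 10.0 ** (l2 / 10.0)
    noise_power += 0.5 * (p1 + p2) * (f2 - f1)

rms_phase_rad = math.sqrt(2.0 * noise_power)          # both sidebands
rms_jitter = rms_phase_rad / (2.0 * math.pi * F_CARRIER)
print(f"RMS phase jitter: {rms_jitter * 1e12:.2f} ps")  # ~1 ps here
```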

Needless to say, a NIC binds to the bus's refclk and nothing else; anything different is just electrically impossible.

If the streamer/DAC uses a protocol other than PCIe for its bus architecture, whichever clock it uses for reference would be internally linked to the component's refclk, from which all the bus timing functions are derived.

Again, I am trying to understand how jitter on the Ethernet link can affect anything in the component's bus refclk.

It is my understanding that, if anything, it is the component's refclk that would affect the music timing, and this refclk has absolutely nothing to do with the Ethernet SSM. As a matter of fact, I am not aware of any bussing protocol that is aware of any reference clock that does not participate in the bus, and I am certain that an SSM would not even be accurate enough to execute any of the Link Inspector commands on the bus.

I know this might be boring, but I want to understand, so I kindly ask if you can teach/explain the point of measuring jitter at the Ethernet layer and how it affects anything beyond the NIC/bus adapter.

Are we saying the same thing?

Thank you

 

@djones51 I see what you are trying to communicate, thank you for explaining.

Next thing we should be hearing is that an audiophile switch can fix network problems at the physical layer by means of patent-pending proprietary microcode...🤣

It would be great to have a webcast with any of these manufacturers and open questions. It will never happen, because anyone who knows the basic concepts of Ethernet, IB, PCIe or other bussing protocols, and EE, would very easily be able to show that what they are saying is at the very least technically wrong, and at worst completely impossible to do.

 

@cakyol If I am the uninformed one... maybe I am uneducated and my English is not good, but I wonder what these specs are for, then:

ITU-T Rec. G.8261

ITU-T Rec. G.8262/Y.1362

ITU-T Rec. G.8264, which defines the Ethernet Synchronization Message Channel (ESMC) protocol data unit: a background or heartbeat message that provides a continuous indication of the clock quality level

IEEE 802.x

IEEE 1588 v1, v1.1, v1.2, v2, SyncE, and the Ethernet symbol clock

IEEE 802.3ay and newer revisions

At no point did I say that Ethernet is synchronous or asynchronous, because that depends on the flavor, and it is not relevant.

Most Ethernet flavors have framing bits that both establish the start of the frame and prime the bit-timing recovery circuitry of the receivers. The only one that comes to mind right now that does not is Ethernet over RS-232, using a UART to transfer data and leveraging SLIP to transfer IP packets over a serial interface instead of an Ethernet interface, but that was many, many years ago.

Quote from the spec explanation: "Ethernet between a MCU and PHY typically uses the MII bus, which is synchronous interface. It even has two clocks, one for transmitting data, and one recovered for receiving data. The RMII combines the clocks into one which means the PHY has to have a data FIFO to tolerate clock differences between devices."

These clocks are also used in training; quote from the spec: "The operation of the maxwait_timer requires that the PHY complete the startup sequence from states PMA_Training_Init_M or PMA_Training_Init_S to PMA_Fine_Adjust in less than 2000mS to avoid link_status being changed to FAIL by the link monitor state machine".

So, does Ethernet have a clock "line"? Yes, at the PHY MII level; no, on the medium itself, where the clock is embedded in the data. Is this synchronous or asynchronous? It depends on the definition. The clock must and will be recovered at the receiving end to receive the data symbols correctly.
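To illustrate "the clock is embedded in the data": classic 10BASE-T uses Manchester coding, where every bit cell is guaranteed a mid-cell transition that the receiver recovers its timing from. Here is a toy software version of that property (real clock recovery happens in PHY silicon with PLLs, not like this):

```python
# Toy "clock embedded in the data": 10BASE-T uses Manchester coding, so
# every bit cell has a guaranteed mid-cell transition the receiver can
# recover timing from. This only illustrates the encoding property.

def manchester_encode(bits):
    # IEEE 802.3 convention: 1 -> low-to-high, 0 -> high-to-low mid-cell
    line = []
    for b in bits:
        line += [0, 1] if b else [1, 0]
    return line

def manchester_decode(line):
    bits = []
    for i in range(0, len(line), 2):
        half1, half2 = line[i], line[i + 1]
        # The mid-cell transition is what keeps the receiver's recovered
        # clock locked; a missing transition would mean a coding error.
        assert half1 != half2, "no transition: timing recovery would slip"
        bits.append(1 if (half1, half2) == (0, 1) else 0)
    return bits

data = [1, 0, 1, 1, 0, 0, 1, 0]
assert manchester_decode(manchester_encode(data)) == data
print("decoded OK; the timing reference was the data transitions themselves")
```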

Maybe we are talking about different things, and I am failing to communicate correctly?

In the end, claiming that an Ethernet switch can make any difference in how a streamer decodes data into music is just not true, because by the time the data bits get to the streamer's processor they have been through multiple buffers, and been transformed and re-clocked outside of the Ethernet domain.

I think we are all making the same point, and maybe my English is not good enough and I got too technical.

All I wanted to say was that even the little timing information provided by RMII could not be used for anything other than ensuring the RX side processes the data symbols correctly.

Thus, claiming that an "audiophile" switch can enhance how the encoder/decoder processes data is nonsense, because the only refclk reference the encoder/decoder has is the one on its bus.

Need to get ready to work, good night to the USA.