Network Switches


david_ten
I was just doing some back of the envelope math to estimate the narrowest of PWM pulses from redbook.

(2^16) * 44 kHz ≈ 3 GHz. That's a 4 inch wavelength!

If the spectral content of the PWM wave exceeds the bandwidth over which the cable or connectors are well behaved (impedance and loss stable across that bandwidth), then VSWR- and loss-induced waveform distortion is entirely possible.

Thank you @almarg for the RF insight. That kind of thinking is entirely appropriate when trying to pin down potential sound changes due to the handling of the analog waveforms that represent 1s and 0s. Ethernet switches should have far more stable source and load connection impedances to handle the advertised bandwidths. Cable loss and bandwidth also need to be sufficient.

Normally I'd jump on an SOtM Neo or Sonore ultraRendu, but there doesn't seem to be a way to keep Dirac Live in the PC with that setup, since Windows treats Dirac like a virtual sound card. Anyone have any luck with that? Having Dirac in the PC does eliminate an A/D conversion and has always markedly improved my sound; whether those improvements surpass what an SOtM Neo network player would provide remains to be seen.

Perhaps a USB to I2S converter and a short high speed USB cable will keep me going. Sticking to a DAC with I2S may also be wise.

Brody




@brotw, thanks for the mention, and for your comments.

However, while I of course agree that spectral components at very high frequencies are present in Ethernet signals when audio data is being conveyed (certainly extending up to hundreds of MHz, and probably to a significant degree into the GHz region), I would have to disagree with your analysis.

The factor of 2^16 (or 65,536) in your analysis, which of course corresponds to the number of possible signal levels that can be defined by Redbook data, is not being used correctly. What is being conveyed for each sample (for each channel) is simply 16 bits, not 65,536.

So to be precise, given also that the 8b/10b encoding used by Ethernet increases the number of bits by 25%, your equation should be:

16 bits/sample x 44,100 samples per second x 2 channels x 1.25 = 1.764 Mbps
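As a quick sanity check, the corrected back-of-the-envelope calculation can be reproduced in a few lines (the 25% factor here is the line-coding overhead mentioned above; real Ethernet framing adds further overhead that this sketch ignores):

```python
# Back-of-the-envelope Redbook bit-rate check.
bits_per_sample = 16
sample_rate_hz = 44_100
channels = 2
line_code_overhead = 1.25  # e.g. 10 line bits carried per 8 data bits

bit_rate = bits_per_sample * sample_rate_hz * channels * line_code_overhead
print(f"{bit_rate / 1e6:.3f} Mbps")  # 1.764 Mbps
```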

The reason spectral components can be present at hundreds of MHz or more can be analyzed approximately as follows:

My understanding is that both 100 Mbps and 1000 Mbps Ethernet transmit packets of data at a clock rate of 125 MHz, which corresponds to a clock interval of 8 ns (nanoseconds). The rise times and fall times of the signal must therefore be significantly less than 8 ns; let's say 2 ns. If we assume first-order rolloff, rise and fall times of 2 ns correspond to 3 dB of rolloff at 0.35 / 2 ns = 175 MHz. First-order rolloff corresponds to 20 dB/decade, so the spectral content of those rise and fall times would only be down 20 dB at 175 MHz x 10 = 1.75 GHz!
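That estimate can be sketched numerically. This uses the common 0.35 rise-time/bandwidth rule of thumb for a first-order response; the 2 ns rise time is the assumed figure from the text, not a measured value:

```python
# Rise-time to bandwidth estimate for a first-order (single-pole) response.
rise_time_s = 2e-9                # assumed rise/fall time, well under the 8 ns clock interval
f_3db_hz = 0.35 / rise_time_s     # -3 dB point, ~175 MHz
# First-order rolloff is 20 dB/decade, so one decade above f_3dB the
# spectral content is only ~20 dB down:
f_20db_hz = f_3db_hz * 10         # ~1.75 GHz
print(f"-3 dB at {f_3db_hz / 1e6:.0f} MHz, -20 dB near {f_20db_hz / 1e9:.2f} GHz")
```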

Thanks again. Regards,

--Al


This is a data sheet from a typical Ethernet pulse transformer for 10/100/1000-T


https://product.tdk.com/info/en/catalog/datasheets/090007/trans_alt_en.pdf?ref_disty=digikey

While common-mode rejection can be 30-40 dB, the insertion loss, i.e. the loss in passing the actual signal, is quite low, 1-2 dB, even at 100 kHz. With one transformer at each end, that's 2-4 dB of attenuation at 100 kHz, which is not a lot.
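To put those dB figures in perspective, here is a small conversion to linear amplitude ratios (the 1-4 dB range is taken from the data-sheet discussion above):

```python
# Convert insertion loss in dB to the fraction of signal amplitude remaining.
def db_to_amplitude_ratio(loss_db: float) -> float:
    """Fraction of amplitude left after loss_db of loss (20*log10 convention)."""
    return 10 ** (-loss_db / 20)

for loss in (1, 2, 4):
    print(f"{loss} dB -> {db_to_amplitude_ratio(loss):.2f} of the amplitude")
```

Even the worst case here, 4 dB, still leaves well over half the signal amplitude, consistent with the "not a lot" conclusion.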


The question is, how good were the circuit designers at either end in ensuring noise didn't get onto or coupled from the Ethernet "signal"?
... losses in sending an actual signal, are quite low, 1-2 dB, even at 100 kHz.

... and I would emphasize that the losses are a good deal less than 1-2 dB at 100 MHz, as can be seen in the graphs.

The question is, how good were the circuit designers at either end in ensuring noise didn’t get onto or coupled from the Ethernet "signal"?

+1.

Thanks for providing the link.

Best regards,
-- Al

@almarg, absolutely. I got PWM mixed up with PCM. Thank goodness; otherwise we would have needed coax to make this work. 1.764 Mbps is still up there, especially when the Fourier series needed to accurately shape the square wave is considered, so that the decoder can do its job.
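The Fourier-series point can be illustrated with a quick sketch: an ideal square wave is a sum of odd harmonics, so a bit stream needs spectral content well above its fundamental to keep its edges square. The fundamental frequency below is purely illustrative, not an exact Ethernet symbol rate:

```python
import math

def square_wave_partial_sum(t: float, f0: float, n_harmonics: int) -> float:
    """Sum of the first n odd harmonics of an ideal unit square wave."""
    total = 0.0
    for k in range(n_harmonics):
        n = 2 * k + 1  # odd harmonic number: 1, 3, 5, ...
        total += math.sin(2 * math.pi * n * f0 * t) / n
    return 4 / math.pi * total

f0 = 1.764e6        # illustrative fundamental
t = 1 / (4 * f0)    # quarter period: middle of the wave's "high" half-cycle
# With only the fundamental the edges are badly rounded; adding harmonics
# drives the value at t toward the ideal flat-top value of 1.
print(square_wave_partial_sum(t, f0, 1), square_wave_partial_sum(t, f0, 50))
```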