How can different CAT5/6 cables affect sound?


While it is beyond doubt that analog cables affect sound quality, and that SPDIF, TOSLink and AES/EBU can affect SQ depending on the buffering and clocking of the DAC, I am at a loss to find an explanation for how different CAT5 cables can affect the sound.

The signals over CAT5 are transmitted using the TCP protocol. This protocol corrects errors by retransmission: each packet contains a header with a checksum. If the receiver computes the same checksum, it acknowledges the packet; if no acknowledgement is received within the timeout interval, the sender resends the packet. Packets may be received out of order, and the receiver must correctly sequence them.
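To make that concrete, here is a toy sketch in Python of the checksum / acknowledge / retransmit / re-sequence logic described above. The names are made up for illustration, and real TCP uses a 16-bit ones'-complement checksum with sliding windows, not CRC32 and stop-and-wait; the logic is the same, though.

```python
# Toy illustration of the TCP ideas above: checksum, acknowledge, retransmit
# on failure, and reassemble out-of-order packets by sequence number.
import random
import zlib

def make_packet(seq: int, payload: bytes) -> dict:
    """Header carries a sequence number and a checksum over the payload."""
    return {"seq": seq, "crc": zlib.crc32(payload), "data": payload}

def noisy_wire(packet: dict) -> dict:
    """A bad cable: occasionally flip a bit in the payload."""
    if random.random() < 0.2:
        corrupted = bytearray(packet["data"])
        corrupted[0] ^= 0x01
        return {**packet, "data": bytes(corrupted)}
    return packet

def checksum_ok(packet: dict) -> bool:
    """Receiver recomputes the checksum; only a match gets acknowledged."""
    return zlib.crc32(packet["data"]) == packet["crc"]

def transfer(chunks: list[bytes]) -> bytes:
    received = {}
    order = list(range(len(chunks)))
    random.shuffle(order)                    # packets may arrive out of order
    for seq in order:
        while True:                          # sender resends until ACKed
            pkt = noisy_wire(make_packet(seq, chunks[seq]))
            if checksum_ok(pkt):
                received[seq] = pkt["data"]  # ACK: keep the verified payload
                break
    # The receiver sequences the packets before handing data upward
    return b"".join(received[seq] for seq in sorted(received))

chunks = [b"bit-", b"perfect ", b"audio ", b"data"]
assert transfer(chunks) == b"".join(chunks)  # identical bytes, however bad the wire
```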

Thus, unless the cable is hopeless (in which case nothing works), the receiver has an exact copy of the data sent from the sender, AND there is NO timing information associated with TCP. The receiver must then depend on its internal clock for timing.

That is different with SPDIF, where clocking data is embedded in the stream. That is why sources (e.g. high-end Aurenders) have very accurate, low-jitter OCXO clocks and can sound better than USB connections into DACs with less precise clocks.

Am I missing something, given that many people hear differences with different patch cords?

retiredaudioguy

Do Ethernet cables matter, per ChatGPT:

Short Answer: Yes — but only within reason.

Ethernet cables can make a difference in high-end audio systems, but not due to digital data loss — it’s about electrical noise.

📡 Why Digital Bits Still Matter (But Aren’t the Problem)

  • Ethernet uses packet-based transmission. If a packet is corrupted, it’s re-sent — so you still get perfect data.
  • Timing (jitter) is not carried through Ethernet like in SPDIF or AES. Your streamer/DAC reclocks the signal.
  • Therefore, sound quality differences are usually not from bit errors or timing, but from noise entering sensitive gear via Ethernet shielding or ground planes.

A very logical conclusion. 

@panzrwagn 

They are trying to apply analog issues and parameters in the digital domain, a complete non sequitur.

Completely agree!

TCP/IP guarantees bit-perfect delivery 100% of the time. All 'streaming' done with TCP/IP is buffered multiple times.

TCP/IP can only guarantee 100% bit-perfect transmission after the full transmission has completed.  Qobuz seems to implement a sort of "running" TCP/IP, which is bit-perfect for the completed packets already received, but who knows what the internet will regurgitate in the future?

"I don't pretend to understand the science." Perhaps the understatement and flag banner of this century. 

Whatever differences people are hearing have nothing whatsoever to do with what happens at the transport layers. They are trying to apply analog issues and parameters in the digital domain, a complete non sequitur.

TCP/IP guarantees bit-perfect delivery 100% of the time. All 'streaming' done with TCP/IP is buffered multiple times.

Better shielding.  That stands out as the biggest improvement. Data capacity is not the issue, unless it's video.

Streamers usually use UDP, which does not have error correction, so a really bad patch cord could cause data errors.

That's a pretty broad statement, and not all streaming services work the same. Netflix, for example, is a different scenario than Qobuz or Tidal.

In the case of the audio-oriented services such as Qobuz and Tidal, they use TCP/IP and you are getting bit-perfect data delivered to your streamer's input.

richardbrand

... streaming requires a near constant stream of packets ...

Oh no, not at all, at least not when we're talking about the limited bandwidth needed for audio. On my network, my streamer will load minutes' worth of hi-res music into its buffer in a matter of seconds. That is easy to test.

OOPS.  Thanks, richardbrand.  Streamers usually use UDP, which does not have error correction, so a really bad patch cord could cause data errors.

My point is actually stronger than TCP being error-free: it is that the delivery of the buffered data to the DAC chips is totally isolated from the nature of the patch cords.  The data is stored in a RAM buffer and is fed to the DAC circuits by a clock in the DAC, so I am at a loss as to what is causing people to hear sonic differences.
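To illustrate the point, here is a minimal sketch, assuming nothing more than a FIFO buffer and two threads; the function names are invented for illustration and are not any real streamer's code:

```python
# Toy model of the isolation argument: the network side fills a RAM buffer
# in irregular bursts, while the "DAC" side drains it at a fixed rate set
# only by its own local clock.
import queue
import threading
import time

N = 2000
buffer = queue.Queue(maxsize=500)        # the streamer's RAM buffer

def network_side():
    """Arrival timing is irregular: bursts, gaps, retransmissions."""
    for sample in range(N):
        buffer.put(sample)               # blocks while the buffer is full
        if sample % 400 == 0:
            time.sleep(0.05)             # simulate a network hiccup

def dac_side():
    """Samples leave the buffer on the DAC's own clock (44.1 kHz here).
    Nothing about the cable or network timing survives this step."""
    period = 1 / 44_100
    for _ in range(N):
        buffer.get()
        time.sleep(period)               # fixed-rate local clock

threading.Thread(target=network_side, daemon=True).start()
dac_side()
print("every sample left the buffer on the DAC clock, not the network's")
```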

I just did an experiment. I started PRESTO streaming through my entry-level Bluesound device, which is wired to my LAN.  After playing the stream for a few minutes, I pulled the LAN cable from the Bluesound node.  The music continued for perhaps 20 seconds, so the streamer is buffering about 20 seconds' worth of bits.  I verified that it is the streamer doing the buffering by repeating the exercise but pulling the TOSLink instead; the music stopped almost immediately.
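For scale, a quick back-of-envelope calculation of what a 20-second buffer amounts to, assuming uncompressed PCM (the streamer may well buffer the compressed stream instead, which would be smaller):

```python
# Rough size of a 20-second audio buffer at two common PCM formats.
def buffer_bytes(seconds, rate_hz, bits, channels=2):
    return seconds * rate_hz * (bits // 8) * channels

for label, rate, bits in [("CD 16/44.1", 44_100, 16), ("hi-res 24/192", 192_000, 24)]:
    mb = buffer_bytes(20, rate, bits) / 1e6
    print(f"{label}: {mb:.1f} MB for 20 s")   # ~3.5 MB and ~23.0 MB
```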

 

@retiredaudioguy 

The signals over CAT5 are transmitted using the TCP protocol

They don’t have to be!  There is another Internet protocol called the User Datagram Protocol (UDP) which, like TCP, runs on the Internet Protocol (IP).  UDP is often used for streaming, where it is more important to keep something flowing than to ensure accuracy.  Note that TCP cannot guarantee how long it will take to get a correct packet to its destination.  Think about that!
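The difference is visible right at the socket level. A minimal sketch, assuming nothing is listening on the hypothetical port 9999; the UDP half "succeeds" no matter what, which is rather the point:

```python
# UDP is fire-and-forget; TCP will not even start without a handshake.
# Both run over IP; only TCP acknowledges and retransmits.
import socket

# UDP: no connection, no ACK, no retransmission. sendto() returns as soon
# as the datagram is handed to the network stack; delivery is nobody's problem.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"audio frame 1", ("127.0.0.1", 9999))   # gone, hopefully
udp.close()

# TCP: connect() must complete a three-way handshake before any data moves,
# and every byte is acknowledged. The price: no bound on how long it takes.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.settimeout(1.0)
try:
    tcp.connect(("127.0.0.1", 9999))    # fails unless something is listening
    tcp.sendall(b"audio frame 1")
except OSError:
    print("TCP refused to pretend: no live receiver, no transmission")
finally:
    tcp.close()
```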

So TCP/IP is perfect for file transfers, and is the reason that software transmitted over the internet retains exactly the same number of bugs at each end, provided you are prepared to wait!

Moving down the chain, Ethernet is a low-level protocol that by itself guarantees neither delivery nor timing: it makes no attempt to confirm that any data packet arrives at all.  In effect it just throws packets into the air and hopes that the right receiver catches them.

Ethernet is a development of the ALOHA radio network, built for data communication between the Hawaiian islands before the advent of satellites and undersea cables.  It is an example of Carrier-Sense Multiple Access with Collision Detection (CSMA/CD).  Multiple Access means there is no central controller; any device can blast the airwaves with data packets.  To avoid two devices obliterating each other’s packets, each device must make sure the airwaves are clear before transmitting (Carrier-Sense).

But this alone is not enough. Two devices on distant islands can each sense that the airwaves are free and transmit simultaneously, with the result that the signals are scrambled in between. Two conditions must be satisfied to correct for this.

Firstly, after transmitting, each device must also listen to ensure the airwaves are still clear.  If they are not, there has definitely been a collision (Collision Detection), and the device must wait a randomised time before trying again.  This randomised wait is initially 0 or 1 slot times, but each further collision doubles the number of possible wait slots, in exponential progression.

The second condition is that every message must be long enough to ensure collisions are detected even by the most separated, furthest-flung islands.
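For the curious, here is a sketch of that binary exponential backoff, using the classic 10 Mbit/s Ethernet numbers: a 51.2 µs slot time, which is exactly the 512-bit minimum frame length that the second condition imposes. (Real Ethernet caps the exponent at 10 and gives up after 16 attempts.)

```python
# CSMA/CD binary exponential backoff: after the n-th consecutive collision,
# wait a random number of slot times drawn from 0 .. 2**n - 1.
import random

SLOT_TIME_US = 51.2   # classic 10 Mbit/s Ethernet slot time (512 bit times)

def backoff_slots(collision_count: int) -> int:
    exponent = min(collision_count, 10)        # Ethernet caps the exponent at 10
    return random.randint(0, 2**exponent - 1)

# One possible history for a single frame that keeps colliding:
for collision in range(1, 6):
    wait = backoff_slots(collision) * SLOT_TIME_US
    print(f"collision {collision}: wait {wait:.1f} µs before retrying")
```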

There is no way of knowing if the intended receiver is even on-air unless a higher-level protocol like TCP/IP is used on top of Ethernet.

So do audio protocols always use TCP/IP?  A big no.

I2S, for example, was designed in 1986 to allow two chips on a board to pass 2-channel 16-bit PCM data.  It has no, that is zilch, error detection, let alone error correction.

How about USB then?  While USB can carry TCP/IP, it has a special mode for streaming (isochronous transfers). Remember, streaming requires a near constant stream of packets. So in streaming mode, USB does not implement re-transmission of faulty packets.

Unlike Ethernet, USB does have a central controller (the host), which polls every connected device to see if it wants to transmit.  As I understand it, there can be only one root USB controller per box.

Qobuz claims to use TCP/IP, but to do this with streaming content, the Qobuz app(lication) must itself implement the computer code for acknowledging packet receipt, waiting for missing packets, and assembling the received packets back into the correct order.  Qobuz must therefore have an app installed on the last digital device in your chain to ensure accuracy.  Even then, it cannot guarantee timing across the mess of the Internet in order to avoid dropouts.

There is a properly engineered networking stack called the Open Systems Interconnection (OSI) model, which defines seven protocol layers.  The Internet, on the other hand, has grown like Topsy and only has four layers.  Most of its ’standards’ are just Requests for Comments (RFCs).
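For reference, the usual rough correspondence between the two stacks (a sketch; people argue endlessly about the exact mapping):

```python
# The two stacks contrasted above, side by side: the Internet model's four
# layers cover roughly the same ground as OSI's seven (the top three OSI
# layers collapse into a single Application layer).
mapping = {
    "Link":        ["Physical", "Data Link"],                    # Ethernet lives here
    "Internet":    ["Network"],                                  # IP
    "Transport":   ["Transport"],                                # TCP, UDP
    "Application": ["Session", "Presentation", "Application"],   # the Qobuz app, etc.
}
for internet_layer, osi_layers in mapping.items():
    print(f"{internet_layer:12} ≈ OSI: {', '.join(osi_layers)}")
```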

Silver discs and HDMI for me!

Once, one of those I-trust-my-ears guys was going on about how he didn’t like the sound of fibre Ethernet cables:

He claimed they sounded "glassy".

Glassy as in... fibre? Fiber? Fiberglass?? You can’t make that up.

@retiredaudioguy to answer your question: no, you are not missing anything. Any components, cables, tweaks, etc. in the Ethernet chain (that is, everything upstream of the network streamer) have zero bearing on sound quality because, as you correctly stated, the TCP protocol itself is error-free.

So error-free, as a matter of fact, that your streamed music travelled thousands of miles from Qobuz or Tidal servers, through countless data farms from which twee audiophile accoutrements are conspicuously absent, yet arrived at your home thoroughly unscathed and without a single bit out of place.

It is still advisable to use SFP (fiber) for the last run of cable into the streamer, to ensure proper galvanic isolation of the audio system.

At the risk, of course, of making it sound "glassy"! 😂🤣🤣

 


I don't pretend to understand the science.  I was running an entry-level Ethernet cable from my router to my streamer (about a 12' run) and, just for fun, decided to get a very high quality and much more expensive cable.  I got a noticeable improvement in the sound: smoother and more pleasing.  To me it was worth the cost of the premium cable, so I kept it and it's still in the system.  Just my 2c

Per the usual arguments (from the above link): theory and measurements vs. subjective listening. I've tried many Ethernet cables and lengths over the years, and I hear differences with certain cables. So am I to believe the science or trust my senses? The measurement crowd will say my senses are not to be trusted, that expectation bias is clouding my perception. The usual retort is that science sometimes hasn't yet formulated the right questions to ask; in other words, it fails to devise a measurement protocol that accounts for what we hear. And then we go around and around ad nauseam.

 

In the end, on this one I go with trusting my senses.  As for sound differences, I've only experienced issues with tonal balance (highs attenuated); this probably has to do with improper or excessive shielding. The only other change has to do with resolution/transparency: I've found silver content to be important for digital cables, the higher the silver content the better. Anyone reporting changes in something like timbre is imagining things.

 

In the final analysis, both sides are free to present their arguments, and it is up to the individual to choose whom to believe.
