How can different CAT5/6 cables affect sound?


While it is beyond doubt that analog cables affect sound quality, and that SPDIF, TOSlink and AES/EBU can affect SQ depending on the buffering and clocking of the DAC, I am at a loss to find an explanation for how different CAT5 cables can affect the sound.

The signals over CAT5 are transmitted using the TCP protocol.  This protocol is error-correcting: each packet contains a header with a checksum.  If the receiver computes the same checksum, it acknowledges the packet.  If no acknowledgement is received within the timeout interval, the sender resends the packet.  Packets may be received out of order, and the receiver must correctly sequence them.

Thus, unless the cable is hopeless (in which case nothing works), the receiver has an exact copy of the data sent by the sender, AND there is NO timing information associated with TCP. The receiver must then depend on its internal clock for timing.
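
To make that concrete, here is a toy sketch in Python of the checksum / acknowledge / resend / re-sequence cycle described above.  It is not real TCP, and the packet size and corruption rate are made-up numbers; the point is only that the receiver ends up with an exact copy of the data however unreliable the link, and that nothing in the process carries timing.

    import random
    import zlib

    def transmit(payload: bytes, corrupt_rate: float) -> bytes:
        """The unreliable 'cable': occasionally flips one byte in transit."""
        if payload and random.random() < corrupt_rate:
            i = random.randrange(len(payload))
            payload = payload[:i] + bytes([payload[i] ^ 0xFF]) + payload[i + 1:]
        return payload

    def send_reliably(data: bytes, packet_size: int = 4, corrupt_rate: float = 0.3) -> bytes:
        # Sender splits the data into numbered packets.
        packets = {seq: data[i:i + packet_size]
                   for seq, i in enumerate(range(0, len(data), packet_size))}
        received = {}
        while len(received) < len(packets):                 # keep resending until everything is acknowledged
            for seq, payload in packets.items():
                if seq in received:
                    continue
                header_checksum = zlib.crc32(payload)       # checksum carried in the packet header
                arrived = transmit(payload, corrupt_rate)   # may be corrupted on the way
                if zlib.crc32(arrived) == header_checksum:  # receiver acknowledges only if the checksums match
                    received[seq] = arrived                 # otherwise no acknowledgement, so it is sent again
        # Receiver puts the packets back into sequence-number order.
        return b"".join(received[seq] for seq in sorted(received))

    original = b"bit-perfect, but carrying no timing information"
    assert send_reliably(original) == original              # an exact copy, however long it took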

That is different with SPDIF: clocking data is included in the stream, which is why sources (e.g. high-end Aurenders) have very accurate, low-jitter OCXO clocks and can sound better than USB connections into DACs with less precise clocks.

Am I missing something, given that many people hear differences with different patch cords?

retiredaudioguy

Showing 3 responses by richardbrand

@retiredaudioguy 

The signals over cat5 are transmitted using the TCP protocol

They don’t have to be!  There is another transport protocol, the User Datagram Protocol (UDP), which like TCP runs on top of the Internet Protocol (IP).  UDP is often used for streaming, where it is more important to keep something flowing than to ensure accuracy.  Note that TCP cannot guarantee how long it will take to get a correct packet to its destination.  Think about that!
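
For anyone who wants to see the difference in code, here is a minimal Python sketch of UDP’s fire-and-forget behaviour using the standard socket module.  The loopback address and the two "audio frame" payloads are arbitrary placeholders; the point is that sendto() returns immediately, nothing acknowledges receipt, and nothing is ever resent.

    import socket

    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # a datagram socket = UDP
    receiver.bind(("127.0.0.1", 0))                               # let the OS pick a free port
    address = receiver.getsockname()

    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(b"audio frame 1", address)   # no connection, no acknowledgement, no retry
    sender.sendto(b"audio frame 2", address)   # if this one were lost, the sender would never know

    for _ in range(2):
        datagram, _ = receiver.recvfrom(1024)  # whatever arrives, arrives; ordering is not guaranteed either
        print(datagram)

    sender.close()
    receiver.close()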

So TCP/IP is perfect for file transfers, and is the reason that software transmitted over the internet retains exactly the same number of bugs at each end, provided you are prepared to wait!

Moving down the chain, Ethernet is a low-level protocol that by itself guarantees neither delivery nor timing.  In effect it just throws packets into the air and hopes that the right receiver catches them.

Ethernet is a development of the ALOHA radio network built for data communication between the Hawaiian islands before the advent of satellites and undersea cables.  It is an example of carrier-sense multiple access with collision detection (CSMA/CD).  Multiple access means there is no central controller; any device can blast the airwaves with data packets.  To avoid two devices obliterating each other’s packets, each device must make sure the airwaves are clear before transmitting (carrier sense).

But this alone is not enough. Two devices on distant islands can each sense that the airwaves are free, and transmit simultaneously with the result that the signal is scrambled in between. Two conditions must be satisfied to correct for this.

Firstly, after transmitting, each device must also listen to ensure the airwaves are still clear.  If not, there has definitely been a collision (collision detection) and the device must wait a randomised time before trying again.  This randomised time is initially 0 or 1 periods, but if another collision is detected, the number of possible wait periods is doubled and so on in exponential progression.

The second condition is that every message must be long enough to ensure collisions are detected even by the most separated, furthest flung islands.
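
The randomised wait is easy to sketch.  Assuming the classic 10 Mbit/s Ethernet slot time and the usual cap on the doubling (the exact 802.3 constants do not matter for the illustration), the back-off after each successive collision looks like this:

    import random

    SLOT_TIME = 51.2e-6    # seconds; the classic 10 Mbit/s Ethernet slot time

    def backoff_delay(collisions: int) -> float:
        """After the n-th collision, wait a random number of slots from 0 to 2^n - 1."""
        exponent = min(collisions, 10)                 # real Ethernet caps the doubling
        slots = random.randint(0, 2 ** exponent - 1)   # 0 or 1 slots after the first collision, then 0-3, 0-7, ...
        return slots * SLOT_TIME

    for collision_count in range(1, 6):
        print(f"after collision {collision_count}: wait {backoff_delay(collision_count) * 1e6:.1f} microseconds")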

There is no way of knowing if the intended receiver is even on-air unless a higher-level protocol like TCP/IP is used on top of Ethernet.

So do audio protocols always use TCP/IP?  A big no.

I2S, for example, was designed by Philips in 1986 to allow two chips on a board to pass 2-channel 16-bit PCM data.  It has no (that is, zilch) error detection, let alone error correction.

How about USB then?  While USB can carry TCP/IP, it has a special mode for streaming: isochronous transfers.  Remember, streaming requires a near-constant flow of packets.  So in isochronous mode, USB does not re-transmit faulty packets.

Unlike Ethernet, USB does have a central controller, which polls every connected device to see if it wants to transmit.  As I understand it, there can only be one root USB controller per bus.

Qobuz claims to use TCP/IP, but to do this with streaming content the Qobuz application must itself implement the code for acknowledging packet receipt, waiting for missing packets, and assembling the received packets back into the correct order.  Qobuz must therefore have an app installed on the last digital device in your chain to ensure accuracy.  Even then, it cannot guarantee timing across the mess of the Internet in order to avoid dropouts.
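
Very roughly, the buffering and re-ordering such an application has to do looks like the Python sketch below.  This is purely illustrative - I have no knowledge of Qobuz’s actual code - it just shows a playout buffer that hands packets to the DAC feed in sequence order and suffers a dropout when the next packet has not arrived in time.

    class PlayoutBuffer:
        def __init__(self):
            self.pending = {}        # packets that have arrived, keyed by sequence number
            self.next_seq = 0        # the packet the playback side needs next

        def packet_arrived(self, seq, samples):
            self.pending[seq] = samples          # packets may arrive in any order

        def next_samples(self):
            """Called at a steady rate by the playback clock."""
            samples = self.pending.pop(self.next_seq, None)
            if samples is None:
                return None                      # buffer under-run: an audible dropout
            self.next_seq += 1
            return samples

    buffer = PlayoutBuffer()
    buffer.packet_arrived(1, b"frame 1")         # arrives before packet 0
    buffer.packet_arrived(0, b"frame 0")
    print(buffer.next_samples(), buffer.next_samples(), buffer.next_samples())
    # b'frame 0' b'frame 1' None  <- the third call finds nothing buffered yet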

There is a properly engineered networking stack called the Open Systems Interconnection (OSI) model, which defines seven protocol layers.  The Internet, on the other hand, has grown like Topsy and only has four layers.  Most of its ’standards’ are just Requests for Comments (RFCs).

Silver disks and HDMI for me!

@panzrwagn 

They are trying to apply analog issues and parameters in the digital domain, a complete non-sequitur.

Completely agree!

TCP/IP guarantees bit perfect delivery 100% of the time. All 'streaming' done with TCP/IP is buffered multiple times

TCP/IP can only guarantee 100% bit-perfect transmission after the full transmission has completed.  Qobuz seems to implement a sort of "running" TCP/IP which is bit-perfect for the completed packets already received, but who knows what the internet will regurgitate in the future?

@cleeds 

It isn't clear what your claim is here. Qobuz uses TCP/IP - that's standard Internet Protocol. There's nothing unusual about it. It delivers bit-perfect data to your streamer.

I'll try to make my claim a bit clearer. The most important point about digital is that when done properly, extra information is encoded so that errors can be detected and corrected.  The original digital content is preserved no matter how many times it is copied or transmitted, provided the bit error rate does not exceed the maximum correction capability.  

What do I mean by done properly?  I mean that sufficient extra information is encoded to cover all likely error rates. In computer memory, where error rates are low, it is common to provide a capability to detect two-bit errors per word, and to detect and correct all single-bit errors.
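
For the curious, single-bit correction is simple enough to sketch in a few lines of Python.  The classic Hamming (7,4) code below protects four data bits with three parity bits - enough to locate and flip any single corrupted bit.  The data bits and the position of the simulated error are arbitrary choices for the example.

    def hamming74_encode(d):                 # d = list of 4 data bits
        p1 = d[0] ^ d[1] ^ d[3]              # each parity bit covers a different subset of the data
        p2 = d[0] ^ d[2] ^ d[3]
        p3 = d[1] ^ d[2] ^ d[3]
        return [p1, p2, d[0], p3, d[1], d[2], d[3]]   # codeword positions 1..7

    def hamming74_correct(c):                # c = list of 7 received bits
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]       # re-computed parity checks (the 'syndrome')
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        error_position = s1 + 2 * s2 + 4 * s3    # 0 means no single-bit error
        if error_position:
            c[error_position - 1] ^= 1           # flip the corrupted bit back
        return [c[2], c[4], c[5], c[6]]          # recover the 4 data bits

    data = [1, 0, 1, 1]
    codeword = hamming74_encode(data)
    codeword[4] ^= 1                             # simulate one bit flipped in transmission
    assert hamming74_correct(codeword) == data   # the original data survives intact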

Much higher error rates were envisaged during the design of Compact Discs, where scratches, fingerprints and other damage had to be taken into account.  The brilliant Reed-Solomon encoding scheme, cross-interleaved on CD, allows up to about 4,000 consecutive bit errors to be detected and corrected.

Many people claim to hear differences when streaming music, which are put down to differences in the digital domain.  If digital transmission is bit-perfect, differences can only be in the digital to analogue conversion, or in the analogue domain.  Admittedly, digital devices may inject analogue noise which affects the analogue domain.

Is digital always bit-perfect?  Definitely not if I2S is used - I2S does no error checking or correction whatsoever.  Ethernet does check for errors, but on its own does not retransmit or correct bad packets.  And when used in streaming mode, neither does USB.

By the way, TCP is not "standard" Internet Protocol; it is one of two widely used transport protocols that run over the Internet Protocol, the other being the User Datagram Protocol, which was designed to support streaming and does not guarantee bit-perfect delivery.

For some reason people hear internet and think TCP/IP when they could just as easily think UDP/IP.

There are higher-level Internet protocols such as the File Transfer Protocol (FTP), which was adapted to run over TCP/IP and returns either a bit-perfect file transfer or an error state.

For every packet sent, TCP requires the receiver to send a return message to acknowledge receipt of the packet and to give the number of the next packet it expects.  The sender can tell when packets go missing, and resend them.  All this can take several seconds and, if the packets are being consumed as a stream, lead to dropouts which one might expect to be audible if not totally obvious.
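
A back-of-envelope sketch shows why.  The buffer size and the retransmission delay below are assumptions picked for illustration, not measurements of any real player.

    SAMPLE_RATE = 44_100        # Hz, Red Book CD audio
    CHANNELS = 2
    BYTES_PER_SAMPLE = 2        # 16-bit PCM

    buffer_bytes = 1_000_000    # suppose the player buffers roughly 1 MB of PCM
    bytes_per_second = SAMPLE_RATE * CHANNELS * BYTES_PER_SAMPLE
    buffer_seconds = buffer_bytes / bytes_per_second              # about 5.7 seconds of music

    retransmission_delay = 8.0  # seconds spent waiting for a lost packet to be resent
    shortfall = max(0.0, retransmission_delay - buffer_seconds)
    print(f"buffer holds {buffer_seconds:.1f} s of audio")
    print(f"audible dropout of roughly {shortfall:.1f} s")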

TCP is not bit-perfect if, for example, the network goes down halfway through a stream.  It cannot possibly be.