Difference in sound between copper and silver digital cables?


Is there a difference in sound between copper and silver digital cables, or purely in the implementation?
pmboyd

Showing 4 responses by danip

If there is an audible difference between 2 digital cables, it simply means that one of them is not doing its "job" (it could also be your devices).
There is no "warmer sound", "more depth", "more whatever" in digital audio. All you need is for every sent bit to be received correctly; once you've reached that goal there is nothing more to achieve, even if you spend a fortune on your digital cables.

Maybe if you spend $1000 on an ethernet cable your video and audio streams will improve.
Ethernet cables are really cheap cables, and most of them manage to transmit 250 Mbit/s signals (that's nearly 100 times faster than S/PDIF) on each twisted pair without any packet loss.
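To put some rough numbers behind that "nearly 100 times" claim (assumed textbook figures, not measurements): S/PDIF carrying 16-bit/44.1 kHz stereo runs at about 2.8 Mbit/s on the wire, while 1000BASE-T pushes about 250 Mbit/s over each twisted pair.

```python
# Rough sanity check of the "nearly 100x" claim (assumed standard figures):
# S/PDIF carrying 16-bit/44.1 kHz stereo has a line rate of 64 bits per frame
# times 44,100 frames/s (biphase-mark coded), about 2.8 Mbit/s.
spdif_mbit = 64 * 44_100 / 1_000_000   # ~2.8224 Mbit/s
per_pair_mbit = 250                     # 1000BASE-T data rate per twisted pair
ratio = per_pair_mbit / spdif_mbit
print(f"S/PDIF: {spdif_mbit:.4f} Mbit/s, one gigabit pair is ~{ratio:.0f}x faster")
```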
Williewonka, the first layers of ethernet don't have a checksum, and you'd still need a shitty cable to fail at the physical or data link layer.

Statistics on my server (current uptime 27 days) : 
Iface     MTU   RX-OK RX-ERR RX-DRP RX-OVR   TX-OK TX-ERR TX-DRP TX-OVR Flg
lo     65536 2420653     0     0 0     2420653     0     0     0 LRU
eth0   1500 95001512     0     0 0     122699833     0     0     0 BOPRU
It's probably a bit messy to read here, but you can see there are no TX or RX errors at the interface level (eth0 is the physical interface; ignore the 'lo' interface, that's the loopback aka 127.0.0.1).
Anyway, no failures on 200+ million packets (RX and TX combined) on a 1000BASE-T connection with a $5 Cat5e cable (length 5m).

PS : USB has a hardware checksum, but USB audio/video devices use isochronous communication to avoid latency (so errors on those transfers are not retransmitted).
Williewonka, the TCP/IP stack is software, not hardware.

I used to be CCNA certified (never renewed because it costs money for nothing and I don't work in ICT anymore).
I am not very good at this but I will try to explain the important things (related to this topic).

Ethernet is made out of layers (check the OSI model) and only the first layers are hardware layers.

The first layer is the physical layer (the interface port and the copper).

The second is the data link layer (ethernet frames are created here and MAC addresses are used; it's the last hardware layer). While this layer has a CRC, it will NEVER resend a frame: lost is lost. The server interface stats I posted come from this layer. Typical layer 2 devices are ethernet switches.
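For anyone who wants to see what that layer-2 check looks like in principle, here's a toy sketch in Python. It uses CRC-32 like Ethernet's frame check sequence, but it's just the concept, not real networking code: the receiver recomputes the CRC and a mismatch means "drop the frame", with no resend.

```python
import zlib

# Toy illustration of the layer-2 idea: the sender appends a CRC-32 over the
# frame, the receiver recomputes it, and a mismatch means the frame is simply
# dropped. There is no retransmission at this layer.
payload = b"some ethernet payload"
fcs = zlib.crc32(payload)           # "frame check sequence" sent with the frame

received = bytearray(payload)
received[3] ^= 0x01                  # flip a single bit "in transit"
ok = zlib.crc32(bytes(received)) == fcs
print("frame accepted" if ok else "frame dropped, no resend at layer 2")
```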

Starting at layer 3 it's all software. It's here that TCP/IP packets are created and, when needed, resent. If you look at the TCP/IP statistics you will see that packets are dropped, blocked, etc. This doesn't mean there was an error in the hardware communication; if it reaches this layer "there were no errors on the cable", but packets were dropped/blocked for a different reason (unexpected or unwanted packets).

I am not 100% sure about netstat on Windows, but if I am not mistaken "netstat -e" shows the hardware stats and "netstat -s" the TCP/IP part.
There you can see that the first one has no errors (unless you are on wifi, or have a bad cable or interface) while the second one has errors, dropped packets, etc. (again, those are non-physical reasons).

Yes, I lied (partly) because I didn't want to explain the whole thing in depth. Layer 2 has a CRC, but it never resends like most people here would expect from a system using a checksum. Still, in 27 days my server never had a bad ethernet frame (every '0' and '1' arrived correctly at the next node, the switch, and not a single CRC trip).

PS : ethernet is asynchronous; there is no clock signal between nodes.


Williewonka, to keep it very simple.

The network interface sends ethernet frames, which are composed of a header, a payload and a CRC. At this stage it uses MAC addresses to communicate.

A TCP/IP packet is a software-created packet which will be the payload of an ethernet frame. TCP/IP is a little more complicated to explain, but it uses a 3-way handshake to start a connection. Sequence numbers are used in those packets so that a missing packet can be detected (the sequence number unexpectedly jumps) and a resend can be requested by its sequence number. For example, if a packet with sequence number 65 is received while there was no 64, a resend of 64 will be requested (it actually goes back to 64, which means 65 will also be resent, etc.)
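A hypothetical sketch of that receiver-side logic (made-up function, not real TCP code): the gap in the sequence numbers is what reveals the loss, and everything from the missing number onward gets asked for again.

```python
# Toy go-back-N style receiver: deliver in-order packets, and when a gap in
# the sequence numbers appears, remember where to ask the sender to go back to.
# Entirely illustrative; real TCP uses byte offsets, windows, timers, etc.
def process(expected, arriving):
    delivered, resend_from = [], None
    for seq in arriving:
        if seq == expected:
            delivered.append(seq)
            expected += 1
        elif seq > expected and resend_from is None:
            resend_from = expected   # gap detected: request a resend from here
    return delivered, resend_from

delivered, resend_from = process(63, [63, 65, 66])
print(delivered, resend_from)  # 63 was delivered; 64 is missing, so resend from 64
```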

If you send a TCP/IP packet, that packet will stay the same until it reaches its destination (rare cases aside where it doesn't, like a router using NAT), while every node will "repack" the TCP/IP packet into a new frame with the appropriate MAC addresses (a switch is not a node).
When a TCP/IP packet is lost, it almost certainly happened "far away" (for example due to lack of bandwidth).

To get back on topic: I think the major problem with S/PDIF is that each device generates its own clock, and older devices often generated a crappy clock.
Today's devices generate more consistent and accurate clocks, which means fewer jitter problems.
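For anyone curious how jitter turns into error, here's a back-of-envelope calculation (illustrative numbers, not measurements of any device): a sample taken t seconds late on a full-scale sine of frequency f is off by roughly 2*pi*f*t of full scale in the worst case, at the zero crossing.

```python
import math

# Worst-case amplitude error caused by sampling-clock jitter on a sine wave.
# The slope of sin(2*pi*f*t) peaks at 2*pi*f, so a timing error of t_j seconds
# causes at most ~2*pi*f*t_j of full-scale error. Numbers below are assumptions.
f = 10_000          # a 10 kHz tone
jitter = 1e-9       # 1 ns of clock timing error
error = 2 * math.pi * f * jitter          # fraction of full scale
error_db = 20 * math.log10(error)
print(f"worst-case error ~ {error_db:.0f} dBFS")
```

So with a modern clock in the nanosecond range, the jitter-induced error sits far below the noise floor of the recording; with the sloppier clocks of older gear it could creep much higher.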