We Need To Talk About Ones And Zeroes


Several well-respected audiophiles in this forum have stated that the sound quality of hi-res streamed audio equals or betters the sound quality of traditional digital sources.

These are folks who have spent decades assembling highly desirable systems and whose listening skills are beyond reproach. I for one tend to respect their opinions.

Tidal, a company of Norwegian origin, is headquartered in New York City. Qobuz is headquartered in Paris, France. Both services are hosted on Amazon Web Services (AWS), the cloud-infrastructure giant that commands roughly one third of the world's entire cloud-services market.

AWS server farms are any audiophile's nightmare. Tens of thousands of multi-CPU servers and industrial-grade switches crammed in crowded racks, miles of ordinary cabling coursing among tens of thousands of buzzing switched-mode power supplies and noisy cooling fans. Industrial HVAC plants humming 24/7.

This, I think, demonstrates without a doubt that audio files digitally converted to packets of ones and zeroes successfully travel thousands of miles through AWS' digital sewer, only to arrive in our homes completely unscathed and ready to deliver sound quality that, by many prominent audiophiles' account, rivals or exceeds that of $5,000 CD transports. 

This also demonstrates that digital transmission protocols just work flawlessly over noise-saturated industrial-grade lines and equipment chosen for raw performance and cost-effectiveness.

This also puts in perspective the importance of improvements deployed in the home, which is to say in the last ten feet of our streamed music's multi-thousand mile journey.


No worries, I am not about to argue that a $100 streamer has to sound the same as a $30,000 one because "it's all ones and zeroes".

But it would be nice to agree on a shared-understanding baseline, because without it intelligent discourse becomes difficult. The sooner everyone gets on the same page, which is to say that our systems' digital chains process nothing less and nothing more than packets of ones and zeroes, the sooner we can move on to genuinely thought-provoking stuff like, why don't all streamers sound the same? Why do cables make a difference? Wouldn't that be more interesting?

devinplombier

How many times have you entered $50 to deposit in your bank account and it became $5,000 because the bank's server was 3,000 miles away?

We are just talking music here: not a lot of data, and not hard to deliver no matter how many miles away the disk is. When you watch (stream) an F1 race from 10,000 miles away in 4K, how many times does Hamilton's red Ferrari get displayed in blue? 4K streaming is far more demanding than streaming a 16/44 song or a hi-res song.
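For scale, the raw bit rates can be worked out directly. A quick back-of-the-envelope sketch; the 25 Mbit/s figure for 4K is an assumed typical streaming rate, not a fixed standard:

```python
def pcm_bitrate(sample_rate_hz: int, bit_depth: int, channels: int = 2) -> int:
    """Raw (uncompressed) PCM bit rate in bits per second."""
    return sample_rate_hz * bit_depth * channels

cd = pcm_bitrate(44_100, 16)        # 16/44.1 "Red Book" stereo
hires = pcm_bitrate(192_000, 24)    # 24/192 hi-res stereo
uhd_stream = 25_000_000             # assumed typical 4K video streaming rate

print(f"CD-quality : {cd / 1e6:.3f} Mbit/s")     # 1.411 Mbit/s
print(f"Hi-res     : {hires / 1e6:.3f} Mbit/s")  # 9.216 Mbit/s
print(f"4K stream  : {uhd_stream / 1e6:.1f} Mbit/s")
```

Even uncompressed 24/192 is well under half the bandwidth of a routine 4K stream, and lossless compression (FLAC) shrinks it further.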
I set up very large enterprise databases and worked with Facebook, Apple, Yahoo, Comcast, and banks. These companies have servers all over the world, on AWS, Oracle, Azure, and their own server farms, and every transaction is bit perfect, with thousands or millions of people concurrently accessing those servers. Every transaction has to be bit perfect or we would have some big issues. And this occurs over dirty Wi-Fi or a $1 Ethernet patch cable. Now take an audiophile with a $1,000 Ethernet cable going to a galvanically isolated streamer going to a $10,000 DAC: I'm not worried about a 1,500-byte packet making it to that system perfectly.
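The "bit perfect" claim is something anyone can verify: hash the file at the source and again at the destination, and the digests either match exactly or they don't. A minimal sketch, using a temporary file and a local copy to stand in for a network transfer:

```python
import hashlib
import os
import shutil
import tempfile

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large files never need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Simulate a transfer: any correct copy mechanism (network, disk, cloud)
# must preserve every bit, and the hash comparison proves it did.
src = tempfile.NamedTemporaryFile(delete=False, suffix=".flac")
src.write(os.urandom(1 << 20))  # 1 MiB of random bytes as a stand-in audio file
src.close()

dst = src.name + ".copy"
shutil.copyfile(src.name, dst)

assert sha256_of(src.name) == sha256_of(dst), "transfer was not bit perfect"
print("bit perfect:", sha256_of(dst)[:16], "...")
```

The same technique works on a real streamed download: hash the published file and the received file, and a single flipped bit anywhere in the payload produces a completely different digest.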

"The digital data rides on an analog signal that can pick up noise"

I think you're trying to explain Wi-Fi, which is a Layer 2 data-link protocol, as is Ethernet.  The layers above handle the rest: IP (Layer 3) takes care of logical addressing, routing, forwarding, and fragmentation and reassembly, while TCP (Layer 4) handles error detection, retransmission, buffering, and diagnostics. Any 'noise' accumulated at Layer 2 simply isn't recognized at the higher layers, unless it corrupts a packet, in which case a checksum error is generated, the entire packet is discarded, and the packet is retransmitted. Remember, this is happening at 100 Mbit or Gigabit speeds, hundreds or thousands of times faster than even the highest-rate audio signals. From a network-challenge perspective, even 192 kHz × 24-bit audio is small beans.
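The discard-and-retransmit loop described above can be sketched as a toy model. This is not real TCP (which uses a 16-bit ones'-complement checksum and ACK-driven retransmission); CRC32 stands in for the checksum, and the packet format is invented for illustration:

```python
import random
import zlib

def make_packet(seq: int, payload: bytes):
    # Toy frame: sequence number + CRC32 of the payload + payload.
    return (seq, zlib.crc32(payload), payload)

def corrupt(payload: bytes, p: float = 0.2) -> bytes:
    # Flip one bit with probability p, modelling Layer-2 noise on the link.
    if payload and random.random() < p:
        i = random.randrange(len(payload))
        payload = payload[:i] + bytes([payload[i] ^ 1]) + payload[i + 1:]
    return payload

def transmit(chunks):
    """Deliver every chunk intact: corrupted packets are dropped and resent."""
    received = []
    for seq, chunk in enumerate(chunks):
        while True:
            _seq, crc, payload = make_packet(seq, chunk)
            payload = corrupt(payload)          # noisy link
            if zlib.crc32(payload) == crc:      # checksum OK -> accept
                received.append(payload)
                break                           # else drop it and retransmit
    return b"".join(received)

random.seed(1)
data = bytes(range(256)) * 4
assert transmit([data[i:i + 64] for i in range(0, len(data), 64)]) == data
print("all packets delivered bit perfect despite a 20% corruption rate")
```

The receiving application never sees the corrupted packets at all; it only ever sees payloads whose checksum verified, which is why link noise cannot alter the bits handed to the player.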


I keep reading the same old stance from folks who either don't stream or stream casually alongside their fancy analog or CD players. You know who you are :-)

It’s not about the data but about the context (emotional, physical, psychological) in which it’s delivered. Think about it before you rush to call out a cult-like belief in well-designed streamers with premium parts, without understanding why they work, and work well, compared with mass-produced streamers.

I have compared streamers ranging from $500 to $25K. You don’t have to spend $25K to get great sound, but pick a streamer that is well engineered to deliver bit-perfect digital output through low-noise design, stable clocking, a robust power supply, and isolation (from Ethernet noise or power-rail interference).

So why do some streamers sound different? Because they prioritize the elements above, which ultimately affect how the bitstream is delivered to your DAC.

The message should be: don’t fall for over-priced pseudo-tech without due diligence, be it a DAC, streamer, or switch.

A complex topic when allied to digital music reproduction, made more difficult by S/PDIF.

Assuming sound engineering, the bits at the output of the receiver in your device at the end of the Cat 5 cable are precisely the same as the bits stored at the source.  If they were not, the modern world would collapse, as no transaction could be relied upon.

Setting aside the linearity of the DAC chip(s), there are then two (digital) issues that arise: what data gets presented to the DAC processor (the component(s) inside the "DAC" box that generate the analog signal), and when that data is presented.

Unlike a TCP/IP Ethernet or Wi-Fi connection, S/PDIF has error detection but no error correction, so a truly terrible digital interconnect could, I suppose, result in bit errors and data loss.
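The detection-without-correction point is easy to see with a single parity bit, which is what S/PDIF carries per subframe. A simplified sketch (a real subframe also carries preamble, auxiliary, validity, user, and channel-status bits, omitted here):

```python
def parity(bits: list) -> int:
    """Parity bit over a subframe's data bits: 1 if the count of 1s is odd."""
    return sum(bits) % 2

# A 24-bit audio sample as a list of bits, plus its parity bit.
sample = [1, 0, 1, 1, 0, 0, 1, 0] * 3
p = parity(sample)

# Single-bit error: detected (parity no longer matches) but not correctable,
# because the receiver cannot tell WHICH bit flipped, and there is no
# retransmission path in S/PDIF -- the sample is simply wrong or discarded.
corrupted = sample.copy()
corrupted[5] ^= 1
assert parity(corrupted) != p          # detected

# Two-bit error: the parity matches again, so it slips through undetected.
corrupted[9] ^= 1
assert parity(corrupted) == p          # NOT detected
print("parity detects odd-count errors only, and can never correct them")
```

This is the structural difference from the network path: upstream, a failed check triggers a retransmission; on the S/PDIF link, a failed check can at best be noted.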

My suspicion is that any unwanted artifacts that are caused by changing between decent digital cables are caused by jitter being introduced and not eliminated by processing before the data is presented to the actual DAC chip(s).
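How much jitter matters can be estimated numerically. The sketch below samples a pure tone at instants perturbed by Gaussian timing error and reports the resulting signal-to-error ratio; the 10 kHz tone and the jitter magnitudes are assumptions chosen for illustration:

```python
import math
import random

def jitter_snr_db(freq_hz: float = 10_000, fs: float = 48_000,
                  rms_jitter_s: float = 1e-9, n: int = 4096,
                  seed: int = 0) -> float:
    """Estimate the SNR from converting a sine at jittered sample instants."""
    rng = random.Random(seed)
    sig_pow = err_pow = 0.0
    for k in range(n):
        t = k / fs
        ideal = math.sin(2 * math.pi * freq_hz * t)
        actual = math.sin(2 * math.pi * freq_hz * (t + rng.gauss(0.0, rms_jitter_s)))
        sig_pow += ideal * ideal
        err_pow += (actual - ideal) ** 2
    return 10 * math.log10(sig_pow / err_pow)

# For a sine, theory gives SNR ~= -20*log10(2*pi*f*t_jitter); the simulation
# should land close to that for small jitter.
for tj in (1e-9, 10e-9, 100e-9):
    print(f"{tj * 1e9:5.0f} ns RMS jitter -> ~{jitter_snr_db(rms_jitter_s=tj):5.1f} dB SNR")
```

Each tenfold increase in jitter costs about 20 dB, which is why buffering and reclocking ahead of the DAC chip, as in the Esoteric design described below, can matter far more than the digits themselves.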

I did an experiment with different cables between my Aurender N100SC and the Esoteric K-01XD.  My baseline is the Cardas Clear USB connection, and I do have an external Stanford Research Systems rubidium clock.

The three cables that I tested were an MIT from the 90's, a Tributaries video cable (not high end, not even a specific digital interconnect), and a two-meter junk audio interconnect from who knows when (or why).

I did not listen extensively, but the MIT cable and the video cable were indistinguishable.  The junk interconnect, however, was clearly not up to the task: it sounded as if hard clipping was occurring!

The Esoteric K-series DACs have buffering and clock synchronization before the actual DAC processor, which appears to have taken care of any jitter issues.

I suspect that the junk audio cable did not have the bandwidth to support the 24-bit/48 kHz S/PDIF signal, and so there were data errors. (Each S/PDIF subframe is 32 bits, including the parity bit, so a stereo 48 kHz stream runs at 64 × 48,000 ≈ 3.1 Mbit/s, and the biphase-mark coding roughly doubles the bandwidth needed on the wire.)

BTW, I take exception to a total dismissal of switch-mode power supplies.  OK regarding wall-warts, but Benchmark, for example, has transitioned to SMPS designs to reduce noise.  It all depends on the quality of the power supply.

Once again, my apologies if I am a bit off-topic, but an interesting question makes my mind diverge onto allied topics.