There are a lot of great points made here on both sides of the argument, but I think, as with anything, many people are looking at this too microscopically and need to zoom out and take a macro view of what is actually happening with "streaming" on a network.
One thing no one seems to bring up in these back-and-forths is the essential architecture of a network, which consists of layers. Sometimes you'll hear these layers referred to in jargon like "stack," "full stack," or other such lingo.
If one happens to be a competent "full stack" software engineer, then the challenge of figuring out whether or not the data is good is largely a non-issue.
What audiophiles fail to realize is that vast opportunities for mishandling/misinterpretation of the "1's and 0's" can, and often do, arise at the final few stages of the process (presentation, application), where the "digital" signal information is translated into meaningful use by your device.
The fact is that not all "streamers", let alone music player software, are created equal; in fact, there are many poor ways of going about it.
From a technical standpoint, the data one streamer receives from a network is bit-for-bit identical to the data any other streamer can receive on that network.
Where the conversation gets tricky is what you are using to render the data, and how it handles the various software processes needed to decode the information (the 1's and 0's).
As an analogy, gamers spend varying amounts of dough on better graphics processors. No computer engineer would argue that the raw game data received by two different GPUs is different; you can perform a hash/checksum to verify the data is all there.
Equally, no one will argue that two different GPUs can and will produce different results when finally rendered to your monitor (not to mention the monitor has its own internal processing to handle when receiving the data).
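To make the checksum point concrete, here's a minimal sketch in Python. The payloads and device names are made up for illustration; the point is only that if two digests match, the bits that arrived are identical, so any audible difference must originate downstream.

```python
# Illustrative sketch: verifying that two devices received bit-identical
# data over the network. Payloads here are invented placeholders.
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the SHA-256 hex digest of a byte payload."""
    return hashlib.sha256(data).hexdigest()

# Simulate the same audio frame arriving at two different streamers.
payload_streamer_a = b"fLaC" + b"\x00" * 1024
payload_streamer_b = b"fLaC" + b"\x00" * 1024

# Matching digests mean the network delivered identical bits; any
# difference you hear must come from later stages (decoding, clocking,
# the DAC), not from the transfer itself.
if sha256_of(payload_streamer_a) == sha256_of(payload_streamer_b):
    print("Payloads are bit-identical")
```

This is exactly the kind of check that settles the "did the bits arrive intact" half of the debate, while saying nothing about what each device does with those bits afterward.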
What's so funny to me about "science only" audiophiles is that they don't tend to think about the actual science much.