Point of higher priced streamer?


Hello,
Assuming I have a separate DAC, and I just want to play songs from my iPad via the AirPlay feature.
In this case, I need a streamer to receive music from my iPad -> DAC.

What’s the point of a high-priced streamer? I’m a bit surprised that some streamers are so expensive.
From my understanding, there should be no sound quality difference.
(Streaming reliability and build quality, I can see it but I do not see advantages in terms of sound quality.)

Am I missing something? If so, please share some wisdom.
sangbro

Showing 20 responses by ironlung

Most of the technical discussions about why streamers can sound different focus on issues of noise, jitter, memory buffers, and software implementation, and it is critical to recognize that existing methods for measuring equipment may not do a good job of evaluating differences in streamer sound quality.
I agree with this sentiment. I would also add that the final litmus test is (and must be) the listener, so for people to measure things without ever listening to them, and somehow extrapolate how the component will sound, is a futile approach. 

However, I will say that no audio component I've personally owned would fail to withstand the scrutiny of detailed measurements, whatever those are worth. I've listened to plenty of components which were merely ho-hum and boring, with no musical enjoyment, and upon inspecting their measured results over time, the pattern wasn't surprising: components which measure well do tend to sound better than components which do not.

That being said, just because a component measures well, does not mean it will sound good in your particular application or environment. Too many audiophiles/enthusiasts pay little attention to synergy and integration between components (particularly the room). I laugh every time I see gigantic loudspeakers in rooms entirely too small to support them. The purpose of this pursuit is to enjoy music - the gear is a means to that end. I've listened to countless high end systems and the few that actually play music always stand out - and it's usually because an industry guru spent hours fine-tuning each and every detail in a methodical, logical manner. 

As a tangent, next time someone is trying to convince you of a particular direction to take your audio system (i.e. sales person, reviewer, industry maven) I'd suggest asking that individual if they have ever spent any significant time either performing live music as a musician (surprisingly, many have) or has any experience with professional audio engineering (again, many have). In particular, an audio engineer should have some live music experience with sitting behind a mixing desk/console and running a live show (I know some recording guys who are deathly afraid of this which is why I mention it). 

The reason I mention this is that the professional audio world tends to have a deeper knowledge of the inner workings of questions like this, which the consumer and "high-end" audio world tend to obfuscate and avoid. The answers are there for those willing to put in the time and the work; if you really want to learn, consider spending the money to obtain an AES membership and delve into the plethora of white papers which can "technically" describe what is happening under the hood with all of these various processes. Unfortunately, what I've seen is that most enthusiasts are simply not interested in this level of research and would prefer to regurgitate misleading or plain wrong information. It's not surprising to me that the systems many of these folks assemble have absolutely no meaningful communication of the program material and instead present some fanciful interpretation which seems to suck the very soul out of the music, leaving an anemic shell of a presentation.

Meandering back to the point I originally wished to make - people who provide blanket statements such as "digital is digital" or "as long as it achieves this and that measured spec, it will be fine" simply put their ignorance on display to those who actually know how to assemble a proper music playback system. If they wish to continue to enjoy piss-poor music quality while spending loads of cash on nonsensical ideas, that is their prerogative, but they do a disservice to others wishing to enjoy music at a truly elevated level. I've seen it at each and every level of the HiFi industry, and I won't be surprised when many of the "me-too" brands fade over the next few years while the innovators and pole-position brands continue to press ahead and leave the others in the dust.
@arafiq Apologies accepted. In my experience, there aren't too many folks posting around on forums capable of answering the kind of questions OP posted, so I would hope my content differs wildly from the opinions of others who have tried and failed.
It seems like, for a bunch of enthusiasts, no one around here really knows what they are talking about...

Allow me to provide a hint - anyone who thinks that AirPlay is "indistinguishable" from other methods of data delivery on a network has absolutely no idea what they are talking about. You should not listen to these people, they have no insight to offer, and your enjoyment of music will suffer as a result (I'm speaking as someone who listens to music, not gear).

To shed some light on why purely rational, knowledgeable individuals are willing to part with hard-earned cash to improve their listening experience in this particular arena, I offer the following sliver of technical analysis as to what is really happening with audio and streaming.

AirPlay is flawed because it is using kernel-layer audio processing on the device one is using to play back the audio. This alone means the signal itself is suspect as it may not be bit-perfect (and usually isn't). Not to mention, sample rate conversion on the device and in the player/playback software (which itself may be considered as a "component" in your digital audio system, much like a CD transport - it's just virtual so most pay no attention to it) can dramatically affect sound quality before it is delivered over the network to the endpoint. Sure, it works (meaning, the 1's and 0's sent by the device arrive at the endpoint intact as they were sent) but most have no idea of flaws in the playback software (application layer) they are using, let alone the fact that the device uses kernel audio post-software (so there are two competing processes at work).
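As a toy illustration of the sample-rate-conversion point above, here is a deliberately naive linear-interpolation resampler in Python. This is not Apple's actual SRC algorithm (which is far more sophisticated); it only demonstrates that resampling to another rate and back does not reproduce the original samples, i.e. the chain is no longer bit-perfect.

```python
import math

def resample_linear(samples, src_rate, dst_rate):
    """Resample a list of float samples by naive linear interpolation."""
    out_len = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(out_len):
        pos = i * src_rate / dst_rate          # fractional position in source
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

# 10 ms of a 1 kHz sine at 44.1 kHz, round-tripped through 48 kHz.
original = [math.sin(2 * math.pi * 1000 * n / 44100) for n in range(441)]
up = resample_linear(original, 44100, 48000)
back = resample_linear(up, 48000, 44100)

# The round trip does not reproduce the original samples bit-for-bit.
changed = sum(1 for a, b in zip(original, back) if a != b)
print(f"{changed} of {len(back)} samples differ after the round trip")
```

A proper windowed-sinc resampler would do far better than linear interpolation, but the round trip still would not be bit-identical in the general case - which is the whole argument for bypassing on-device SRC.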

This is why so many "audiophiles" use bit-perfect programs such as Audirvana on their Mac or PC (I also remember Amarra, and personally used programs such as this one when I was using a Windows machine to play digital music: https://sourceforge.net/p/playpcmwin/wiki/PlayPcmWinEn/)

The answer for the OP should be - please don't play music from your iPad at all. Use your iPad as a control point, not a playback device, and buy a computer or device that provides bit-perfect output to your DAC (or better yet, has a high-quality DAC built in). Those suggesting a Raspberry Pi are on the right track, but much of their reasoning is flawed (digital is digital).

As an analogy, very few in the digital video or imaging world would have any problem acknowledging the dramatic effects software processes (embedded or otherwise) can have on the final image results. The idea that audio is somehow different is simply ludicrous.

It’s called expectation bias. I saw a good friend show himself up in front of a room full of guys comparing his £6000 mains cable vs a £3.50 kettle lead.
The lad doing the switching had double-bluffed. My mate was claiming "night & day, you’re all deaf". Embarrassing
I've experienced utter dismay on the part of cable salesmen when I've proven to them their product doesn't deliver. I've also been pleasantly surprised by actual improvements with very expensive, high end cables, but those experiences are rare and only happen when the entire system is thought out. I've also experienced expensive cables work well with certain components while sounding like total trash with others. Some products simply sound better with the manufacturer supplied leads - they engineered it that way after all. 

It's possible your friend has heard such an improvement with this particular cable in his own system that he never bothered to consider it may not always provide such an improvement when used in a different scenario - and I wouldn't be surprised if it could very well sound/perform worse.
 
Too many high end power cables have issues connecting properly to mains supplies (I'm speaking of a proper grip between the connectors and the outlet itself); it's pointless to upgrade the cable if you don't also upgrade the mains outlet to a receptacle that can support the type of "hospital grade" connectors many of those cables feature. 

At the end of the day the guys who typically spend a lot on cables have either already maxed their respective budget with the rest of their system components and are having a bit of fun, are trying to compensate for a flaw present within the system because it wasn't set up properly to begin with, or have had an experience with someone who can properly demonstrate, implement, and prescribe the correct cable pairing for the application. The last is quite possibly a unicorn in this hobby!
@ironlung I for one am enjoying your posts and learning quite a bit. Thank you.
Great to hear. There's just so much confusion about all of this stuff I finally decided to try to do my part to elucidate some of the nuances involved.

^^ Are we seeing yet another reincarnation of the famous poster who keeps popping up like hydra's heads?
 

The 14 posts I've written thus far are the only posts I have ever submitted to Audiogon forums, period. 
You must run USB to get the best out of a DAC. Only USB offers asynchronous data transfer, which cuts down on transmitted jitter.
Actually this is not entirely true. While most do, some DACs use isochronous rather than asynchronous USB transfer; also, not all DACs sound better over a USB connection - it depends on how the grounding and power rails for the USB interface have been implemented relative to the rest of the DAC board, not to mention variations in the source of the USB stream.

There are plenty of DACs with SPDIF/BNC/AES inputs, which when matched with a properly clocked (low-jitter) SPDIF output, will perform better with the same files compared to a USB delivery. It really depends on the DAC and application.
Listening to local music files on a USB memory stick attached directly to the Pro-ject streamer sounds smoother, clearer and with better musical flow than the same track via Qobuz.
If the files are actually the same, then this is likely due to the chipset implementation being used on the ethernet/LAN side, particularly in the final processing stages before the packets are delivered to the memory buffer.

However it's also possible that the provenance of the files on your USB flash media are closer to the original (or just better sounding) master. It may not have anything to do with the chipset implementation at all. 
Could be none of the above. Unless the two were compared with a controlled blind test it doesn't tell us anything. To me Qobuz sounds better than a flash drive connected to the streamer, but again, unless I did this comparison blind it tells us nothing to further our knowledge.
Please explain exactly what you mean by "controlled blind test" and how one would be able to adequately make such a comparison using a file streamed from Qobuz, versus a file on a USB flash memory.

Unless you can prove the files are identical using a checksum hash, such a comparison is actually impossible. I don't think you've thought this through much.
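For what it's worth, establishing bit-identity between two local files is trivial; the hard part is that you can't readily hash a Qobuz stream. A minimal sketch (the filenames in the commented-out line are hypothetical placeholders):

```python
# Proving two audio files are (or aren't) bit-identical via SHA-256.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage:
# print(sha256_of("qobuz_download.flac") == sha256_of("usb_stick_copy.flac"))
```

Identical digests mean the files are bit-for-bit identical; differing digests mean any listening comparison is also comparing masters, not just delivery paths.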

@vinylshadow
But there aren’t many "inexpensive" options where I can use my phone as the controller.

Actually, there are quite a few. If you do a little research you'll find there are a lot of apps for Android or iPhone which offer the ability to use a phone or tablet to act as a control point for software residing on a Windows PC, Mac, or Linux machine. For example, there are a number of different UPnP servers/media software (many are free and/or cheap) which you can access with programs like Bubble DS, Plug Player (not sure if that is still around), etc. to control a simple web-browser based media player.

Heck, I used to run an old Windows PC with Foobar 2000 and an iPhone App called MonkeyMote: https://www.monkeymote.com/home

They have Android versions of the App and you can use a number of player software (JRiver, foobar, AIMP, Winamp). 

You don't need anything particularly powerful for this type of application; a pretty basic PC will work just fine, and with the right player software it will likely sound quite a bit better through your USB>SPDIF converter.

I'd guess one could put something like this together for less than $200 if being savvy and patient to find the right stuff. 

To top it off you can then also install whatever streaming service's app you happen to like (or want to try) - like Spotify, Tidal, or Qobuz - and control the app on the PC from the service's phone app as well (most seem to be doing this nowadays).

To the thoughtful posts on power cables, it is incredible how expensive some of them are. I respect Shunyata but can a $6000 Omega mains sound $4000 better than a $2000 Vertere Acoustic HB mains?
When you put it in those terms, not really. But if $4,000 is a drop in the bucket for you, and you have the inclination to spend a lot of money on HiFi, I'm sure there are plenty of people who can convince others that the difference is worth the extra $4K.

The issue I find with the expensive cables is that only some of them offer synergy and improvement with certain equipment while sometimes actually detracting from the performance of certain other gear. 

And to make things more complicated, if one is considering a $6,000 Shunyata mains lead, one is doing oneself a disservice by not evaluating Transparent, Nordost, AQ, Cardas, Kubala-Sosna, Chord, Furutech, MIT, or any number of other high quality brands I've failed to include on that list - and evaluating those next to their supplied leads or OEM offerings from quality suppliers like Volex.
I was not expecting the AirPlay2 to sound so close sonically to the microRendu.
@yyzsantabarbara 

If you are using USB from a microRendu into a USB DAC, the DAC is treating the data differently than through a network connection.

The technical reasons for this will make most people's eyes bleed but in a nutshell the protocol(s) being used to deliver the stream data are quite a bit different under the hood.

AirPlay2 and Roon endpoints are essentially "capturing" real-time playback occurring on an auxiliary device over the network. 

In this scenario what you are comparing is the Matrix DAC's ability to "capture" an AirPlay2 stream from the network and deliver it directly to the DAC circuit, to the Sonore's ability to "capture" the same stream (assuming from AirPlay2, or Roon, it wasn't exactly clear) from the network and convert it to a USB digital stream for use by the USB input on the DAC.

Looking at it in this light you can see there is an additional process in the signal path using the Sonore microRendu which is unnecessary if Matrix audio have done their job correctly on the network side. It's likely why you are surprised at the similar fidelity.
I am not sure if WiFi is also a contender for this. Other than Auralic, no one else is pushing WiFi.
WiFi should only be used when absolutely necessary. I know there are Auralic guys who claim the WiFi input on their streamers is the better connection to make to the network, but I think if that's the case then the ethernet implementation must be poor.

WiFi is totally fine to use for the control point but in terms of connection between the server (Roon Core) and the client (Roon RAAT/Ready device) the connection should be hardwired with sufficient detail paid to the rest of the network to minimize possible conflicts/bottlenecks.

There may be people with minimal networks and a robust enough WiFi connection that Roon will work over a wireless connection to an endpoint, but my guess is most people using Roon are hardwired ethernet. 
This is audio, not rocket science. Even 24/192 is minimal data, and any number of bit-perfect options exist. Wireless networks are more than sufficient to carry audio error-free with retry. Again, not rocket science. Data rates are way faster than needed to recover lost packets without breaks.


Your posts are conjecture, no more.

Respectfully, it seems to me that your post is coming from an arena of conjecture and lack of comprehension.

The question I was replying to was with specific regard to Roon. Roon uses an architecture that relies heavily on a robust network and anyone investing $500 in Roon software to manage their music library should consider the small investment to get the necessary infrastructure to hardwire their endpoint and Roon Core to the same network switch.

While from a pure data perspective you are not wrong, you are ignoring a bunch of factors, including the delivery protocols which are used to stream audio on the network in the first place, not to mention the inherent challenges of WiFi.

If a server (Roon Core in this case) is delivering real-time audio data over the network, it is using some method of UDP encapsulation (which they term RAAT) to send the audio to the endpoint. UDP can and does experience dropouts if sufficient attention is not paid to how the network is configured as well as what other devices may also be using the same datagram protocols on the network.

The above paragraph applies to a wired network. Wireless adds the complexity of an air-based physical medium (instead of a copper or fiber connector between the networked devices) which is prone to interference, signal loss, noise, improperly set timing thresholds, improperly configured channel bands, additional latency problems, etc.

Further, depending on the chipset in the device(s), WiFi can be limited to half-duplex transmission, meaning it can require twice the network and processing resources to perform checksums of the packet data. This is less of an issue with 5GHz 802.11ac bands and MIMO-capable WiFi access points these days, but unless you know what chipset your audio/streamer manufacturer is using for their WiFi implementation, it's certainly possible to still run into this limitation.
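To put the raw numbers in perspective (and to concede the pure-bandwidth point), here is the back-of-envelope arithmetic for uncompressed stereo PCM:

```python
# Back-of-envelope: raw bitrate of uncompressed stereo PCM at common
# sample rates and bit depths, in Mbit/s.
def pcm_bitrate_mbps(sample_rate, bit_depth, channels):
    return sample_rate * bit_depth * channels / 1e6

for rate, depth in [(44100, 16), (96000, 24), (192000, 24)]:
    print(f"{rate/1000:g} kHz / {depth}-bit stereo: "
          f"{pcm_bitrate_mbps(rate, depth, 2):.2f} Mbit/s")

# 24/192 stereo is ~9.2 Mbit/s -- small next to a link's nominal rate,
# but contention, half-duplex airtime, and retransmissions on a busy
# network eat into what is actually deliverable.
```

The raw bitrate is indeed small; the trouble on WiFi comes from shared airtime, interference, and retransmission, not the steady-state rate.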

For someone with a single Roon (or other network endpoint) with an off-the shelf Asus/Netgear/TP Link router, WiFi should be fine for the endpoint, but the Core should still be hardwired.

Things get complicated really quick as soon as you add more wireless devices to the network, especially if they are competing for the same resources on the network.

Also, WiFi chipsets inside network streamers can add unwanted noise, depending on who designed the thing.

I had to laugh at your comment because I've fixed so many problems for people by switching their audio streamer/endpoint from a consumer-grade WiFi connection to a hardwired connection, precisely because they were experiencing dropouts - and ended up with many satisfied listeners. It's simply less hassle to hardwire when you can.

WiFi should be used for tablets, laptops, phones, etc. Permanently located, non-mobile devices with an ethernet connection should be hardwired whenever possible. It's best practice type stuff.
On Shunyata, you make it sound like they only make $6,000 power cords. Which is far from reality. In fact, their current lineup, Venom NR v12 is an excellent power cord, at only $398. Obviously less, if you know the right people. Give it try, you will see (hear) for yourself.
I am fully aware that they make "entry-level" products. I don't have any issue with their products and think they are quite good. All of the other brands I mentioned also make lower priced products in the same category. My intent was to suggest that folks who do want to spend $6k on a power cord should take a listen to a few different brands first before making a conclusion for their application.
Streamer does one thing, instruct DAC how to create the audio.
@pc997 Yes, and if you bothered to read through my technical explanation about how different streamers can and do provide DACs with varying "instructions" dependent on multiple factors, you'd be able to comprehend how the streamer itself is just as important as the DAC.

But then I see why you can't hear the difference - your amplifier is applying a filtered buffer in its gain stage from the upstream components. Your system sounds like your amplifier designer intended it to.



You are distorting reality to suit your desired outcome. IF there were packet sized dropouts in audio due to UDP/WiFi then you absolutely would hear clicks and pops and breaks. If that happens you know you have a problem. It’s not a matter of well maybe my WiFi does not sound as good. It works or it does not.
Actually I'm not distorting reality at all; you are, by completely failing to address anything I've stated in any comprehensive way. Again, my entire point was that if @yyzsantabarbara was considering WiFi for a Roon endpoint because of purported SQ differences (not my claim, but one made by Auralic owners and other Roon endpoint owners with WiFi connections, as indicated by @yyzsantabarbara), not to mention a supposedly "better" connection to the LAN, and he is already using a hardwired connection - he should skip WiFi and stick with his hardwired connection precisely to avoid what you describe above: packet loss, audio dropouts, pausing of the stream by the renderer/endpoint, etc., all of which can and does happen to all sorts of different products depending on the application and environment.

My comment comes from years of experience with this stuff and so if you want to continue arguing about minutiae while ignoring the actual content of my post then I'm happy to continue providing you with an education.

Noise in chipsets is just noise in this discussion.

Again, your comments reveal a lack of comprehension with respect to how wireless radio-wave transmissions occur and their potential to cause unwanted noise or interference in other devices.

I'm sure there are plenty of people who have experienced the phenomenon of mobile phone tower interference causing bursts of audible noise through poorly designed audio equipment. And most of the devices I've experienced this with didn't even have an antenna of any kind.

From a pure data standpoint, I won't claim that the data arriving at the WiFi chipset is somehow changed or affected by the noise, but the noise generated by a WiFi chipset and its antenna can certainly affect other devices within the component itself, or other components in the overall system. 

If you had a dropout you would have clicks and pops, just like you have stuttering on video streaming or block artifacts from incomplete data. It is very obvious when data is lost.

Thank you for yet again confirming my point. This is precisely my reason for recommending not using WiFi if you can avoid it, and use a hardwired connection if it is available. I have had plenty of experience with end users of a myriad of different wireless streaming technologies and this is exactly what would happen to them and why they would be looking for help.

I laugh at your comment ironlung, because you telegraphed the thinness of your knowledge and position when you said you switched out WiFi for wired due to DROPOUTS. Not a perceived loss of quality but DROPOUTS.
Who is laughing now? I think it's plain that it's your knowledge which is quite thin and your thought process and analysis of my comments needs to be checked.

It's apparent you have very little, if any, real-world experience with this stuff. I simply cannot comprehend why an individual would continue to insist on countering my advice to @yyzsantabarbara with respect to this topic. 

It's like you want him to increase the complexity and undermine the integrity of his current configuration just to prove a point. It's foolish.
For both, outputting at the same volume I could not hear any difference at all. This even though the external laptop is upsampling to 192K while the direct connection from the 2008 Macbook Pro via its optical audio is not.
That's not surprising, since the receiving interface in your DAC (Toslink, which is SPDIF) is treating the incoming data stream the same. Sample rate may have no effect if the internal SRC on the DAC itself overrides what it is receiving from the interface. I'd imagine Schiit go to great lengths to reduce jitter (word clock timing variations) on the SPDIF/Toslink interface, which is why there's no perceptible difference.

If you compared the Toslink to the USB interface though, my guess is that with the same source (MacBook Pro and Audirvana) you might hear some different results.
I think the comment that ROON RAAT is using UDP is rather important here. Back in the day when I was programming Tibco RV messaging we used UDP for broadcast messages in one part of the system and I recall it was unreliable. It did not have to be reliable in that instance because it was just for a debug dashboard and the next broadcast would update accordingly. So packets could be dropped and I was OK with that.
Almost everything people "stream" uses UDP, unless the file is being transferred to an internal memory buffer using TCP/IP or SCTP (100% CRC error-checked).

While I hesitate to rely on them as a reliable source for all things, I find this comment on Wikipedia regarding UDP to be of interest:


"It has no handshaking dialogues, and thus exposes the user's program to any unreliability of the underlying network; there is no guarantee of delivery, ordering, or duplicate protection. If error-correction facilities are needed at the network interface level, an application may use Transmission Control Protocol (TCP) or Stream Control Transmission Protocol (SCTP) which are designed for this purpose.

UDP is suitable for purposes where error checking and correction are either not necessary or are performed in the application; UDP avoids the overhead of such processing in the protocol stack. Time-sensitive applications often use UDP because dropping packets is preferable to waiting for packets delayed due to retransmission, which may not be an option in a real-time system.[1]"
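The "no handshaking" part is easy to see in code. Here is a minimal Python sketch of plain UDP on the loopback interface - this is not Roon's RAAT implementation, just the fire-and-forget datagram behavior it rides on:

```python
# Minimal sketch of UDP's "fire and forget" nature: no handshake, no ACK,
# no retransmission. Plain UDP on loopback -- not RAAT itself.
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # OS picks a free port
receiver.settimeout(5)
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sendto() returns as soon as the datagram is handed to the stack --
# the sender never learns whether it arrived.
sender.sendto(b"audio-frame-0001", addr)

data, _ = receiver.recvfrom(2048)
print(data)                               # arrived this time; no guarantee
sender.close()
receiver.close()
```

On loopback this datagram will virtually always arrive; on a congested wireless link, a lost datagram is simply gone - the sender never finds out, and only an application-layer mechanism (buffering, its own retransmission scheme) can compensate.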

To be clear, my contention isn't that UDP itself somehow affects sound quality. I brought it into the conversation for reliability reasons - that is, to show how important the network architecture is.

For example someone wishing to use Roon to run a multiroom distributed audio system on a network where kids are going to be playing online video games and the TV is on in the kitchen streaming cooking shows, should consider paying close attention to how the network is implemented to avoid problems.

An analogy an industry friend has sometimes used is that your network is like a freeway with lanes. UDP would be similar to a lane on the freeway, but designed only for a particular kind of car. What happens when you flood the lane with a bunch of that particular car? Crash!




Okay, I'll take another stab at helping some of the folks around here comprehend what is going on under the hood with digital audio.

First off, the guys who are in the camp of "no perceivable difference" when it comes to streaming seem to have huge gaps in their thought process.

And I've run into many people over the years who work for various manufacturers and think along the same lines. They are engineers. I've even personally witnessed the CEO of a very well known high-end audio music server company tell an individual who had no interest in audio that he didn't understand why people spent so much money on his reference product. 

As a side note, when it comes to digital audio on the software side (software engineers), I know of many who work for these audio streaming companies who have very little experience with high-performance audio systems in real-world situations. Depending on the brand, they may not even be involved in listening to anything as part of their process, and leave that up to the guys further down (or is it up?) the chain. I've sat in a room full of DTS engineers, for example, and none of them had any experience with reference-level systems in domestic, real-world environments. I've had audio recording engineers who produce award-winning music tell me point blank they've never run a live event and wouldn't feel comfortable doing so. My point is, there are many people who work within the industry (let alone consume within it) simply parroting information gathered from colleagues, without having actually spent enough real time with audio systems to be able to make the claims they do.

Anyway, back to the gaps, and the topic at hand -

A network streamer is a computer. This computer can be a typical PC/Mac or what have you, an integrated chipset as a part of a DAC board, a Raspberry Pi, etc.

The computer requires an operating system to run. This can be a software suite like Mac OS or Windows, or a headless OS like Linux. It could be a customized Linux software designed specifically for music (such as Roon), or an off the shelf distribution like Ubuntu or Archlinux.

Next you have the audio playback software. If it's a PC or Mac, it's any number of programs one can download and install. If it's Roon, it's Roon. If it's MPD on a headless Linux box, it's MPD. If it's Sonos, it's whatever Sonos is using for this process (likely MPD or something similar). If it's Pro Tools, it's Pro Tools.

At this stage any number of processes can be applied, such as bypassing the kernel audio in the case of a traditional OS. This is also where the metadata is gathered into usable information by the application. 

The digital audio information can also be treated in a number of different ways by the player software before being sent out as packet data over whatever interface one is using: upsampling/oversampling, DEQ, compression, etc.

This is where things get tricky. At this point the player software needs to send the audio data downstream to the DAC. It needs to do this over an interface. The interface can be serial packet data sent over USB, Firewire, Ethernet, or other proprietary serial links. The interface can also be SPDIF stream data, or I2S linked data.

How the player software delivers the data to the interface, and what interface is being used by the streamer to the DAC, can dramatically affect performance. Please consider the following. 

In order to lock onto the data stream, the DAC needs to properly recover the clock signal (word length and sampling rate) embedded in the data. The DAC can clock the data to itself using an asynchronous transfer method in which the DAC clock is the master clock (this is how most USB and Firewire interfaces function). If using SPDIF, the clock in the DAC needs to use a phase-locked loop to synchronize itself with the frequency of the sending clock. If the DAC is well designed for use with SPDIF it will typically have independent PLL circuits for each sampling rate. If using I2S, a master clock is shared between the DAC and the sender. 
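To give a feel for why clock recovery matters, there is a standard first-order approximation for the SNR ceiling imposed by random clock jitter on a full-scale sine wave: SNR(dB) = -20·log10(2π·f·t_j), where f is the signal frequency and t_j the RMS jitter. A quick sketch (real DAC behavior is messier than this simple model):

```python
# Rough illustration of why clock jitter matters: for a full-scale sine
# at frequency f, RMS jitter t_j limits the achievable SNR to roughly
#   SNR(dB) = -20 * log10(2 * pi * f * t_j)
# (a standard first-order approximation, not a full DAC model).
import math

def jitter_limited_snr_db(freq_hz, jitter_rms_s):
    return -20 * math.log10(2 * math.pi * freq_hz * jitter_rms_s)

for jitter in (1e-9, 100e-12, 10e-12):            # 1 ns, 100 ps, 10 ps RMS
    snr = jitter_limited_snr_db(10_000, jitter)   # 10 kHz test tone
    print(f"{jitter*1e12:>6.0f} ps jitter -> ~{snr:.0f} dB SNR ceiling")
```

So nanosecond-level jitter caps a 10 kHz tone at roughly 84 dB - below 16-bit resolution - while tens of picoseconds are needed to stay out of the way of a modern converter.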

In the case of USB asynchronous transfer, there is potential for data loss within the transmission, which can result in intersample overs or just plain dropouts/scratchiness in the signal. I've experienced this with so many different varieties of USB DACs over the years that I'm surprised it is still such a popular interface for audio enthusiasts.

Firewire might still be used in studios, and at the time it was a better link for professionals in most applications, but it's mostly irrelevant now so I'm not going to spend much time on it.

In the case of Ethernet transmission on a typical local area network, the streamer can send the real-time audio data out over the network using a number of different methods. As far as I am aware, nearly all of these methods are UDP-based protocols - AES67, Dante, Ravenna, AirPlay, Roon, Songcast, etc. 
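What these UDP-based protocols have in common is that each datagram carries a sequence number and timestamp so the receiver can detect loss and reorder, but nothing is retransmitted the way it would be over TCP. Here's a minimal RTP-style packetization sketch; the header layout and chunk sizes are simplified assumptions, not any one protocol's actual wire format.

```python
import struct

# RTP-style packetization sketch: sequence number + sample timestamp per
# datagram, so a receiver can spot gaps. No retransmission, unlike TCP.
HEADER = struct.Struct("!HI")  # 16-bit sequence, 32-bit sample timestamp

def packetize(pcm_bytes, chunk=288, samples_per_chunk=72):
    """Split raw PCM into header-prefixed datagrams (illustrative sizes)."""
    packets = []
    for seq, off in enumerate(range(0, len(pcm_bytes), chunk)):
        ts = seq * samples_per_chunk
        packets.append(HEADER.pack(seq & 0xFFFF, ts) + pcm_bytes[off:off + chunk])
    return packets

pkts = packetize(bytes(288 * 4))
seq, ts = HEADER.unpack(pkts[1][:HEADER.size])
print(seq, ts)  # second packet: sequence 1, timestamp 72 samples
```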

Alternatively, if the streamer (computer and player software) is embedded on the device containing the receiving DAC, then the streamer can use TCP/IP to download the file information into a memory buffer where it can be managed better. Essentially, this is what Sonos, Bluesound, Marantz, Yamaha, Sony, Linn, Naim, etc. are doing when you are using their native applications to "stream" audio from any number of services, or from your local content library. In the case of real-time internet radio streams, or services like Pandora, the streams are "captured" over UDP from the sending player. This is why so many users of the products mentioned above may experience dropouts and/or interruptions in the stream.
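The buffering advantage of the TCP/download approach is easy to see in a toy model: as long as the network (when healthy) refills the buffer faster than playback drains it, short stalls are inaudible; a stall longer than the buffer causes an underrun. The buffer depth and fill rate below are assumed round numbers, not any vendor's actual figures.

```python
# Why TCP-into-a-buffer rides out network hiccups: a toy second-by-second
# model. Buffer target and 2x-real-time fill rate are illustrative.
BUFFER_TARGET = 5.0  # seconds of audio kept ahead of playback

def simulate(stall_at, stall_len, total=20):
    """Return True if playback never underruns despite a network stall."""
    buf = 0.0
    for t in range(total):
        stalled = stall_at <= t < stall_at + stall_len
        if not stalled:
            buf = min(BUFFER_TARGET, buf + 2.0)  # network fills 2x real time
        buf -= 1.0                               # playback drains 1x real time
        if buf < 0:
            return False                         # underrun: audible dropout
    return True

print(simulate(stall_at=8, stall_len=4))  # True: buffer absorbs a 4 s stall
print(simulate(stall_at=8, stall_len=7))  # False: a 7 s stall underruns
```

A UDP "captured" stream has no such cushion to fall back on, which is the mechanism behind the dropouts described above.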

This brings us to the topic of: where is the streamer (computer/player software) obtaining the file from? The files can come from a network share on the local or wide area network, a local storage attachment (USB, SATA, SD card, etc.), or a "captured" stream of a file being played on a remote server somewhere. All of these solutions have the potential to degrade the audio depending on how the client obtains the data from the server. For example, Spotify's entire library is stored at least losslessly (I happened to have a conversation with someone who manages their storage arrays), but the audio is transcoded to 320 kbps OGG before it reaches the client API.

This all becomes far more complicated when you introduce other devices on the network all competing for the same protocol traffic. 

Anyone who wishes to still believe, after this thorough analysis, that "all streamers are the same", simply hasn't done their homework.
"How the data got from source/cloud/drives/whatever via streamer to the output is somewhat moot. All streamers are going to deliver bit-perfect digital audio output."

With respect to your knowledge of TCP/UDP, I offer the following consideration.

I'd suggest delving a bit more into computational theory and architecture before making this claim; i.e., "the network is the computer," and the processes within a computer are analogous to the OSI layer model used to deploy WANs/LANs.

I've mentioned it before, but people seem to think the content of the data coming from a server can never differ.

If this were true then why do companies like Kaleidescape write proprietary file systems to store specific types of data (movies and music) on their servers?

Further, why do we have different file formats for audio in the first place? If it's all just 1's and 0's, shouldn't everyone be using the same exact file type, end of conversation?
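The distinction that matters is container/codec versus payload: the same PCM samples can live in different formats, and the format determines whether what comes out equals what went in. A stdlib sketch with WAV (a lossless container) shows the round trip returning byte-identical samples; a lossy codec, by design, would not.

```python
import io
import wave

# Round-trip fake PCM through a WAV container entirely in memory.
# A lossless container hands back the exact bytes; a lossy codec wouldn't.
pcm = bytes(range(256)) * 4  # 1024 bytes standing in for 16-bit stereo PCM

buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(2)
    w.setsampwidth(2)      # 16-bit
    w.setframerate(44100)
    w.writeframes(pcm)

buf.seek(0)
with wave.open(buf, "rb") as r:
    decoded = r.readframes(r.getnframes())

print(decoded == pcm)  # True: the container changed, the samples didn't
```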

To break this down a bit more, consider what I mentioned about Spotify.

If I happened to be an insider beta tester for lossless FLAC streaming (I'm not, but I've heard rumors of those who are) from Spotify, the app I would be using would include the necessary code to tell the server that the client API was authorized to receive a FLAC-encoded file.

A "normal" Spotify user requesting the same exact file from the same server on the same exact network would receive a 320 kbps OGG file, because the FLAC files are transcoded before they leave the server for use with the "normal" Spotify API.

In other words, not only can the server API and subsequent processes change the data through encoding/encapsulation/delivery on a particular bus, but the client API and how it is integrated with the server can also affect the data, while still remaining "bit-perfect".

In either case, if I sent the signal to a DAC, the signal is arriving bit-perfect.

Bit-perfect does not preclude something having happened to the data further up the chain. This is a common misconception amongst digital audio "experts".
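A small sketch makes the distinction precise: the link from streamer to DAC can be verifiably error-free (checksums match, the DAC would report "bit-perfect"), while the data itself was already altered upstream. The 24-bit truncation below is a hypothetical server-side transcode chosen for illustration.

```python
import hashlib

# "Bit-perfect" only guarantees the DAC receives exactly what the streamer
# sent; it says nothing about transcoding upstream. Hypothetical example:
original = [0x123456, 0x00FF01, 0x7ABCDE]      # 24-bit samples on the server
transcoded = [s & 0xFFFF00 for s in original]  # server truncates to 16-bit depth

sent = b"".join(s.to_bytes(3, "big") for s in transcoded)
received = sent  # the link introduces no errors

# The delivery is provably bit-perfect...
print(hashlib.sha256(received).hexdigest() == hashlib.sha256(sent).hexdigest())

# ...but the audio no longer matches the original master
print(transcoded == original)  # False
```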

I can prove this on a smaller scale. I could set up two identical network streamers (I'll use UPnP) on the same exact network. I can configure a NAS device with multiple LAN ports to operate on two different VLANs with two different IP addresses. I can assign each UPnP network streamer to its own VLAN.

I can then configure the UPnP/DLNA server software on the NAS with a huge library of FLAC files, and tell the software to send native FLAC over one VLAN while transcoding the FLAC to MP3 or OGG over the second VLAN.

If you tested both UPnP streamers with a DAC that can indicate "bit-perfect" output, they would both show as bit-perfect after the streamer performs the necessary decode of the encapsulated file.

It's this encode/decode process which seems to keep audio guys from realizing that yes, the file data can indeed be (and sometimes is) altered by the server/client in more ways than one.

The rest of your post is otherwise spot on. I'd just encourage people who really want to enjoy their music at a higher level to keep an open mind with respect to some of this stuff and realize that, yes, better hardware and software can provide better results in the right circumstances.