Is optical mostly a waste of time versus Ethernet?


The only value I see in a fiber optic cable is if you have a very long run.

All the noise going into an optical link is preserved and comes out the other side. I guess there is value in not picking up more noise while the signal is traveling through the optical cable, but if it's a short run of two feet, is it really worth it? Seems a well-shielded Ethernet cable would do just fine without all the hassle of converting to optical, which is a pain in the ass.

I always thought there was value with optical, but it seems there really may not be. Maybe I'm wrong. It seems a switch likely produces a lot of noise, so inserting an audio-grade switch is very prudent, and going optical really doesn't solve the switch noise problem. The benefit of re-clocking offered by a decent switch to clean up the signal is worthwhile.

jumia

Showing 7 responses by theaudiomaniac

My my my. Does one jump into the hornet's nest on their first post?


Concerning the "expert", I have serious doubts about their expertise. When you have been around true experts, you notice things: clear, concise, only the necessary references, talking in pros and cons, specifics, etc. I see some of that, but not a lot. Experts rarely arbitrarily say they are the best. They let their words speak for themselves.

Audio over an IP network needs to be broken down into at least three types: real-time audio such as VoIP, conferencing, screen sharing, even large real-time studio applications; compressed, streamed, and buffered music; and uncompressed, streamed, and buffered music.

Different protocols are used for real-time audio versus streamed-and-buffered audio. With real-time audio, packets can be permanently lost. This is acceptable in the protocol, as latency matters more than anything else.
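To make that concrete, here is a minimal sketch of the "fire and forget" behavior a real-time protocol accepts in exchange for low latency. The address, port, and frame size are made up for illustration; this is not any product's actual code.

import socket

DEST = ("192.168.1.50", 5004)  # hypothetical endpoint address and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for seq in range(100):
    frame = seq.to_bytes(4, "big") + b"\x00" * 160  # fake 160-byte "audio" payload
    sock.sendto(frame, DEST)  # no ACK, no retransmit: a dropped frame is simply gone
sock.close()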

With streamed and buffered audio, which is what streaming services fall into, lost packets will be re-transmitted, with the net result being that all packets, except in very rare circumstances, will be delivered to the receiving end. I can't speak for the internals of all the apps/Win/OSX programs out there, but I have seen comparisons showing streamed and CD were the same. I would expect those streaming services, in conjunction with their receiving programs, use a variety of error-correction and re-transmit techniques to minimize overall bandwidth where possible. With lossy compression, and I don't mean FLAC, the service does have the option of dropping information to reduce bandwidth.

Do IP networks drop packets to manage bandwidth? Absolutely. That does not mean the data is never delivered, though. It just means that particular sent packet was not delivered; it can be resent until it gets through.

Is it guaranteed that the protocol between your local server and end-point is a fully recoverable protocol, i.e. no permanent packet loss? If the product is using a standard protocol like DLNA and is certified, you can be sure of how it behaves. I am sure our expert can chime in on that. If proprietary, no comment.

UDP has no guaranteed delivery mechanism. TCP does. If you are using a TCP-based protocol, there are inherent functions built in to guarantee delivery no matter what the application does. UDP does not guarantee that. TCP allows setting up secure connections; UDP does not. You can draw your own conclusions about paid streaming services and what they use (obviously TCP). UDP does support broadcast messaging, but that is normally only used for discovery.
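If it helps, here is roughly what that difference looks like at the socket level. This is a sketch only: the HTTP request is just a convenient way to exercise a TCP connection, and the UDP address is a placeholder.

import socket

# TCP: connection-oriented; the OS acknowledges, retransmits, and reorders
# segments for you, so recv() returns the byte stream exactly as it was sent.
tcp = socket.create_connection(("example.com", 80), timeout=5)
tcp.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
print(tcp.recv(4096)[:64])   # complete, in-order data or an error, never silent gaps
tcp.close()

# UDP: connectionless; sendto() hands a datagram to the network and forgets it.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello", ("192.168.1.50", 9999))  # hypothetical address; may never arrive
udp.close()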

We know that TCP is used by all major streaming services, and that pretty much guarantees, in most cases, all packets arriving, with the application layer taking care of the rest using buffering. What is happening on your local network? Could it be UDP? Maybe. I do know that in my own network I have pretty much no losses, so it really does not matter to me.
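That application-layer buffering is simple to picture. A toy sketch, assuming an HTTPS URL standing in for a stream (so TCP underneath): a reader thread stays ahead of "playback", so an occasional retransmission stall never interrupts the audio. Not any service's real client.

import queue, threading, time, urllib.request

URL = "https://example.com/"     # placeholder for a stream URL
CHUNK = 4096
buf = queue.Queue(maxsize=256)   # roughly 1 MB of read-ahead

def fill():
    with urllib.request.urlopen(URL) as resp:   # HTTPS, so TCP underneath
        while chunk := resp.read(CHUNK):
            buf.put(chunk)       # blocks if the buffer is already full
    buf.put(None)                # end-of-stream marker

threading.Thread(target=fill, daemon=True).start()

while (chunk := buf.get()) is not None:
    time.sleep(0.01)             # pretend to "play" each chunk at a steady rate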


Almost forgot what this was about. Ethernet ports are transformer isolated. They won't pass low-frequency noise. They can pass high-frequency noise, but it's a transformer; it will reject most common-mode noise. How hard can it be to isolate that part of the circuit from the DAC and analog? Seems every piece of electronic test equipment has an Ethernet port today. They don't seem worried. Someone mentioned jitter. I know this crowd is anti-Audioscience, but tests are tests, emotions aside. Jitter shows up as distortion. The tests I have read used laptops and basic routers/switches into DACs. I don't see any added distortion from Ethernet.


I see a lot of expensive Cat-6/7 cables. Those are shielded. The shields are connected at both ends. That does sound like a recipe for a ground loop that did not exist before.

Or they switched because they were not providing high-level correction of lost packets, realized it was an issue in some networks, and went the TCP route to fix it, since most home networks today have more than enough bandwidth, unlike when they developed their product. I don't know the answer. I don't think you do either. Roon networking is for local connectivity, though it does provide management and access for external services. Most do not put a wrapper around UDP to guarantee transport; if you are going to do that, you go TCP. For UDP, any number of forward-error-correction schemes are used for media to provide coverage when packets are lost, and some minor retransmit schemes have been used where low latency can still be maintained, but full guaranteed delivery makes little sense with UDP when TCP exists.


If you felt called out, perhaps consider not making absolute statements that are incorrect, nor holding yourself up as an absolute authority.

@ghasley 


There is a potential mechanism for noise transfer, and this would be system dependent. I have my doubts, but the mechanism exists. An argument could be made about the impact of that noise on jitter in a clock on the DAC device. I find that argument unreliable and not supported by available measurements. Similarly, I find the argument about jitter in the Ethernet being an issue unreliable. All these things would be too easy to demonstrate. My spidey sense goes off when easily demonstrated things are not. I have been in this hobby long enough that I don't trust what I thought I heard any more; I have proven myself wrong too many times. I am not about to start taking other people's word for it. Used to do that. Poorer for it.

I won't derail this thread any further @fredrik222, but I think I have posted enough information to clearly illustrate that the absolutes you discuss are not. Roon, as a server, specifically does not support DRM. It has very limited support, and it only supported services that use DRM (Tidal/Qobuz) after their change to TCP. Prior to that it was UDP, and I will go back to my statement that you do not know if they guaranteed delivery. If I had to guess, I would say they did not. I wonder what other local streaming implementations are still using UDP and susceptible to packet loss? I have no worries about web-based services; they are TCP based. But for local, I would want to verify that what is being used guarantees delivery before assuming it does.

A packet loss indicator or VOIP quality indicator is not really a wrapper as it is not implementing any manipulation of the underlying data (typically), though people seem to call almost anything a wrapper if it interacts with the data, even if it does not manipulate it.

Where reliability wrappers are used in gaming platforms is in custom implementations for ensuring control information gets transmitted, i.e. a key press, a mouse move, etc. The small amount of data does not lend itself well to TCP, but you need to ensure the data gets there. Gaming platforms write a custom wrapper around UDP (just as UDP is essentially a wrapper itself) to accomplish this. This does not make sense for streaming, where the data is relatively large, the latency is unimportant, and a suitable method already exists.
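For what it's worth, that kind of wrapper is not complicated. A stop-and-wait sketch with made-up addresses and payloads, not any engine's actual code: the sender numbers each datagram and resends it until the receiver echoes the sequence number back as an ACK.

import socket

def send_reliable(sock, dest, seq, payload, retries=5, timeout=0.05):
    # Send one sequence-numbered datagram and wait for the receiver to echo
    # the sequence number back as an ACK; resend on timeout, give up after N tries.
    packet = seq.to_bytes(4, "big") + payload
    sock.settimeout(timeout)
    for _ in range(retries):
        sock.sendto(packet, dest)
        try:
            ack, _ = sock.recvfrom(16)
            if int.from_bytes(ack[:4], "big") == seq:
                return True          # receiver confirmed this sequence number
        except socket.timeout:
            continue                 # no ACK in time, resend the same packet
    return False                     # caller decides what to do about a lost control

# usage, against a hypothetical receiver that echoes the sequence number back
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
ok = send_reliable(sock, ("192.168.1.60", 7000), seq=1, payload=b"KEY_W_DOWN")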


You can respond, but I will not. I would like to respect the topic of the thread. I don't doubt you have some networking experience, but I do think you lack application-specific knowledge in this area.

If using Roon, none. If using any of the streaming services, which are all TCP based with buffering and retry, none. If using something local like HEOS, DLNA, etc., I'm not sure; each would need to be evaluated (on paper). I remember not long ago seeing a test with JRiver (not sure what protocol) where there were lost packets. That's not a JRiver issue, just a function of the protocol used for that connection. In my local network, I have done tests wired and lost no packets over 30+ minutes. With WiFi you lose packets, but I don't think anyone would use a protocol over WiFi that loses packets.
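If anyone wants to repeat that kind of check on their own wired network, something as crude as this will do on Linux/macOS. ICMP ping is not the same traffic as your audio protocol, but it gives a first-order loss figure; the address is a placeholder for your streamer or server.

import subprocess

TARGET = "192.168.1.50"   # placeholder for your streamer or server
# 1800 pings at one per second is roughly 30 minutes; the summary line at the
# end reports the "% packet loss" figure.
subprocess.run(["ping", "-c", "1800", "-i", "1", TARGET])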

Reading something and understanding what it means in application are not the same. TCP ensures all packets are delivered, including the logic to resend any that are lost.


Network transport protocols such as TCP provide endpoints with an easy way to ensure reliable delivery of packets so that individual applications don't need to implement the logic for this themselves. In the event of packet loss, the receiver asks for retransmission or the sender automatically resends any segments that have not been acknowledged. Although TCP can recover from packet loss, retransmitting missing packets reduces the throughput of the connection as receivers wait for retransmissions and additional bandwidth is consumed by them.