Is optical mostly a waste of time versus Ethernet?


The only value I see in a fiber optic cable is if you have a really long run.

All the noise coming into an optical fiber is preserved and comes out the other side. I guess there is value in not creating more noise while the signal travels through the optical cable. But if it's a short run of two feet, is it really worth it? Seems a well-shielded Ethernet cable would do just fine without all the hassle of converting to optical, which is a pain in the ass.

I always thought there was value in optical, but it seems there really may not be. Maybe I'm wrong. It seems a switch likely produces a lot of noise, and inserting an audio-grade switch is very prudent, but going optical really doesn't solve the switch noise problem. The benefit of re-clocking offered by a decent switch to clean up the signal is worthwhile.

jumia

@jumia 

 

You are correct, better clocking and jitter control are equally important. Once again, it's one of those things in the hobby: once you hear it, or better stated, once you "don't hear" the negative effects of noise/clocking/jitter, there are some rather obvious aha moments.

 

These debates don't really change the thinking of either "camp". Those who have experienced positive effects are considered converts or ill-informed (depending on who is doing the considering). Those who are certain there is nothing to be gained from the effort gain nothing by not going through the effort. Hey, maybe there is nothing to solve in their system. I do, respectfully, disagree with @fredrik222 because I have experienced the positive effects of certain cables, switches, etc. I have also experienced no effect from some, while others actually adversely impacted the quality of the sound. It doesn't make anyone right or wrong; it does, however, make you think a little about how so many rational people can be on opposite sides of a topic.

 

@fredrik222 is right, maybe "my budget" system required some assistance whereas his system is fully fleshed out. Who knows? It would help us all get better performance if people like @fredrik222 would list their gear. I don't typically list mine because it is always changing slightly and I don't want to embarrass myself.

My my my. Does one jump into the hornet's nest on their first post?

 

Concerning the "expert", I very much doubt their expertise. When you have been around true experts, you notice things: clear, concise, only the necessary references, talking in pros and cons, specifics, etc. I see some of that, but not a lot. Experts rarely arbitrarily say they are the best. They let their words speak for themselves.

Audio over an IP network needs to be broken down into at least three types: real-time audio such as VoIP, conferencing, screen sharing, even large real-time studio applications; compressed, streamed, and buffered music; and uncompressed, streamed, and buffered music.

Different protocols are used for real-time audio versus streamed-and-buffered audio. With real-time audio, packets can be permanently lost. The protocol accepts this, as latency is more important than anything.

With streamed and buffered audio, which is what streaming services fall into, lost packets will be re-transmitted, with the net result that all packets, except in very rare circumstances, are delivered to the receiving end. I can't speak for the internals of all the apps and Win/OSX programs out there, but I have seen comparisons showing streamed and CD were the same. I would expect those streaming services, in conjunction with their receiving programs, use a variety of error-correction and re-transmit techniques to minimize overall bandwidth where possible. With compressed, and I don't mean FLAC, the service does have the option of dropping information to reduce bandwidth.
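
The buffering part is worth making concrete. A toy sketch (the numbers here are illustrative, not from any real streaming client): playback drains a buffer at a steady rate while the network refills it, so a stall of a few ticks while a lost packet is retransmitted never reaches your ears as long as the stall is shorter than the buffer depth.

```python
from collections import deque

BUFFER_TARGET = 5          # pre-buffer ~5 packets before playback starts
buffer = deque()

def network_delivery(t):
    """One packet per tick, except a 3-tick stall at t=10..12
    while a lost packet is retransmitted."""
    if 10 <= t <= 12:
        return []                  # nothing arrives during the stall
    if t == 13:
        return [10, 11, 12, 13]    # retransmitted packets arrive in a burst
    return [t]

# Initial buffering phase.
for t in range(BUFFER_TARGET):
    buffer.extend(network_delivery(t))

dropouts = 0
for t in range(BUFFER_TARGET, 30):
    buffer.extend(network_delivery(t))
    if buffer:
        buffer.popleft()           # play one packet per tick
    else:
        dropouts += 1              # buffer ran dry: audible dropout

print("dropouts:", dropouts)       # 0 -- the stall fit inside the buffer
```

The 3-tick retransmit stall is fully absorbed by the 5-packet buffer, which is why "lost packet" on the wire does not mean "lost sample" at the DAC.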

Do IP networks drop packets to manage bandwidth? Absolutely. That does not mean the data is never delivered, though. It just means that particular packet was not delivered on that attempt. It can be resent until it gets through.
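
"Resent until it gets through" is the whole idea behind ARQ (automatic repeat request). A hypothetical stop-and-wait sketch, with the loss rate and seed made up for the demo:

```python
import random

random.seed(42)

def lossy_send(packet, loss_rate=0.5):
    """Simulated link: returns True if the packet got through (was ACKed)."""
    return random.random() > loss_rate

def send_reliably(packet, max_tries=100):
    """Keep resending until the packet is acknowledged."""
    tries = 0
    while tries < max_tries:
        tries += 1
        if lossy_send(packet):
            return tries           # ACK received; done
    raise TimeoutError("link is down, give up")

attempts = send_reliably(b"audio frame 1")
print("delivered after", attempts, "attempt(s)")
```

Even at a (absurd for a home LAN) 50% loss rate, the frame gets through; the cost of retries is latency, which the playback buffer hides.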

Is it guaranteed that the protocol between your local server and end-point is fully recoverable, i.e., has no permanent packet loss? If the product uses an approved standard protocol like DLNA, you can be sure of how it behaves. I am sure our expert can chime in on that. If proprietary, no comment.

UDP has no guaranteed-delivery mechanism; TCP does. If you are using a TCP-based protocol, there are functions built into the transport that guarantee delivery no matter what the application does. UDP does not guarantee that. TCP also allows setting up secure connections; UDP does not. You can draw your own conclusions about paid streaming services (obviously TCP). UDP does support broadcast messaging, but that is normally only used for discovery.
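
The difference shows up even in a trivial local demo (a sketch, assuming port 50007 on localhost has nothing listening on it): a UDP send to a dead port "succeeds" at the sender with no error and no ACK, while TCP fails loudly because the connection setup itself requires a listener.

```python
import socket

# UDP: fire and forget. No error, no ACK, no delivery guarantee.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = udp.sendto(b"sample block", ("127.0.0.1", 50007))  # port assumed unused
udp.close()
print("UDP reported", sent, "bytes sent; actual delivery is unknown")

# TCP: connection-oriented, so a missing listener is visible immediately.
refused = False
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    tcp.connect(("127.0.0.1", 50007))
except ConnectionRefusedError:
    refused = True
finally:
    tcp.close()
print("TCP connection refused:", refused)
```

That is the practical meaning of "no guaranteed delivery": with UDP the application never finds out, with TCP the transport tells it.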

We know that TCP is used by all major streaming services, and that pretty much guarantees all packets arriving in all but the rarest cases, with the application layer taking care of the rest using buffering. What is happening on your local network? Could it be UDP? Maybe. I do know that on my own network I have practically no losses, so it really does not matter to me.

 

 

Almost forgot what this was about. Ethernet ports are transformer-isolated. They won't pass low-frequency noise. They can pass high-frequency noise, but it's a transformer: it will reject most common-mode noise. How hard can it be to isolate that part of the circuit from the DAC and analog stages? Seems every piece of electronic test equipment has an Ethernet port today; they don't seem worried. Someone mentioned jitter. I know this crowd is anti-Audioscience, but tests are tests, emotions aside. Jitter shows up as distortion. The tests I have read used laptops and basic routers/switches into DACs. I don't see any added distortion from Ethernet.

 

I see a lot of expensive Cat-6/7 cables. Those are shielded, with the shields connected at both ends. That does sound like a recipe for a ground loop that did not exist before.

@theaudiomaniac so, you called me out. What is wrong in any of my posts?

 

To your posts, I would add that while UDP does not provide reliability at the transport layer, it leaves that to the higher-level protocols, which is a huge headache and probably why Roon ultimately switched to TCP.

Or they switched because they were not providing high-level correction of lost packets, realized it was an issue on some networks, and went the TCP route to fix it, since most home networks today have more than enough bandwidth, unlike when they developed their product. I don't know the answer; I don't think you do either. Roon networking is for local connectivity, though it does provide management and access for external services. Most do not put a wrapper around UDP to guarantee transport; if you are going to do that, you go TCP. For UDP, any number of feed-forward and error-correction schemes are used for media to provide coverage when packets are lost, and some minor retransmit schemes have been used where low latency can be maintained, but full guaranteed delivery makes little sense with UDP when TCP exists.
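
As an aside on what those feed-forward schemes look like: the simplest forward-error-correction trick is an XOR parity packet over a small group, so any one lost packet in the group can be rebuilt at the receiver without waiting for a retransmit. A hypothetical sketch with made-up equal-length payloads:

```python
def xor_packets(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length payloads."""
    return bytes(x ^ y for x, y in zip(a, b))

group = [b"pkt-1234", b"pkt-5678", b"pkt-9abc"]   # equal-length payloads
parity = group[0]
for pkt in group[1:]:
    parity = xor_packets(parity, pkt)             # parity = p0 ^ p1 ^ p2

# Suppose the network dropped packet index 1. XOR the survivors with the
# parity packet and the lost payload falls out.
received = [group[0], None, group[2]]
recovered = parity
for pkt in received:
    if pkt is not None:
        recovered = xor_packets(recovered, pkt)

print(recovered)   # b'pkt-5678' -- the lost packet, rebuilt receiver-side
```

The cost is one extra packet per group and the limitation is one loss per group; that trade of bandwidth for zero retransmit latency is exactly why it suits real-time media and not bulk transfer.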

 

If you felt called out, perhaps consider not making absolute statements that are incorrect, nor holding yourself up as the absolute authority.