Does Digital Try Too Hard?


Digital glare. A plague of digital sound playback systems. It seems the best comment a CD player or digital source can get is to sound “analog-like.” I’ve gone to great lengths to battle this in my CD-based 2-channel system but it’s never ending. My father, upon hearing my system for the first time (and at loud volumes), said this: “The treble isn’t offensive to my ears.” What a great compliment.

So what does digital do wrong? The tech specs tell us it’s far superior to vinyl or reel-to-reel. Does it try too hard? Where digital tries to capture the micro-details of complex passages, analog just “rounds it off” and says “good enough” - and it sounds good enough. Or does digital have some other issue in the chain - noise in the DAC chip, high-frequency harmonics, or problems with the anti-aliasing filter? Does it have to do with the power supply?

There are studies that show people prefer the sound of vinyl, even if only by a small margin. That doesn’t quite add up when we consider digital’s dominant technical specifications. On paper, digital should win.

So what’s really going on here? Why doesn’t digital knock the socks off vinyl, and why does there appear to be some issue with “digital glare” in digital systems?
mkgus

Showing 7 responses by terry9

All you have to do is draw a sine wave. Then make 250 equally spaced marks on the x axis, starting with 0 and ending at 2pi. Use each mark as the step boundary of a step function. Just like elementary calculus.

Now calculate the mean square difference between that step function and the sine wave, and divide by the sine wave area - it’s about 5%. You may infer that 250 samples per waveform delivers about 5% distortion. Now, how many samples per 20 kHz waveform?
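For anyone who wants to check the arithmetic, here’s a rough Python sketch of that comparison. How you normalize the error is a judgement call (I’ve used RMS error relative to the RMS of the sine), so treat the exact percentage as illustrative:

```python
import numpy as np

# Sketch of the calculation above: hold each sample until the next one (a
# step function) and compare it to the sine it came from. The error metric
# here is RMS error over RMS signal; other normalizations give other numbers.
def staircase_error(samples_per_cycle, points=200000):
    x = np.linspace(0.0, 2.0 * np.pi, points, endpoint=False)
    sine = np.sin(x)
    step = 2.0 * np.pi / samples_per_cycle
    held = np.sin(np.floor(x / step) * step)   # value held since the last sample
    return np.sqrt(np.mean((held - sine) ** 2) / np.mean(sine ** 2))

for n in (250, 4, 2):
    print(f"{n:3d} samples per cycle: {100 * staircase_error(n):5.1f} % RMS error")
```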

That’s where your digital glare comes from - at least, part of it.
@asvjerry 

I don’t quite see how smoothing is going to help all that much with 2 samples per waveform. Or even 4 (for example, when samples are taken at pi/4, 3pi/4, 5pi/4, and 7pi/4).

And never mind phase distortion (by which I mean the distortion inherent when successive waves are sampled at different places on the x axis). But is that not the underlying insight of Nyquist/Shannon - that frequency information can be recovered to arbitrary precision at the (temporary) expense of phase information? Yet real-time sampling still leaves us with this ‘temporary’ problem, or so it seems to me.
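To put a number on that drift: a 20 kHz tone sampled at 44.1 kHz gets about 2.2 samples per cycle, so the sample positions wander across the waveform from one cycle to the next. A quick sketch (purely illustrative):

```python
import numpy as np

# Where do successive samples of a 20 kHz tone land within its cycle when
# sampled at 44.1 kHz? 44100 / 20000 = 2.205 samples per cycle, so the
# sampling points drift across the waveform from cycle to cycle.
fs, f = 44100.0, 20000.0
n = np.arange(12)                 # the first dozen sample instants
phase = (f * n / fs) % 1.0        # fraction of a cycle at each instant
for k, p in zip(n, phase):
    print(f"sample {k:2d} falls at {p:.3f} of a cycle")
```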
@cleeds 

Thank you for your friendly and helpful response. I'll take a look and we can discuss.
@cleeds 

First, I learned something useful from the presentation you referenced: that step functions are no longer used in DACs. Thanks for that. However, I still have concerns, which may arise from misunderstanding, and which I submit for comment and correction.

It follows that some form of interpolation is being used to convert the discrete sample values, taken at discrete time intervals, into an analogue signal. The alternative, a smooth perfect fit to the data, appears from the presentation to require the SW to know which frequency it is dealing with (although it might be able to guess, for example by using an FFT on previous segments to inform the interpolation). This matters because the talk goes on to treat this result (a perfect, smooth waveform) as proven, which I do not grant; it also appears to assume mathematically perfect observation (else how could there be a unique waveform which fits the data?).

There seem to me to be only two alternatives: (1) stick with a safe linear interpolation, or (2) guess. But with a guess, sometimes the SW is going to guess wrong (perhaps on transients?), and then the output is going to be far more distorted than a simple linear interpolation would suggest.
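To make the two alternatives concrete, here’s a toy comparison of plain linear interpolation with a band-limited (Whittaker-Shannon, sum-of-sincs) reconstruction of a sampled 19 kHz tone. I’m not claiming any particular DAC does exactly either of these - it only shows how far apart the two can be near the top of the band:

```python
import numpy as np

# Toy comparison of the two alternatives: (1) linear interpolation between
# samples, (2) a band-limited sum-of-sincs (Whittaker-Shannon) reconstruction.
# Illustrative only; edge effects from the short segment inflate the sinc error.
fs, f, dur = 44100.0, 19000.0, 0.002
t_s = np.arange(0.0, dur, 1.0 / fs)              # sample instants
x_s = np.sin(2 * np.pi * f * t_s)                # the samples we actually have
t_f = np.arange(0.0, dur, 1.0 / (32 * fs))       # fine grid for comparison
truth = np.sin(2 * np.pi * f * t_f)

linear = np.interp(t_f, t_s, x_s)                # alternative (1)
sinc = np.array([np.sum(x_s * np.sinc((tf - t_s) * fs)) for tf in t_f])  # (2)

for name, y in (("linear", linear), ("sinc", sinc)):
    err = np.sqrt(np.mean((y - truth) ** 2) / np.mean(truth ** 2))
    print(f"{name:6s} reconstruction: {100 * err:5.1f} % RMS error")
```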

Therein lies information loss. What is known comprises the samples and the intervals - the rest is processing. I hypothesize that the success of one processing algorithm over another represents digital's progress. Is this correct, Cleeds?

Oddly enough, I was just reviewing uniqueness theorems concerning representations of ordered semi-groups, which, assuming perfect information, is pretty much what we are dealing with here. A few points occur to me: (1) samples are taken over finite time, and are therefore averages of some kind; (2) samples are taken at intervals of finite precision, so there is temporal smearing; (3) samples are taken with finite amplitude precision, hence further uncertainty is built into each (averaged) sample.
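A crude model of those three points - the aperture, jitter, and word length below are invented for illustration, not measurements of any real converter:

```python
import numpy as np

# Crude model of the three points above for a single sample of a 20 kHz sine:
# (1) finite aperture -> the sample is an average over a short window,
# (2) finite timing precision -> the window is not exactly where intended,
# (3) finite amplitude precision -> the averaged value is quantized.
# All numbers below are invented for illustration only.
rng = np.random.default_rng(0)
f = 20000.0
aperture = 5e-9        # assumed 5 ns averaging window
jitter = 100e-12       # assumed 100 ps rms timing error
bits = 16              # assumed 16-bit word length

def one_sample(t_nominal):
    t_actual = t_nominal + rng.normal(0.0, jitter)         # (2) temporal smear
    window = np.linspace(t_actual, t_actual + aperture, 64)
    averaged = np.mean(np.sin(2 * np.pi * f * window))     # (1) averaging
    q = 2.0 / 2 ** bits
    return np.round(averaged / q) * q                      # (3) quantization

t0 = 1.0 / (4 * f)     # nominal instant at the peak of the sine
print("ideal value:", np.sin(2 * np.pi * f * t0))
print("modelled sample:", one_sample(t0))
```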

In physics, data are always presented with error bars in one or more dimensions. It leads one to ask, why does this engineer think he has points? Is he confusing this problem with talk of S/N ratio?

These considerations lead us, contrary to the presentation, to the conclusion that we do not have lollypop graphs of points; we have regularly spaced blobs of uncertainty, which are being idealized. This also shows that, regardless of the time allowed for sampling and reconstruction, there is an infinity of curves which fit the actual, imperfect data - not a unique curve by any means.
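As a toy illustration of the non-uniqueness point (this one cheats by using a component exactly at the Nyquist limit, which the sampling theorem excludes by assumption, so it bears on the uniqueness question rather than refuting the theorem):

```python
import numpy as np

# Two different continuous waveforms whose 44.1 kHz samples are identical
# (up to floating-point rounding): the second adds a component that is zero
# at every sampling instant. That component sits exactly at the Nyquist
# limit, so this is only a toy illustration, not a counterexample to Nyquist.
fs = 44100.0
t = np.arange(0.0, 0.001, 1.0 / fs)        # the sampling instants
a = np.sin(2 * np.pi * 1000.0 * t)
b = a + 0.3 * np.sin(np.pi * fs * t)       # sin(pi * fs * t) = 0 at t = n/fs
print("largest difference between the two sets of samples:", np.max(np.abs(a - b)))
```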

Again, I have Cleeds to thank for refining my understanding of digital. I agree that we can't discuss digital intelligently unless we understand how it works and how it doesn't. Please correct that which you find to be in error.



I gave my conclusions and my reasons for all to see and criticize. That's the way of math and science. I suggest that you re-read my post and then decide who it is that is "Just announcing".
I'm talking about the graph of an arbitrary waveform, not any specific waveform - at the point where he says something like "There are no steps" and refers to "lollypops". The general case.
Perhaps I should have said "abstract" instead of "arbitrary". Does that make it clear?