How Science Got Sound Wrong


I don't believe this has been posted before, but I found it quite interesting despite its technical aspect. I didn't post this to start a digital vs. analog discussion; we've beaten that horse to death several times. I play 90% vinyl, but I can still enjoy my CDs.

https://www.fairobserver.com/more/science/neil-young-vinyl-lp-records-digital-audio-science-news-wil...
artemus_5

Showing 6 responses by terry9

I think we should be asking the question, "How many samples per waveform are required to reduce the RMS error to below, say, 5%, which is the sort of error achieved during the heyday of the vinyl years?"

Some types of error may be more or less objectionable, but let's start simple. Let's just find out how much RMS error there is for a given sampling scheme.

Surprisingly enough, it's not that hard to calculate. But shockingly, nobody seems to bother.

To calculate, begin by observing that Fourier's theorem shows that all periodic functions are built up as a sum of sine waves, so that to consider music, all we have to consider are sine waves (aka pure tones). Further, it is not hard to compute the difference between a sine wave and its sampled value at any point, for any fixed number N of samples per waveform. You can approximate by slicing the waveform into N intervals and then calculating the difference at the midpoint of each interval.

It is also easy to square these differences and add them up. You could use calculus, but the above is an adequate approximation.

That is the essence of a computation yielding the RMS error of the sampling scheme per waveform.
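For anyone who wants to try it, here is a minimal numerical sketch in Python of the computation just described. The decoding model (zero-order hold, i.e. step-function reconstruction) and the convention of expressing the error relative to the signal's own RMS level are my assumptions, and may not match the conventions behind the figure quoted below.

```python
import numpy as np

def rms_error_per_waveform(n_samples, n_points=100_000):
    """RMS error of step-function (zero-order-hold) reconstruction of one
    sine-wave period sampled n_samples times, expressed as a fraction of
    the signal's own RMS value. Plain numerical summation, no calculus."""
    t = np.linspace(0.0, 1.0, n_points, endpoint=False)   # one normalised period
    signal = np.sin(2 * np.pi * t)
    # Step decoding: every instant is reconstructed from the most recent sample.
    held_times = np.floor(t * n_samples) / n_samples
    reconstructed = np.sin(2 * np.pi * held_times)
    error = signal - reconstructed
    return np.sqrt(np.mean(error ** 2)) / np.sqrt(np.mean(signal ** 2))

# The result depends on where the samples happen to fall on the waveform;
# here the first sample sits at phase zero.
for n in (10, 50, 250):
    print(f"{n:4d} samples per waveform: RMS error = {100 * rms_error_per_waveform(n):.2f}%")
```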

Returning to our question, the answer I get is 250 samples per waveform for step-function decoding. At 20 kHz, that means sampling at 5 MHz - with infinite precision, of course.

Exotic decoding algorithms can improve on this for pure tones, but how well do they work for actual music? I doubt that anyone knows - certainly I've never seen even the first question, about samples per waveform, discussed. I think we should.

David, I don't quite follow your third-to-last paragraph.

Your "first statement" is indeed a statement, capable of being true or false, but is it true? It needs justification, it seems to me. It is not the same at all as the sentence following, "Let me state that another way." And the sentences following "Let me state that a 3rd way ..." do not convince me that the phenomenon is independent of sampling rate.

The nature of the signals is irrelevant. It is the relative timing of the encoding that matters. If the sampling rate is not high enough, or the jitter rate not low enough, then two signals differing by 1 microsecond will be encoded as identical.

Perhaps an example will help you to understand my confusion. It seems to me that if sampling is done at a frequency of 1 Hz, and two signals differing by 1 µs are detected, they will be encoded in the same pulse about 999,999 times out of 1,000,000, which logically implies that the sampling rate is intrinsic to the issue.
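To put the counting argument in numbers, here is a quick Monte Carlo sketch; the parameter names are mine, and the model is simply a pair of events placed at random relative to a 1 Hz sampling grid.

```python
import numpy as np

rng = np.random.default_rng(0)
sample_period = 1.0      # seconds between samples, i.e. 1 Hz sampling
delta = 1e-6             # the two signals arrive 1 microsecond apart
trials = 10_000_000

# Drop the first event uniformly at random within a sampling interval and
# ask whether the second event, delta later, lands in the same interval.
first = rng.uniform(0.0, sample_period, trials)
same_pulse = (first + delta) < sample_period
print(f"Fraction encoded in the same pulse: {same_pulse.mean():.6f}")
# Analytically this is 1 - delta / sample_period = 0.999999.
```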

Perhaps you could point out the source of my confusion.

@erik_squires "Microtime, as the article envisions it, is not a thing."

I don't agree. It seems to me that he defines it quite clearly in terms of microsecond (neural) phenomena. And someone with a Ph.D. in this area is likely to know something about it.

Where he could be clearer is about the relationship between math and science - like how not to screw it up when applying math to the physical world. But that's a highly technical subject all on its own (for access to the literature see Foundations of Measurement by Krantz et al., Academic Press, in 3 volumes), and, surprise, many scientists get it quite wrong. Let alone engineers.

Thank you, David. I will have to think about that.

As for the Nyquist-Shannon theorem, yes, I am familiar with it, and am not convinced that it says what some engineers think it does. For one thing, it involves a limit in terms of an infinite series (or integration over infinite time), and infinite time is available for relatively few signals.
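For concreteness, the infinite series in question is the Whittaker-Shannon interpolation, a sum of sinc terms over all samples. Here is a small Python sketch (the test-tone parameters are my own choice) of that sum truncated to a finite record, which is all one ever has in practice:

```python
import numpy as np

def sinc_reconstruct(t, samples, fs):
    """Whittaker-Shannon reconstruction x(t) = sum_n x[n] * sinc(fs*t - n),
    truncated to the finite set of samples actually available."""
    n = np.arange(len(samples))
    return np.sum(samples * np.sinc(fs * t - n))

fs = 48_000.0                 # sample rate
f0 = 1_000.0                  # a 1 kHz test tone
n = np.arange(2048)           # a finite record, not the infinite one the theorem assumes
samples = np.sin(2 * np.pi * f0 * n / fs)

# Compare the truncated sinc sum with the true value mid-record and near the
# edge of the record, where the missing terms of the infinite series matter most.
for t in (1024.5 / fs, 3.5 / fs):
    exact = np.sin(2 * np.pi * f0 * t)
    approx = sinc_reconstruct(t, samples, fs)
    print(f"t = {t:.6e} s: exact = {exact:+.6f}, truncated sum = {approx:+.6f}")
```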

My reference, A Handbook of Fourier Theorems by Champeney (Cambridge University Press), is a little too dense for casual reading, but I’ll persevere for a time.

Thanks again for the discussion, and also for re-igniting an interest in that branch of mathematics.

Thanks, David. I will have to think some more. I haven't used Octave - Maple is my poison.

Thank you, David. I've obviously got some reading to do - but after I get my newest amplifiers working!