Which is more accurate: digital or vinyl?


More accurate, mind you, not better sounding. We've all agreed on that one already, right?

How about more precise?

Any metrics or quantitative facts to support your case would be appreciated.
mapman

Showing 5 responses by terry9

Orpheus10, I don't think that a 'scope is the best way to investigate this.

I can clearly hear differences between waveforms that look identical on a scope, such as small differences in IM distortion. The resolution is simply not there. That may be because a scope's display is painted on a phosphor-coated screen, which cannot react very quickly. Only specialized phosphors are likely to react faster than 1 kHz, much less 20 kHz. Knowing this, the scope's manufacturer is likely to have embedded averaging routines, so that one observes not a single event but an average of events. The question therefore reduces to the temporal resolution of the instrument's display, as distinct from its electronic frequency response to periodic waveforms repeated over thousands or millions of cycles. Alternatively, specialized electronics "freezing" the action would work - but not a garden-variety scope. Not being an expert, I may have got this part wrong - if so, please correct me.

I also note that consistency is a poor substitute for accuracy.

Second, digital representations of waveforms near the Nyquist frequency (half the sampling frequency) are aperiodic, except over several waves. To see this, consider a 20 kHz sine wave sampled at 44 kHz. It is sampled, on average, 44,000/20,000 = 2.2 times per wave. Since the wave evolves over a period of 2pi, the spacing between samples is 2pi/2.2 ≈ 2.856. Without loss of generality, assume that the first sample is taken at point 0, the second at 2.856, the third at 5.712, and so on. Then:

Point   sin(Point)
 0.0      .00
 2.9      .28
 5.7     -.54
 8.6      .76
11.4     -.91
14.3      .99
17.1     -.99
20.0      .91
22.8     -.76
25.7      .54
28.6     -.28
31.4      .00

Which then repeats.

A linear interpolation of these points is the best a digital algorithm can do, unless it makes assumptions about the character and frequency of the waves. That linear interpolation results in asymmetrical triangular waveforms with peaks ranging from an absolute minimum of .28 to an absolute maximum of .99. The result is a waveform periodic over 5 of the original 20 kHz waves, i.e. a pattern that repeats at 4 kHz. Thus a 20 kHz signal is rendered as a highly complex waveform which waxes and wanes over a 4 kHz period. Furthermore, the waveform must be triangular and asymmetric, with attendant beats, unless heroic processing is invoked. And even then, that 20 kHz tone must wax and wane over a range of roughly 11 dB (20·log10(.99/.28) ≈ 11).
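A minimal numpy sketch (my own, not from the thread) reproduces the table above and the envelope arithmetic; the ~11 dB figure falls out of the last line.

```python
import numpy as np

# Successive samples of a 20 kHz sine taken at 44 kHz advance the
# phase by 2*pi * 20000/44000 = 10*pi/11 radians (about 2.856).
step = 2 * np.pi * 20000 / 44000
k = np.arange(12)                 # 11 intervals span 5 waves, then repeat
points = k * step
samples = np.sin(points)

for p, s in zip(points, samples):
    print(f"{p:5.1f}  {s:+.2f}")

# Spread of the sampled peak amplitudes, in dB (zero crossings excluded):
peaks = np.abs(samples[1:-1])
print("envelope range:", round(20 * np.log10(peaks.max() / peaks.min()), 1), "dB")
```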

Clearly the effect worsens as one approaches the Nyquist frequency. The brick wall filters which prevent signals higher than Nyquist also impose their own distortions and phase shifts at lower frequencies, but that is another matter.

Finally, thank you, Learsfool, for supporting my point about different types of distortion, especially those which have yet to be characterized. I think you may have said it better.
Analogue. Do the mathematics:

By Fourier's theorem, we need only consider sine waves. A sine wave must be sampled 250 times per cycle to achieve 5% RMS distortion or less (the bear is where they cross zero, if I remember my simulations correctly). Undamaged adults can hear to 20 kHz. Therefore a signal must be sampled at 250 × 20,000 = 5 MHz to achieve less than 5% distortion throughout the accepted bandwidth.

And I will shriek if I hear Shannon's sampling theorem (mis)quoted again. That theorem assumes a continuous Fourier transform - i.e. a signal that has been repeating since minus infinity, through the present, and on into plus infinity - whereupon the samples may be reassembled to give exact results. But the universe is only 13,000,000,000 years old - a long way from infinity (an infinitely long way, actually).
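For reference, here is the reconstruction formula at issue (standard Whittaker-Shannon interpolation, stated from memory rather than from the thread); note that the sum really does run over all samples from minus to plus infinity:

```latex
% Whittaker-Shannon interpolation: a bandlimited x(t) is recovered
% exactly from its samples x(nT) only if the sum runs over ALL n.
x(t) \;=\; \sum_{n=-\infty}^{\infty} x(nT)\,
      \mathrm{sinc}\!\left(\frac{t - nT}{T}\right),
\qquad \mathrm{sinc}(u) = \frac{\sin(\pi u)}{\pi u}
```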

So digital will rival a Revox A77 when the sampling frequency exceeds 5 MHz. As for rivalling a Studer ... no way.
Hello Almarg.

Reciprocating your respect, truly, I wouldn't expect you to have encountered it before.

I bought a UNIX box in 1999 to do simulation research with Maple (the pro math package). Then I found that, sadly, the journals don't like results which don't parrot the mainstream "wisdom". So I did recreational things like investigating this. In any case, it's unpublished, so you'll have to do it yourself.

The algorithm is quite simple: set the number N of samples per waveform, construct the corresponding step function, and integrate the squared difference between it and a true sine wave. Normalize (I divided by the area under the sine) and take the square root. That gives you an RMS distortion figure.

Let N increase. At about N=250 you will see the distortion falling towards 5%.
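A sketch of that computation as I read it (my reconstruction in Python, not the original Maple; the "divide by the area under the sine" normalization is ambiguous, so this version normalizes by the RMS of the sine instead):

```python
import numpy as np

def zoh_rms_error(n_samples, oversample=1000):
    """Relative RMS error of a zero-order-hold (step-function)
    approximation of one period of a unit sine, with n_samples
    steps per period."""
    t = np.linspace(0, 2 * np.pi, n_samples * oversample, endpoint=False)
    sine = np.sin(t)
    # Hold each sampled value until the next sample instant.
    steps = sine[(np.arange(len(t)) // oversample) * oversample]
    return np.sqrt(np.mean((steps - sine) ** 2)) / np.sqrt(np.mean(sine ** 2))

for n in (50, 100, 250, 500):
    print(n, zoh_rms_error(n))
```

The exact numbers depend on that normalization choice, so treat the N = 250 / 5% landmark as the post's claim rather than something this sketch certifies.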

Oversampling does not help much. Unless the original signal is also processed this way, you merely end up with a curve that more closely approximates a distorted sine.

Regards,
Terry
Hello Al.

Thanks for the note, but I find the arguments unconvincing. While it is easy to speak of step functions being "smoothed out," the statement is imprecise. To make it precise, the smoothed function must be measured from real devices rather than idealized ones, if for no other reason than that every RC filter introduces its own distortion. Once an empirical function is obtained with adequate precision, it may be possible to fit the curve analytically or, at worst, approximate it with a technique such as cubic splines. Then, with an expression for the smoothed function in hand, the analysis can be re-run and an amended error figure derived. In the absence of such a Herculean effort, which should, of course, be borne by those who market the technology, I think we are entitled to simplify the problem as I have done (see below).
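As a crude stand-in for that measurement program (entirely hypothetical; a single-pole RC is far simpler than any real reconstruction filter), one could smooth the step output and recompute the error:

```python
import numpy as np

def rc_filter(x, dt, tau):
    # Single-pole RC low-pass, y' = (x - y) / tau, discretized.
    alpha = dt / (tau + dt)
    y = np.empty_like(x)
    y[0] = x[0]
    for i in range(1, len(x)):
        y[i] = y[i - 1] + alpha * (x[i] - y[i - 1])
    return y

n, oversample = 250, 1000            # 250 steps per period, as above
t = np.linspace(0, 2 * np.pi, n * oversample, endpoint=False)
dt = t[1] - t[0]
sine = np.sin(t)
steps = sine[(np.arange(len(t)) // oversample) * oversample]

# tau is a free parameter: here, five sample intervals.
smoothed = rc_filter(steps, dt, tau=5 * oversample * dt)

rel_rms = lambda e: np.sqrt(np.mean(e ** 2)) / np.sqrt(np.mean(sine ** 2))
print("steps:", rel_rms(steps - sine), " smoothed:", rel_rms(smoothed - sine))
```

Note that the single pole also delays the waveform, so part of whatever error it reports is lag rather than ripple - which bears on the lag point below.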

Furthermore, I hold little hope that this effort will much reduce the distortion figure. Perhaps this is why we have not seen it reported. I alluded to the problem in my previous post - the smoothed curve will lag the sine except at the peak and trough. Hence the smoothed curve will closely approximate another smooth function, albeit one with two higher frequency distortion components, both of which will be some function of frequency. That other smooth function will not be a sine, having a (relative) hollow on the left edge and a bulge on the right. The RMS error, being referenced to a true sine function, will remain high.

As for your riposte that a 176 Hz tone would then be 5% distorted: that is not implausible to me. I find even the mid-range on CDs to be unclear compared to analogue (Linn Unidisk source into electrostatics). You are absolutely right to make the calculation and challenge me on it, but I have already made that calculation and found it plausible, so I suppose we must agree to disagree on that point.

If you would like to proceed as I suggest in the first paragraph, and achieve a better approximation, I applaud your devotion to science. And I will modify my opinions with a dose of humble pie if you prove me wrong.

Thanks for engaging.

Terry
Hello Al.

Thank you for your expert and thoughtful response. I find myself agreeing with your premises while disagreeing with your conclusions.

I agree with your aside concerning filtering, but would you not agree that every capacitor introduces distortion? And that we should therefore be concerned with physical measurements rather than idealizations? I hope that this does not misrepresent your point.

I also agree that the spectral components are all above 20 kHz. Would you not agree that this creates a very rich ultrasonic environment? And further, that it is generated mainly from harmonics in a fairly narrow four-octave range, suggesting that the ultrasonics are also clustered? I note that different frequencies "beat" against each other; e.g. 33 kHz and 34 kHz signals beat to form their difference, 1 kHz. Further, these beats will be related to the fundamentals in no simple way, producing distortions which have not been characterized. If they were especially irritating, only a small audio component would be required to render digitally processed signals unpleasant. Which is what some of us observe.
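The beat arithmetic is easy to check numerically (my example, using the frequencies above):

```python
import numpy as np

fs = 1_000_000                     # 1 MHz simulation rate (illustrative)
t = np.arange(0, 0.01, 1 / fs)    # 10 ms of signal

two_tones = np.sin(2 * np.pi * 33_000 * t) + np.sin(2 * np.pi * 34_000 * t)

# sin(a) + sin(b) = 2 * sin((a+b)/2) * cos((a-b)/2): the pair is a
# 33.5 kHz carrier whose envelope |2*cos(2*pi*500*t)| swells and
# collapses 1,000 times per second, the 1 kHz difference.
identity = 2 * np.cos(2 * np.pi * 500 * t) * np.sin(2 * np.pi * 33_500 * t)
assert np.allclose(two_tones, identity)
```

One caveat: in a strictly linear system the spectrum still contains only the 33 and 34 kHz lines; some nonlinearity, in the electronics or in the ear, is required to turn that envelope into an actual 1 kHz component.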

Were ultrasonic distortion truly inaudible, SACD would be no improvement on CD, which is not what is observed. Therefore I stand by the assertion that total distortion is what matters, until it is proved otherwise.

Having said that, I agree with your (implicit) point that another useful simulation would use linear interpolation between successive sample points. Then it would be an empirical question which method better approximated the physical effects, and whether the ear responded as the approximation would lead us to expect. A Ph.D. dissertation there.
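That alternative simulation is a small change from the step version sketched earlier (again my construction, not Al's or the post's):

```python
import numpy as np

def interp_rms_error(n_samples, oversample=1000):
    """Relative RMS error when the n_samples points on one period of a
    unit sine are joined by straight lines instead of held as steps."""
    t = np.linspace(0, 2 * np.pi, n_samples * oversample, endpoint=False)
    sine = np.sin(t)
    t_samp = np.linspace(0, 2 * np.pi, n_samples + 1)   # sample instants
    recon = np.interp(t, t_samp, np.sin(t_samp))
    return np.sqrt(np.mean((recon - sine) ** 2)) / np.sqrt(np.mean(sine ** 2))

for n in (50, 100, 250):
    print(n, interp_rms_error(n))
```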

Your point about a periodic waveform of infinite duration is absolutely correct. I was restricting myself to waveforms which are physically possible. Since physical possibility precludes using the Shannon Sampling Theorem to justify the reasoning, I stand by my assertion.

I also suspect that many will disagree with me, for whatever reason. I respect your reasons, but nevertheless must disagree.

Thank you for an enjoyable and enlightening discussion. Respectfully,

Terry