It isn't the bits, it's the hardware


I have been completely vindicated!

Well, at least there is an AES paper that leaves the door open to my observations. As some of you who follow me know (and some of you follow me far too closely), I’ve said for a while that the performance of DACs over the last ~15 years has gotten remarkably better. Specifically, Redbook (CD) playback is a lot better than it was in the past, so much so that high-resolution music and playback no longer make the economic sense they used to.

My belief about why high-resolution music sounded better has been completely altered. I used to believe we needed the data. Over the past couple of decades my thinking has radically changed. Now I believe WE don’t need the data; the DACs needed it. That is, the problem was never that we needed 30 kHz performance. The problem was always that the DAC chips themselves performed differently at different resolutions. Here is at least some proof supporting this possibility.

Stereophile published a link to a meta-analysis of high-resolution playback, and while the authors propose a number of issues and solutions, two things stood out to me: the section on hardware improvements, and the one on new filters (which is, in my mind, the same topic):



4.2
The question of whether hardware performance factors, possibly unidentified, as a function of sample rate selectively contribute to greater transparency at higher resolutions cannot be entirely eliminated.

Numerous advances of the last 15 years in the design of hardware and processing improve quality at all resolutions. A few, of many, examples: improvements to the modulators used in data conversion affecting timing jitter, bit depths (for headroom), dither availability, noise shaping and noise floors; improved asynchronous sample rate conversion (which involves separate clocks and conversion of rates that are not integer multiples); and improved digital interfaces and networks that isolate computer noise from sensitive DAC clocks, enabling better workstation monitoring as well as computer-based players. Converters currently list dynamic ranges up to ∼122 dB (A/D) and 126–130 dB (D/A), which can benefit 24b signals.

Now if I hear "DAC X performs so much better with 192/24 signals!" I don't get excited. I think the DAC is flawed.

Showing 10 responses by erik_squires

Yep, looks like I was wrong, and a little surprised.

I'm still not all that excited about either 36 or 64 bit math here. :)

So: I was mistaken, my bad.

Best,

E

The dual mono DAC increased 64 bit processing power from the prior 36 bit and resulted in a gain of 268 billion higher resolution.

I think you meant prior 32 bits, but maybe not.

It's also, most likely, unnecessary.  I have Roon and it does convert everything to 32 bit numbers (floating point I think) before doing any DSP manipulation on it.  I'm pretty sure that's good enough, but sure, 64 bits is a lot more than 32. :)
But looking more at the (poorly) written paper linked attempting to compare a readily used term, over-sampling, to one practically made-up at least for this case (upsampling),


No, not at all.  This is not the only paper, and to claim it is, is selective reading.

Upsampling and oversampling have long been quite clearly understood in the industry to mean two different approaches to the filtering problem. Only the poorly informed believe otherwise.

The former (upsampling) attempts to estimate new data points, whether by linear interpolation or by curve fitting. The latter (oversampling) replicates the data, so the rate at which data arrives is now higher, but the amplitudes are identical. That is, with 4x oversampling, you duplicate the same 16 bits. With upsampling you do not. Neither requires ASR.
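To make that distinction concrete, here is a minimal Python sketch following the description above (illustrative only; the function names are mine, and a real DAC's oversampling filter does more than repeat samples):

```python
def oversample_4x(samples):
    """Replicate the data: each value is repeated 4 times, so samples
    arrive 4x faster but every amplitude is identical to the original."""
    return [s for s in samples for _ in range(4)]

def upsample_4x(samples):
    """Estimate new data points: linearly interpolate 3 new values
    between each pair of original samples."""
    out = []
    for a, b in zip(samples, samples[1:]):
        step = (b - a) / 4
        out.extend(a + step * k for k in range(4))
    out.append(samples[-1])
    return out

cd = [0, 400, -200, 100]      # toy "44.1 kHz" sample values
print(oversample_4x(cd))      # same amplitudes, just repeated
print(upsample_4x(cd))        # new in-between amplitudes estimated
```

Note the oversampled list contains only the original amplitudes, while the upsampled list contains values that never appeared in the source.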

And... I'm done. :) While you make good cases for the filter behavior being similar, and it is, this argument alone has already taken us far from fact-based discussion, and my patience for that is now zero.

Buh bye.
Here’s an interesting article I ran across at Benchmark Media; I quoteth the relevant part for this conversation:

An examination of converter IC data sheets will reveal that virtually all audio converter ICs deliver their peak performance near 96 kHz. The 4x (176.4 kHz and 192 kHz) mode delivers poorer performance in many respects.


The full article:

https://benchmarkmedia.com/blogs/application_notes/13127453-asynchronous-upsampling-to-110-khz

This again supports my hypothesis, that the converters themselves perform differently, it’s not just the data.
@heaudio123

You said:

Not really a curve-fitting but okay to think about it that way.

Actually that's exactly how it works for upsampling, but different upsampling algorithms work differently. With the advent of cheap compute, Bezier curves are cheap and easy to do. 
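As a rough illustration of what curve-fit upsampling means, here is a Catmull-Rom cubic, a close cousin of a Bezier and purely a hypothetical stand-in for whatever any given DAC actually uses. It fits a curve through four neighboring samples and reads off an estimated point between the middle two:

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Fit a cubic through 4 neighboring samples; evaluate it at
    t in [0, 1], which sweeps from p1 (t=0) to p2 (t=1)."""
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

# Estimate a new sample halfway between 400 and -200, using the
# neighbors on either side to shape the curve:
print(catmull_rom(0, 400, -200, 100, 0.5))  # → 106.25
```

The curve passes exactly through the existing samples and guesses what lies between them; no information beyond those four points is used.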


Oversampling shifts the effective sample rate so that the base spectra (which does not change), shifts from being centered around 44.1Khz to centered around 384Khz. Being say only 20Khz wide, a digital filter can easily remove most artifacts over 20Khz, with a simple analog filter taking out the rest.


I didn’t say "oversampling."

I said "upsampling" and they are not the same thing, which is why your post is arguing against something that was not actually argued.

Please see this primer:

https://www.audioholics.com/audio-technologies/upsampling-vs-oversampling-for-digital-audio


Best,

E
That upsampling can be useful doesn't necessarily mean that more data won't improve results.


@cleeds

But that's just it: with upsampling, you are not generating more data. There's no more clarity, resolution, or harmonics. There's not yet an AI that is listening to a trumpet and saying, "Oh, I know how a trumpet sounds at 384k, I can fill in those gaps."

At best, upsampling is curve fitting. If upsampling is a significant improvement for a particular DAC, the cause isn't the data content, because that is largely the same; it is how well the DAC chip performs when fed more of it.
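A toy sketch of that point (hypothetical, using simple linear interpolation): if you upsample and then simply discard the interpolated points, you recover the original samples exactly, because no new information was ever created:

```python
def upsample_2x(samples):
    """Insert one linearly interpolated point between each pair."""
    out = []
    for a, b in zip(samples, samples[1:]):
        out.extend([a, (a + b) / 2])
    out.append(samples[-1])
    return out

original = [10, -4, 7, 0]
hires = upsample_2x(original)
recovered = hires[::2]        # throw away every interpolated point
print(recovered == original)  # → True: nothing was added
```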
I realize now that, for brevity, I did not include my own experiences.

I had the chance to own simultaneously an ARC DAC 8, along with a Mytek Brooklyn. I streamed to both of them.

I also used a Wyred4Sound Remedy (an asynchronous sample rate converter).

To make a long story short, and leave myself open to uninformed nit-picking, I’ll say the following:

The gap in playback quality between Redbook and 96/24 was wide in the ARC 8. It was very very narrow in the Brooklyn. In all cases, the Brooklyn was superior. The ARC 8 benefited from the ASR a great deal. The Brooklyn did not.

Given that the Mytek was better in all ways AND also had such a slim difference in performance I concluded that maybe the problem was not the data, as we have so often thought, but how well the DACs behaved with Redbook. I've had similar experiences with a number of modern DACs. Some very inexpensive. The playback gap has all but vanished over the last 15 years. What was once obvious is now gone.

Yet, despite this, I have seen many times people take my experience as evidence of data missing from Redbook, no matter what evidence is presented to the contrary.

Anyone who relies on upsampling is, in my mind, also taking advantage of a DAC that simply performs better with high-resolution data, even though upsampling CANNOT, under any circumstances, add information that was not there before. If upsampling works at all, it means the DAC does not perform equally at all resolutions. It has nothing to do with missing data.
I should point out that in the experiments I was able to do, with two different DACs of different ages, I exclusively used a streamer. No CD flutter was involved.

Best,

E
@millercarbon

I'm grateful for your thoughtful consideration of the facts and ability to share your great insight with all of us.

I'm so glad you aren't a knee-jerk reactionary who just follows around someone he doesn't like to be a negative sourpuss. Happy to see that those days are in the past and only the adults are left in the pool.

Best,

E