Well, at least there is an AES paper that leaves the door open to my observations. As some of you who follow me know (and some of you follow me far too closely), I’ve said for a while that the performance of DACs over the last ~15 years has gotten remarkably better. Specifically, Redbook/CD playback is a lot better than it was in the past, so much so that high-resolution music and playback no longer make the economic sense they used to.
My belief about why high-resolution music sounded better has now been completely overturned. I used to believe we needed the data. Over the past couple of decades my thinking has changed: now I believe WE don’t need the data, the DACs needed it. That is, the problem was not that we needed 30 kHz performance. The problem was always that the DAC chips themselves performed differently at different resolutions. Here is at least some evidence supporting this possibility.
4.2 The question of whether hardware performance factors, possibly unidentified, as a function of sample rate selectively contribute to greater transparency at higher resolutions cannot be entirely eliminated.
Numerous advances of the last 15 years in the design of hardware and processing improve quality at all resolutions. A few, of many, examples: improvements to the modulators used in data conversion affecting timing jitter, bit depths (for headroom), dither availability, noise shaping and noise floors; improved asynchronous sample rate conversion (which involves separate clocks and conversion of rates that are not integer multiples); and improved digital interfaces and networks that isolate computer noise from sensitive DAC clocks, enabling better workstation monitoring as well as computer-based players. Converters currently list dynamic ranges up to ∼122 dB (A/D) and 126–130 dB (D/A), which can benefit 24b signals.
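Since dither and noise shaping are on that list, here is a minimal sketch of what plain TPDF dither ahead of 16-bit truncation looks like (NumPy; the tone level and dither amplitude are just illustrative choices, not any product's actual implementation):

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 1000 * t)         # a -6 dBFS, 1 kHz test tone

lsb = 1.0 / 2**15                               # one 16-bit LSB (full scale = +/-1.0)
# TPDF dither: the sum of two uniform distributions, +/-1 LSB peak overall
dither = (np.random.uniform(-0.5, 0.5, fs) + np.random.uniform(-0.5, 0.5, fs)) * lsb

plain    = np.round(x / lsb) * lsb              # truncate to 16 bits, no dither
dithered = np.round((x + dither) / lsb) * lsb   # dither first, then truncate

# The dithered version trades a slightly higher noise floor for the removal
# of signal-correlated quantization distortion.
print("undithered error RMS:", np.std(plain - x))
print("dithered error RMS:  ", np.std(dithered - x))
```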
Now if I hear "DAC X performs so much better with 192/24 signals!" I don't get excited. I think the DAC is flawed.
The D-02X uses 32 (16 + 16) DAC circuits to achieve its low-noise and phenomenal linearity. Its AK4490 chipsets are produced by Asahi Kasei Microdevices. In addition to directly processing DSD signals, a new 36-bit D/A processing algorithm performs analog conversion of the PCM signal at 36-bit resolution for smooth, delicately detailed high-resolution sound.
Sorry, but I forgot: 268 billion TIMES higher resolution. And yes, I thought 32-bit processing would be correct, but the rep specifically stated 36-bit processing. Maybe he made an error in his presentation. At 2:15:
https://www.youtube.com/watch?v=Kb_LBRSefvE I've heard this system using my own LP and CDs. The best system I ever heard and my wife said it was uniquely musical. Of course, it was $1.4 million.
The dual mono DAC increased 64 bit processing power from the prior 36 bit and resulted in a gain of 268 billion higher resolution.
I think you meant prior 32 bits, but maybe not.
It's also, most likely, unnecessary. I have Roon and it does convert everything to 32 bit numbers (floating point I think) before doing any DSP manipulation on it. I'm pretty sure that's good enough, but sure, 64 bits is a lot more than 32. :)
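For what it's worth, the arithmetic behind those marketing numbers is easy to check; a quick sketch, nothing more:

```python
# Going from 36-bit to 64-bit internal words multiplies the number of
# representable levels by 2**(64 - 36):
print(2**(64 - 36))    # 268,435,456 -- the "268" figure, though million rather than billion
print(2**(64 - 32))    # ~4.29 billion, if the prior word length had been 32 bits

# Theoretical dynamic range of an N-bit word is roughly 6.02*N + 1.76 dB,
# so even 24 bits (~146 dB) already exceeds the 126-130 dB the best D/A
# converters are quoted at in the paper excerpt above.
for n in (16, 24, 32):
    print(n, "bits ->", round(6.02 * n + 1.76, 1), "dB")
```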
Allow me to shift the conversation in a different direction. In the upsampling/oversampling discussion, why is it that when using an R2R ladder DAC most prefer a non-oversampling setting?
I just saw a 3-month-old YouTube video with an Esoteric rep at a show explaining their new transport and DAC. The dual mono DAC increased 64 bit processing power from the prior 36 bit and resulted in a gain of 268 billion higher resolution. This is the PX1 model DAC. All processing is done with discrete components rather than chips. This is supposed to get the most out of the CD. If true, then I hope the trickle-down to less expensive DACs/players will eliminate the need for streaming at lower quality except for convenience.
Heaudio, I wonder if my EAR Acute from 2006 has an adequate transport, as I read it was a standard Sony. The unit was originally Adcom, not an audiophile-level unit sonically. That's why I am questioning whether an exotic/high-end transport would improve my digital-end enjoyment. Actually, the COS D2 DAC was a 2018-engineered product, so it is relatively current. Both units (and my entire system) use GroverHuffman Pharoah-level cabling, and differences in power cables were immediately noticed on both the DAC and EAR Acute (as a transport).
Stabilizing the disc is important, glad someone is doing that. But what’s that, 1% of the players? Hel-loo! Moreover, careful vibration isolation of the player from seismic type vibes is also important and separate from stabilizing the disc and requires an aftermarket solution. I’m not trying to set the world on fire, just start a flame 🔥 in a few hearts 💕
What is with the scattered light psychosis? Is this an attempt at humor?
Any CD player can, with error correction, pretty much extract a bit-perfect stream. If you don't buffer, you have jitter. Buffer and reclock, and the jitter disappears. No scattered-light psychosis to worry about. Check the calendar. It is 2020, not 1999. You missed the millennium and the 20 years after.
Are you trying to intentionally mislead people?
That's directed at geoffkait, but fleschler, mechanical vibration and its impact on jitter disappeared as an issue on audiophile CD players over a decade ago. Modern players buffer and reclock. What happens at the mechanism is almost meaningless.
I have read several current high-end player and transport manufacturers who specifically cite their vibration isolation and freedom from wobble, and their precise laser readers, from a mechanical reference point. As to light scatter, I have read several manufacturers who maintain that their units are totally black. However, you have mentioned that black-out conditions are insufficient, as there are unseen light frequencies which are detectable by the laser but not the eye.
I still have Kyocera CD players from about 1985 and they sound quite nice. I kept my EAR Acute as it sounds musically interesting but not as highly resolving. Hence, I purchased a 2016 engineered DAC which is the cat’s meow for the price (COS Engineering D2). I will try a superior transport to see how much it will add to my enjoyment with my new DAC. Over the years, I have made extreme upgrades in my cabling, which accounts for the DAC and EAR maximizing their potential.
What I previously stated is that when mechanical vibration is eliminated as a source of jitter, etc., and only infrared light scatter remains an issue, CD playback can be extremely enjoyable, comparable to high end analog playback.
Everything is relative. As I oft say, audiophiles are prone to making declarations such as the ones you just made, i.e., that somehow modern players are superior by buffering, etc. While it may be true that some CD players are more innovative than others in dealing with these or other issues, modern players as a whole don’t address the issues I mentioned, especially the scattered light issue.
I hate to judge too harshly, but I don’t think any CD player manufacturer has even mentioned that scattered light is a problem, much less offered a solution. Self-inflicted CD wobble and flutter is another issue very few manufacturers mention. The Green Pen is an example of a partial solution. Isolation is also a partial solution. Other, older players stabilized the CD - e.g., a Sony SACD player used a brass weight to hold the disc firmly, so that idea’s not new. It should be mentioned that a relatively inexpensive tweak for an existing player must be weighed against the great expense of a “modern player,” assuming the modern player even addresses the problems, which it probably doesn’t.
A comment was made concerning Esoteric's finest player, which uses their VRDS-NEO VMK-3.5-20S transport. This transport reportedly hugs the CD in an exact position for the laser reader and eliminates wobble. It was designed to be vibrationally isolated from the mechanics of moving the disc. I don't think that two of geoffkait's complaints concerning transport reading problems are valid for this design. The remaining problem of scattered light is still valid, although with accurate laser tracking this problem should be reduced by the lens's superior focusing. I also read that Luxman's transport is designed for superior vibration and tracking capabilities. These units are vastly superior to the 1980s CD players, which I detested for the most part (mostly due to jitter and their DACs). Maybe CD and DVD players of today are still imperfect (so is analog playback) but it's damn great!
No one is saying a DVD or CD won’t work. What I’m saying is that, the way the system was designed, DVDs, CDs and Blu-rays appear to be working 100% but are actually working less than 100%. How much less than 100% depends on many factors. It’s not as if there are “data dropouts” that are audible or visible. It’s more subtle. It’s a subtle degradation of the sound or picture. It’s not the disc per se but how the disc is read. The disc has all the data; the system can’t read/interpret it accurately or completely. Think of it like an 8-cylinder car running on 7 cylinders. It will still run OK.
Now that this thread has gone off the rails, I'd like to add my thoughts on spinning discs. If the data were not fully retrievable, would a DVD work? Video signals, I believe, are more critical than audio signals in that we would see errors from the disc, possibly similar to the compression, micro and macro blocking, banding and posterization seen on streaming video/cable TV.
In this limited industry, "upsample" has come to mean, somewhat exclusively, resampling by an asynchronous sample rate converter (and it was pretty much a made-up term). Most people know that much, but what they don’t know is that the underlying technology is almost always some form of synchronous oversampling, in the form of fractional delay filters, which provides an underlying shift upwards in the spectra from the original sample rate, coupled with a time-compensated curve fit which provides the final sample rate and provides the jitter attenuation (something not needed with async streaming sources, of course). So when claiming an advantage for upsampling to a higher frequency, is it the inherent oversampling, the jitter reduction, or the choice of final sample rate, or some combination of these?
As Cleeds pointed out, you can’t make a generalization to all cases based on one example.
That Benchmark found that "performance" based on data sheets and some simple performance metrics was better at 24/96 than 24/192 is not at all surprising. At lower speeds, you have less contribution to the output from the switching CMOS switches, less dynamic power (and less glitch energy) contributing to a quieter environment, and even simply more time to settle to the final value. Perhaps in Benchmark’s specific case, decoupling the output frequency from the input also removed sources of synchronous noise. Of course, they are running in 2x mode anyway, so technically the DAC is running close to 192 kHz internally, so what exactly does that 96 kHz even mean in their argument ... not to mention that it then runs into a much higher rate sigma-delta modulator. Does their product match the data, or does the data match the product?
However, the items they cited w.r.t. the data sheets and their tests are not guarantees of excellent perceived sonic performance, which involves trading off high sample rate and system noise against analog filtering. The article is also 10 years old, so what was best 10 years ago may have shifted up 2x or more in terms of what is best now.
I don’t think you have made your point well, simply because tests have been done with 24/96 native, and with 24/96 down-sampled mathematically to 16/44.1 and then upsampled back to 24/96 so that the playback path was identical; all that was changed was the information rate.
But looking more closely at the (poorly) written paper linked, which attempts to compare a readily used term, over-sampling, to one practically made up at least for this case (upsampling),
No, not at all. This is not the only paper, and to claim it is, is selective reading.
Upsampling and oversampling have long been quite clearly understood in the industry to mean two different approaches to the filtering problem. Only the poorly informed believe otherwise.
The former (upsampling) attempts to extrapolate new data points, whether by linear interpolation or by curve fitting. The latter replicates the data, so the rate at which data is received is now higher, but the amplitude is identical. That is, with 4x oversampling, you duplicate the same 16 bits. With upsampling you do not. Neither requires ASR.
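A toy illustration of that distinction, using made-up sample values rather than anything off a disc:

```python
import numpy as np

x = np.array([0.0, 0.7, 1.0, 0.7, 0.0, -0.7, -1.0, -0.7])   # original samples

# 4x "oversampling" in the sense described above: every sample is simply
# repeated four times, so the amplitudes are identical, only the rate changes.
oversampled = np.repeat(x, 4)

# 4x "upsampling": new intermediate points are estimated, here by linear
# interpolation (a curve fit would do the same job with a smoother curve).
n = len(x)
upsampled = np.interp(np.arange(4 * n) / 4, np.arange(n), x)

print(oversampled[:8])   # [0.   0.   0.   0.   0.7  0.7  0.7  0.7 ]
print(upsampled[:8])     # [0.    0.175 0.35  0.525 0.7   0.775 0.85  0.925]
```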
And... I'm done. :) While you make good cases for the filter behavior being similar, and it is, this argument alone has already gotten us far from fact based, and my patience for that is now zero.
Here’s an interesting article I ran across at Benchmark Media; I quoteth the relevant part for this conversation:
An examination of converter IC data sheets will reveal that virtually all audio converter ICs deliver their peak performance near 96 kHz. The 4x (176.4 kHz and 192 kHz) mode delivers poorer performance in many respects.
From a purely technical standpoint oversampling can apply to DA conversion, not just ADC, so from that standpoint, you can use upsampling, oversampling or sample rate conversion to a higher frequency all interchangeably. Feel free to validate that with DAC data sheets that discuss oversampling.
But looking more closely at the (poorly) written paper linked, which attempts to compare a readily used term, over-sampling, to one practically made up at least for this case (upsampling), and then never really gives any definition of upsampling except to define it pretty much exactly as asynchronous sample rate conversion, another well understood term, I am not surprised by the confusion.
erik_squires: "Actually that’s exactly how it works for upsampling, but different upsampling algorithms work differently. With the advent of cheap compute, Bezier curves are cheap and easy to do. "
I think you are missing a key element of how a typical asynchronous sample rate converter with inherent over-sampling works, namely that the first step is an implementation of oversampling (typically fractional delay filters), which provides a smoother curve for the curve fit, which then works over a smaller number of samples. Doing this keeps the spurious frequency components higher up, allowing for easier final filtering.
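A rough two-stage sketch of that structure (SciPy; a polyphase FIR stands in here for the fractional-delay filter bank, and the rates are just examples):

```python
import numpy as np
from scipy.signal import resample_poly

fs_in, fs_out = 44100, 48000
t_in = np.arange(fs_in) / fs_in
x = np.sin(2 * np.pi * 1000 * t_in)          # one second of a 1 kHz tone

# Step 1: synchronous oversampling puts the signal on a dense intermediate
# grid, keeping the image components far above the audio band.
osr = 32
dense = resample_poly(x, osr, 1)
fs_dense = fs_in * osr

# Step 2: the "curve fit" - evaluate the dense signal at the output clock's
# sample instants; with the grid this fine, even linear interpolation between
# neighbouring points introduces very little error.
t_out = np.arange(fs_out) / fs_out
y = np.interp(t_out, np.arange(len(dense)) / fs_dense, dense)
print(len(x), "->", len(y), "samples")
```

In a real asynchronous converter the output instants come from the receiving clock rather than a fixed ratio, which is where the jitter attenuation comes from.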
Not really curve fitting, but okay to think about it that way.
Actually that's exactly how it works for upsampling, but different upsampling algorithms work differently. With the advent of cheap compute, Bezier curves are cheap and easy to do.
Oversampling shifts the effective sample rate so that the base spectrum (which does not change) shifts from being centered around 44.1 kHz to being centered around 384 kHz. Being, say, only 20 kHz wide, a digital filter can easily remove most artifacts above 20 kHz, with a simple analog filter taking out the rest.
I didn’t say "oversampling."
I said "upsampling" and they are not the same thing, which is why your post is arguing against something that was not actually argued.
Not really curve fitting, but okay to think about it that way. In a digital representation, the spectrum is reflected around 0 Hz and around the sampling rate. Oversampling shifts the effective sample rate so that the base spectrum (which does not change) shifts from being centered around 44.1 kHz to being centered around 384 kHz. Being, say, only 20 kHz wide, a digital filter can easily remove most artifacts above 20 kHz, with a simple analog filter taking out the rest. The other benefit is spreading out the quantization noise, reducing the noise floor.
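The noise-floor part is easy to show numerically; a small sketch (997 Hz tone, TPDF dither, all values illustrative) that measures how much of the quantization noise lands in the 0–20 kHz band at different rates:

```python
import numpy as np

def inband_noise_db(osr, bits=16, f0=997.0, fs_base=44100):
    fs = fs_base * osr
    n = fs                                     # one second of samples
    t = np.arange(n) / fs
    x = 0.5 * np.sin(2 * np.pi * f0 * t)
    lsb = 1.0 / 2**(bits - 1)
    dither = (np.random.uniform(-0.5, 0.5, n) + np.random.uniform(-0.5, 0.5, n)) * lsb
    err = np.round((x + dither) / lsb) * lsb - x
    # Integrate the error spectrum over the audio band (0-20 kHz) only.
    spec = np.abs(np.fft.rfft(err))**2 / n**2
    freqs = np.fft.rfftfreq(n, 1 / fs)
    return 10 * np.log10(spec[freqs <= 20000].sum())

for osr in (1, 4, 8):
    print(f"{osr}x rate: in-band quantization noise {inband_noise_db(osr):.1f} dB")
```

The total quantization noise power stays the same; each doubling of the rate just spreads it wider, pushing roughly 3 dB of it out of the audio band, before any noise shaping is applied on top.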
That upsampling can be useful doesn't necessarily mean that more data won't improve results.
@cleeds
But that's just it: with upsampling, you are not generating more data. There's no more clarity or resolution, or harmonics. There's not yet an AI that is listening to a trumpet and saying "oh, I know how a trumpet sounds at 384k, I can fill in those gaps."
At best, upsampling is curve fitting. If we say that upsampling for a particular DAC is a significant improvement, then it's not the data content, because that is largely the same; it is how well the DAC chip performs with more of it.
Given that the Mytek was better in all ways AND also had such a slim difference in performance I concluded that maybe the problem was not the data, as we have so often thought, but how well the DACs behaved with Redbook.
That's certainly possible. There are some other variables - which some here have noted - including the synergy of the DACs and your subjective observation that the Mytek "was better in all ways."
If upsampling works, at all, then it means the DAC does not perform equally at all resolutions. It has nothing to do with missing data.
It isn't clear how you've made the leap from, "maybe the problem was not the data" to, "It has nothing to do with missing data." That upsampling can be useful doesn't necessarily mean that more data won't improve results.
It's risky to form an absolute conclusion from just a single test.
The best system I’ve heard - by far - is one for which the Reed-Solomon error correction code subsection was disabled. Yes, I know what you’re thinking - is he out of his mind? And once you stabilize the CD there is almost no need for the CD laser servo feedback system. Once you fix the underlying problems in the CD transport, there is no need for all the patchwork fixes. The original designers obviously knew they had some problems with CD playback; they just didn’t know what all of the problems were, or they ignored them. Do modern CD players just wish the scattered light problem away? I’ve never heard anyone even address the issue. If you could hear what I’ve heard with my ears.
I think you have a fundamental lack of knowledge of the architecture of a modern audiophile CD player, and hence your confusion w.r.t. what happens after the data is read off the disc: pretty much every error is corrected in the error correction block before it hits the buffer and re-clocker. Unfortunately I can't post images here, and it would be easier to explain with pictures.
No. I am using the proper definition for jitter as it applies to the output of the CD player, not what comes off the disc which is meaningless in a buffered and reclocked player, i.e. modern audiophile players. I have no idea what you are using.
Buffering doesn't stop any errors; it provides the mechanism to eliminate all jitter.
No one ever said there are no uncorrectable errors, though when manufactured, based on many industry tests, uncorrectable errors are quite rare. These are tests pretty easily recreated on any computer with a CD-ROM as well, so it is no "secret".
From a practical standpoint, if you treat your CD the way the average audiophile does, uncorrected errors are not going to impact your listening experience. However, if you are just going to make stuff up, then I am not sure why you are participating in the conversation?
“There are no uncorrectable errors.” That’s precisely what the industry has said since day one. Hel-loo! “Perfect Sound Forever.” The Reed-Solomon codes and the laser servo feedback mechanism were supposed to take care of any errors. What a joke. Buffering doesn’t stop all laser reading errors, by the way, only certain specific ones. If it did, portable Walkman CD players would be perfect. Buffering only puts off the inevitable for a few seconds.
I realize now that, for brevity, I did not include my own experiences.
I had the chance to own simultaneously an ARC DAC 8, along with a Mytek Brooklyn. I streamed to both of them.
I also used a Wyred4Sound Remedy (an asynchronous sample rate converter).
To make a long story short, and leave myself open to uninformed nit-picking, I’ll say the following:
The gap in playback quality between Redbook and 96/24 was wide in the ARC 8. It was very very narrow in the Brooklyn. In all cases, the Brooklyn was superior. The ARC 8 benefited from the ASR a great deal. The Brooklyn did not.
Given that the Mytek was better in all ways AND also had such a slim difference in performance I concluded that maybe the problem was not the data, as we have so often thought, but how well the DACs behaved with Redbook. I've had similar experiences with a number of modern DACs. Some very inexpensive. The playback gap has all but vanished over the last 15 years. What was once obvious is now gone.
Yet, despite this, I have seen many times people take my experience as evidence of data missing in Redbook, no matter what evidence is presented to the contrary.
Anyone who relies on upsampling in my mind is also taking advantage of a DAC simply performing better with high resolution data, even though upsampling CANNOT under any circumstances add information that was not there before. If upsampling works, at all, then it means the DAC does not perform equally at all resolutions. It has nothing to do with missing data.
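That part is easy to demonstrate on any computer: upsample a band-limited signal and look for content above the original Nyquist frequency. A minimal sketch with made-up test tones:

```python
import numpy as np
from scipy.signal import resample_poly

fs = 44100
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 5000 * t) + 0.3 * np.sin(2 * np.pi * 15000 * t)

y = resample_poly(x, 4, 1)                 # "upsample" 4x, to 176.4 kHz

spec = np.abs(np.fft.rfft(y))**2
freqs = np.fft.rfftfreq(len(y), 1 / (4 * fs))
above = spec[freqs > 22050].sum() / spec.sum()
print("fraction of energy above 22.05 kHz:", above)   # effectively zero - nothing new was created
```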
I should point out that in the experiments I was able to do, with 2 different DACs of different ages, I exclusively used a streamer. There was no CD fluttering involved.
>>>>>One assumes that is pure speculation or maybe wishful thinking.
Or, contrary to the post the reply was to, it was factual knowledge, and not an opinion or an unproven, evidence-lacking hypothesis. The CD can flop around like a beached whale in a tsunami, but unless there are unrecoverable errors, a buffered and reclocked modern audiophile player's output is not going to be affected. This isn't the '80s and '90s, when, due to the cost and limited functionality of player mechanisms, you were beholden to the recovery of data impacting the clock PLL, and to jitter from the variable error recovery pipeline.
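A toy model of that point, for anyone who finds it easier to see in numbers than prose (all timing values are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 44100
n = 44100                                          # one second of samples

# Samples come off the transport with irregular timing (bursts, re-reads,
# servo activity) - only the shape of this matters, not the exact numbers.
arrival_gaps = rng.uniform(0.2, 1.6, n) / fs       # nominal gap would be 1/fs
arrival_times = np.cumsum(arrival_gaps)

# The buffer decouples that timing from playback: after a fixed start-up
# delay, the DAC clock reads one sample every 1/fs seconds, full stop.
startup_delay = 0.5
output_times = startup_delay + np.arange(n) / fs

# Output spacing is constant no matter how ragged the arrivals were, and no
# sample is needed before it has arrived (no underrun).
print("std of output intervals:", np.std(np.diff(output_times)))   # ~0, only float round-off
print("underruns:", int(np.sum(output_times < arrival_times)))     # 0
```

The whole argument then reduces to whether the data coming off the disc is correct, which is the error correction block's job, not the buffer's.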
Still, no matter how expensive it cannot address the self inflicted fluttering of the CD itself. As for the other problems I mentioned, it remains to be seen whether the Esoteric addresses any of them.