"Over the past couple of decades my thinking has radically and forever been altered."

Sorry to hear. Still, given that you were able to change your mind once, however many decades it took, maybe some day you could do it again?
It isn't the bits, it's the hardware
I have been completely vindicated!
Well, at least there is an AES paper that leaves the door open to my observations. As some of you who follow me know (and some of you follow me far too closely), I’ve said for a while that the performance of DACs over the last ~15 years has gotten remarkably better. Specifically, Redbook (CD) playback is a lot better than it was in the past, so much so that high-resolution music and playback no longer makes the economic sense that it used to.
My belief about why high-resolution music sounded better has now been completely altered. I used to believe we needed the data. Over the past couple of decades my thinking has radically and forever been altered: now I believe WE don’t need the data, the DACs needed it. That is, the problem was not that we needed 30 kHz performance; the problem was always that the DAC chips themselves performed differently at different resolutions. Here is at least some proof supporting this possibility.
Stereophile published a link to a meta-analysis of high-resolution playback, and while it proposes a number of issues and solutions, two things stood out to me: the section on hardware improvement and the new filters (which are, in my mind, the same topic):
Section 4.2:

"The question of whether hardware performance factors, possibly unidentified, as a function of sample rate selectively contribute to greater transparency at higher resolutions cannot be entirely eliminated.

Numerous advances of the last 15 years in the design of hardware and processing improve quality at all resolutions. A few, of many, examples: improvements to the modulators used in data conversion affecting timing jitter, bit depths (for headroom), dither availability, noise shaping and noise floors; improved asynchronous sample rate conversion (which involves separate clocks and conversion of rates that are not integer multiples); and improved digital interfaces and networks that isolate computer noise from sensitive DAC clocks, enabling better workstation monitoring as well as computer-based players. Converters currently list dynamic ranges up to ∼122 dB (A/D) and 126–130 dB (D/A), which can benefit 24b signals."
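For context on those dynamic-range figures, the textbook formula for an ideal n-bit converter driven by a full-scale sine is about 6.02n + 1.76 dB of SNR. A quick check in plain Python (standard formula, not tied to any particular converter):

```python
import math

def ideal_dynamic_range_db(bits):
    """Ideal SNR of an n-bit quantizer driven by a full-scale sine.

    Quantization noise power is q^2/12 and full-scale sine power is
    (q * 2^n)^2 / 8, so the ratio is 1.5 * 4^n, i.e. ~6.02n + 1.76 dB."""
    return 20 * math.log10(2 ** bits * math.sqrt(1.5))
```

So 16 bits gives roughly 98 dB and 24 bits roughly 146 dB; the 126–130 dB D/A figures the paper quotes mean real converters resolve about 21 honest bits, which is still comfortably more than CD's 16.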
Now if I hear "DAC X performs so much better with 192/24 signals!" I don't get excited. I think the DAC is flawed.
64 responses
@millercarbon I'm grateful for your thoughtful consideration of the facts and ability to share your great insight with all of us. I'm so glad you aren't a knee-jerk reactionary that just follows someone he does not like to be a negative sour puss. Happy to see that those days are in the past and only the adults are left in the pool. Best, E |
It's not the accuracy, it is the lack of jitter, and that has been possible and relatively cheap for some time. If you are feeding an async data stream, i.e. USB, wired Ethernet, WiFi, hard drive, etc., then a basic oscillator with a decent power supply is effectively jitter free and rather inexpensive, practically free by audiophile standards. It is when you start feeding synchronous data with varying data rates and trying to sync up two clock domains, entering the realm of PLLs, that it gets harder and a lot more expensive; and/or you get into techniques such as ASRC, where you are beholden to the underlying math (and resolution) converting between the two sample-rate domains, and performance gets far more variable (as does cost). |
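The scale of the jitter problem is easy to estimate: sampling a sine at instants displaced by Δt produces an error on the order of 2πf·Δt of full scale. A minimal sketch (hypothetical numbers and a deliberately pessimistic alternating-jitter pattern, not any particular DAC's behavior):

```python
import math

def jitter_noise_db(f_hz, fs_hz, jitter_s, n=8192):
    """Error power (dB relative to the signal) from sampling a sine at
    instants displaced by a worst-case alternating +/- jitter."""
    err2 = sig2 = 0.0
    for k in range(n):
        t = k / fs_hz
        tj = t + jitter_s * (1 if k % 2 else -1)  # alternate early/late
        ideal = math.sin(2 * math.pi * f_hz * t)
        jittered = math.sin(2 * math.pi * f_hz * tj)
        err2 += (jittered - ideal) ** 2
        sig2 += ideal ** 2
    return 10 * math.log10(err2 / sig2)
```

For a 10 kHz tone, 1 ns of jitter sits around -84 dB, and every 10x reduction in jitter buys about 20 dB, which is why a cheap, stable oscillator on an async input can indeed be "effectively jitter free" for audio purposes.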
P05, sounds like you’re using the DirectStream, as am I. I recently did some listening tests with some albums that were released at 16 bits and then later at 24 bits (both at 44.1, and seemingly the same mastering). The differences were very similar to looking at a JPEG at different color depths: the 24-bit had more tonal colors than the 16. Very easy to hear for an experienced ear; the average person may not have been able to tell, or possibly only after someone pointed out the increased and more natural sound. BTW, both files were converted to WAV. A 16-bit WAV or AIFF might have been better than a 24-bit FLAC. I was using JRiver and a very powerful Mac with Thunderbolt RAID storage and an EtherRegen. |
Actually, no matter how good the DAC might be, the damage to the audio signal still mostly occurs in the CD transport. The damage occurs as soon as the laser reads the data on the nanoscale data spiral. It is primarily caused by (1) the effect of external vibration on the suspended laser assembly, (2) the effect of the CD fluttering during play (thus overdriving the laser servo system), and (3) scattered light infecting the data by getting into the photodetector. There are other reasons, too. Yes, I know what you’re thinking: “But I thought Reed-Solomon was supposed to correct all that stuff.” No matter how much you have in the end 🔚 you would have had even more if you started out with more in the beginning. 🔙 - Old audiophile axiom |
People still use CDs for live playback in 2020? On a serious note, I doubt there are many audiophile CD players from the last decade(s) that are not reading ahead, if not retrying, and buffering on digital output, rendering jitter at the player level nonexistent. And if you treat your CDs in any half-decent fashion, as I assume most audiophiles do, then uncorrectable errors are rare; and, as per my previous statement, since the data is reclocked, any timing issues from pipeline processing of the error correction aren't an issue. Oh, and this thread is not about CDs. |
heaudio123: "On a serious note I doubt there would be many audiophile CD players in the last decade(s) that are not reading ahead if not retrying and buffering on digital output rendering jitter at the player level non existent and if you treat your CDs in any half decent fashion like I assume most audiophiles do, then uncorrectable errors are rare, and as per previous statement as the data is reclocked, any timing issues from pipeline processing of the error correction isn’t an issue."

One assumes that is pure speculation or maybe wishful thinking. |
This is expensive but it might address all those "problems" with CD playback. http://www.esoteric-usa.com/Products/audio-players/Grandioso-K1.php |
"One assumes that is pure speculation or maybe wishful thinking."

Or, contrary to the post that reply was to, it was factual knowledge, not an opinion or an unproven, evidence-lacking hypothesis. The CD can flop around like a beached whale in a tsunami, but unless there are unrecoverable errors, a buffered and reclocked modern audiophile player's output is not going to be affected. This isn't the '80s and '90s, when, due to the cost and limited functionality of player mechanisms, you were beholden to the recovery of data impacting the clock PLL, and to jitter from the variable error-recovery pipeline. |
I realize now that, for brevity, I did not include my own experiences. I had the chance to own simultaneously an ARC DAC 8 along with a Mytek Brooklyn. I streamed to both of them. I also used a Wyred4Sound Remedy (an asynchronous sample rate converter). To make a long story short, and leave myself open to uninformed nit-picking, I’ll say the following: the gap in playback quality between Redbook and 96/24 was wide in the ARC 8. It was very, very narrow in the Brooklyn. In all cases, the Brooklyn was superior. The ARC 8 benefited from the ASRC a great deal. The Brooklyn did not. Given that the Mytek was better in all ways AND also had such a slim difference in performance, I concluded that maybe the problem was not the data, as we have so often thought, but how well the DACs behaved with Redbook. I've had similar experiences with a number of modern DACs, some very inexpensive. The playback gap has all but vanished over the last 15 years. What was once obvious is now gone. Yet, despite this, I have seen many times people take my experience as evidence of data missing in Redbook, no matter what evidence is presented to the contrary. Anyone who relies on upsampling, in my mind, is also taking advantage of a DAC simply performing better with high-resolution data, even though upsampling CANNOT, under any circumstances, add information that was not there before. If upsampling works at all, then it means the DAC does not perform equally at all resolutions. It has nothing to do with missing data. |
“There are no uncorrectable errors.” That’s precisely what the industry has said since day one. Hel-loo! “Perfect Sound Forever.” The Reed-Solomon codes and the laser servo feedback mechanism were supposed to take care of any errors. What a joke. Buffering doesn’t stop all laser reading errors, by the way, only certain specific ones. If it did, portable Walkman CD players would be perfect. Buffering only puts off the inevitable for a few seconds. |
Buffering doesn't stop any errors; it provides the mechanism to eliminate all jitter. No one ever said there are no uncorrectable errors, though when manufactured, based on many industry tests, uncorrectable errors are quite rare. These are tests pretty easily recreated on any computer with a CD-ROM as well, so it is no "secret". From a practical standpoint, if you treat your CDs the way the average audiophile does, uncorrected errors are not going to impact your listening experience. However, if you are just going to make stuff up, then I am not sure why you are participating in the conversation? |
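The buffer-and-reclock argument can be shown with a toy simulation (hypothetical numbers, not any real player's firmware): samples arrive from the mechanism with irregular timing, sit in a FIFO, and are clocked out by a fixed local oscillator, so the output timing carries none of the input's jitter.

```python
import random

def simulate_reclock(n=1000, period=1.0, jitter=0.2, latency=5.0, seed=0):
    """Toy model of buffer-and-reclock: jittered arrival times go into a
    FIFO; the output is read out on exact ticks of a local clock."""
    rng = random.Random(seed)
    # Samples arrive from the transport with random timing error.
    arrivals = [i * period + rng.uniform(-jitter, jitter) for i in range(n)]
    # Output clock ticks at exact multiples of `period`, offset by the
    # buffer latency so the FIFO never underruns.
    outputs = [latency + i * period for i in range(n)]
    in_iv = [b - a for a, b in zip(arrivals, arrivals[1:])]
    out_iv = [b - a for a, b in zip(outputs, outputs[1:])]
    return in_iv, out_iv
```

The input intervals wobble; the output intervals are perfectly uniform. The only cost is a fixed latency, which is inaudible for playback.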
I think you have a fundamental lack of knowledge of the architecture of a modern audiophile CD player, and hence your confusion about what happens after the data is read off the disc: pretty much every error is corrected in the error-correction block before the data hits the buffer and re-clocker. Unfortunately I can't post images here, and it would be easier to explain with pictures. |
The best system I’ve heard - by far - is one in which the Reed-Solomon error correction subsection was disabled. Yes, I know what you’re thinking: is he out of his mind? And once you stabilize the CD, there is almost no need for the CD laser servo feedback system. Once you fix the underlying problems in the CD transport, there is no need for all the patchwork fixes. The original designers obviously knew they had some problems with CD playback; they just didn’t know what all of the problems were, or they ignored them. Do modern CD players just wish the scattered light problem away? I’ve never heard anyone even address the issue. If you could hear what I’ve heard with my ears. |
erik_squires: "Given that the Mytek was better in all ways AND also had such a slim difference in performance I concluded that maybe the problem was not the data, as we have so often thought, but how well the DACs behaved with Redbook."

That's certainly possible. There are some other variables - which some here have noted - including the synergy of the DACs and your subjective observation that the Mytek "was better in all ways."

"If upsampling works, at all, then it means the DAC does not perform equally at all resolutions. It has nothing to do with missing data."

It isn't clear how you've made the leap from "maybe the problem was not the data" to "it has nothing to do with missing data." That upsampling can be useful doesn't necessarily mean that more data won't improve results. It's risky to form an absolute conclusion from just a single test. |
"That upsampling can be useful doesn't necessarily mean that more data won't improve results."

@cleeds But that's just it: with upsampling, you are not generating more data. There's no more clarity or resolution, or harmonics. There's not yet an AI that is listening to a trumpet and saying, "Oh, I know how a trumpet sounds at 384k, I can fill in those gaps." At best, upsampling is curve fitting. If we say that upsampling for a particular DAC is a significant improvement, then it's not the data contents, because those are largely the same; it is how well the DAC chip performs with more of it. |
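As a concrete illustration of "upsampling is curve fitting," here is a minimal sketch using the simplest possible curve, a straight line between neighbors (real upsamplers use fancier fits such as the Bezier curves mentioned below, but the principle is the same): every inserted value is derived entirely from existing samples, so no information is added.

```python
def upsample_linear_2x(x):
    """2x upsampling by linear interpolation: keep each original sample and
    insert the midpoint between neighbours. The midpoints are estimates
    computed from existing data, never recovered measurements."""
    y = []
    for a, b in zip(x, x[1:]):
        y.append(a)
        y.append((a + b) / 2.0)
    y.append(x[-1])
    return y
```

Note the original samples pass through unchanged; only the in-between guesses differ from one algorithm to another.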
Not really curve-fitting, but okay to think about it that way. In a digital representation, the spectrum is mirrored around 0 Hz and around the sampling rate. Oversampling shifts the effective sample rate so that the base spectrum (which does not change) goes from being imaged around 44.1 kHz to being imaged around 384 kHz. Being, say, only 20 kHz wide, a digital filter can easily remove most artifacts over 20 kHz, with a simple analog filter taking out the rest. The other benefit is spreading out quantization noise, reducing the noise floor. |
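This spectral picture is easy to verify numerically. A minimal sketch (brute-force DFT, arbitrary toy frame sizes standing in for the real sample rates): zero-stuffing a 64-sample sine by 4x leaves the baseband bin untouched and produces identical images around multiples of the original rate, which is exactly what the digital filter is there to remove.

```python
import cmath
import math

def dft_mag(x):
    """Brute-force DFT magnitudes (fine for toy sizes)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n)]

# A sine occupying bin 8 of a 64-point frame ("44.1 kHz domain")...
x = [math.sin(2 * math.pi * 8 * t / 64) for t in range(64)]

# ...zero-stuffed 4x to a 256-point frame ("176.4 kHz domain").
stuffed = []
for s in x:
    stuffed += [s, 0.0, 0.0, 0.0]

mags = dft_mag(stuffed)
# Baseband stays at bin 8; images appear at 64+/-8, 128+/-8, 192+/-8.
```

The base spectrum really does not change; the interpolation filter's only job is to knock down the images.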
@heaudio123 You said:

"Not really a curve-fitting but okay to think about it that way."

Actually, that's exactly how it works for upsampling, though different upsampling algorithms work differently. With the advent of cheap compute, Bezier curves are cheap and easy to do.

"Oversampling shifts the effective sample rate so that the base spectra (which does not change), shifts from being centered around 44.1Khz to centered around 384Khz. Being say only 20Khz wide, a digital filter can easily remove most artifacts over 20Khz, with a simple analog filter taking out the rest."

I didn’t say "oversampling." I said "upsampling," and they are not the same thing, which is why your post is arguing against something that was not actually argued. Please see this primer: https://www.audioholics.com/audio-technologies/upsampling-vs-oversampling-for-digital-audio Best, E |
From a purely technical standpoint, oversampling can apply to D/A conversion, not just A/D, so from that standpoint you can use "upsampling," "oversampling," and "sample rate conversion to a higher frequency" interchangeably. Feel free to validate that with DAC data sheets that discuss oversampling. But looking more at the (poorly) written paper linked, which attempts to compare a readily used term, oversampling, to one practically made up at least for this case (upsampling), and then gives no real definition of upsampling except to define it pretty much exactly as asynchronous sample rate conversion, another well-understood term, I am not surprised by the confusion.

erik_squires: "Actually that’s exactly how it works for upsampling, but different upsampling algorithms work differently. With the advent of cheap compute, Bezier curves are cheap and easy to do."

I think you are missing a key element of how a typical asynchronous sample rate converter with inherent oversampling works: the first step is an implementation of oversampling (typically fractional delay filters), which provides a smoother curve for the curve fit, which then works over a smaller number of samples. Doing this keeps the spurious frequency components higher up, allowing for easier final filtering. |
Here’s an interesting article I ran across at Benchmark Media; I quoteth the relevant part for this conversation:

"An examination of converter IC data sheets will reveal that virtually all audio converter ICs deliver their peak performance near 96 kHz. The 4x (176.4 kHz and 192 kHz) mode delivers poorer performance in many respects."

The full article: https://benchmarkmedia.com/blogs/application_notes/13127453-asynchronous-upsampling-to-110-khz

This again supports my hypothesis that the converters themselves perform differently; it’s not just the data. |
erik_squires: "... with upsampling, you are not generating more data ..."

Correct.

"There’s no more clarity or resolution, or harmonics ..."

Not necessarily, although if present, it would not be a consequence of more data, but more likely attributable to filtering, as others have noted. Your mistake here is confusing correlation with causation, a common audiophile logical error. |
"But looking more at the (poorly) written paper linked attempting to compare a readily used term, over-sampling, to one practically made-up at least for this case (upsampling)"

No, not at all. This is not the only paper, and to claim it is is selective reading. Upsampling and oversampling have long been quite clearly understood in the industry to mean two different approaches to the filtering problem. Only the poorly informed believe otherwise. The former (upsampling) attempts to extrapolate new data points, whether by linear interpolation or by curve fitting. The latter replicates the data, so the rate at which data is received is higher but the amplitude is identical. That is, with 4x oversampling, you duplicate the same 16 bits; with upsampling you do not. Neither requires ASRC. And... I'm done. :) While you make a good case for the filter behavior being similar, and it is, this argument has already gotten us far from fact-based, and my patience for that is now zero. Buh bye. |
Only in this limited industry does "upsample" somewhat exclusively mean resampling by an asynchronous sample rate converter (and it was pretty much a made-up term). Most people know that; what they don't know is that the underlying technology is pretty much exclusively some form of synchronous oversampling in the form of fractional delay filters, which provides the underlying shift upwards in the spectrum from the original sample rate, coupled with a time-compensated curve fit that produces the final sample rate and provides the jitter attenuation (something not needed with async streaming sources, of course). So when claiming an advantage for upsampling to a higher frequency: is it the inherent oversampling, the jitter reduction, the choice of final sample rate, or some combination? As cleeds pointed out, you can’t generalize to all cases from one example. That Benchmark found "performance," based on data sheets and some simple metrics, was better at 24/96 than at 24/192 is not at all surprising. At lower speeds you have less contribution to the output from the switching CMOS, less dynamic power (and less glitch energy) contributing to a quieter environment, and simply more time to settle to the final value. Perhaps in Benchmark’s specific case, decoupling the output frequency from the input also removed sources of synchronous noise. Of course, they are running in 2x mode anyway, so technically the DAC is running close to 192 kHz internally; what exactly does that 96 kHz even mean in their argument, not to mention that it then runs into a much higher rate sigma-delta modulator? Does their product match the data, or does the data match the product? However, the items they cited w.r.t. the data sheets and their tests are not guarantees of excellent perceived sonic performance, which would trade off high sample rate and system noise against analog filtering.
The article is also 10 years old, so what was best 10 years ago may have shifted up 2x or more in terms of what is best now. I don’t think you have made your point well, simply because tests have been done with 24/96 native versus 24/96 downsampled mathematically to 16/44.1 and then upsampled back to 24/96, so that the playback path was identical and all that changed was the information rate. |
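That round-trip methodology can be sketched in a few lines. This is a toy version with assumptions stated up front: a 2:1 ratio instead of a real 96-to-44.1 ASRC, ideal brickwall filters instead of practical FIRs, and small frame sizes with bins standing in for kHz. The point survives the simplifications: content below the lower Nyquist comes back intact from the down/up trip, while content above it is simply gone, and no upsampler recreates it.

```python
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [(sum(X[k] * cmath.exp(2j * math.pi * k * t / n)
                 for k in range(n)) / n).real for t in range(n)]

def brickwall(x, keep):
    """Ideal lowpass: zero every DFT bin whose frequency exceeds `keep`."""
    X = dft(x)
    n = len(X)
    return idft([v if min(k, n - k) <= keep else 0 for k, v in enumerate(X)])

n = 192  # stand-in for fs = 96 kHz -> one bin per "500 Hz"
# Test tone: "5 kHz" (bin 10, below the lower Nyquist) + "30 kHz" (bin 60, above it)
x = [math.sin(2 * math.pi * 10 * t / n) + math.sin(2 * math.pi * 60 * t / n)
     for t in range(n)]

down = brickwall(x, 47)[::2]        # antialias below "24 kHz", decimate 2:1
stuffed = []
for s in down:                       # upsample back: zero-stuff with 2x gain...
    stuffed += [2.0 * s, 0.0]
back = brickwall(stuffed, 47)        # ...then the interpolation filter

mags = [abs(v) for v in dft(back)]
# "5 kHz" returns at full level; "30 kHz" does not return at all.
```

If a DAC sounds different playing `back` versus `x` below 20 kHz, the difference is in the DAC's behavior at the two rates, not in the audible-band data.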
Now that this thread has gone off the rails, I'd like to add my thoughts on spinning discs. If the data were not fully retrievable, would a DVD work? Video signals, I believe, are more critical than audio signals in that we would see errors from the disc, possibly similar to the compression artifacts, micro- and macro-blocking, banding, and posterization seen on streaming video/cable TV. |
No one is saying a DVD or CD won’t work. What I’m saying is that, the way the system was designed, DVDs and CDs and Blu-rays appear to be working 100% but are actually working at less than 100%. How much less than 100% depends on many factors. It’s not as if there are “data dropouts” that are audible or visible. It’s more subtle: a subtle degradation of the sound or picture. It’s not the disc per se but how the disc is read. The disc has all the data; the system can’t read/interpret it accurately or completely. Think of it like an 8-cylinder car running on 7 cylinders. It will still run OK. |
A comment was made concerning Esoteric's finest player, which uses their VRDS-NEO VMK-3.5-20S transport. This transport reportedly hugs the CD in an exact position for the laser reader and eliminates wobble. It was designed to be vibrationally isolated from the mechanics of moving the disc. I don't think that two of geoffkait's complaints concerning transport reading problems are valid for this design. The remaining problem of scattered light is still valid, although with accurate laser tracking, this problem should be reduced by the lens's superior focusing. I have also read that Luxman's transport is designed for superior vibration and tracking capabilities. These units are far superior to the 1980s CD players, which I detested for the most part (mostly due to jitter and their DACs). Maybe CD and DVD players of today are still imperfect (so is analog playback) but it's damn great! |
Everything is relative. As I oft say, audiophiles are prone to making declarations such as the ones you just made, i.e., that somehow modern players are superior because of buffering, etc. While it may be true that some CD players are more innovative than others in dealing with these or other issues, modern players still don’t address the issues I mentioned, especially the scattered light issue. I hate to judge too harshly, but I don’t think any CD player manufacturer has even mentioned that scattered light is a problem, much less offered a solution. Self-inflicted CD wobble and flutter is another issue very few manufacturers mention. The Green Pen is an example of a partial solution. Isolation is also a partial solution. Older players stabilized the CD - e.g., a Sony SACD player used a brass weight to hold the disc firmly - so that idea’s not new. It should be mentioned that a relatively inexpensive tweak for an existing player must be weighed against the great expense of a “modern player,” assuming the modern player even addresses the problems, which it probably doesn’t. |
I have read of several current high-end player and transport manufacturers who specifically cite their vibration isolation and freedom from wobble, with precise laser reading, from a mechanical standpoint. As to light scatter, several manufacturers maintain that their units are totally black. However, you have mentioned that black-out conditions are insufficient, as there are unseen light frequencies detectable by the laser but not the eye. I still have Kyocera CD players from about 1985 and they sound quite nice. I kept my EAR Acute as it sounds musically interesting, but it is not as highly resolving. Hence, I purchased a 2016-engineered DAC which is the cat’s meow for the price (COS Engineering D2). I will try a superior transport to see how much it will add to my enjoyment with my new DAC. Over the years, I have made extreme upgrades in my cabling, which accounts for the DAC and EAR maximizing their potential. What I previously stated is that when mechanical vibration is eliminated as a source of jitter, etc., and only infrared light scatter remains an issue, CD playback can be extremely enjoyable, comparable to high-end analog playback. |
What is with the scattered light psychosis? Is this an attempt at humor? Any CD player can, with error correction, extract a pretty much bit-perfect stream. If you don't buffer, you have jitter; buffer and reclock, and jitter disappears. No scattered-light psychosis to worry about. Check the calendar: it's 2020, not 1999. You missed the millennium and the 20 years after. Are you trying to intentionally mislead people? That's directed at geoffkait; but fleschler, mechanical vibration and its impact on jitter disappeared on audiophile CD players over a decade ago. Modern players buffer and reclock. What happens at the mechanism is almost meaningless.
Stabilizing the disc is important, glad someone is doing that. But what’s that, 1% of the players? Hel-loo! Moreover, careful vibration isolation of the player from seismic type vibes is also important and separate from stabilizing the disc and requires an aftermarket solution. I’m not trying to set the world on fire, just start a flame 🔥 in a few hearts 💕 |
heaudio123, I wonder if my EAR Acute from 2006 has an adequate transport, as I read it was a standard Sony. The unit was originally an Adcom, not an audiophile-level unit sonically. That's why I am questioning whether an exotic/high-end transport would improve my digital-end enjoyment. Actually, the COS D2 DAC was a 2018-engineered product, so it is relatively current. Both units (and my entire system) use Grover Huffman Pharoah-level cabling, and differences in power cables were immediately noticed on both the DAC and the EAR Acute (as a transport). |