There's been a lot of misinformed babble on various audio forums about impulse response, digital filters, "time errors", "time correction", "time blurring", and similar pseudo-scientific claptrap meant to convince audiophiles that suddenly, in the year 2018, there's something drastically wrong with digital PCM audio - some 45 years after this landmark technology first reached commercial recording. Newsflash, folks - it's a scam.
First, let's take a close look at what an impulse or discontinuity signal really is. The Wikipedia definition is actually pretty accurate, thanks to a variety of informed contributors from around the globe. It is an infinite, aperiodic summation of sinusoidal waves combined to produce what looks like a spike (typically a voltage spike, for our purposes) in a signal. Does such a thing ever occur in nature, or more importantly in our case - music? Absolutely not. In fact, the only things close to it are the voltage spikes that occur when a switch contact is thrown or when an amplifier output stage clips because the supply voltage needed to reproduce the incoming waveform has been exceeded. So if this freak of a signal doesn't exist in nature or music, of what good is it in measuring the accuracy of audio equipment? The answer might surprise you.
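To make that "summation of sinusoids" idea concrete, here's a quick numpy sketch (the component count N is an arbitrary illustrative choice): summing equal-amplitude cosines at every harmonic concentrates all the energy into exactly the kind of spike described above.

```python
import numpy as np

N = 200                          # number of cosine components (arbitrary)
t = np.arange(-50, 51)           # sample times around the spike
spike = np.zeros(t.size)
for k in range(1, N + 1):        # equal-amplitude sinusoids, all harmonics
    spike += np.cos(2 * np.pi * k * t / (2 * N + 1))
spike /= N                       # normalize the peak to 1

peak = spike[t == 0][0]          # the spike sits at t = 0
residue = np.max(np.abs(spike[t != 0]))
```

With only 200 components the off-peak residue is already below 1% of the peak; the ideal impulse is the limit as N goes to infinity - which is why no finite physical system can produce one.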
In fact, impulse response - an audio system's response to an impulse signal - is one of the most useful and accurate representations in existence of such a system's linearity and precision, or its fidelity to the original signal fed to it. A lot of focus has been placed on the pre- and post-ringing of these "discontinuity signals," but what you have to understand is that the ripple artifacts are nothing more than an analog system's limitation in attempting to construct the impulse waveform (all electronics is analog - digital is just a special subset of analog). They are a result of the energy storage devices themselves in creating the signal. To create a large energy peak, you need large storage devices. The larger the capacitor, for example, the longer it takes to absorb and discharge electric field energy. The same goes for inductors - one type stores electric field energy, the other magnetic. Smaller value capacitors can react to voltage changes very quickly but are limited in the peak energy they can store and dissipate. But if you combine a large number of high value and low value devices in a circuit and apply a voltage spike, you wind up with the kind of oscillations you see in an impulse response graph. Small capacitors, for example, rapidly reach their charge capacity and can discharge into larger capacitors that are much more slowly building up charge in the transition from no input voltage to full spike value. This "sloshing around," if you will, or oscillation, is what happens in circuits built to provide extreme voltage attenuation. In a linear, time-invariant system, any rapid change in frequency response or time response has these characteristics. So effectively the entire debate about ringing in digital audio is a red herring - a hoax. The impulse response ripple is not something that happens in real world sounds or in a properly designed audio reproduction chain.
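For what it's worth, the symmetric pre-ring/post-ring picture can be reproduced in a few lines of numpy. This is a generic windowed-sinc low-pass (the tap count and cutoff are arbitrary illustrative choices, not any product's actual filter); because it is linear phase, the ripple before the main peak is an exact mirror of the ripple after it.

```python
import numpy as np

M = 255                                  # tap count (odd => exact symmetry)
fc = 0.45                                # cutoff as a fraction of Nyquist (arbitrary)
n = np.arange(M) - (M - 1) / 2
taps = np.sinc(fc * n) * np.blackman(M)  # windowed-sinc low-pass impulse response
taps /= taps.sum()                       # unity gain at DC

center = (M - 1) // 2
assert np.allclose(taps, taps[::-1])     # pre-ring mirrors post-ring exactly
assert taps[center] == taps.max()        # main peak sits dead center
```

The ripple here is a property of the filter's construction, not of any musical signal passing through it.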
Ever since digital oversampling appeared in consumer products in the early 1980s, there has been no need for steep analog filter circuits with their attendant ringing. The problem very simply DOES NOT EXIST. The ringing generated artificially in an impulse signal is useful in that it provides a very high frequency stimulus to linear audio systems as a means of measuring high frequency and transient response. IT IN NO WAY, BY ITSELF, REPRESENTS THE TIME DOMAIN BEHAVIOR OF THE AUDIO REPRODUCTION CHAIN. An accurate audio reproduction system should fully render the impulse signal in all its pre- and post-ring glory without alteration. Any audio system that eliminates or significantly alters the pre/post ringing present in the signal fed to it is not truly "high fidelity" - it is bandwidth limited.
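Here's a sketch of why oversampling kills the need for steep analog filters (illustrative only, not any particular DAC's design). Zero-stuffing a 44.1 kHz signal 4x pushes the first spectral image of a 1 kHz tone up to about 43.1 kHz; the band between the audio content and the image is empty, so a gentle analog low-pass can do the cleanup.

```python
import numpy as np

fs = 44_100
n = np.arange(1024)
x = np.sin(2 * np.pi * 1_000 * n / fs)    # 1 kHz tone sampled at 44.1 kHz

up = np.zeros(4 * len(x))
up[::4] = x                               # 4x oversampling by zero-stuffing

spec = np.abs(np.fft.rfft(up))
freqs = np.fft.rfftfreq(len(up), d=1 / (4 * fs))

audio = spec[freqs < 5_000].max()                      # the wanted tone
gap = spec[(freqs > 5_000) & (freqs < 40_000)].max()   # empty transition band
image = spec[freqs > 40_000].max()                     # first spectral image

assert gap < 0.1 * audio       # wide, quiet gap -> a gentle filter suffices
assert image > 0.5 * audio     # images exist, but far above the audio band
```

In a real DAC the digital interpolation filter fills in those zero samples, but the point stands: the analog stage only has to reject content tens of kilohertz away from the music.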
Read cj1965's original post several times and I'm not sure I understand or agree with his central premise -- that "The impulse response ripple is not something that happens in real world sounds or in a properly designed audio reproduction chain."
Regardless, the new RME ADI-2 DAC allows you to select between five different filter settings that change the DA impulse response. These range from traditional oversampling ringing to NOS, which is essentially perfect as regards impulse response (no ringing at all).
The five alternatives sound audibly different and can be chosen to suit the musical genre: e.g., the "Slow" option opens up orchestral textures and allows more "breathing" room, making the texture more realistic.
The NOS option is extremely accurate, almost painfully clear, and not to my taste at all for longer listening sessions. It does provide a shockingly realistic sound for popular recordings.
BTW, your system must be of audiophile quality to distinguish between the various options.
Try it and see -- and, oh, it will be hard to find an ADI-2 DAC because they are new and so popular. But the same facility is available in the RME ADI-2 Pro, which has been available for several years. Perhaps other DACs on the market have this option -- let me know if you know of others.
"Of which my main point: I wonder why no real, total AD/DA loop measurements are shown anywhere. The other point being measurement with "correct impulse responses", i.e. measuring a DAC with more than a non-existent, abstract sequence of (one) sample." - pegasus
The above statement clearly demonstrates that you don't yet understand what an impulse response test really is. The folks at MQA have been banking on this confusion to assist them with the smoke screen. Again, read the beginning of this thread. For emphasis (I don't know how to use bold type on this interface):
IN ITS TOTALITY, AN IMPULSE RESPONSE IS THE FULL CHARACTERIZATION OF THE TIME AND FREQUENCY DOMAIN BEHAVIOR OF ANY LINEAR, TIME INVARIANT SYSTEM UNDER TEST.
Please read the above over in your head several times. If there is any term contained therein that is unclear or confusing, please let me know and I will do my best to try to explain it to you. Audio systems are considered by most engineers who build them to be "linear, time-invariant" systems - or at least, that is the goal. The impulse response plots posted by Stereophile of the MQA and non-MQA DACs show latency distortion as well as added noise in the MQA file. Whether or not this is audible or audibly pleasing/objectionable to the average listener is, and likely always will be, a matter of endless debate. What is not in debate is that it IS DISTORTION. Any distortion you want to talk about in these kinds of linear system approximations has its origins in energy storage - whether it's a standing wave in a speaker cavity or a simple phase delay in a first order crossover network. When a signal's voltage and current go out of phase, distortions result and are typically detected in the form of even and odd ordered harmonics. The more rapidly and intensely energy is stored, the more harmonics are produced, regardless of the level of damping (resistance/loss) applied between the storage elements. LATENCY = ENERGY STORAGE = DISTORTION. Simple phase delay networks that involve linear phase changes may appear to be "distortion free," but that depends on the "working bandwidth" or frequencies of interest. In a linear, time-invariant system, time and frequency distortions are derived from one another - different representations of the same thing.
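The all-caps statement above is easy to demonstrate in a few lines. The "system" below is a toy 3-tap moving average (an arbitrary stand-in, not any particular DAC): once you've measured its impulse response, convolution with that response predicts the output for any input you like.

```python
import numpy as np

h_true = np.array([1/3, 1/3, 1/3])   # toy LTI system: 3-tap moving average

def system(x):
    """The black box under test (linear, time-invariant)."""
    return np.convolve(x, h_true)

# Step 1: hit the box with a unit impulse and record what comes out.
impulse = np.array([1.0, 0.0, 0.0])
h_measured = system(impulse)[:3]

# Step 2: the measured impulse response now predicts the response to
# ANY input, via convolution - the full characterization in action.
x = np.array([0.5, -1.0, 2.0, 0.25])
assert np.allclose(system(x), np.convolve(x, h_measured))
```

That is what "full characterization" means: for an LTI system, the impulse response (or equivalently its Fourier transform, the frequency response) tells you everything.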
So your subsequent statement - "Since when is latency a distortion...? It may be a limiting factor for practicality reasons, or a simple inconvenience. But in replay audio it is (AFAIK) of no concern at all." - represents further proof that your knowledge level is lacking. There are plenty of filtering tricks one can apply to reduce undamped oscillation in a circuit. Linkwitz-Riley crossovers come to mind. There is a faint reference to this technique in the original Sound on Sound BS article put out to promote Mr. Craven's "apodizing filters" - essentially cascading buffered linear phase filters to achieve rapid rolloff without some of the deleterious effects of single stage steep crossovers. (I found no reference to Linkwitz in the original "Craven's a genius" article, btw.) But if you have actual experience with these types of circuits and have done distortion measurements on them, you will find that total harmonic distortion creeps up as the amplitude of the signal drops off in the transition band of the filter - buffered Linkwitz-Riley or not. There is no free lunch. And it looks like others are waking up to the fact that what Stuart and Craven are offering is more like reheated leftover meatloaf than a miraculous "free lunch".
"Read cj1965's original post several times and not sure I understand or agree with his central premise -- that "The impulse response ripple is not something that happens in real world sounds or in a properly designed audio reproduction chain."" - craigl59
Impulse signals are neither causal nor stable. No amount of filtering can make them causal or stable - "apodizing," "apetizing," "deblurring," minimum phase, linear phase, etc., etc. Please Google "Dirac delta" or "discontinuity signal" and do some reading. You might learn something about what they actually are, what it takes to produce them, and how they fit in the context of sounds that are recorded for playback in music and broadcast. They have special features not present in any other form of "signal content" and only have purpose/usefulness in testing the response of linear systems to stimuli.
"No amount of filtering can make them causal or stable - "apodizing", "apetizing", "deblurring", minimum phase, linear phase, etc...etc.... Please Google Dirac Delta or Discontinuity Signal and do some reading. You might learn something about what they actually are, what it takes to produce them, and how they fit in the context of sounds that are recorded for playback in music and broadcast. "
And in return, cj1965, I suggest you contact the engineers at RME and tell them that they cannot control filtering on their DACs, produce response curves as shown on page 55 of their ADI-2 DAC manual, or offer the kind of real-world audio responses that allow the listener to choose between filtering options.
@craigl59 You can choose all of the filtering options you want. I can guarantee your DAC doesn't have any triggering circuits that detect the onset of an impulse or Dirac delta signal and magically filter out the pre and post ripple of said signal. As with the MQA garbage, simply raising the noise floor by adding dither eliminates that ripple from the impulse response graph. The DAC isn't "filtering" out anything as far as ripple goes. It's called "masking" - just bury the minor noise no one can really hear anyway under more noise, and the problem magically disappears from the response graph. If you're going to quote someone, provide the full context of the quote; otherwise it just looks like you're trolling. No amount of filtering can make IMPULSE SIGNALS causal and stable. You left out (I'm guessing intentionally, for trolling purposes) the primary subject of the misquoted sentence, which happens to be the primary subject of the thread.
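To illustrate the dither point, here's a toy sketch with a deliberately coarse quantizer (the step size and tone level are arbitrary, chosen to exaggerate the effect): without dither, a signal below one quantizer step vanishes entirely into correlated error; with TPDF dither it survives, traded for benign broadband noise that can bury - mask - other low-level detail.

```python
import numpy as np

rng = np.random.default_rng(0)

q = 1 / 2**8                               # coarse quantizer step (illustrative)
t = np.arange(4096)
x = 0.25 * q * np.sin(2 * np.pi * t / 64)  # tone well below one step

plain = np.round(x / q) * q                # undithered quantization
tpdf = (rng.random(t.size) - rng.random(t.size)) * q
dithered = np.round((x + tpdf) / q) * q    # TPDF-dithered quantization

# Undithered: the low-level tone is erased completely...
assert np.all(plain == 0)
# ...dithered: the output still correlates with the tone, which now
# rides on a raised broadband noise floor.
assert np.corrcoef(dithered, x)[0, 1] > 0.1
```

That raised noise floor is the "masking" I'm describing - the low-level structure isn't filtered out, it's buried.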
@pegasus I find your comment "Contrary to many here on this thread I doubt that the 'lost bits' below bit 18 or 19 and below the already nicely sampled noise from the recording chain are audible at all." to reflect many current attitudes. But it misses entirely the point under discussion in this thread and others related to MQA, which is that MQA claims to yield a recording "equal to" the master tape. That "Master Quality Authentication" claim has been proven to be a lie. That's a "fact," as opposed to your "doubt." As for the audibility of differing bit lengths: today this is very system dependent, but 50 to 100 years from now it may not be, as improvements in playback equipment will hopefully yield more transparency to the recorded music. Now we're back to the thrust of the criticism of MQA. It simply does not provide "more transparency".
Pegasus said: "However... a few points about MQA are IMO brilliant: the "information density" in the range above 22kHz is *way* below that in the midrange or audio range. To double, quadruple or "octuple" (;-) the sampling rate for *objectively* (measured and sampled) very small amounts of information is not elegant. It is in a certain way an idiocy. Thinking about how to "underfeed" this information into normally sampled digital files is a brilliant idea (IMO)."
Actually, no. It's not brilliant. The brute horsepower behind digital audio has always lain in three distinct areas: 1) the precision of high speed switching circuits, which affords greater bandwidth and linearity
2) the low noise that is possible with high bandwidth low voltage logic signaling
3) the accuracy (repeatability) of a high resolution (precision) discrete time and discrete amplitude system
Your comment above demonstrates a complete lack of understanding of what has actually given digital audio the strengths it has always had over traditional analog approaches. Bandwidth (a high sampling rate) is an indispensable tool for digital audio that serves as the foundation for high levels of linearity and accuracy. It is essentially the point of the spear in the fight to overcome human hearing's ability to detect error. The fact that human hearing is limited to 20 kHz is what makes digital audio sound good. If we could hear at frequencies up near the sampling rate, it would sound like the ones-and-zeros trash that it truly is. Without a sampling rate well beyond human hearing, it would be impossible to create digital audio that appears to us to be completely linear and accurate. If anything can and should be sacrificed in the name of efficiency, it is at the amplitude precision end. There has never been a need for playback dynamic range to far exceed the threshold of pain and rapid hearing damage/loss. Ask any physician and they will tell you - 145 dB is insane. I've heard a lot of silly arguments saying we need well over 100 dB of dynamic range. In my experience, however, even very elaborate, well constructed audio systems struggle to produce full bandwidth dynamic peaks in excess of 120 dB. In the real world that means that at 120 dB, the sound you hear is about 80 dB above what is barely detectable in a completely silent room. Does anyone in this forum think they would be able to detect someone whispering right next to them while blindfolded and listening to music blaring at 120 dB? This is just one example of how impractical the desire for 24 bit resolution really is.
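The 145 dB figure comes straight from the arithmetic of linear PCM - each bit buys about 6.02 dB (20·log10 2) of theoretical dynamic range:

```python
import math

def pcm_dynamic_range_db(bits):
    """Theoretical dynamic range of linear PCM with the given word length."""
    return 20 * math.log10(2 ** bits)

assert round(pcm_dynamic_range_db(16), 1) == 96.3   # CD's 16 bits
assert round(pcm_dynamic_range_db(24), 1) == 144.5  # the "insane" 24-bit figure
```

So 16 bits already spans roughly the gap between a quiet room and the threshold of pain; 24 bits pushes the theoretical floor far below anything a playback chain or a pair of ears can use.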
"Your comment above demonstrates a complete lack of understanding as to what actually has given digital audio the strengths it has always had over traditional analog approaches."
Does it?
"There has never been a need for playback dynamic range to far exceed the threshold for pain and rapid hearing damage/loss. Ask any physician and they will tell you - 145db is insane."
Did I say something different? MQA is in one aspect based on the fact that 24 bit is a de facto standard, but that leads to an "excess" dynamic range, i.e. "unused bits", if file sizes or data transfer rates are an issue - and IMO they still matter. In principle, keeping the whole bandwidth but coding it more efficiently into the "data container" *is* an intelligent idea. If you look at the (peak) musical signals above 20kHz, their level is extremely low, but for every doubling of the sampling frequency you double the file size, for a very small increase in coded information that might be important sonically. The coding into a lower datarate "container" has nothing to do with the actual sampling bandwidth. It's basically a form of intelligent lossless data compression - if the "only" information "thrown away" is below e.g. -108 dB, i.e. 18 bit resolution, or lower. My doubts creep in if 2 or 3 bits of 16 are thrown out for a doubling of coded bandwidth. And really critical listening and testing of different sampling rates / audio formats would have to prove that it really is "lossless".
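The file-size point is simple arithmetic on raw PCM rates (uncompressed, stereo assumed for illustration):

```python
def pcm_rate_kbps(sample_rate_hz, bits, channels=2):
    """Raw (uncompressed) PCM data rate in kbit/s."""
    return sample_rate_hz * bits * channels / 1000

# CD: 44.1 kHz / 16 bit / stereo
assert pcm_rate_kbps(44_100, 16) == 1411.2
# Doubling the sampling rate doubles the data rate - all of it spent on
# a band above ~22 kHz that carries very little musical energy.
assert pcm_rate_kbps(88_200, 16) == 2 * pcm_rate_kbps(44_100, 16)
```

Hence the appeal of packing that sparse ultrasonic information into the otherwise unused low-order bits instead of doubling the whole stream.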
Your continuing furor is amazing. I hope you can apply it to your daily tasks too :-) I'll leave it at that.
Forgot to mention: if MQA sounds worse at similar file sizes (and it seems to have measurable and sonic issues), the whole process is indeed very questionable, because an idea is only brilliant if proven in practice.
"In principle keeping the whole bandwidth but coding it more efficiently into the "data container" *is* an intelligent idea" - pegasus
Ok, at least we can agree on that basic principle. The reason preserving high bandwidth (sampling rate) is the more critical of the two has always seemed obvious to me - steep filters operating just below Nyquist were creating problems for achieving reliable sound quality. This was known as far back as the early 1980s and was the primary reason my first CD player, 34 or so years ago, was the first generation Philips 4X oversampling unit. The laws of physics governing filter stability and distortion haven't changed since those days of the first space shuttle flights. It is much easier to avoid signal degradation with a gradual roll-off filter that effectively wipes out the signal well before the Nyquist frequency is reached. You don't need to employ multiple stage linear phase filters, and the end result has been universally praised as "superb" for the most part. On the amplitude precision side, I have never heard a cogent argument for dynamic range significantly exceeding that of the original format - 16 bits.
Very interesting post and very interesting responses as well, thank you all for that.
Lots of people who have posted here seem to have a LOT of knowledge about this technology. So I'd like to ask: could the "glare" that some vinyl fans say they hear from digital be a response to the process used to take the extremely hi-res digital files from the studio and run them through some sort of dithering algorithm to reduce them to the size distributed to the public?
It's hard to say exactly what causes the "glare" perceived by some vinyl fans with respect to digital. In the early days of digital, as we've acknowledged above, steep filters were used to accommodate a sampling rate that was only marginally above the frequency limit of human hearing. These filters were vulnerable to component tolerance changes over time - an even greater source of potential sound quality problems beyond the large phase shifts they introduced. With the advent of widespread oversampling in the industry, that problem essentially disappeared. But the underlying "improvements" of digital technology, I believe, may be more the cause of the alleged "glare" some complain about. By virtue of its extremely high precision, bandwidth, and linearity, digital audio can accurately render extreme high and extreme low frequency source material like never before. Vinyl encoding, although fairly wide bandwidth, never could provide the same dynamic range - especially at the frequency extremes. The signal had to be compressed to keep distortion generated at the stylus from skyrocketing, particularly at high frequencies. There were a host of other problems that CD technology ameliorated, like the high frequency loss created as the tonearm approached the center of the record - due to a substantial reduction in effective stylus-over-groove speed. Baked-in tonearm tracking error was another problem fixed. CDs went way beyond what most perceive to be the primary advantage - no-contact laser reading, eliminating wear-and-tear degradation altogether. If you want to learn more about the myriad headaches and limitations of vinyl, you can read about them here: https://www.emusician.com/how-to/mastering-vinyl
The bottom line of my theory about perceived differences is that when you grow up listening to a technology with all these limitations built in, and then they are suddenly removed, the changes (full capacity to render all dynamic high frequency content without measurable distortion) can be unsettling or "unwelcome." We tend to be creatures of habit who like what we're used to. Compounding this problem in the early days was that recording industry techniques were well established - you might even say entrenched. Added high frequency bias built into the recording approach could easily sound "hyped" in the new format. So it was important for recording engineers to find a new balance with the new technology and not stick with the same old mic/mixing techniques that had worked before. This clearly didn't happen in all cases.
Pre-ringing is certainly unnatural and contrived. Whether it is audible depends on the amplitude and the quality of the system IME. Post-ringing is natural and expected if a system is not critically damped. Certainly better to minimize it though.
I have heard several DACs at shows that were touting their apodizing filters, which virtually eliminate pre-ringing, but always at the expense of higher amplitude post-ringing. Never liked the sound of any of them. I believe I can hear the post-ringing.
The more important aspect of the impulse and step response of a DAC, IMO, is whether the impulse actually achieves maximum amplitude or not. This is my problem with John Atkinson's impulse measurements in Stereophile reviews: the impulse plot never shows the amplitude scale. I suspect that the power subsystems of most DACs don't allow the impulse to reach full amplitude.
The Overdrive SE and SX DACs do. The power subsystem is instrumental in achieving this and this is what sets some DACs apart from others. Here are some plots of the Overdrive SE. The SX is even better.
Thanks Steve for your input. I am a little confused by what you meant to say here, though:
"Pre-ringing is certainly unnatural and contrived. Whether it is audible depends on the amplitude and the quality of the system IME. Post-ringing is natural and expected if a system is not critically damped. Certainly better to minimize it though."
My understanding with an ideal Dirac delta approximation is that the harmonics prior to the signal peak have the same spectral content as those appearing after the peak - demonstrating symmetry. If pre-pulse harmonics are missing from the response, via either filtering or the addition of masking noise, then the post-peak ripple or harmonics should likewise be missing - leaving only the expected "natural" ringing from the device under test. I'm assuming that was what you were saying above.
In a normal digital or analog signal, there is no reason to have analog ringing prior to the signal making a transition. Ringing always occurs after the signal transition and only if the signal is not critically damped by some means, such as impedance matching. The energy of signal transition is what causes the ringing. That is what I meant by natural.
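That "ringing only after the transition" behavior is exactly what any causal, underdamped system produces. A minimal sketch (the pole radius and angle are arbitrary values, chosen just to make the resonance visible):

```python
import numpy as np

# Causal, underdamped two-pole resonator:
#   y[n] = x[n] - a1*y[n-1] - a2*y[n-2]
r, w = 0.9, 0.3                    # pole radius and angle (illustrative)
a1, a2 = -2 * r * np.cos(w), r * r

N = 100
x = np.ones(N)                     # unit step applied at n = 0
y = np.zeros(N)                    # before n = 0 the output is zero: causality
for n in range(N):
    y[n] = x[n]
    if n >= 1:
        y[n] -= a1 * y[n - 1]
    if n >= 2:
        y[n] -= a2 * y[n - 2]

final = 1 / (1 + a1 + a2)          # DC (settled) value of the step response
assert y.max() > 1.05 * final      # overshoot: ringing AFTER the transition
assert abs(y[-1] - final) < 0.05   # and the ringing dies away
```

No sample of y is nonzero before the step arrives - a causal circuit cannot ring ahead of its input. Pre-ringing only appears when a non-causal (linear-phase digital) filter is in the chain.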