This has been a very interesting thread, and I've learned a lot. I have a question that bears on the value of high resolution audio formats, particularly the value of sampling rates higher than 44.1 kHz. Here is the question: Is the preference for high resolution audio formats (24/96, 24/192, etc.) partly attributable to the fact that those formats have better temporal resolution?

I don't know the answer to this question, but it's been on my mind since reading a number of papers with passages like this:

"It has also been noted that listeners prefer higher sampling rates (e.g., 96 kHz) than the 44.1 kHz of the digital compact disk, even though the 22 kHz Nyquist frequency of the latter already exceeds the nominal single-tone high-frequency hearing limit fmax ~ 18 kHz. These qualitative and anecdotal observations point to the possibility that human hearing may be sensitive to temporal errors, τ, that are shorter than the reciprocal of the limiting angular frequency [2πfmax]−1 ≈ 9 μs, thus necessitating bandwidths in audio equipment that are much higher than fmax in order to preserve fidelity."

That quote is from a paper by Milind Kunchur, a researcher on auditory temporal resolution. More can be read in this article from HIFI Critic. Kunchur's research is somewhat controversial, but I have found a number of other peer reviewed papers that seem to confirm that the limit of human temporal resolution is quite small, on the order of MICROseconds. If that is true, then part of the advantage of high resolution audio formats might be that they have superior temporal resolution, thereby providing more information about very short alterations in the music, i.e., transients. Or so the argument goes.

Anyone have an opinion about this?

Bryon |
Bryon,
There is no solid evidence for this - so it is indeed controversial. If a mere few microseconds were important, then speaker and listener position would matter down to a millimeter, or less than a tenth of an inch. It is generally accepted that about 1 msec (roughly 1 foot of path length) is the point at which time differences become audible. Our ears are roughly 6 to 8 inches apart. Since temporal differences are detected by the difference in arrival time at each ear, this all suggests that our "resolution" is close to that ear spacing, which corresponds to about 0.5 msec at the speed of sound in air.
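As a quick sanity check, the arithmetic behind these figures can be sketched in a few lines of Python (my illustration; the only inputs are the standard 343 m/s speed of sound and the 6 to 8 inch ear spacing mentioned above):

```python
# Back-of-the-envelope check of the timing figures above.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

def travel_time(distance_m):
    """Seconds for sound to cover distance_m in air."""
    return distance_m / SPEED_OF_SOUND

# Ear spacing of 6 to 8 inches (about 0.15-0.20 m) gives the maximum
# interaural time difference:
itd_min = travel_time(0.15)  # about 0.44 msec
itd_max = travel_time(0.20)  # about 0.58 msec

# Conversely, a timing error of a few microseconds corresponds to about
# a millimeter of acoustic path length:
path_3us = SPEED_OF_SOUND * 3e-6  # about 1 mm
```

The millimeter figure is what drives the "speaker position would matter down to a millimeter" objection.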
What these findings may be related to is "jitter" - it has been shown mathematically that non-random timing errors can produce audible "sidebands" around musical signals, and that jitter of 1 microsecond can be quite audible due to our ability to hear these non-musical tones or sidebands. If you increase the sample rate then you change the way jitter affects the sound - a significantly higher sample rate would likely reduce the deleterious effects of jitter. Some sample rates are noted for being better than others at reducing audible jitter. Benchmark found that 110 kHz worked better than other rates with the DAC chip they use. |
Hi Bryon,
Interesting question, and an interesting paper, which I read through. It strikes me as very intelligently and knowledgeably written, and I see no obvious flaws in the details he presents. And intuitively it does strike me as plausible that our ability to resolve timing-related parameters might be somewhat better than what would be suggested by the bandwidth limitations of our hearing mechanisms.
However, looking at his paper from a broader perspective I have several problems with it:
1) He has apparently established that listeners can reliably detect the difference between a single arrival of a specific waveform and two arrivals of that waveform separated by a very small number of microseconds. I have difficulty envisioning a logical connection between that finding, though, and the need for hi rez sample rates. There may very well be one, but I don't see it.
2) By his logic a large electrostatic or other planar speaker should hardly be able to work in a reasonable manner, much less be able to provide good reproduction of high speed transients, due to the widely differing path lengths from different parts of the panel to the listener's ears. Yet clean, accurate, subjectively "fast" transient response, as well as overall coherence, are major strengths of electrostatic speakers. The reasons are fairly obvious: very light moving mass that can start and stop quickly and follow the input waveform accurately; no crossover, or at most a crossover at low frequencies in the case of electrostatic/dynamic hybrids; freedom from cone breakup, resonances, cabinet effects, etc. So it would seem that the multiple arrival time issue he appears to have established as being detectable under certain idealized conditions can't be said, on the basis of his paper, to have much if any audible significance in typical listening situations.
3) More generally, it seems to me that there are so many theoretical, practical, recording-dependent, and equipment-dependent variables that would have to be reckoned with and controlled in any attempt to make a meaningful comparison involving hi rez vs. redbook sample rates, that reaching a definitive conclusion about the degree to which this particular factor may be audibly significant under real-world listening conditions is probably not possible.
All best regards,
--Al |
I agree with Al.
This shows how good we are at hearing sounds, and has nothing to do with temporal resolution.
The wavelength at 7 kHz is about 5 cm. Therefore, in order to make the direct sound completely out of phase at the listener, one need only move one speaker back by 2.5 cm (half a wavelength). This will cancel the direct sound, and will probably change the SPL enough to be clearly audible. The fact that only a 2.9 mm movement was audible suggests that reflections may also have played a role here.
The use of a pure single-tone signal with no (audible) harmonics can often give surprising results! This is not representative of musical instruments, which have many harmonics, so it is hard to draw any conclusion other than that a test tone produces an audible result. Anyway, my money is that there is enough of an amplitude difference here to make it audible in the case of a pure test tone. A pure test tone will fluctuate as you move around the room - you get peaks and suckouts depending on how the direct and reflected sound add up. |
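For what it's worth, the wavelength arithmetic above can be checked with a short Python sketch (mine, not from the paper; it uses 343 m/s for the speed of sound and considers only the two direct sounds, ignoring reflections):

```python
import math

c = 343.0   # speed of sound in air, m/s
f = 7000.0  # Hz, roughly the tone discussed above

wavelength = c / f          # about 4.9 cm
half_wave = wavelength / 2  # about 2.45 cm: offset for total cancellation

# Phase shift produced by the 2.9 mm offset reported as audible:
offset = 0.0029                           # m
phase = 2 * math.pi * offset / wavelength  # about 0.37 rad (about 21 degrees)

# Sum of two equal-amplitude direct sounds with that phase difference,
# relative to their perfectly aligned sum:
ratio = math.cos(phase / 2)
level_db = 20 * math.log10(ratio)  # only about -0.15 dB
```

The direct-path level change alone is tiny, which is consistent with the suggestion that room reflections (the peaks and suckouts) are doing much of the audible work.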
"Irv, keep in mind that it is generally accepted that signal can be perceived at levels that are significantly below the level of random broadband noise that may accompany the signal. 15 dB or more below, iirc. So amplifier noise floor is not really a "floor" below which everything is insignificant."
Maybe, but it is very difficult to believe this is the case when listening to music or other complex sounds, like movie dialog or Foley effects. I've always been leery of effects 70 dB or more below the music level, regardless of the component in question. |
Hi Al and Shadorne. Thanks for your thoughtful responses. Everything you guys said makes sense to me, but I do have some additional thoughts...

07-04-11: Almarg
1) He has apparently established that listeners can reliably detect the difference between a single arrival of a specific waveform and two arrivals of that waveform separated by a very small number of microseconds. I have difficulty envisioning a logical connection between that finding, though, and the need for hi rez sample rates. There may very well be one, but I don't see it.

I believe Kunchur addresses this in this document, in which he says:

"For CD, the sampling period is 1/44100 ~ 23 microseconds and the Nyquist frequency fN for this is 22.05 kHz. Frequencies above fN must be removed by anti-alias/low-pass filtering to avoid aliasing. While oversampling and other techniques may be used at one stage or another, the final 44.1 kHz sampled digital data should have no content above fN. If there are two sharp peaks in sound pressure separated by 5 microseconds (which was the threshold upper bound determined in our experiments), they will merge together and the essential feature (the presence of two distinct peaks rather than one blurry blob) is destroyed. There is no ambiguity about this and no number of vertical bits or DSP can fix this. Hence the temporal resolution of the CD is inadequate for delivering the essence of the acoustic signal (2 distinct peaks)."

In essence, I understand him to be saying that the temporal resolution of human hearing is around 6 μs, while the temporal resolution of the 44.1 kHz sampling rate is around 11 μs. Since the temporal resolution of human hearing is better than the temporal resolution of 44.1 recordings, those recordings fail to accurately represent very brief signals that are both audible and musically significant.
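The waveform-domain part of Kunchur's merging claim can be illustrated with a small numerical sketch (my illustration, not his analysis): an ideal band-limit to the CD Nyquist frequency replaces each impulse with a sinc pulse, and two sincs 5 μs apart sum to a single hump with no dip between them, while more widely spaced impulses keep their separate peaks.

```python
import math

FN = 22050.0  # CD Nyquist frequency, Hz

def sinc_pulse(t):
    """Response of an ideal 22.05 kHz low-pass filter to a unit impulse at t = 0."""
    x = 2 * FN * t
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def pair_response(t, separation):
    """Band-limited response to two unit impulses 'separation' seconds apart."""
    return sinc_pulse(t) + sinc_pulse(t - separation)

# 5 microsecond spacing (Kunchur's reported threshold): the value midway
# between the two impulses is HIGHER than the value at either impulse time,
# i.e. the two clicks have merged into one blurred peak.
merged = pair_response(2.5e-6, 5e-6) > pair_response(0.0, 5e-6)

# 50 microsecond spacing: the midpoint dips well below the impulse times,
# so two distinct peaks survive the band-limiting.
distinct = pair_response(25e-6, 50e-6) < pair_response(0.0, 50e-6)
```

Of course, this says nothing about audibility; it only confirms the claim about what the band-limited waveform looks like.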
For example, Kunchur says: "In the time domain, it has been demonstrated that several instruments (xylophone, trumpet, snare drum, and cymbals) have extremely steep onsets such that their full signal levels, exceeding 120 dB SPL, are attained in under 10 μs."
He also suggests that the temporal resolution of 44.1 recordings might be inadequate to fully represent the reverberation of the live event: "A transient sound produces a cascade of reflections whose frequency of incidence upon a listener grows with the square of time; the rate of arrival of these reflections dN/dt ≈ 4πc³t²/V (where V is the room volume) approaches once every 5 μs after one second for a 2500 m³ room [2]. Hence an accuracy of reproduction in the microsecond range is necessary to preserve the original acoustic environment's reverberation."

I'm not saying that these claims are true. I'm just trying to give you my understanding of Kunchur's claims about the connection between human temporal resolution and the need for sampling rates higher than 44.1 kHz.

07-04-11: Almarg
2) By his logic a large electrostatic or other planar speaker should hardly be able to work in a reasonable manner, much less be able to provide good reproduction of high speed transients, due to the widely differing path lengths from different parts of the panel to the listener's ears. Yet clean, accurate, subjectively "fast" transient response, as well as overall coherence, are major strengths of electrostatic speakers. The reasons are fairly obvious: very light moving mass that can start and stop quickly and follow the input waveform accurately; no crossover, or at most a crossover at low frequencies in the case of electrostatic/dynamic hybrids; freedom from cone breakup, resonances, cabinet effects, etc. So it would seem that the multiple arrival time issue he appears to have established as being detectable under certain idealized conditions can't be said on the basis of his paper to have much if any audible significance in typical listening situations.

I think perhaps Kunchur does his own view a disservice by emphasizing the deleterious time-domain effects of speaker drivers with large surface areas, e.g. electrostatic speakers.
It seems to me that those deleterious effects might be offset to a large extent by the very characteristics you mention, viz., light mass, minimalist crossover, etc. But your objection does seem to cast doubt on the significance of the very brief time scales that Kunchur contends are audibly significant.

Having said that, the putative facts about jitter bear on this point in a somewhat paradoxical way. According to some authorities, such as Steve Nugent, jitter is audible at a time scale of PICOseconds. For example, Steve writes: "In my own reference system I have made improvements that I know for a fact did not reduce the jitter more than one or two nanoseconds, and yet the improvement was clearly audible. There is a growing set of anecdotal evidence that indicates that some jitter spectra may be audible well below 1 nanosecond."

That passage is from an article in PFO, which I know you are familiar with. I bring it up, not to defend Kunchur's claims, but to raise another question that puzzles me: If jitter really is audible on the order of PICOseconds, does that increase the plausibility of Kunchur's claim that alterations in a signal on the order of a few MICROseconds are audible? Again, I don't quite know how to make sense of all this. I'd be interested to hear your thoughts.

Bryon |
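The reflection-density formula quoted above (dN/dt ≈ 4πc³t²/V) is easy to plug numbers into; here is a quick sketch using the 2500 m³ room and t = 1 s from the quote:

```python
import math

c = 343.0   # speed of sound, m/s
V = 2500.0  # room volume, m^3 (the example in the quote)
t = 1.0     # seconds after the transient

rate = 4 * math.pi * c**3 * t**2 / V  # reflections arriving per second
interval_us = 1e6 / rate              # mean spacing between arrivals, in microseconds
# interval_us comes out at roughly 5 microseconds, matching the quoted figure
```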
Well, Bryon, that was a very interesting article. I'm not sure what to think after reading it... is this yet another investigation into a micro-problem that doesn't really affect music reproduction, or is it a significant factor? I certainly don't know. I can't even venture a guess.
Anyway, Kunchur admits to listening to cassettes. I haven't heard cassettes for many years, but 16/44 CDs must sound like a revelation by comparison. ;-) |
Hi Bryon,

Your question about the audibility of jitter that is on a time scale far shorter than the temporal resolution of our hearing is a good one. The answer is that we are not hearing the nanoseconds or picoseconds of timing error itself. What we are hearing are the spectral components corresponding to the FLUCTUATION in timing among different clock periods (actually, among different clock half-periods, since both the positive-going and negative-going edges of S/PDIF and AES/EBU signals are utilized), and their interaction with the spectral components of the audio.

For example, assume that the worst case jitter for a particular setup amounts to +/- 1 ns. The amount of mistiming for any given clock period will fluctuate within that maximum possible 1 ns of error, with the fluctuations occurring at frequencies that range throughout the audible spectrum (and higher). That is all referred to as the "jitter spectrum," which will consist of very low level broadband noise (corresponding to random fluctuation) plus larger discrete spectral components corresponding to specific contributors to the jitter. Think of it as timing that varies within that +/- 1 ns or so range of error, but which varies SLOWLY, at audible rates. All of those constituents of the jitter spectrum will in turn intermodulate with the audio data, resulting in spurious spectral components at frequencies equal to the sums of and the differences between the frequencies of the spectral components of the audio and the jitter.

If you haven't seen it, you'll find a lot of the material in this paper to be of interest (interspersed with some really heavy-going theoretical stuff, which can be skimmed over without missing out on the basic points): http://www.scalatech.co.uk/papers/aes93.pdf

Malcolm Hawksford, btw, is a distinguished British academician who has researched and written extensively on audiophile-related matters.
One interesting point he makes is that the jitter spectrum itself, apart from the intermodulation that will occur between it and the audio, will typically include spectral components that are not only at audible frequencies, but that are highly correlated with the audio! He also addresses at some length the question of how much jitter may be audible.

So to answer your last question first: no, I don't think that the audibility of jitter on a nanosecond or picosecond scale has a bearing on the plausibility of Kunchur's claim.

As far as point no. 1 in my previous post is concerned, yes, I think that the quote you provided about closely spaced peaks being merged together does seem to provide a logical connection between his experimental results and a rationale for hi rez sample rates. It hadn't occurred to me to look at it that way. So that point would seem to be answered.

Best regards, -- Al |
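Al's intermodulation point can be sketched numerically (my illustration, not taken from the Hawksford paper): if the sample clock of a digitized 10 kHz tone wobbles sinusoidally at 1 kHz with a 1 ns peak timing error, discrete sidebands appear at 10 kHz ± 1 kHz, far above the numerical noise floor, even though 1 ns is vastly shorter than the sample period.

```python
import cmath, math

fs = 44100.0     # sample rate, Hz
f_sig = 10000.0  # audio tone, Hz
f_jit = 1000.0   # frequency of the sinusoidal timing fluctuation, Hz
amp_jit = 1e-9   # 1 ns peak timing error
N = 44100        # one second of samples -> 1 Hz DFT bin spacing

# Ideal sample instants n/fs, perturbed by the slowly varying timing error:
samples = [math.sin(2 * math.pi * f_sig *
                    (n / fs + amp_jit * math.sin(2 * math.pi * f_jit * n / fs)))
           for n in range(N)]

def bin_mag(freq):
    """Magnitude of the DFT bin at 'freq' Hz for the jittered signal."""
    return abs(sum(s * cmath.exp(-2j * math.pi * freq * n / fs)
                   for n, s in enumerate(samples))) / N

carrier = bin_mag(f_sig)           # the tone itself (magnitude about 0.5)
sideband = bin_mag(f_sig + f_jit)  # jitter-induced spur at 11 kHz
quiet = bin_mag(f_sig + 2500.0)    # a bin where no spur is expected
```

For this 1 ns example the spur sits roughly 90 dB below the tone - small, but deterministic and at an audible frequency, which is exactly Al's point: what reaches the ear is a spectral artifact, not the nanosecond itself.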
Bryon, I appreciate your questions. You are definitely curious enough to look into this and I commend you on your interest. However, poor Kunchur seems a very confused individual. His test simply shows how two pure tones can interfere with each other in a way that becomes audible. However, his conclusions are completely bogus. The listener is NOT hearing temporal time-domain effects of microseconds. The listener is actually hearing changes in the combined resultant waveform, which has been altered by offsetting one source relative to the other (combined meaning both waves, including all room reflections). As I explained, this will lead to TOTAL destructive interference of the primary direct signal as heard by the listener at an offset of 2.5 cm. This is like a signal that is TOTALLY out of phase. The direct sound will be inaudible and all the listener hears is the sound around the room (reflected sounds). Since we detect the direction of sound from the relative timing of the wave front (or nerve bundle triggers) across each ear, we lose that ability when a signal is out of phase. Poor Kunchur is conflating things in a bad way - this is bad science.

However, his remarks about speaker alignment and panels are partly valid. It is almost certain that large radiating surfaces can cause the kind of interference at certain frequencies that he achieved in this experiment. This manifests itself in a speaker response that has many suckouts across the frequency spectrum. In fact, the anechoic response of a large panel will look like a comb, with many total suckouts across the frequency range. The result is that some sounds and some frequencies will not be as tightly imaged as with a point source speaker. Since most sounds are made up of many harmonics, this effect will not be complete, but on the whole it will lead to a larger, more diffuse soundstage, with some sounds imaging precisely and others more diffuse when compared to a point source speaker.
There is an audio tool called a flanger, used for electric guitar, that achieves a similar effect, but even stronger. Also, jitter is not audible in the sense you describe. It is audible when non-random jitter over many thousands and hundreds of thousands of samples combines in a way that introduces new frequencies. We hear the new frequencies created by the non-random modulation of the clock (random jitter is just white noise at very low, inaudible levels). We are totally UNABLE to hear jitter effects on a few samples. |
07-05-11: Almarg ...we are not hearing the nanoseconds or picoseconds of timing error itself. What we are hearing are the spectral components corresponding to the FLUCTUATION in timing among different clock periods... That's what I suspected, Al, but I wasn't sure. And thanks for your explanation of jitter. I was aware that jitter resulted in frequency modulation, but I didn't know that it was a kind of intermodulation distortion. Your explanation is much appreciated. Shadorne - You may be right that Kunchur's methodology is flawed. I've read a few other experiments on human temporal resolution with similar methodologies, but my memory of them is a little vague. In any case, I have a question about your observation that "Some sample rates are noted for being better than others for reducing audible jitter." I'd be interested to hear a technical explanation for why that is the case. Finally, I have a general question about high resolution audio that anyone might be able to answer: My understanding is that the principal advantage of larger bit depth is greater dynamic range. What is the principal advantage of higher sampling rates, if it is not better temporal resolution? Bryon |
What is the principal advantage of higher sampling rates, if it is not better temporal resolution?

None above redbook CD, except that it allows cheaper and better filtering, which may very slightly improve the audible band. However, higher sample rates do allow you to go to one-bit resolution, like the SACD format, which is a DSD stream. But SACD has very high levels of out-of-band noise, so to be honest I am not sure I accept that it is even as good as 24 bit/96 kHz. |
What is the principal advantage of higher sampling rates, if it is not better temporal resolution?

Yes, as Shadorne indicated, the principal advantage is that it dramatically relaxes the rolloff requirements for anti-aliasing filters (in the recording process) and reconstruction filters (in the playback process). Or it makes it possible to avoid the use of techniques that have been used to relax those requirements, which have their own tradeoffs (e.g., oversampling + noise shaping).

It should be kept in mind that not only will 44.1 kHz sampling be unable to capture signal frequencies at or above 22.05 kHz, but the a/d converter used in the recording process must not be exposed to those frequency components. Otherwise "aliasing" will occur, resulting in those ultrasonic frequencies appearing in the digital data as audible frequencies. Therefore an a/d converter that doesn't use oversampling or other special techniques must be preceded by a low pass filter that is flat to 20 kHz, but has rolled off to the point of inaudibility at 22.05 kHz, in about 1/10th of an octave. That is an EXTREMELY sharp rolloff, and, besides being expensive to manufacture, that kind of filter can have the sonic effects Kijanki described above in his post of 6/27, and the effect described in my second post of 6/30. In contrast, 96 kHz sampling would make it possible to allow more than a full octave for the same rolloff to occur (at 48 kHz rather than 22.05 kHz).

Similar considerations apply to the playback process, with respect to the "reconstruction filter," which refers to a low pass filter used to eliminate the stepped character of the d/a converter device's output.

Best regards, -- Al |
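Al's octave figures above can be checked directly; a quick sketch:

```python
import math

def octaves(f_low, f_high):
    """Width of the band from f_low to f_high, in octaves."""
    return math.log2(f_high / f_low)

# Redbook: the anti-alias filter must be flat at 20 kHz yet fully rolled
# off by the 22.05 kHz Nyquist frequency:
cd_transition = octaves(20000.0, 22050.0)     # about 0.14 octave

# 96 kHz sampling lets the same rolloff stretch out to 48 kHz:
hirez_transition = octaves(20000.0, 48000.0)  # about 1.26 octaves
```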
Sounds like the consensus is that the original CD redbook format engineers did a more than adequate job, at least in theory.
So does that mean that when we hear deficiencies in specific redbook CDs compared to other formats (say R2R or very good vinyl even) that it is because of poor execution somewhere in the implementation , either in the recording or playback process or equipment, or most likely even both?
I like to think so, but I have not heard a near-perfectly created CD on a near-perfectly executed system, in a viable test scenario, compared to other high quality formats, that would confirm this. So I am not sure reality reflects the theory in practice quite yet.
Has anybody else heard something specifically that has them convinced?
On my system, I think the issue is a wash, but I have done some imperfect a/b comparisons on very high end dealer reference systems where it was not, especially in comparison to R2R and with better large scale orchestral recordings involving massed strings in particular. |
"Sounds like the consensus is that the original CD redbook format engineers did a more than adequate job, at least in theory."
Well, I would have said the red book CD is "just adequate". I think reducing the word length or the sampling rate might have audible effects, so "just adequate" comes to mind.
As for comparisons to "other media", are you talking about analog? For me at least, vinyl doesn't come close, no matter what you spend. Analog tape can sound very, very good, but except for our own master tapes, where would one get source material? In the past I've heard 1/2" 15 or 30ips tape with Dolby sound spectacular in a studio, but how many of us have access to such a thing?
Here's a better question: is there any significant amount of content coming out of the recording industry these days that requires a medium superior to a 16/44 CD? Perhaps some rare examples in the classical genre, but it seems like none in jazz, rock, pop, country, new age, or whatever. Or am I wrong? |
Al - Thanks again for your helpful explanations.
Bryon |
"In the past I've heard 1/2" 15 or 30ips tape with Dolby sound spectacular in a studio, but how many of us have access to such a thing?"
I'm referring to modern large format R2R reference recordings I have heard.
Yes, few have or would want access, but my point is that what I have heard there is the reference standard, as best I can tell. If CD were adequate or better, it should be able to match it, and I cannot say that it does based on what I have heard (though my exposure is limited).
"Here's a better question: is there any significant amount of content coming out of the recording industry these days that requires a medium superior to a 16/44 CD?"
That is a good question. Again, for true hardcore audiophiles only, I think there may be some but not much, but I am not certain. Quality large scale orchestrated works with massed strings are the type I question most.
For 98% of the population (maybe more) I think redbook CD covers all the needed bases, at least adequately. That's pretty good! |