Human Time-Frequency Acuity Beats the Fourier Uncertainty Principle
It seems neither Amir nor Prof read the article I posted twice... Neither has shown me that they read it and UNDERSTOOD IT... So here I post it a third time, in an easy, clear, vulgarized form, with my helping comments... Are they not scientists?
Or did they not understand it to begin with?
Too many techno-babble ideological biases instead of psycho-acoustic science in their minds? 😊
SCIENCE IS NOT TECHNOLOGY... This article is pure psycho-acoustic science, not debunking propaganda from objectivists claiming to do what is impossible: deducing from their time-symmetric, linear electrical modeling tools how the human ear works, what we WILL HEAR and what we can never hear by simply adding or subtracting measured decibel levels, and what information the ears will or will not catch... 😊
First: testing designed components to verify or falsify them, or to bring them back to their design standards, is one thing...
Assuming, and claiming, that ALL of what a human may be able to hear from components coupled together in a room will be COMPLETELY determined by these electrical standards for each separate component misses the fact that the components must be COUPLED together, and that their audible sum cannot be COMPLETELY predicted in each room for all human ears...
Why?
Because hearing does not live only and merely in the time-symmetric, linear frequency-domain kingdom of electrical gear-measuring tools and their models... The ear/brain works non-linearly...
Our brain beat the Fourier uncertainty barrier by a factor of up to 13 in one laboratory psycho-acoustic experiment...
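(A clarification of mine, not from the article: the "barrier" is the Gabor/Fourier limit, which says that for any linear time-frequency analysis the timing spread δt and the frequency spread δf of a sound satisfy δt × δf ≥ 1/(4π) ≈ 0.08 Hz·s, if I read the paper's convention correctly. "Up to 13 times" means the best listeners' measured product δt × δf came out roughly 13 times smaller than that bound.)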
We extract information in one privileged direction of time, because the brain is habituated to the sounds of the natural world.
As the two physicists in this article put it: « Many sounds in nature are produced by an abrupt transfer of energy followed by slow, damped decay, and hence have broken time-reversal symmetry.»
«There’s a theorem that asserts uncertainty is only obeyed by linear operators (like the linear operators of quantum mechanics). Now there’s five decades of careful documentation of just how nastily nonlinear the cochlea is, but it is not evident how any of the cochlea’s nonlinearities contributes to enhancing time-frequency acuity. We now know our results imply that some of those nonlinearities have the purpose of sharpening acuity beyond the naïve linear limits.»
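For readers who want to see the linear limit the quote refers to, here is a minimal numerical sketch of mine (not from the paper): for a signal analyzed with linear, time-symmetric tools, the spread in time and the spread in frequency cannot both be made arbitrarily small, and a Gaussian-windowed tone sits essentially at the 1/(4π) bound. The sample rate, envelope width, and tone frequency below are arbitrary choices for illustration.

```python
import numpy as np

fs = 48000                                   # sample rate (arbitrary choice)
t = np.arange(-0.5, 0.5, 1/fs)               # 1 second of time axis
sigma = 0.005                                # 5 ms Gaussian envelope
x = np.exp(-t**2 / (2 * sigma**2)) * np.cos(2 * np.pi * 1000 * t)

# Time spread: standard deviation of |x(t)|^2 treated as a distribution.
p_t = x**2 / np.sum(x**2)
dt = np.sqrt(np.sum((t - np.sum(t * p_t))**2 * p_t))

# Frequency spread: same computation on the power spectrum.
spec = np.abs(np.fft.rfft(x))**2
f = np.fft.rfftfreq(len(x), 1/fs)
p_f = spec / np.sum(spec)
df = np.sqrt(np.sum((f - np.sum(f * p_f))**2 * p_f))

print(f"dt = {dt*1e3:.2f} ms, df = {df:.1f} Hz, dt*df = {dt*df:.4f}")
print(f"Gabor/Fourier limit 1/(4*pi) = {1/(4*np.pi):.4f}")
```

The printed product comes out at about 0.08, i.e. right at the limit: that is the floor a linear analysis can reach, and it is this floor that the listeners in the experiment went below.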
Do you begin to catch why the ear/brain can HEAR something, extracted from the time domain, that cannot be there in your linear, time-symmetric electrical modeling measurements of gear design?
All your electrical measurements refer to hearing models which are obsolete anyway... And in any case, electrical measurements of gear have nothing to do with psycho-acoustic measurements in a laboratory able to test the ear's information-extracting abilities in the time domain...
«The results have implications for how we understand the way that the brain processes sound, a question that has interested scientists for a long time. In the early 1970s, scientists found hints that human hearing could violate the uncertainty principle, but the scientific understanding and technical capabilities were not advanced enough to enable a thorough investigation. As a result, most of today’s sound analysis models are based on old theories that may now be revisited in order to capture the precision of human hearing.»
Now try to imagine the wealth of information that is extracted from simple speech (or from a musical event coupled to an acoustic soundfield), and try to imagine HOW THIS INFORMATION EXTRACTED BY THE HUMAN EAR/BRAIN CANNOT BE PREDICTED BY SIMPLE LINEAR ELECTRICAL GEAR-DESIGN TOOLS; listen to these two physicists:
«"In seminars, I like demonstrating how much information is conveyed in sound by playing the sound from the scene in Casablanca where Ilsa pleads, "Play it once, Sam," Sam feigns ignorance, Ilsa insists," Magnasco said. "You can recognize the text being spoken, but you can also recognize the volume of the utterance, the emotional stance of both speakers, the identity of the speakers including the speaker’s accent (Ingrid’s faint Swedish, though her character is Norwegian, which I am told Norwegians can distinguish; Sam’s AAVE [African American Vernacular English]), the distance to the speaker (Ilsa whispers but she’s closer, Sam loudly feigns ignorance but he’s in the back), the position of the speaker (in your house you know when someone’s calling you from another room, in which room they are!), the orientation of the speaker (looking at you or away from you), an impression of the room (large, small, carpeted).
"The issue is that many fields, both basic and commercial, in sound analysis try to reconstruct only one of these, and for that they may use crude models of early hearing that transmit enough information for their purposes. But the problem is that when your analysis is a pipeline, whatever information is lost on a given stage can never be recovered later. So if you try to do very fancy analysis of, let’s say, vocal inflections of a lyric soprano, you just cannot do it with cruder models."
By ruling out many of the simpler models of auditory processing, the new results may help guide researchers to identify the true mechanism that underlies human auditory hyperacuity. Understanding this mechanism could have wide-ranging applications in areas such as speech recognition; sound analysis and processing; and radar, sonar, and radio astronomy.
"You could use fancier methods in radar or sonar to try to analyze details beyond uncertainty, since you control the pinging waveform; in fact, bats do," Magnasco said.»
Do you catch now why it is impossible to predict, with linear electrical modeling tools designed for measuring circuit performance, what humans will hear from audio system parts coupled together in different acoustic environments?
Now read this ATTENTIVELY:
«Building on the current results, the researchers are now investigating how human hearing is more finely tuned toward natural sounds, and also studying the temporal factor in hearing.
"Such increases in performance cannot occur in general without some assumptions," Magnasco said. "For instance, if you’re testing accuracy vs. resolution, you need to assume all signals are well separated. We have indications that the hearing system is highly attuned to the sounds you actually hear in nature, as opposed to abstract time-series; this comes under the rubric of ’ecological theories of perception’ in which you try to understand the space of natural objects being analyzed in an ecologically relevant setting, and has been hugely successful in vision. Many sounds in nature are produced by an abrupt transfer of energy followed by slow, damped decay, and hence have broken time-reversal symmetry. We just tested that subjects do much better in discriminating timing and frequency in the forward version than in the time-reversed version (manuscript submitted). Therefore the nervous system uses specific information on the physics of sound production to extract information from the sensory stream.
"We are also studying with these same methods the notion of simultaneity of sounds. If we’re listening to a flute-piano piece, we will have a distinct perception if the flute ’arrives late’ into a phrase and lags the piano, even though flute and piano produce extended sounds, much longer than the accuracy with which we perceive their alignment. In general, for many sounds we have a clear idea of one single ’time’ associated to the sound, many times, in our minds, having to do with what action we would take to generate the sound ourselves (strike, blow, etc)."»
Has this article been read by Amir or Prof?
Are they able to understand why their simplistic assumptions about hearing, PREDICTED on the basis of verified electrical gear standards, cannot be used to predict how, why, and when a musician, an acoustician, or any ordinary person will hear in a given room acoustic environment coupled to an audio system?
It is not a question of the alleged claim they accuse audiophiles of asserting: their "golden ears"... A description used as an insult is not a scientific claim...
Here two physicists explain, from the conclusions of their psycho-acoustic experiments, how the human ear extracts information from the time domain where and WHEN there is, as in natural sound environments, a broken time-reversal symmetry, and thus why the human ear/brain can BEAT THE FOURIER UNCERTAINTY PRINCIPLE BY UP TO 13 TIMES...
It is important to observe here that this fact about symmetry breaking and perception in the time domain is also related to the way humans are able to PRODUCE sound-source vibrations, and not only perceive them...
Simplified version:
https://phys.org/news/2013-02-human-fourier-uncertainty-principle.html
Unsimplified original version:
https://arxiv.org/abs/1208.4611
As I said: thanks, Amir, for debunking gear spec claims, but do not pretend to do MORE... Insulting people, even uneducated ones, is not a sign of high education, as demonstrated by those whom you provoked for the mortal sin of using their ears and who insult you in return... Insults beget insults...
As I said, objectivists and subjectivists are twin brothers, born from the same market-conditioning mass publicity claims focused on the piece of gear...
Psycho-acoustic and room-acoustic experiments are the heart of audio...
The heart of audio is neither tasting "brand name" gear with "golden ears," nor measured numbers verified against gear specs for the supposedly "unbiased" objectivist ears reading electrical graphs...
Will it be necessary to post these articles a fourth time in order to get an answer?
«Science is what you eat, technology is what you shit, the balanced recycling is called knowledge»--Groucho Marx 🤓