Phase manipulation is still around, still in use, and will continue to forge ahead.
Its current incarnation is in the realm of in-ear headphones and the use of manipulation files, or pinna-typing files, to find a norm for the given user of a given VR headset. The idea is to have 20 or so different manipulation files available, so the user of the given VR headset can find a phase-data manipulation pattern that mimics their ear when used with the headset or in-ear cans of their choice.
In that incarnation it is finding, and will keep finding, its biggest normalization and adoption audience so far. That audience will only grow in size, and the technique in correctness and subtlety, slowly evolving toward a near mastery of it... as time goes by.
Already there are established ways of placing microphones in the ear canal and then testing against a sound-localization test setup, which produces a master file of 'in time' phase manipulations that can 'mimic', to a decent degree, the localization cues of digitally generated sound in a given VR application - for that given individual.
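In rough terms (a minimal sketch, not anyone's actual shipping pipeline): such a master file can be thought of as a pair of per-individual head-related impulse responses, and applying it is a convolution of the dry source with each ear's response. The file names and the stereo-WAV format below are assumptions for illustration only.

```python
# Minimal sketch: apply a measured, per-individual "master file" to a dry
# mono source. The file is assumed to hold the left/right head-related
# impulse responses (HRIRs) for one source direction, as a 16-bit stereo
# WAV. All file names here are hypothetical.
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

rate, hrir = wavfile.read("my_ears_az030_el000.wav")   # hypothetical HRIR pair
hrir = hrir.astype(np.float64) / np.iinfo(np.int16).max

_, mono = wavfile.read("dry_source.wav")               # hypothetical dry mono signal
mono = mono.astype(np.float64) / np.iinfo(np.int16).max

# Each ear gets the source filtered through that ear's measured response;
# the interaural time (phase) and level differences live in the HRIRs.
left = fftconvolve(mono, hrir[:, 0])
right = fftconvolve(mono, hrir[:, 1])

out = np.stack([left, right], axis=1)
out /= max(1.0, np.abs(out).max())                     # avoid clipping
wavfile.write("binaural_out.wav", rate, (out * 32767).astype(np.int16))
```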
Which is the only way it truly works, compared to coarser, simpler effects like Q-Sound. The individual physical subtleties of your own outer-ear shaping, in all their particulars, are what let you aurally understand localization cues, in both gross and fine ways. Thus a system customized to the individual is the only way to make this work.
Before that high-level stuff becomes the norm, the extant norm is the 'averaging' of ear types: a set of distinctly different phase manipulations, e.g. 20 of them, each based on some minimal attempt at 'ear typing', or trying to classify ears the same way we group other body features. It works to some degree - well enough of a fit for some, in some playback scenarios, a bad fit for others. A sketch of that pick-one-of-20 flow follows below.
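As a rough illustration of that fallback (again hypothetical: the assumption that each generic profile ships as a stereo HRIR WAV, and every path here, are mine, not any vendor's actual format), the selection step reduces to rendering a test signal through each of the ~20 profiles and letting the listener keep whichever localizes best:

```python
# Sketch of the "pick the best of ~20 generic ear types" fallback.
# Assumes each profile is a 16-bit stereo HRIR WAV; paths are hypothetical.
import glob
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

def render(profile_path, mono):
    """Render a dry mono signal through one generic HRIR profile."""
    _, hrir = wavfile.read(profile_path)
    hrir = hrir.astype(np.float64) / np.iinfo(np.int16).max
    out = np.stack([fftconvolve(mono, hrir[:, 0]),
                    fftconvolve(mono, hrir[:, 1])], axis=1)
    return out / max(1.0, np.abs(out).max())

rate = 48000
t = np.arange(rate) / rate
test_tone = 0.5 * np.sin(2 * np.pi * 440 * t)  # 1 s, 440 Hz test signal

for i, path in enumerate(sorted(glob.glob("profiles/ear_type_*.wav"))):
    wavfile.write(f"audition_{i:02d}.wav", rate,
                  (render(path, test_tone) * 32767).astype(np.int16))

# The listener plays the audition files and keeps the profile whose
# rendering sounds externalized and well placed, not "inside the head".
```

In practice a broadband stimulus (noise bursts, say) would make a better audition signal than a pure tone, since pinna cues are largely spectral; the tone just keeps the sketch short.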