Is soundstage width a myth?


AHHH CRAP, I MEANT THE TITLE TO BE ABOUT DEPTH. Sorry & Thank you. Can’t edit the title.

samureyex

Amused to Death was processed using something called Q-Sound.  This is clever processing that mimics what happens when sound from sources to the extreme left, right, or even behind hits the listener's head.  If, for example, a source is located directly at your left ear, the sound hits your left ear first; your head shades your right ear from the direct sound, but the sound diffracts around the outside of your head and reaches the right ear anyway.  The sound arriving at the right ear differs in timing (phase) and in frequency balance, and your brain knows how to interpret those differences as a source at the extreme left.  Q-Sound reproduces this effect and then, for the extreme-left example, injects out-of-phase information into the right channel to cancel parts of its signal, simulating this sort of effect.  Very clever.  Not many recordings are encoded this way, but there are enough recordings with similar cues in them that instruments appear well outside the locations of the speakers.
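For anyone curious what those two cues look like in signal terms, here's a minimal Python sketch. To be clear, this is not the actual Q-Sound algorithm (which is proprietary); the sample rate, delay, filter coefficient, and crossfeed gain are all just illustrative assumptions. It places a mono source hard left using an interaural time delay, a crude head-shadow lowpass, and a phase-inverted crossfeed of the kind crosstalk cancellers use:

```python
# Rough sketch (not Q-Sound itself) of the two binaural cues described above,
# for a source hard to the listener's left: an interaural time difference
# (ITD), a head-shadow level/frequency difference (ILD), and a phase-inverted
# crossfeed. All constants are illustrative assumptions.
import numpy as np

FS = 44100                       # sample rate, Hz (assumed)
ITD_SECONDS = 0.0007             # ~0.7 ms max interaural delay for a human head
ITD_SAMPLES = int(FS * ITD_SECONDS)

def hard_left(mono: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Place a mono source at the extreme left using crude ITD/ILD cues."""
    left = mono.copy()

    # Right ear: delayed (ITD) and head-shadowed. A one-pole lowpass stands
    # in for the high-frequency loss as sound diffracts around the head.
    right = np.concatenate([np.zeros(ITD_SAMPLES), mono])[: len(mono)]
    shadowed = np.empty_like(right)
    alpha = 0.25                 # lowpass coefficient (illustrative)
    acc = 0.0
    for i, x in enumerate(right):
        acc += alpha * (x - acc)
        shadowed[i] = acc
    right = 0.5 * shadowed       # head shadow also attenuates overall level

    # Crosstalk-cancellation-style trick: inject an attenuated, phase-inverted
    # copy into the right channel to cancel part of the left speaker's output
    # arriving at the right ear, pushing the image outside the speaker.
    right -= 0.3 * left

    return left, right

# Example: a 1 kHz tone that should image well to the left of the speakers.
t = np.arange(FS) / FS
l, r = hard_left(np.sin(2 * np.pi * 1000 * t))
```

Real crosstalk cancellers derive these filters from measured head-related transfer functions rather than fixed guesses like these, which is also why nearby reflecting surfaces can wreck the effect: they corrupt the carefully timed cancellation signals.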

The same phase shifts and frequency-response changes from sound hitting the head at different angles also give height cues.  This is demonstrated well by the Chesky Test CD described above.  They created an artificial series of test signals (the LEDR test Rodman99999 described above) that appear to rise up out of one speaker, move to a point almost overhead, then descend back into the other speaker.  The effect is not as pronounced if the speaker does not have good phase coherency, or if the speaker location has a lot of nearby surfaces reflecting sound and confusing the carefully constructed cancelling signals.  This helps the listener find speaker placements that minimize such interference, which should improve the system's imaging.
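On the height illusion: a common psychoacoustic model is that elevation is cued by pinna-related spectral notches that move upward in frequency as a source rises. The sketch below sweeps a narrow notch from roughly 6 kHz to 10 kHz across a noise burst to illustrate that principle. It is only an illustration of the idea, not the actual Chesky/LEDR signal; the notch range, Q, and duration are assumptions.

```python
# Loose sketch of an LEDR-style "rising" cue: sweep a narrow notch upward
# through a noise burst, mimicking how pinna-related spectral notches shift
# with source elevation. Uses a standard time-varying biquad notch filter.
import numpy as np

FS = 44100
DUR = 3.0
n = int(FS * DUR)
rng = np.random.default_rng(0)
noise = rng.standard_normal(n) * 0.1

out = np.zeros(n)
f_notch = np.linspace(6000.0, 10000.0, n)   # notch sweep range (assumed)
q = 8.0                                     # notch sharpness (assumed)

# Time-varying biquad notch, direct form I, coefficients per the standard
# audio-EQ ("RBJ") cookbook notch formulas.
x1 = x2 = y1 = y2 = 0.0
for i in range(n):
    w0 = 2 * np.pi * f_notch[i] / FS
    alpha = np.sin(w0) / (2 * q)
    b0, b1, b2 = 1.0, -2 * np.cos(w0), 1.0
    a0, a1, a2 = 1 + alpha, -2 * np.cos(w0), 1 - alpha
    x0 = noise[i]
    y0 = (b0 * x0 + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
    x2, x1 = x1, x0
    y2, y1 = y1, y0
    out[i] = y0
```

The same fragility applies here as with the lateral cues: a speaker without good phase coherency, or strong nearby reflections, smears these narrow spectral features, which is exactly what makes the LEDR test useful for checking placement.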
