Soundstage Width and Depth


I’m curious about what your systems produce when it comes to soundstage. My speakers are about 8’ apart and I sit about 10’ from the front plane of the speakers. The speakers are toed in so that each is aimed at a spot about 8” from my ear on that side (laser verified). My room is treated with bass absorption and diffusers.

In many recordings my soundstage is approx 28’ wide and, although this is tougher to determine, I would say on most recordings I’m hearing sounds 10’-15’ further back than the speaker plane. Some sounds, usually lead guitars, are presented slightly in front of the plane of the speakers. There are also recordings that produce height in the soundstage. Some fill the room floor to ceiling, while others stay more on the same plane, about 5’ from the floor. I do get layers, usually in about the same order front to back: guitars, lead singer, bass guitar, drums, violins, and backup instruments and singers. Again, this is recording dependent. Intimate recordings that feature a singer playing a guitar usually have all of the sound between the speakers. Is this what everyone experiences? Could the depth be deeper? Do many of you hear sounds in front of the speaker plane? Do you have any recordings that accentuate the front-to-back soundstage?
baclagg
Duke,
w.r.t. your sensation of height with that speaker set, I’ve been thinking about it more. I still feel there was the potential for frequency filtering (and possibly reflection/sheltering) due to the microphone pattern and position, and this could have simulated a human torso/ear/pinna. Because they are line sources, there would be limited ceiling and floor reflections, so what reached your ears would have had limited room interaction from a height standpoint, preserving what was in the recording. Again, interesting. I’ve been checking the local classifieds for a tolerable line-source speaker to do some experiments.


audiokinesis
@geoffkait wrote: "The best and easiest way to look at soundstage imho is that the better the output signal, the larger the 3 dimensional sphere of the recording venue will be presented."

I’ll concede that what you propose is the "easiest" way to look at soundstage, but I’m not sure it’s the "best," because it is incomplete. It neither tells us anything about how or why, nor offers guidance as to how we might make improvements.

>>>>>OK, fair enough. Here are some ways to improve the soundstage, in terms of expanding the 3 dimensional sphere and organizing and resolving the information within the sphere. There is no substitute for Signal to Noise ratio. All of these suggestions improve SNR - including for the signal in the room. I am not going to address information fields or mind-matter interactions, which are also important factors, as they’re beyond scope.

Here is my short list of how to obtain a deeper, wider, higher, and more resolved soundstage. This list is not meant to be all-inclusive.

Isolate all components
Suspend or elevate all cables and power cords
Position speakers precisely using XLO test CD or similar Test CD
Address room echo and room corner SPL peaks, i.e., comb-filter effects (see the sketch after this list)
Cryogenics and/or home freezer for anything that you can fit in there
Address static electric fields
Check electrical continuity/polarity
Clean all electrical contacts, including all wall outlets in the house
Address RFI/EMI
Address vibration of the CD itself, or cassette for that matter if you’re into the whole cassette thing.
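Since comb filtering came up in that list: a reflection that arrives a couple of milliseconds after the direct sound interferes with it and carves a series of evenly spaced notches into the response. Here is a minimal Python sketch of where those notches land; the extra path length is an assumption picked purely for illustration.

C = 343.0                      # speed of sound in m/s (roughly, at room temperature)
extra_path_m = 0.69            # assume the reflection travels ~0.69 m farther than the direct sound

delay_s = extra_path_m / C     # ~2 ms arrival delay for the reflected copy
print(f"delay: {delay_s * 1e3:.1f} ms")

# The delayed copy cancels the direct sound wherever it arrives half a
# wavelength (plus whole wavelengths) out of phase: f = (2n - 1) / (2 * delay).
for n in range(1, 6):
    notch_hz = (2 * n - 1) / (2 * delay_s)
    print(f"notch {n}: {notch_hz:6.0f} Hz")

The evenly spaced notches are the "comb"; move the reflecting boundary, or absorb/diffuse the reflection, and the pattern shifts or fills in.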
The real question should be about soundstage width and depth that exist but shouldn’t. Exactly how much of these characteristics is the result of speaker placement, and how much is actually on the recording?
There are a couple of Stanley Clarke tracks that are just electric bass and Gregory Hines tapping for percussion. You can feel the size of the stage and follow the steps he takes.

Another track that never fails to amaze me is by The Propellerheads and starts with a skateboarder. While I fully expect to hear the panning back and forth, hearing the depth of the half pipe below my floor is always a surprise. I kinda get how speakers can sound forward or laid back, how the soundstage can extend beyond the speakers left or right, but up and down is mystifying.
Ambiophonics: I would start with this: https://cdn.website.thryv.com/7b2b654758d449b08935c9dfa207e8f9/files/uploaded/Ambiophonics_Book.pdf

Then read this article on methods that are more robust:  https://www.microsoft.com/en-us/research/wp-content/uploads/2013/10/Ahrens2013a.pdf

While listener position is critical, it is much more robust than the OP’s "fluke" that requires everything to be perfect to "maybe" work.
@geoffkait wrote: "The best and easiest way to look at soundstage imho is that the better the output signal, the larger the 3 dimensional sphere of the recording venue will be presented."

I’ll concede that what you propose is the "easiest" way to look at soundstage, but I’m not sure it’s the "best," because it is incomplete. It neither tells us anything about how or why, nor offers guidance as to how we might make improvements.

Duke
Heaudio123, I have not delved into ambiophonics to the point of understanding it, but at least for a while Ralph Glasgal was using SoundLab speakers. I am under the impression that with ambiophonics the listener’s position is critical, and I’m more inclined towards wide-sweet-spot presentations. 

Somebody - might have been Ralph? - once used SoundLab speakers as microphones... not very practical, but from what I was told the results were pretty good, at least when played back through the "microphones".  

Duke
I’m speechless that people still talk about being surprised by sound outside the speakers. You’ll have to excuse me, but isn’t that a little archaic? As in 1970s? Come on, guys. The best and easiest way to look at soundstage imho is that the better the output signal, the larger the 3 dimensional sphere of the recording venue will be presented. When you finally get your system working, the expanding sphere of soundstage should be well-defined in width, depth and height. A wonder to behold. 🤗

“An ordinary man has no means of deliverance.” - old audiophile axiom
Yes, this is the most current knowledge and there is no indication it is incorrect, but even with these cues, it can be difficult to accurately assess height. I spent a number of years doing R&D on hearing aids and similar audio "devices". Our group believed we were one of the first to look at how the design of the hearing aid could be improved with the goal of preserving the positional cues most take for granted. Unfortunately that R&D, along with other programs, was abandoned after I left to pump up the balance sheet before a sale. It was a bit contentious at the time as well, since it indicated issues with signal-processing delay differences masking timing cues.

"Technically",  just as you have indicated, frequency filters that mimic the pinna, can provide a sense of height in head-phonic playback and encoded in only two channels. There has been a fair amount of research done with HATS (head and torso simulators) for recording, but, as you indicated, it requires tailoring to the individual to work properly. If you attempt that technique with speakers, you get not only the HATS transform, plus the listener ... and two pinnas are not better than one.  W.R.T. your particular situation, making a wild ass guess, the microphone above his head, if not omni and not pointed at him, created a filtering effect that simulated height with pinna filtering. Curious if the wavefront from the electrostats is less impacted by torso/head/pinna than would normally occur with dynamic speakers.  Interesting!  I may have to pick up a pair now and do some testing.


Speaking of interesting: regarding the last post about the difficulty of creating a stable image outside the speakers, have you done much research on ambiophonics?


Heaudio123 wrote: "The source "appearing" outside the speakers is "encoded" in the music with either mixing methods or microphone techniques, but, and it is a big but, how that appears on playback is highly dependent on speaker, listening position, microphone or mixing technique, and, very importantly, the listener themselves."

Thanks for adding this, as I know virtually nothing about microphone or mixing techniques.

Heaudio123 again: "Keep in mind that how this works is... by tricking the brain with a delayed signal from the other speaker hitting the opposite ear to generate timing information that the brain may perceive as equivalent to the timing information of a sound wrapping around the head to determine direction."

Very interesting! This "reflection timing = direction/angle" information is related to why cabinet edge diffraction is generally more detrimental to imaging on a wide cabinet than on a narrow one: the longer the time delay for the diffracted signal, the greater the angle (the further around to the side) of the false cue it conveys. So a narrow cabinet’s diffraction cues indicate a narrow false angle, while a wide cabinet’s cues indicate a wider false angle and thus blur the correct image more. However, if the cabinet is sufficiently wide, the Precedence Effect may start to mask those false angular cues. One of the reasons for flush-mounting studio monitors is to eliminate edge diffraction entirely, which makes the imaging more trustworthy.
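To put rough numbers on "longer delay = wider false angle," here is a minimal Python sketch that treats the extra path length to the cabinet edge as if it were an interaural time difference and asks what azimuth a simple Woodworth spherical-head model would assign to it. The head radius and the two edge distances are assumptions chosen purely for illustration, and it is only an analogy, since the diffraction delay is measured at one ear rather than between the ears.

import math

SPEED_OF_SOUND = 343.0    # m/s
HEAD_RADIUS = 0.0875      # m, a commonly used average adult value (assumption)

def itd_woodworth(azimuth_rad, a=HEAD_RADIUS, c=SPEED_OF_SOUND):
    """Interaural time difference predicted by the Woodworth spherical-head
    model for a source at the given azimuth (0 = straight ahead)."""
    return (a / c) * (azimuth_rad + math.sin(azimuth_rad))

def equivalent_azimuth_deg(delay_s):
    """Azimuth (degrees) whose Woodworth ITD matches the given delay,
    found by bisection between 0 and 90 degrees."""
    lo, hi = 0.0, math.pi / 2
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if itd_woodworth(mid) < delay_s:
            lo = mid
        else:
            hi = mid
    return math.degrees(0.5 * (lo + hi))

# Distance from tweeter to cabinet edge: a narrow baffle vs. a wide one.
for edge_m in (0.05, 0.20):
    delay = edge_m / SPEED_OF_SOUND          # extra travel time of the diffracted copy
    print(f"{edge_m * 100:4.0f} cm to the edge -> {delay * 1e6:4.0f} us "
          f"-> false cue near {equivalent_azimuth_deg(delay):4.0f} degrees off-axis")

The narrow baffle’s diffraction reads as a cue only slightly off-axis (roughly 15-20 degrees here), while the wide baffle’s reads as coming from far out to the side, which is the blurring described above.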

Regarding height cues out in the "real world", my understanding is that the way sound diffracts around the head and outer ear (the pinna) from above is what gives us height cues. I have read papers and articles about encoding these "head and pinna transforms" into a signal to convey height information, but to really do it right, the equalizations would have to be tailored to each individual’s head and ear shape. (One possible application would be in the helmets of fighter pilots, so that an audible threat warning could convey complete directional information, including azimuth. Head position tracking would have to be included because fighter pilots swivel their heads a lot.)
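For anyone curious what "encoding a head and pinna transform" can look like in practice, here is a minimal Python sketch of binaural rendering: convolve a dry mono signal with a measured left/right head-related impulse response (HRIR) pair for an elevated direction, which bakes that direction into just two channels. The file names are placeholders for HRIRs exported from a public HRTF database, and, as noted above, the result is most convincing over headphones and when the HRIRs resemble the listener’s own head and ears.

import numpy as np
import soundfile as sf                     # pip install soundfile
from scipy.signal import fftconvolve

# Placeholder files: a dry mono recording and an HRIR pair measured at
# azimuth 0 degrees, elevation +30 degrees (names are hypothetical).
mono, fs = sf.read("voice_mono.wav")
hrir_left, _ = sf.read("hrir_az0_el+30_left.wav")
hrir_right, _ = sf.read("hrir_az0_el+30_right.wav")

# Convolving with each ear's impulse response applies the direction-dependent
# filtering of the head, torso and pinna to that channel.
left = fftconvolve(mono, hrir_left)
right = fftconvolve(mono, hrir_right)

stereo = np.column_stack([left, right])
stereo /= np.max(np.abs(stereo)) + 1e-12   # normalize to avoid clipping

sf.write("voice_binaural_el+30.wav", stereo, fs)

Over loudspeakers, with an ordinary stereo recording, those pinna cues get filtered a second time by the listener’s own head and ears, which is part of why the accidental case below is so striking.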

I don’t see how height information could be encoded in a normal two-channel recording... BUT something weird happened to me years ago:

I bought a new CD that had just been put out by a musician I was friends with, Coco Robichaux. Listening over my SoundLab electrostats (floor-to-ceiling, full-range, single-driver, line-source-approximating speakers), I heard his voice coming from normal height on most songs, but on one song in particular his voice came from the bottom of the speaker, down at the floor! I played the song for others, and some heard it coming from down near the floor and some did not.

So I asked Coco about that song. What he told me was very interesting: The recording process had been rushed, and on THAT song, the microphone had been incorrectly positioned ABOVE his head in the recording booth! So relative to the microphone location, his voice WAS coming from the direction of the floor.

I can only speculate about HOW this accidental height information was included: Perhaps the signal that the microphone picked up was altered by its location above his head, and upon playback my brain interpreted that as something it was familiar with, namely height information. Maybe my head and ears were sufficiently similar to Coco’s, at least from that angle. 

Duke
Duke,

The source "appearing" outside the speakers is "encoded" in the music with either mixing methods or microphone techniques, but and it is a big but, how that appears on playback is highly dependent on speaker, listening position, microphone or mixing technique and very important the listener themselves. The microphone technique or mixing (playing with timing) can have more impact on perceived playback that anything to do with the venue.


Keep in mind that how this works is not by recreation of a real source outside the speaker, as would be the case with a first reflection, but by tricking the brain with a delayed signal from the other speaker hitting the opposite ear to generate timing information that the brain may perceive as equivalent to the timing information of a sound wrapping around the head to determine direction. Not everyone interprets it the same, and the effect can be positive or negative; it appears to be influenced by individual interpretation of timing and volume cues for direction, not to mention that what works for one recording on one system could fall apart completely with a different recording or system.
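To make the "delayed signal from the other speaker" trick concrete, here is a minimal numpy sketch of one crude mixing-side version: feed a slightly delayed, attenuated (and here polarity-inverted) copy of each channel into the opposite channel, so the opposite ear receives timing information it can read as a source lying further around to the side than the speaker. The delay and gain values are purely illustrative assumptions, not anyone’s published recipe, and as noted above the result varies a lot with speaker, room, and listener.

import numpy as np

def crossfeed_widen(left, right, fs, delay_ms=0.5, gain=-0.3):
    """Mix a delayed, attenuated, polarity-inverted copy of each channel
    into the opposite channel. The sub-millisecond inter-channel delay is
    the kind of timing cue the ear/brain can interpret as a source lying
    outside the speaker span. Values are illustrative only."""
    d = int(round(delay_ms * 1e-3 * fs))           # delay in samples
    pad = np.zeros(d)
    left_direct = np.concatenate([left, np.zeros(d)])
    right_direct = np.concatenate([right, np.zeros(d)])
    left_delayed = np.concatenate([pad, left])
    right_delayed = np.concatenate([pad, right])
    out_left = left_direct + gain * right_delayed   # right channel leaks into left, late
    out_right = right_direct + gain * left_delayed  # left channel leaks into right, late
    return out_left, out_right

# Quick demo with a 440 Hz tone hard-panned to the left channel.
fs = 44100
t = np.arange(fs) / fs
left = 0.5 * np.sin(2 * np.pi * 440 * t)
right = np.zeros_like(left)
wide_left, wide_right = crossfeed_widen(left, right, fs)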


For those discussing "height": we have a hard enough time getting height in a real 3D environment. Nothing that comes out of your speakers (in a single plane) is height, beyond what your room acoustics may generate, and that would bear little relation to the actual recording environment (if it had any height at all in the first place).
Soundstage width which extends to the outside of the speakers can be encoded on the recording, but it can also be the result of strong early sidewall reflections. The Precedence Effect is not completely effective at suppressing directional cues from significant early lateral reflections, which can tend to pull sound images to the outside of the speaker plane. Toole calls this an "increase in Apparent Source Width (ASW)", and finds that most listeners enjoy it.

But this reflection-induced increase in Apparent Source Width comes at a price, if I understand Geddes correctly, and that price is clarity and/or imaging precision and/or depth of image, assuming the latter is on the recording.

In my opinion image depth and a sense of spaciousness and/or envelopment are all related, in this sense: They are spatial cues which are on the recording itself, rather than being contributed by room reflections (as is the case with increased Apparent Source Width). When the soundstage seems to go significantly deeper than the wall behind the speakers, and/or it seems that you are enveloped in a much larger acoustic space than your room, that is not coming from the acoustic signature of your small playback room.

We can think of the spatial cues which are on the recording as being in competition with the spatial cues generated by the playback room. The ear/brain system will tend to pick whichever cues are the most convincing. Unfortunately the playback room’s "small room signature" has a natural advantage, but with good speaker setup and/or good room treatment it is possible to weaken the playback room’s signature while effectively presenting the venue cues which are on the recording (whether they be real or synthetic).

Briefly, the technique includes minimizing strong, distinct ("specular") early reflections while preserving enough reverberant energy that we have a fair amount of relatively late-onset, spectrally-correct reflections. This is a bit more nuanced than merely hitting a target RT60, as RT60 tells you nothing about what is happening early on, and it is the earliest reflections which most strongly convey the characteristic signature of a small room.
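To make the "RT60 says nothing about the early part" point concrete, here is a small Python sketch that uses first-order image sources to estimate when the nearest sidewall, floor, and ceiling reflections arrive relative to the direct sound. The room dimensions and positions are assumptions loosely modeled on the OP’s setup (speakers 8’ apart, listener 10’ back); everything lands within the first handful of milliseconds, exactly the window that carries the small-room signature, and none of it shows up in a single RT60 number.

import math

FT = 0.3048                                  # feet to metres
C = 343.0                                    # speed of sound, m/s

# Assumed 16 x 20 x 8 ft room; x spans the width, y runs from the front wall
# toward the listener, z is height. Listener centered on the width axis.
ROOM_W, ROOM_H = 16 * FT, 8 * FT
left_speaker = (-4 * FT, 1.0, 3 * FT)        # 8 ft speaker spacing, ~3 ft off the front wall
listener = (0.0, 1.0 + 10 * FT, 3.5 * FT)    # 10 ft from the speaker plane, seated ear height

def mirrored(point, axis, wall_coord):
    """First-order image source: reflect the speaker across one boundary."""
    p = list(point)
    p[axis] = 2 * wall_coord - p[axis]
    return tuple(p)

direct_m = math.dist(left_speaker, listener)
reflections = {
    "left sidewall": mirrored(left_speaker, 0, -ROOM_W / 2),
    "floor":         mirrored(left_speaker, 2, 0.0),
    "ceiling":       mirrored(left_speaker, 2, ROOM_H),
}
for name, image in reflections.items():
    extra_ms = (math.dist(image, listener) - direct_m) / C * 1e3
    print(f"{name:13s} arrives {extra_ms:4.1f} ms after the direct sound")

With these assumed dimensions the floor bounce shows up under 2 ms after the direct sound and the sidewall around 4 ms, which is why treating or redirecting those specific first arrivals matters more than chasing an overall RT60 target.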

As others have noted, when you are hearing a significantly different spatial presentation from one recording to the next, THAT is a very good sign. It means that the recording’s venue cues are dominating over your playback room’s signature.

Duke
@erik_squires  My listening area is my living room and I had to balance aesthetics with functionality when it comes to room treatments. They have been my best investment to date. GIK was great to work with and my sound stage, imaging and tonality were all improved beyond my expectations.
of course this thread has devolved into a discussion about DBA, of course it has.

OP: I find that the combination of room treatment and speaker dispersion in the plane in question is what affects soundstage. That is, if you want front-to-back depth, good treatment behind the listener and behind the speakers helps the most. Want width? Narrow dispersion or treating the walls is what helps. Height? Carpet and the ceiling.

There's also a known enhancement if your speakers have a dip around 2.4 kHz. Wilson used to take advantage of (or cheat with) this behavior, though later speakers have forgone it.

In general you can get a really good idea of how good your system could sound with an ideal room by listening 2' from the speakers. Everything that changes between that and your listening location is due to the room.

Best,

E
Excellent answer by @millercarbon. dBA? A-weighted measurement?

No not dB-A, DBA: Distributed Bass Array. Multiple subs asymmetrically distributed around the room.
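For anyone wondering how a distributed bass array helps: below the room’s transition frequency the bass response is dominated by a handful of standing-wave modes, and each sub position excites those modes differently, so several asymmetrically placed subs tend to average out the peaks and dips at the listening seat. Here is a minimal Python sketch of the axial mode frequencies for an assumed rectangular room of roughly 20 x 16 x 8 ft; the dimensions are an illustration, not anyone’s actual room.

C = 343.0                               # speed of sound, m/s
LENGTH, WIDTH, HEIGHT = 6.1, 4.9, 2.4   # assumed room dimensions in metres

modes = []
for dim_m, label in ((LENGTH, "length"), (WIDTH, "width"), (HEIGHT, "height")):
    for order in (1, 2, 3):
        f_hz = order * C / (2 * dim_m)   # axial mode: f = n * c / (2 * L)
        if f_hz <= 120:                  # only the low-bass region matters here
            modes.append((f_hz, f"{label} axial mode {order}"))

for f_hz, name in sorted(modes):
    print(f"{f_hz:6.1f} Hz  {name}")

A single sub parked in one spot sits in a fixed pattern of those peaks and nulls; multiple subs in different spots each drive a different pattern, which is the smoothing the DBA exploits.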

The XLO Test CD has a great imaging test track. Roger Skoff is talking in a bare room. He describes the room, the dimensions, the microphone placement, and where he is standing. As he talks you can plainly hear exactly what he's talking about. I mean it is like you are the microphones and he is talking at you. He strikes a pair of claves and you hear the reverb. Then he starts walking around the room, talking the whole time. He walks to your left, he walks to your right. He walks behind you! He stands behind you and hits the claves! If your system is mega this will blow your mind! The room he is in, the microphone placement, and the dimensions are close to the same as my room. So it's just crazy spooky to hear.

There are demagnetizing tracks on there too, some decent level and channel test stuff, and several examples of really good quality recordings, including Michael Ruff's Poor Boy in mono. Another good reference disc for setup and tuning for imaging.
@simao ...Oh, thanks. That makes sense in the context of this thread. 
I was thinking that somehow a decibel meter would be used. That's dB-A, for A-weighted measurement.
Thanks for the straight dope.

@lowrider57 DBA is, I believe, Distributed Bass Array. Like an Audiokinesis Swarm or the like.
- can I ask about your system? What pre-amp and amp and speakers and source are you using? 

I have a relatively modest system.

Hegel H390 integrated amp
Cambridge Audio Azur 651C CD
KLH Kendall speakers
GIK room treatments

I have tried running digital from the 651C to the Hegel and using its onboard DAC, but I lost a lot of depth and musicality. I went back to the dual Wolfson DAC onboard the 651C, wired it using the RCA analog inputs, and it opened back up and sounds great!
@baclagg - can I ask about your system? What pre-amp and amp and speakers and source are you using? I'm curious, aside from the recording itself, which components can render a huge soundstage. My thinking is that a great preamp plays a big role.
Excellent answer by @millercarbon. dBA? A-weighted measurement?
Also agree with @twoleftears regarding speaker design.

With a combination of specific tubes in my amp, I'm able to achieve a 3D image that centers around the middle of the speakers, projects forward, and still extends back to the wall. The design of my speakers also makes this soundstage possible.
I neglected to mention that hours of speaker positioning were required.


The best way to test your system for accurate soundstage reproduction: https://www.audiocheck.net/audiotests_ledr.php
Also available on the following Chesky CD, along with a number of other tests (i.e., stage depth, Wood Effect/reverse polarity, etc.): https://www.amazon.com/Chesky-Records-Sampler-Audiophile-Compact/dp/B000003GF3
More info regarding the LEDR test: https://www.stereophile.com/features/772/index.html
OOPS; I forgot, we’re not supposed to EVER trust our ears, according to the pseudo-"science" types around here (snort of derision).
What you’re hearing sounds right to me. The most important point is that you notice it varies by recording. This is key. Recordings are not all recorded equally, mastered the same, pressings vary, and producers are all over the map when it comes to where they want things to sound like they’re coming from. The more you hear these different recordings sounding different like this, the more you can be sure you’re getting it right.

These differences can with certain recordings go so far as to make it seem like the sound is right in your face, or coming from anywhere even sometimes way off to the side beyond the speakers or other times coming from nowhere or everywhere all at once. I’m talking extremes here, not what is common, but just to drive home the point the range is so vast you can’t really talk dimensions in a general sense very well, because there are so many exceptions.

What can be said in a general sense is that as the system moves toward a more liquid, natural presentation with less grain and glare, there is a slight tendency for the stage to gain in depth. Certainly, when the noise floor drops, as you can get with really good power cords, ICs and speaker cables (and other things; everything can do this), so that more of the acoustic signature of the venue is heard, this will increase your sense of depth and space, expanding the stage. We've got a guy here who misinterprets everything, so let me be clear: I'm not saying the instruments all move farther apart. Everything stays where it was; you just get the sense it's all taking place in a much bigger space, because you can hear the reverberant signature of that space so much more clearly.

The greatest and most universally achievable improvement of all is the improvement you get with a DBA. A huge amount of our sense of space derives from our perception of really low frequency bass. When this is right it creates a sense of envelopment, of being no longer in your room but in the recording venue.

There are speakers whose soundstage starts on a plane in front of the plane of the drivers, some whose soundstage starts coincident with the plane of the drivers, and some that start behind.  All other things being equal, the third category will give you the "deepest" soundstage, but only because it already starts further back to begin with.

One test that's fairly easy is to see whether with a good recording the farthest back point of the soundstage is perceived as lying beyond the front wall (behind the speakers).

Personally, I'm not fond of speakers in category 1; I find them too "in your face".  YMMV.