If you don't have a wide sweet spot, are you really an audiophile?


Hi, it’s me, professional audio troll. I’ve been thinking about something as my new home listening room comes together:

The glory of having a wide sweet spot.

We focus far too much on the dentist-chair type of listening experience: a sound that is truly superb in only one location. Then we try to optimize everything exactly within that virtual shoebox we keep our heads in. How many of us instead look for, and optimize our listening experience for, a wide sweet spot?

I am reminded of listening to the Magico S1 Mk II speakers. While not flawless, one thing they do exceptionally well is, in a good room, provide a very good, stable stereo image across almost any reasonable listening location. Revels also do this. There’s no sudden feeling of the image clicking into place when you are exactly equidistant from the two speakers. The image is good and very stable. Even directly in front of one speaker you can still get a sense of what is in the center and on the opposite side. You don’t really notice a loss of focus when off axis, as you can in so many setups.

Compare and contrast this with the opposite extreme, Sanders' ESLs, which are OK off axis, but when you are sitting in the right spot you suddenly feel like you are wearing headphones. The situation is very binary: you are either in the sweet spot or you are not.

From now on I’m declaring that I’m going all-in on wide-sweet-spot listening. Being able to relax on one side of the couch or the other, or meander around the house while enjoying great-sounding music, is a luxury we should all attempt to recreate.
erik_squires
audio2design
If you worked better on your reading comprehension and spent more time trying to understand what I wrote, and less time trying to prove me and others wrong ...
Ad hominem attacks are a basic logical fallacy. You are literally not making sense.

I am pretty sure that English is not @mahgister’s native language - yet his reading comprehension seems fine to me even if there’s an occasional stumble. I think he should be welcome here. Under the forum rules, he deserves to participate here without your attacks.
You may feel your response is erudite, but to me, you just told me "I like Oranges", after I told you it was 7 below freezing and snowing outside. Perhaps to you there was some correlation, but I am just shaking my head and I suspect others are too at this point.
Insults are your only argument...

I just made a simple point here and you never answered it...

Any reader can see that for himself...

I will simplify my post for your understanding...



Acoustics explains imaging... Engineering uses the acoustical explanation to develop better recording techniques...

Yes, speaker toe-in matters, as does anything pertaining to timing and volume...

BUT the timing of the wavefront matters MOST because it is ACOUSTIC science first... It is the same for the concept of timbre, which is an acoustical one...

This was my point...






How in the world does this simple fact, which is totally true, correspond to your answer about oranges and freezing...

You are a very intelligent person, but you are not a very "gentle" or very trusting one, sorry...




Audio2design, thank you for your post. You are mostly right. It is all about timing and volume. You are also probably right about certain situations.
The vast majority of recording is done with multi-micing, not with stereo microphones. Then it becomes all about volume differentials between the channels, to wherever the sound was mixed. Now the timing event becomes paramount, and that can happen only when your head is equidistant from speakers that are properly balanced (volume), unless you prefer to go the ambisonic route. Your central nervous system was designed to work with head shading. It increases the volume differential between the ears, allowing more accurate location of the threat. Timing also changes. In order to produce an accurate image you have to be equidistant from correctly balanced speakers, and both speakers have to have the exact same frequency response curve. Very few systems meet all these criteria, and those that don't will not image as well as is theoretically possible. Yes, the way the recording was done influences all of this.
Half truth....

The missing half is in acoustic science and is called the law of the first wavefront, related to the different possible timing thresholds of direct and reflected waves and their interpretation by the ears...

Imaging is not first a fact of digital recording technology, but of acoustics...

I created my own mechanical equalizer for balancing the timing of the different waves, without a microphone... It worked so well that my imaging, which I call depth imaging, fills the room... My measurement standard is the range of the human voice and its timbre as perceived by the ears... Not a set of very narrow test frequencies for a very minute location of the head using a mic...


Then imaging is FIRST: timing + the law of the first wavefront...
After that you can speak of timing + volume...

Missing this point is a complete reversal and misunderstanding of the phenomenon...

Acoustic neurophysiology comes FIRST, recording engineering second, for the explanation...
Your central nervous system was designed to work with head shading. It increases the volume differential between the ears allowing more accurate location of the threat. Timing also changes. In order to produce an accurate image you have to be equidistant from speakers balance correctly and both speakers have to have the exact same frequency response curve. Very few systems meet all these criteria and do not image as well as is theoretically possible. Yes, the way the recording was done influences all of this.


I think we are predominantly in agreement and acoustic cross-talk cancellation is an area of both academic and professional research for me.  Of note, the speakers I PM'ed you about have some ability to correct frequency response both direct and reflected.

The volume differential from head shading is critical above ~1,500 Hz, but practically, within a limited range wider than just a perfect sweet spot, you can achieve this if that is your goal. It is conditional on speaker dispersion; otherwise, when you correct, you will create as many problems as you solve.

W.R.T. timing, the current literature and consensus on whether timing in recordings is accurately portrayed in a stereo speaker setup is debatable, and the argument is leaning towards the conclusion that the timing information as perceived is not what was captured. The reasons I illustrated above, but the biggest is crosstalk and filtering due to reinforcement and cancellation from the same sound having different arrival times. This is best illustrated by comparing timing panning using speakers and headphones, both with narrow-band (<=1,000Hz) and wider-band signals. Lots of trade-offs too: going wider on the speakers can improve extraction of timing detail, but screws up other location aspects and hurts the center image. Go narrower and you get a more accurate center image. The reality is that 2-channel via speakers is imperfect. Signal processing will get us closer to reality, but it faces an uphill commercial struggle and has its technical issues. More speakers just increases crosstalk issues, but more speakers working under the concepts of ambisonics has the potential to move us forward.



With a good system one can sit comfortably in a chair and enjoy an accurate image. If you move side to side enough you will hear the center image melt. With line source speakers you can move all the way to a side wall, and the instruments mixed to the other side will still be loud and clear, coming from that side as if you were at a concert, but the center image will be vague. With point source speakers the volume drops off much more acutely with distance, so the center image shifts entirely to the side you are on, including instruments in the center and those mixed a little to the opposite side.

I use line source ESLs which have been digitally corrected and produce identical frequency response curves. I frequently have to adjust the balance with different records a few dB to improve the focus, something you would never notice in most systems because the image specificity is just not there. Volume and timing have to match up!
As you would expect, some recordings produce better images than others. Mono records cannot be listened to from the listening position.
It sounds like you are listening through a crack in a door; weird. I sit off center when I listen to mono. Everything opens up. 
I have listened to corrected point source speakers, particularly a friend's Watt/Puppy + JL Audio subwoofer system, and dead-on center it produces a beautiful miniature image. Move off center and it falls apart, as you would expect. 
It is sort of the exact opposite of what the OP says: the more noticeable the sweet spot, the better the system. If you cannot differentiate the exact center from two feet over, your system is not imaging. Some people may be happier this way. Ignorance is bliss.
 
I don't agree with that, mijostyn. Imaging comes from both volume cues (predominant by far in most multi-channel studio recordings) and timing from a proper stereo microphone setup, which is rather uncommon. This is a long post, but all relevant.


With good dispersion and non-symmetric toe in, you can get reasonably accurate volume cues over a wider range.  That provides two significant mechanisms for location,  1) Relative volume level,  and 2) Frequency dependent head shading. 


What you can't compensate for is timing, but there are two issues, a) Was timing even captured, and b) Can timing be conveyed with speakers in a traditional two channel audio setup, both because of the extreme accuracy needed in head placement, and the inability to prevent sound from one speaker reaching the opposite ear.

0.1" of head mis-position = 1.6 degrees of timing inaccuracy
0.5" = 8.2 degrees
2" = 32.7 degrees


So let's say you are sitting 10 feet back from the center line of your speakers at 60 degrees. A 2-degree toe-in difference only represents about 3.8 degrees of image movement, and the movement will be true for all sounds, i.e. the image shifts left or right. If the toe-in is symmetric, 3.8 degrees represents moving your head left-right about 3". At a 5-degree toe-in difference, you are looking at a 10-degree offset, and about 7" of side-to-side head movement (14 inches total range). You just moved from the best seats to pretty good seats.
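The head-position figures above can be sanity-checked with simple geometry. A minimal sketch, assuming a 60-degree speaker setup and an arbitrary reference frequency of 600 Hz (the post does not state which frequency its numbers assume, so this only roughly reproduces them):

```python
import math

C = 343.0      # speed of sound, m/s
INCH = 0.0254  # metres per inch

def itd_phase_error(head_offset_in, half_angle_deg=30.0, freq_hz=600.0):
    """Phase error (degrees) at freq_hz caused by moving the head sideways
    by head_offset_in inches, with speakers subtending 2*half_angle_deg.

    Sideways motion of d lengthens the path to one speaker and shortens it
    to the other by ~d*sin(theta) each, so the interaural arrival-time
    difference changes by 2*d*sin(theta)/c.
    """
    d = head_offset_in * INCH
    path_diff = 2.0 * d * math.sin(math.radians(half_angle_deg))
    itd = path_diff / C                 # seconds of timing error
    return itd * freq_hz * 360.0        # degrees of phase at freq_hz

for offset in (0.1, 0.5, 2.0):
    print(f'{offset}" offset -> {itd_phase_error(offset):.1f} deg at 600 Hz')
```

With these assumptions, 0.1" of offset works out to about 1.6 degrees of phase error, in line with the first figure quoted; the larger offsets land close to, but not exactly on, the other quoted values.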

Of course much of this is literally fuzzy anyway. When you have your speakers at 60 degrees, head shading to both ears creates an improper center image. You may have recorded timing information, but because you have no crosstalk cancellation, you have a secondary timing event about 0.2 ms later confusing the brain on whether that is the event, an echo, etc. The singer (continuous tones) is properly placed, but perhaps a bit fuzzy due to the aforementioned issues of shadowing for volume, while the drum hit off to the side gets confused by the false secondary timing event.

Oh, so it is easy ... ya, no. There is one other huge issue in capturing timing differences with stereo microphones. You are now playing back the same signal delayed in time between two speakers. Guess what that does when it hits the head? Filtering! Comb filtering effects will be evident and significant as the fixed timing delay reinforces and cancels depending on the frequency. Oh, but it gets even better ... I mean worse. Whereas timing only contributed spatial cues at <1,500 Hz, those new comb filtering effects you generated are now across the frequency range. You think you widened the stereo image, but really you created an auditory illusion of space that is not representative of the timing recorded. The timing becomes a level-difference perception.  *** Note that now, head accuracy becomes far less critical ***
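The comb-filtering mechanism described here is easy to sketch: summing a signal with a copy of itself delayed by tau produces cancellation at odd multiples of 1/(2*tau). A minimal illustration (the 0.5 ms delay is an assumed example value, not a figure from the post):

```python
def comb_nulls(delay_ms, f_max_hz=20000.0):
    """Null frequencies (Hz) when a signal is summed with a copy of
    itself delayed by delay_ms: cancellation at f = (2k+1)/(2*tau)."""
    tau = delay_ms / 1000.0
    nulls, k = [], 0
    while True:
        f = (2 * k + 1) / (2.0 * tau)
        if f > f_max_hz:
            return nulls
        nulls.append(f)
        k += 1

# A 0.5 ms interchannel delay puts nulls near 1, 3, 5, 7 ... kHz,
# well above the <1,500 Hz region where timing cues matter.
print(comb_nulls(0.5))
```

Note how a delay small enough to be a plausible stereo-mic timing cue scatters nulls across the whole audible range, which is the post's point about the "widened" image being an artifact.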


And just to be clear, stereo speakers attempting to reproduce timing can't place the image outside the speakers (see crosstalk above).  Of note also, timing only really works at <1,500Hz, and predominantly <1,000Hz.  So to all those "phase" "phase" "phase" people, less posting, more learning, and for those buying or making speakers, keep the crossover out of the 200-1500Hz range if you can.

So what can be done?
- Signal processing akin to noise cancellation, but in this case, to reduce cross-talk
- Headphones with signal processing to replicate the body functions (head shading, reflections, etc) that are lost without an audio field.

@mijostyn,

Another completely useless post, that. You excel at harping without contributing.
Anybody who thinks they have a wide sweet spot either does not know how to evaluate an image or has no sweet spot at all. High frequencies blasted all over the room do not constitute a sweet spot. It is all about the image, not dispersion. 
By way of background: the ear localizes sound by two mechanisms, arrival time and intensity. If the arrival times from both speakers are identical, the image will be shifted towards whichever speaker is loudest. And if the intensities are identical, the image will be shifted towards whichever speaker's output arrives first. With conventional speakers, as you move off to either side of the centerline, the near speaker "wins" BOTH arrival time and intensity, thus the image shifts towards the near speaker, often dramatically so.

What I’m going to suggest is sometimes called "time-intensity trading", as the off-centerline listening locations which have a later arrival from one speaker compensate by having greater intensity (loudness) from that speaker.
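As a rough illustration of the trade being described, one can compute the near speaker's arrival-time lead for an off-center seat and ask how much extra level the far speaker would need to compensate. The room geometry here is an assumed example, and the trading ratio is an assumption too (figures on the order of 0.05-0.2 ms per dB appear in the localization literature):

```python
import math

C_FT = 1130.0  # speed of sound, ft/s

def arrival_lead_ms(listener_x_ft, listener_y_ft, spacing_ft=8.0):
    """Arrival-time lead (ms) of the near speaker for a listener at
    (x, y) feet, with speakers at (+/- spacing/2, 0)."""
    near = math.hypot(listener_x_ft - spacing_ft / 2, listener_y_ft)
    far = math.hypot(listener_x_ft + spacing_ft / 2, listener_y_ft)
    return (far - near) / C_FT * 1000.0

# Listener 2 ft right of centre, 10 ft back, speakers 8 ft apart.
lead = arrival_lead_ms(2.0, 10.0)
# Assumed trading ratio of ~1 dB per 0.1 ms -- an assumption, not a
# figure from this thread:
print(f"near-speaker lead: {lead:.2f} ms "
      f"-> ~{lead / 0.1:.0f} dB more needed from the far speaker")
```

Even a modest 2 ft slide off center produces over a millisecond of lead, which shows why the intensity compensation from asymmetric toe-in has to be substantial for time-intensity trading to hold the image.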



I will use your excellent post to illustrate a listening experiment of mine, suggested to me by this Japanese research article... adding to your information the idea of 4 critical thresholds linked to LEV and ASW...



https://www.researchgate.net/publication/223804282_The_relation_between_spatial_impression_and_the_l...




My experiment is simple and greatly improves the "imaging" but also the "encompassing sound" factor, i.e. the auditory source width (ASW) and the listener envelopment (LEV).

I use small Helmholtz pipes of the right volume-to-neck ratio near the tweeter and near the bass driver, but in an asymmetrical fashion between the 2 speakers... One speaker's tweeter is linked to 2 different Helmholtz pipes; the other is not... One speaker is linked with 2 different Helmholtz pipes near the bass driver; the other speaker is not... The difference in timing of these frequencies between the 2 speakers illustrates the 4-thresholds law the Japanese scientists spoke about... This experiment is mine and is not in the article...

The effect is huge, and is explained by the Japanese article on the law of the first wavefront linked to their 4-thresholds law in audio...
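For readers curious about the physics behind such pipes: a Helmholtz resonator's tuning follows from its neck area, neck length, and cavity volume. A hedged sketch using the standard formula with a common end correction (the example bottle dimensions are hypothetical, not the actual devices described above):

```python
import math

C = 343.0  # speed of sound, m/s

def helmholtz_hz(neck_area_m2, neck_len_m, cavity_vol_m3):
    """Resonant frequency of a Helmholtz resonator:
    f = (c / 2*pi) * sqrt(A / (V * L_eff)),
    with an end correction of ~0.85*radius added at each
    open end of the neck."""
    r = math.sqrt(neck_area_m2 / math.pi)
    l_eff = neck_len_m + 2 * 0.85 * r
    return (C / (2 * math.pi)) * math.sqrt(
        neck_area_m2 / (cavity_vol_m3 * l_eff))

# Hypothetical small bottle: 2 cm neck diameter, 5 cm neck, 0.5 L cavity.
f = helmholtz_hz(math.pi * 0.01 ** 2, 0.05, 0.0005)
print(f"resonance ~ {f:.0f} Hz")
```

Shrinking the cavity or widening the neck raises the tuning, which is presumably how different "volume/neck ratios" target different parts of the range.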

This is my latest experiment and device... I will put it in my audio thread, "miracles in audio", where I describe my audio journey...

COST: PEANUTS...

Effect: way better imaging and also better timbre...

Conclusion: imaging is not ONLY the result of the structural electronic engineering of the speakers, as erroneously suggested in this thread, and ONLY of their location, but first and last mostly the result of the law of the first wavefront and of its 4 thresholds in acoustics...


I will repeat the definition of Toole of the law of the first wavefront in his main work :

«In audio in the past, the terms Haas effect and law of the first wavefront were used to identify this effect, but current scientific work has settled on the other original term, precedence effect. Whatever it is called, it describes the well-known phenomenon wherein the first arrived sound, normally the direct sound from a source, dominates our impression of where sound is coming from. Within a time interval often called the “fusion zone,” we are not aware of reflected sounds that arrive from other directions as separate spatial events. All of the sound appears to come from the direction of the first arrival. Sounds that arrive later than the fusion interval may be perceived as spatially separated auditory images, coexisting with the direct sound, but the direct sound is still perceptually dominant. At very long delays, the secondary images are perceived as echoes, separated in time as well as direction. The literature is not consistent in language, with the word echo often being used to describe a delayed sound that is not perceived as being separate in either direction or time. Haas was not the first person to observe the primacy of the first arrived sound so far as localization in rooms is concerned.»

Sound Reproduction: The Acoustics and Psychoacoustics of Loudspeakers and Rooms, Floyd Toole, Chapter 6, p. 73
A lot of these so-called true audiophiles think fuses, wires, and speakers with a FR that looks like the Snake River provide the best listening experience. Give me a pair of Genelec "The Ones" set up right and you can have a sweet spot wider than the size of your head, sacrificing nothing.
I guess my point is that to me, the true audiophile requires the best his/her system can deliver. Trying to do that for multiple positions means you are going to sacrifice the best. That's okay if that's what you want.

I do not buy into the thought, 'I'm a true audiophile because four people can listen to 97% of what my system can do.' Doesn't work that way for me.  I'll take 1x100%

I've heard MBLs plenty at shows, and the good old Bose 901s of yesteryear. I always thought the MBLs were great, if you like that kind of spread-out sound. I like a bit more definition, and for a singing head to sound the size of a head.

Like Richard Vandersteen said, most narrow-baffle speakers have good off-axis radiation patterns, so using an omnidirectional speaker is one way to diffuse the direct sound with reflected sound (homogenization) for a larger “sweet spot”, albeit a less resolved one.



A guess tossed out earlier (perhaps too boldly):
"It’s great to want a large sweet spot, if you use it, and if you have company that can actually appreciate it. But, understand you are not getting the very best your system can offer. It may be the best it can average out to over a large zone though."

Thanks ctsooner for sharing, and Richard V for minor validation:

"The only way to make the “sweet spot” larger is to lower resolution and homogenize the signals enough to make the presentation mediocre everywhere. RV"    
Tom Danley on the Synergy horn (emulating a perfect point source per channel/speaker), which will see a domesticated version in the shape of the Signature Series:

https://www.youtube.com/watch?v=MBl5lhmzRKA
I wonder why KEF chose to time align and phase correct the amplified LS-50 ?
I am guessing they too are aiming..... low.
An interesting metaphor comes to mind about how active, and able to be activated, a room is... The room is not ONLY a set of passive walls waiting for the sound waves to bounce off them, partially reflected, absorbed, or diffused... That is the market mythology of those who simplify acoustics to sell easy-to-use products... Like I already said, this is only HALF of the story...


The other Half is connected with my metaphor:

What is the difference between a violin and a room?

No difference at all....

Imagine that the sound waves crossing my room 80 times in one second could be modified, on each of their multiple crossings, by a tightening of the air, a compression of the air in different zones. This works exactly like the mechanism on a violin that tightens or relaxes the tension of the strings: here in the room, different pressure engines, in the form of bottles, tubes, and pipe devices, make the air "tighter" at a set of different frequencies, as a function of their volume/neck ratio, exactly as the violin's mechanism tightens the strings...

The room becomes a violin, and acoustics is the art of tuning it...

Then never mind the source of the information, digital or analog, coming from the speakers in the form of waves: what we listen to is the room/violin interpreting these physical direct waves from the speakers, ALWAYS mediated by the early and late reflections, yes, but also mediated by the different pressure zones of the room, in the form of the pressure engines but also in the form of the interacting waves themselves in relation to the geometry of the room, which create in the room an array of cellular pressure zones...

The impact of the room on the sound we hear is so huge that arguing about the sound of different pieces of gear is most of the time ridiculous... For sure, no two speakers or amplifiers sound alike, but the room/violin is hugely more impactful on the sound you will hear than the choice between a Pioneer amplifier and a Sansui one... More than that, you could modify the room, completely transform its response, and make your amplifier another beast entirely... It is true of the speakers also... This is why reviews are relative, to say the least... The sound of ANY system is mediated and transformed by the room in the end, because our ears/brain use the room to make the sound, as a violinist uses the body of the violin to amplify and transform the sound...

For sure there are many other factors, like the way we can use materials to act on the timing of the direct waves relative to the early and late reflections, and the use of reverberation... I used all that too, but in an intuitive way, listening to my room for the tuning, and I will let the specialists explain all that far better than me...

My post is only here to say something rarely said and never insisted upon, compared to the huge marketing of electronic design in audio threads...

I am in no way a scientist or an acoustician...

All these reflections are more the results of my experiments than of direct knowledge...

If I am wrong, correct me...

Thanks...





I wanted to share what Richard Vandersteen thinks about sweet spot.  This is a direct quote:

Most speakers today especially those with narrow baffles have a wide dispersion pattern and therefore will have a decent stereo image off axis. Having noted this if there is only a small improvement when sitting in the “sweet spot” this is a sure indication of low resolution as imaging is created by small differences in time, phase, amplitude and differential time between left and right channels. Most of these ingredients are at least compromised when the listener is not equal distance from the two speakers. Evidence of high resolution, time, phase accuracy and reasonable acoustic symmetry within the room is a significant improvement of all things coveted by most audio enthusiasts when seated in the “sweet spot”. The only way to make the “sweet spot” larger is to lower resolution and homogenize the signals enough to make the presentation mediocre everywhere. RV    
In order for me to get a wider sweet spot, I use four identical speakers, two on the right and two on the left. My preamp has XLR and RCA outputs, both active at the same time, feeding two separate amplifiers. That's it. It sounds so amazing in my room and to my ears 😌😌😌
Headphones are like a room: we must adjust the frequency response of the driver and the frequency response of the shell "room"... Between the two there is a gap, and in this gap is where we can implement our possible controls and tuning...

The timbre perception in a headphone is the most important characteristic, as in a room... We can improve it by modifying the damping of the shell or its geometry... As in a room. And as in a room, the recording source does not contain all the information necessary for the ears to recreate the timbre or imaging perception; we must add what is missing for a perfect illusion, we must control the shell as we control our room for the best possible illusion... For sure we can listen to in-ear headphones, and there we seem to have a more direct experience of the direct sound, in the sense of what the recorded information at the live event was, but is that right?

No, because the recording of the original live event was incomplete, or better said imperfect, because of the trade-offs related to the recording process, locations, and types of mics.

It is for this reason that in-ear headphones are not better than speakers for recreating timbre perception... And probably less efficient at recreating the illusion of a live performance, as if the musicians were playing right now in front of us... We can improve the "room" shell of a normal headphone, and the room, with many controls, but it is more difficult with very small in-ear headphones...

The best experience of music is, for the time being, always with speakers in a controlled room...


For sure... Nobody reinvents the wheel... But the Japanese article is very clear...

I don't pretend to anything myself, except being the father of this maxim:

Don't upgrade; embed everything right before...

😊


By the way, I enjoy precise, very good bass that I feel in my stomach from a 7-inch driver, in a small square room, 13 feet by 13 feet, with a bad location for one speaker in a corner, thanks to the Helmholtz activation method for the room... Passive material treatment is only half of the story...


Just a remark about the direct sound....

Image focus comes almost entirely from the direct sound.
There is no direct sound separated from reflected sound, early and late reflections, for the brain... The brain works with all three, direct, early, and late, at the same time, in milliseconds, to recreate the image and timbre experience...

Even in nearfield listening, room treatment and controls work in a huge way because of that... When someone speaks of direct sound it is a "physical concept" about the wave coming from the source, but acoustically, for the ears, there is NO solely direct-sound perception in a closed small room; the ears recreate the sound perception in milliseconds from the physical direct sound and the early and late reflections... Here we must distinguish physical concepts from neuro-acoustical ones... The sound we hear IN A SMALL ROOM is never the direct sound... It is a composite of the different multiple waves, summed into one interpretation by the brain... Many people, missing this distinction, affirm that nearfield listening can spare someone room treatment, because of these confusions... The most astonishing fact in audio for me was meditating on the fact that the sound waves cross my room 80 times per second... Then what I listen to is this composite sum of waves that I interpret as music in my room...

For sure, for the brain, the differences between what is direct sound and what are early or late reflections are linked to timing and distance in the room and the location of the listener... It is relative... The brain recreates the sound when I move in my room from these 3 physical components, but what I listen to is always a COMPOSITE of the three...

I can, for sure, glue my ears to within some inches of the driver to hear ONLY the direct sound, but I am not sure that this noise deserves the name music... But even in a headphone my brain creates the musical sound from the direct physical waves of the source together with the early and reflected waves of the shell "room"... No two headphones sound the same, in great part because of the shell's vibrations and reflective properties...
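The "80 times per second" figure is easy to check with back-of-envelope arithmetic: at roughly 1,130 ft/s, a wavefront traverses a 13-foot room (the size mentioned elsewhere in the thread; assumed here) on the order of 80-90 times each second.

```python
SPEED_FT_S = 1130.0  # speed of sound, ft/s (approx, room temperature)
ROOM_FT = 13.0       # room dimension, ft (assumed from the 13x13 ft room)

crossings = SPEED_FT_S / ROOM_FT  # one-way traversals per second
print(f"~{crossings:.0f} traversals per second")
```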


That Japanese science thing is very similar to what I have heard from years ago and what Duke has talked about as well. 

Image focus comes almost entirely from the direct sound. Reflected sound affects this differently depending on the amount of delay. Within a window of about 3-5ms it is too close in time and imaging suffers. Sound travels about 1ft/ms. This is where the advice to place speakers several feet from walls comes from. Beyond about 5ms reflected sounds contribute to a perception of space. This is where the sense of envelopment comes from. 
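The 3-5 ms window and the ~1 ft/ms rule of thumb translate directly into speaker placement. A crude mirror-image sketch (assuming, for simplicity, that the listener sits on the speaker's axis and the sidewall runs parallel to it):

```python
import math

C_FT_PER_MS = 1.13  # sound travels ~1.13 ft per millisecond

def reflection_delay_ms(speaker_to_wall_ft, speaker_to_listener_ft):
    """Delay of the first sidewall reflection behind the direct sound,
    via the mirror-image source: the reflected path is the distance
    from the listener to the speaker's image across the wall, which
    sits 2*d to the side of the speaker."""
    direct = speaker_to_listener_ft
    reflected = math.hypot(direct, 2 * speaker_to_wall_ft)
    return (reflected - direct) / C_FT_PER_MS

for d in (1.0, 3.0, 6.0):
    print(f"{d} ft from wall -> reflection {reflection_delay_ms(d, 10.0):.1f} ms behind")
```

Under these assumptions, pulling the speaker from 3 ft to 6 ft off the sidewall pushes the reflection from well inside the problematic window out toward the ~5 ms region, which is the geometric reason behind the "several feet from walls" advice.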

That is of course far from the whole story. That is just one aspect of it. The initial wave front. Really accurate low bass is associated with large spaces and is another factor in envelopment. Then there is the spectrum of direct sound to the reflected, diffuse sound. And more. They all go together. 

These are all closely related and similar. There is more difference in the language being used to describe them and from what point of view than anything else.
Mahgister -- OK. Now I’m following this. Your later posts seem (at least to me) to use the ordinary meaning of timbre. What I did not understand was the relation of this to things like "imaging" or "soundstage", which I believe in this context are essentially 'something else'.
These are difficult concepts of acoustics that I worked hard to understand a bit in a few hours; I cannot make them simpler than the Japanese article about imaging and soundstage does...

Nor simpler than Toole's explanation in his book...

I have given the address of the article, and the book is on the net, free to read...

I cannot create longer posts here and take 3 or 4 hours to make them clearer...

I give the gist of the problem...

All that was to argue with someone who was arguing with everybody here... 😁

I am not a scientist, but I learned how to read in my 45 years of daily work: counselling students on books and their reading abilities in almost any field... I know nothing, but I can create relations between multiple fields rapidly... It has served me well in creating my own audio system at peanuts cost, by parsing the essential bits of information percolating through audio threads... I only made a synthesis of those bits, and I call that working with the three embedding controlled dimensions of any audio system... I discovered that this triple tuning of a system is more important than the system itself... Simple, no?

I hate the word "tweaking" because it misses the point, being interpreted as SECONDARY additions rather than essential installation controls. Dogmatic minds easily call such things "snake oil" because they are costly, or "placebo" because they are not always very audible in some conditions... For sure, true snake oil and placebo effects exist... But throwing the baby out with the bathwater is not a solution...

My best to you...

If you read carefully about the law of the first wavefront and the paper of the 2 japan scientists written in 2008, you will begin to understand why imaging is possible and guess how we can make it with materials means in the context of this law of the first wavefront and his relation to early and late reflections balance in a room...

You will also immediately understand why it is impossible to recreate a natural timbre perception in a room where no imaging is clearly delineated or possible....

That is the reason why I affirmed that timbre perception is the benchmark for judging the balanced, or disruptive, relation between room and gear...

The recording source, I will repeat, contains information and cues about the original musical event, but because of the engineer's recording choices about mic locations and types, the information about timbre and imaging is not complete without the dynamic addition of what is missing from the recording source and is potentially present in the activated room of the listener, which makes possible the recreation of imaging and timbre perception... The frequency response of the controlled room synchronizes itself with the frequency response of the audio system.... That is my way of describing it, but I am not an acoustician....

I learned all that in a few hours of arguing with someone who does not seem to know the timbre concept nor the imaging concept... And from my earlier experiments and experience with my room problem, now solved...


I am not a scientist, only an average listener dreaming of Hi-Fi at low cost....

I succeeded, and anybody with a room can....


Thanks for the translation.... 😊

Save that there are other means of control in the acoustic dimension, and others in the mechanical and electrical dimensions, for sure...

A remark:

If you couple this Helmholtz idea with the ideas of the two Japanese scientists I cited in a preceding post, about the law of the first wavefront and its relation to auditory source width (ASW) and listener envelopment (LEV), which give us a very precise set of experiments for understanding how it is possible, through room material treatment and room controls, to create a balance that lets us create an image width compatible with an enveloping sound for the listener, then we have some idea of how it is possible to make the room an activated entity in the recreation of sound, imaging and timbre, and no longer a set of passive walls...

I will give their introduction here and their conclusion....


«In 1989, Morimoto and Maekawa demonstrated that spatial impression comprises at least two components and that a listener can discriminate between them [1]. One is auditory source width (ASW), which is defined as the width of a sound image fused temporally and spatially with the direct sound image, and the other is listener envelopment (LEV), which is defined as the degree of fullness of sound images around the listener, excluding a sound image composing ASW [1,2].»

«In conclusion, it seems that the results of the three experiments shown in this paper are evidence in favor of the hypothesis that the components of reflections under and beyond the upper limit of validity for the law of the first wavefront contribute to ASW and LEV, respectively. Accordingly, it is possible to control ASW and LEV independently by controlling the physical factors for each component. The important point is that it is necessary to provide reflections beyond the upper limit in order to generate LEV. Furthermore, it is clarified that reflections beyond the thresholds of LEV do not always lead to disturbance. In other words, it is possible to make the listeners perceive LEV without causing disturbance.»

https://www.researchgate.net/publication/223804282_The_relation_between_spatial_impression_and_the_l...

I will repeat what the LAW OF THE FIRST WAVEFRONT is:


«In audio in the past, the terms Haas effect and law of the first wavefront were used to identify this effect, but current scientific work has settled on the other original term, precedence effect. Whatever it is called, it describes the well-known phenomenon wherein the first arrived sound, normally the direct sound from a source, dominates our impression of where sound is coming from. Within a time interval often called the "fusion zone," we are not aware of reflected sounds that arrive from other directions as separate spatial events. All of the sound appears to come from the direction of the first arrival. Sounds that arrive later than the fusion interval may be perceived as spatially separated auditory images, coexisting with the direct sound, but the direct sound is still perceptually dominant. At very long delays, the secondary images are perceived as echoes, separated in time as well as direction. The literature is not consistent in language, with the word echo often being used to describe a delayed sound that is not perceived as being separate in either direction or time. Haas was not the first person to observe the primacy of the first arrived sound so far as localization in rooms is concerned.»

Sound Reproduction: The Acoustics and Psychoacoustics of Loudspeakers and Rooms, Floyd Toole, Chap. 6, p. 73
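To put rough numbers on the fusion-zone idea above, here is a quick back-of-the-envelope sketch in Python (my own illustration, not from Toole's book; the distances are invented): a reflection's delay relative to the direct sound is simply its extra path length divided by the speed of sound, about 343 m/s in room-temperature air.

```python
# Back-of-the-envelope: a reflection's delay is its extra path length
# divided by the speed of sound.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def reflection_delay_ms(direct_m: float, reflected_m: float) -> float:
    """Delay of a reflected arrival relative to the direct sound, in ms."""
    return (reflected_m - direct_m) / SPEED_OF_SOUND * 1000.0

# Listener 3 m from a speaker; a sidewall bounce travels 4.5 m in total.
print(f"{reflection_delay_ms(3.0, 4.5):.1f} ms")  # ~4.4 ms, well inside a typical fusion zone
```

Delays of a few milliseconds like this fuse with the direct sound; only much longer path differences start to be heard as separate events.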


Mahgister--  OK. Now I'm following this. Your later posts seem (at least to me) to use the ordinary meaning of timbre.  What I did not understand was the relation of this to things like "imaging" or "soundstage", which I believe in this context are essentially 'something else' (perhaps not 'red herrings', but not crucial to what you're talking about).  But yes, surely the reproduction of timbre correctly will improve our experience of music (and even perhaps enhance the feeling that instruments are 'right there' or 'over there'.)
djones51-
I’ve never understood what Mahgister was talking about, especially concerning timbre.

I’ll give my layman version, timbre is how I can tell a trumpet from a clarinet playing the same notes.
Right. We don’t even need a fancy audiophile definition for timbre; the regular dictionary one is plenty good enough:
the character or quality of a musical sound or voice as distinct from its pitch and intensity.
The character or quality we are talking about is what distinguishes a violin from a viola, an alto sax from a tenor, a flute from a piccolo, even when both are playing the same note at the same volume. That note is never a pure tone; it is always a complex combination of harmonic overtones. The particular way the relative values of all those harmonics combine is timbre.
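The harmonic-recipe idea above can be sketched in a few lines of Python (my own toy illustration; the harmonic weights are invented and only loosely "clarinet-like" vs "trumpet-like"): two tones with the same 440 Hz fundamental but different overtone weights produce different waveforms, and that difference is what we hear as timbre.

```python
import math

def partial_sum(t: float, fundamental: float, harmonic_amps: list) -> float:
    """Sample, at time t, a tone built from a fundamental plus weighted overtones."""
    return sum(amp * math.sin(2 * math.pi * fundamental * (n + 1) * t)
               for n, amp in enumerate(harmonic_amps))

# Same pitch (A4 = 440 Hz), two invented harmonic recipes:
clarinet_like = [1.0, 0.0, 0.5, 0.0, 0.3]  # odd harmonics dominate
trumpet_like = [1.0, 0.8, 0.6, 0.5, 0.4]   # even and odd harmonics both strong

t = 0.0013  # an arbitrary instant within one cycle
print(partial_sum(t, 440.0, clarinet_like))
print(partial_sum(t, 440.0, trumpet_like))  # different waveform, same pitch: timbre
```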

What acoustic embedding has to do with it I don’t know; I don’t even know what acoustic embedding is, much less the other two, though I have tried to figure out what he’s talking about.


Okay, well, the way I read mahgister, embedding is just another way of saying tune or control. Helmholtz resonators, for example, are one sort of acoustic control. Air pressure goes through an opening, in a bottle or straw, into a space, and back out again. In the process of going through the restriction it gives up energy. So a Helmholtz resonator is like a shock absorber. In reality it is just another sort of tube trap. It is also fundamentally the same as, or related to, porting in a speaker cabinet. All the same sort of thing.

Your room, any room, has its own particular set of resonant frequencies. Why do you think so many people have the same bass problems in the same areas? Because the rooms are so similar in dimension. The Helmholtz resonator can be tuned by its size and shape to damp these room resonance modes.
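The tuning relationship itself is simple enough to sketch (my own back-of-the-envelope, not anyone's actual device in this thread): the classic lumped-element formula is f = (c / 2π) · √(A / (V · L_eff)), where A is the neck's cross-sectional area, V the cavity volume, and L_eff the neck length plus an end correction whose exact factor varies with geometry.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s

def helmholtz_hz(neck_area_m2: float, neck_len_m: float, cavity_vol_m3: float) -> float:
    """f = (c / 2*pi) * sqrt(A / (V * L_eff)), with a rough neck end correction."""
    radius = math.sqrt(neck_area_m2 / math.pi)
    l_eff = neck_len_m + 1.7 * radius  # end-correction factor varies with geometry
    return SPEED_OF_SOUND / (2 * math.pi) * math.sqrt(
        neck_area_m2 / (cavity_vol_m3 * l_eff))

# A 10-litre vessel with a 5 cm long, 3 cm diameter neck:
neck_area = math.pi * 0.015 ** 2
print(round(helmholtz_hz(neck_area, 0.05, 0.010), 1))  # ~53 Hz: room-mode territory
```

A bigger cavity or a longer, narrower neck lowers the frequency, which is exactly why size and shape let you aim the resonator at a specific bass mode.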

Okay, so now take a look at what we have so far: timbre is the exact combination of harmonics that tells us which instrument is which. Room resonances affect different frequencies differently. Therefore, controlling them will help reproduce timbre accurately, making each instrument sound more like it should.

Replace "controlling" with "embedding" and you got it. Same for the other two embeddings, vibration and fields. Got it?
I’m glad you have a really great room, mine is my living room so I do what I can but I don’t have any complaints.
The most important thing is learning to be happy.... You have it.... then you are lucky.... All the rest is only hobby matter....

But it is true that owning a dedicated room is one of the more important assets in the audio experience.... Not the gear, most of the time, as everybody always thinks.... It is simply because acoustic control is so powerful.... Using all its facets is easier in a dedicated room....


My best to you, and I apologize for my sometimes rude answers.... Here we sometimes lose control of ourselves.... I am too passionate.... You are wiser than I am....


I don’t know, my old AKG 701s sound pretty good. I could tell drums from pianos, so they get the timbre. You can also EQ headphones.
Djones, me too: I thought at first that my headphones were good...

It is only with many different headphone comparisons, and with my speakers' increased S.Q., that I began to love them less, and at some point never used them...

EQ is like my modifications: only a partial solution...

I never realized directly, using them at first, what I was missing; that came with my room and gear control improvements...

Like I said elsewhere, NOBODY can directly experience the impact of the three noise floors of his system, which all together, if uncontrolled, greatly affect our S.Q. without us even knowing it at all....

Nobody ever listens directly to his house's electrical noise floor and says: "I know where you are"....

It takes some form of control to realize the level of the noise floor....

Nobody listens to his speakers and says to them "I know you vibrate and negatively impact the sound".... You put anything under them and you listen for a change, more positive or more negative.... It is through these experiments that I learned about the presence of my specific noise floors...
I'm glad you have a really great room, mine is my living room so I do what I can but I don't have any complaints.
I don’t know, my old AKG 701s sound pretty good. I could tell drums from pianos, so they get the timbre. You can also EQ headphones.
If you’re listening through headphones then you can toss out the room and it’s all up to the equipment.
What do you think the shell of a headphone is?

A ROOM.... Most of the time a bad room... a room with hard trade-offs that you can modify and control better with damping, for example.... I modified all my headphones with success because they were all unsatisfying...

I tossed my 7 headphones in a drawer: 2 Stax, 2 dynamic, 2 magneplanar, one hybrid.... Only the hybrid one has good timbre recreation, but it has other limitations....

My room now is SO good at 2 listening locations that listening to headphones is unbearable....

Some years ago it was the opposite: listening to the same speakers was unbearable at times because of their limitations... In fact the problem was never my gear but the 3 uncontrolled noise floors: mechanical, electrical and acoustical....
I assumed what we heard in relation to timbre was on the recording
Try any recording on a bad system and try to distinguish clearly the different instruments playing and their distinctive timbre voicing...

Good luck....

After that, try it on a good system, with a low noise floor in all three of its working dimensions, especially the acoustical....


You will understand...


The information about timbre in any recording source is incomplete, by definition and by the choices of the recording engineer.... Inevitable trade-off choices... This is the bad news...

The good news is that we can compensate for this in our own room settings, by making our gear able to sound at its best potential.... Imaging, soundstage, but especially timbre, is the test that our controls of the noise floors are right.... It will never be the REPRODUCTION of the original event, which is impossible, but a good partial RECREATION...

Your room never reproduces your source; it recreates it....

If you're listening through headphones then you can toss out the room and it's all up to the equipment. I think you're overreaching with room treatments. I'm not saying some aren't important to smooth out the FR, but I'll take DSP to finish the job.
Range between tonal and noiselike character

- We have no control during playback, except w.r.t. dynamic range of our system, i.e. potential volume and noise floor, the rest is inherent in the recording.
How do I control this attribute in my room?

Controls of the mechanical, electrical and acoustical noise floors.... With the many homemade devices you mocked and which I used successfully at NO COST....

Time envelope in terms of rise, duration, and decay (ADSR, which stands for "attack, decay, sustain, release")

- With the exception of decay, which is room dependent, we have very limited control of this on playback
Controlling decay with MY acoustical settings is KEY here.... In my room...

Changes both of spectral envelope (formant-glide) and fundamental frequency (micro-intonation)

- Again frequency response
Yes, the frequency response potential of my room, modified by my Helmholtz tubes and pipes, which change the original response of my room....



Prefix, or onset of a sound, quite dissimilar to the ensuing lasting vibration

- Again, either in the recording or affected by the room.
Precisely: the acoustical controls in my room also play a greater part here than the source recording. Why?


Because the best source in the world, with the best system, will NEVER give a good and natural perception of timbre in a BAD ROOM....

Have you forgotten the CRUX of this discussion, possessed by the urgency to be right against all, at all cost, repeating this mantra of frequency response in the face of a complex problem?


The recording source is one HALF of the story when we speak about timbre perception; the most important half is the acoustical control of the room, which will permit, or not, a good or very good RECREATION of the information encoded in the source.... Remember that this information encoded in the source is NEVER complete nor perfect, by reason of trade-offs in the locations and types of mics used, the recording engineer's esthetic or practical choices....

Then the playback experience can never be equal to the lived experience...
This is the reason why the RECREATION of timbre perception, being a complex acoustical and fundamental experience, is the BENCHMARK test if we want to know whether our system is good or not.....

I'll give my layman version: timbre is how I can tell a trumpet from a clarinet playing the same notes. What acoustic embedding has to do with it I don't know; I don't even know what acoustic embedding is, much less the other two, though I have tried to figure out what he's talking about.
I've never understood what Mahgister was talking about, especially concerning timbre. I assumed what we heard in relation to timbre was on the recording. I'm glad someone could decipher his tome like posts.
There is a saying that people that understand a topic well can explain it in the simplest terms.

I am not sure there is a saying for the opposite, but I can show you some examples :-)

You are right this time, audio2design....

It is not always possible to reduce a very complex problem to simple terms.... The tensor curvature problem in geometry cannot be simplified.... especially not here...

The "timbre" concept and perception is of the same order...

But some here are very able to explain it in 2 words...

Frequency response only.....

It's definitely been one of my objectives to have a system that sounds good all around the room, even though it's still best when directly between the speakers. Speakers with smooth off axis performance and some degree of directionality in the treble seem to do the trick when given an appropriate toe-in. 
At some point Mahgister will learn or realize what he calls "timbre" is really just frequency response, though he will scream otherwise.
First- At some point audio2design will understand that the timbre concept, being a complex one, cannot be understood from only one field but by many at the same time... And most importantly, it cannot be reduced by the recording engineer to frequency response ONLY... even if he screams otherwise...
😁😊


Second- The reason this is so is that when we speak of timbre in the playback experience of an audio system, we speak of timbre not from the musician's perspective only, not from the recording engineer's perspective only, but from a more general acoustical viewpoint, including for sure the neurophysiology of hearing but also the particular listening history of the tested and testing subject: here, an audiophile listening to his system in his own specific room and perspective.... The listening history of the subject's ears plays a part, the specific gear of the playback installation plays a part, and the specificity of the room's acoustic plays another part.... Reducing all that to frequency response is an engineer's joke.... Or a bad reply to a complex subject....

Third- Reducing TIMBRE to frequency response only is so limited and beside the point, reflecting a purely narrow technological view, that someone saying this just proves he has no idea what timbre perception or production is... Why? Because the complex phenomena associated with timbre perception or production cannot be reduced to linear or non-linear frequency responses.... A problem spanning human perception, acoustic physics, neurophysiology, art, psychology and linguistics cannot be ONLY "really just frequency response"....


Fourth- Not only, then, do you seem to know nothing about timbre, but you don't even seem to know that you don't understand the statement of the problem itself at all...

Fifth- I will give you a clue:


Read this text, from a textbook on timbre, about the limitations of the Helmholtz definition of timbre; if you are able to understand these few sentences you will understand WHY timbre cannot be reduced to frequency response only:

Regarding timbre, Helmholtz stated: "The quality of the musical portion of a compound tone depends solely on the number and relative strength of its partial simple tones, and in no respect on their difference of phase" (Helmholtz 1877, p. 126). This exclusively spectral perspective of timbre, locating the parameter in the relative amplitude of partial tones and nothing else, has dominated the field for a long time. But it is interesting to note how narrowly defined his object of study was, the "musical portion" of a tone: "… a musical tone strikes the ear as a perfectly undisturbed, uniform sound which remains unaltered as long as it exists, and it presents no alternation of various kinds of constituents" (Helmholtz 1877, p. 7–8). By assuming completely stationary sounds, his notion of tone color was indeed a strong simplification of what is understood as timbre today. Most obviously, attack and decay transients are not considered by this approach. Helmholtz was quite aware of this fact: "When we speak in what follows of a musical quality of tone, we shall disregard these peculiarities of beginning and ending, and confine our attention to the peculiarities of the musical tone which continues uniformly" (Helmholtz 1877, p. 67). This means that Helmholtz’s approach to timbre had its limitations (cf., Kursell 2013).

Section 1.2.2, Timbre: Acoustics, Perception, and Cognition, by Kai Siedenburg, Charalampos Saitis, Stephen McAdams, Arthur N. Popper, Richard R. Fay, p. 7

Helmholtz was conscious, in his definition of timbre, that he had to put aside some characteristics that are very fundamental but secondary for his purely mathematical approach with Fourier series... But in the modern, more complete definition of timbre, what was put aside is at the core of interdisciplinary timbre studies....



Now, for your understanding, read this definition of timbre from Wikipedia, a very elementary and simplified one, and try to distinguish clearly WHY timbre cannot be reduced to frequency response ONLY...

Range between tonal and noiselike character
Spectral envelope
Time envelope in terms of rise, duration, and decay (ADSR, which stands for "attack, decay, sustain, release")
Changes both of spectral envelope (formant-glide) and fundamental frequency (micro-intonation)
Prefix, or onset of a sound, quite dissimilar to the ensuing lasting vibration

A clue: the mechanism by which we produce and perceive "timbre" is not reducible to pure linear mechanics only, nor to the body/gesture of the singer or the microdynamic gesture of the musician, nor simply to acoustic perception under variable and changing acoustic conditions; it also implies changes in the subject's brain and his precise, specific listening history... When a singer produces a tone there are far more factors at play than frequency response only.... It is the same for an audiophile recreating for himself, in a specific room with specific gear, the timbre experience and perception.... It is the same for speech sound recognition.... Impossible to reduce this complex problem to frequency response.....

«For an idiot using a hammer all the time, everything is a nail; and sometimes even for a wise man, if the hammer is near his hand, all he sees is nails»-Anonymus Smith

«My hammer was the only nail i had»-Groucho Marx