Some thoughts on ASR and the reviews


I’ve briefly taken a look at some online reviews of budget Tekton speakers from ASR and YouTube. Both are based on Klippel quasi-anechoic measurements used to produce "in-room" simulations.

As an amateur speaker designer and a lover of graphs and data, I have some thoughts. I mostly hope this helps the entire A’gon community get a little more perspective on how a speaker builder would think about the data.

Of course, I’ve only skimmed the data I’ve seen, I’m no expert, and have no eyes or ears on actual Tekton speakers. Please take this as purely an academic exercise based on limited and incomplete knowledge.

1. Speaker pricing.

One ASR review spends an amazing amount of time and effort analyzing the ~$800 US Tekton M-Lore. That price compares very favorably with a full Seas A26 kit from Madisound, which runs around $1,700. I mean, I’m not sure these inexpensive speakers deserve quite the nit-picking done here.

2. Measuring mid-woofers is hard.

The standard practice for analyzing speakers is called "quasi-anechoic." That is, we pretend to measure in a room free of reflections or boundaries. You do this with very close measurements (within 1/2") of the components, blended together. There are a couple of ways this can be incomplete, though.
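For readers curious what "blended together" can look like in practice, here is a minimal sketch of one common workflow (my assumption about the method, not something stated above): a level-matched near-field curve is spliced onto a gated far-field curve below a transition frequency. All numbers (the flat 88/85 dB curves, the 350 Hz splice) are made-up placeholders, not Klippel or Tekton data.

```python
# Minimal sketch of assembling a quasi-anechoic response: trust the near-field
# (close-mic) measurement below a splice frequency and a gated far-field
# measurement above it. All values here are placeholders, not real data.
import numpy as np

freqs = np.logspace(np.log10(20), np.log10(20000), 500)   # Hz
nearfield_db = np.full_like(freqs, 88.0)                  # pretend flat close-mic response
farfield_db = np.full_like(freqs, 85.0)                   # pretend gated 1 m response

splice_hz = 350.0   # assumed splice point near the gate's low-frequency limit

# Level-match the near-field curve to the far-field curve around the splice,
# then stitch: near-field below, far-field above.
band = (freqs > splice_hz / 1.5) & (freqs < splice_hz * 1.5)
offset = np.mean(farfield_db[band] - nearfield_db[band])
quasi_anechoic_db = np.where(freqs < splice_hz, nearfield_db + offset, farfield_db)

print(f"Splice offset applied to near-field data: {offset:.1f} dB")
```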

a - Mid-woofers measure much worse this way than in a truly anechoic room. The 7" Scanspeak Revelators are good examples of this. The close-mic response is deceptively bad, but the 1 m in-room measurements smooth out a lot of problems. If you took the close-mic measurements (as seen in the spec sheet) as correct, you’d make the wrong crossover.

b - Baffle step - As popularized and researched by the late, great Jeff Bagby, the effects of the baffle on the output need to be included in any whole speaker/room simulation, which of course also means the speaker should have this built in when it is not a near-wall speaker. I don’t know enough about the Klippel simulation, but if this is not included you’ll get a bass-lite experience compared to real life. The effect of baffle step compensation is more bass, but an overall lower sensitivity rating (rough numbers in the sketch below).

For both of those reasons, an actual in-room measurement is critical to assessing real speaker behavior. We may not all have the same room, but this is a great way to see the true mid-woofer response as well as the effects of any baffle step compensation.
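To put rough numbers on the baffle-step point above, here is a small sketch using the common 115/width rule of thumb for the transition frequency and a simple shelf to approximate the ~6 dB step. The 20 cm baffle width is an arbitrary example, not a measured Tekton dimension.

```python
# Rough sketch of the baffle-step idea: below a transition frequency set by
# baffle width, the speaker radiates into full space and loses up to ~6 dB
# on axis; compensation gives that bass back at the cost of sensitivity.
import numpy as np

baffle_width_m = 0.20                       # assumed front-baffle width
f3 = 115.0 / baffle_width_m                 # common rule-of-thumb transition frequency (Hz)

def baffle_step_db(f, f3):
    """Smooth 0-to-6 dB shelf approximating the diffraction step."""
    return 6.0 * (f / f3) ** 2 / (1.0 + (f / f3) ** 2)

for f in (100, 300, 575, 1000, 3000):
    print(f"{f:>5} Hz: on-axis gain from baffle ≈ {baffle_step_db(f, f3):4.1f} dB")

print("Full compensation pads the upper range down ~6 dB to match the bass,")
print("which is exactly the bass-versus-sensitivity trade described above.")
```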

Looking at the quasi-anechoic measurements done by ASR and Erin, it _seems_ that these speakers are not compensated, which may be OK if close-to-wall placement is expected.

In either event, you really want to see the actual in-room response, not just the simulated response, before passing judgement. If I had to critique based strictly on the measurements and simulations, I’d 100% wonder whether a better design wouldn’t be to trade sensitivity for more bass, and the in-room response would tell me that.

3. Crossover point and dispersion

One of the most important choices a speaker designer has is picking the -3 or -6 dB point for the high and low pass filters. A lot of things have to be balanced and traded off, including cost of crossover parts.

Both of the reviews above seem to imply a crossover point that is too high for a smooth transition from the woofer to the tweeters. No speaker can avoid rolling off the treble as you go off-axis, but the best at this do so very evenly. This gives the best off-axis performance and offers up great imaging and wide sweet spots. You’d think this was a budget-speaker problem, but it is not. Look at reviews of B&W’s D series speakers and many Focal models as examples of expensive, well-received speakers that don’t excel at this.
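As a back-of-envelope illustration of why a too-high crossover hurts the off-axis blend, the sketch below estimates where a cone driver starts to beam (circumference roughly equal to the wavelength, ka ≈ 1–2). Driver sizes are generic examples, not Tekton specifications.

```python
# A cone starts to narrow its dispersion roughly where its circumference
# equals the wavelength (ka ≈ 1) and is clearly directional by ka ≈ 2.
import math

C_SOUND = 343.0   # m/s, speed of sound at room temperature

def beaming_freqs(effective_diameter_m):
    """Return (ka=1, ka=2) frequencies in Hz for a circular piston."""
    f_ka1 = C_SOUND / (math.pi * effective_diameter_m)
    return f_ka1, 2.0 * f_ka1

for name, dia in [('5" mid-woofer', 0.10), ('6.5" mid-woofer', 0.13), ('8" woofer', 0.17)]:
    f1, f2 = beaming_freqs(dia)
    print(f"{name}: starts narrowing near {f1:.0f} Hz, clearly beaming by {f2:.0f} Hz")
# If the crossover sits well above the ka≈2 frequency, the woofer is already
# directional where the wide-dispersion tweeter takes over -> off-axis dip.
```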

Speakers which DO typically excel here include Revel and Magico. This is by no means a claim that you should buy Revel because B&W sucks. Buy what you like. I’m just pointing out that this limited-dispersion problem is not at all unique to Tekton, and in fact many other Tekton speakers don’t suffer this particular set of challenges.

In the case of the M-Lore, the tweeter has really amazingly good dynamic range. If I were the designer, I’d definitely want to ask if I could lower the crossover by 1 kHz, which would give up a little power handling but improve the off-axis response. One big reason not to is crossover cost: I might have to add more parts to flatten the tweeter response well enough to extend its useful range. In other words, a higher crossover point may hide tweeter deficiencies. Again, Tekton is NOT alone if they did this calculus.
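Here is a rough sketch of that power-handling trade, assuming a 4th-order Linkwitz-Riley high-pass and purely hypothetical numbers; the actual M-Lore crossover point and tweeter resonance aren't given above, so the 2.4 kHz vs 1.4 kHz points and the 700 Hz resonance are placeholders only.

```python
# Illustration of the power-handling trade when lowering the crossover.
# All frequencies below are hypothetical, not measured M-Lore values.
import math

def lr4_highpass_db(f, fc):
    """Magnitude (dB) of a 4th-order Linkwitz-Riley high-pass at frequency f."""
    ratio4 = (f / fc) ** 4
    return 20.0 * math.log10(ratio4 / (1.0 + ratio4))

tweeter_fs = 700.0   # assumed tweeter resonance, Hz
for fc in (2400.0, 1400.0):
    print(f"Crossover at {fc:.0f} Hz: tweeter sees {lr4_highpass_db(tweeter_fs, fc):.1f} dB at its resonance")
# Lowering the crossover by ~1 kHz lets substantially more energy through near
# the tweeter's resonance, trading power handling for smoother off-axis blending.
```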

I’ve probably made a lot of omissions here, but I hope this helps readers think about speaker performance and cost in a more complete way. The listening tests always matter more than the measurements, so finding reviewers with trustworthy ears is more important than following taste-makers who let the tools, which may not be properly used, judge the experience.

erik_squires

I concur with the post above.

 

Remember that human hearing does not decode sound qualia and information ONLY and MERELY by computing air waves and wave signals; it also, and mainly, "reads" the physical invariant behind any vibrating sound source as a qualia belonging to that source's physical invariant (as in the design of a drum), one that also touches our physical and emotional body, as demonstrated in Essien's book and the two independent research articles linked below:

An ecological theory of sound also needs a body-image theory of sound.

«The definition of sound in physics as vibrations in an elastic medium establishes a link between the sound source and the organism. Thus, it satisfies an essential psychophysical prerequisite for a theory of perception. However, over the past 170 years since Ohm’s law (1843), and some 137 years since Helmholtz’s resonance theory (1877), psychoacoustic procedures founded on air vibration have shrouded music and speech in mystery. Ecological theories have fallen short, not only of Gestalt invariance, but also of the link between the distal object and the organism. This paper approaches auditory analysis from the standpoint of sound production. It argues that although air vibration produces sound, sound is not air vibration; and that exploitation of features of air vibration can hardly (if ever) lead to accurate understanding of the principle of the auditory mechanism in speech or music perception. Evidence is provided in support of the definition of sound as the vibratory image of the sonorous body. It establishes isomorphism between characteristics of a sonorous body and auditory attributes of sound. Wherefore, a body is different from the sound it produces in much the same way as steam is different from ice ─ two different forms of the same entity. The data under consideration offer succinct insights into the way the auditory mechanism extracts from sound wave invariants for use in speech or music regardless of chaotic production and acoustic variability.»

This comes from this acoustician's article and book:

https://www.academia.edu/63847071/The_Body_Image_Theory_of_Sound_An_Ecological_Approach_to_Speech_and_Music

These two new research papers confirm the thesis of Akpan J. Essien's book:

Timbral effects on consonance disentangle psychoacoustic mechanisms and suggest perceptual origins for musical scales

https://www.nature.com/articles/s41467-024-45812-z

Bodily maps of musical sensations across cultures

https://www.researchgate.net/publication/377699983_Bodily_maps_of_musical_sensations_across_cultures

Now, if you want to know how much information can be read from the immediate environment of a vibrating sound source, read this and you will fall off your chair:

Extracting audio from visual information

Algorithm recovers speech from the vibrations of a potato-chip bag filmed through soundproof glass.
 
 
https://news.mit.edu/2014/algorithm-recovers-speech-from-vibrations-0804
 
«“When sound hits an object, it causes the object to vibrate,” says Abe Davis, a graduate student in electrical engineering and computer science at MIT and first author on the new paper. “The motion of this vibration creates a very subtle visual signal that’s usually invisible to the naked eye. People didn’t realize that this information was there.”»
 
«“We’re recovering sounds from objects,” he says. “That gives us a lot of information about the sound that’s going on around the object, but it also gives us a lot of information about the object itself, because different objects are going to respond to sound in different ways.” In ongoing work, the researchers have begun trying to determine material and structural properties of objects from their visible response to short bursts of sound.»
 
 
Then there are people contemptuously bragging that a few electrical measurements of some pieces of a design are all we need to know whether a system will sound good, and that it will sound good for them in ALL specific environments, for ALL ears and ALL brains/bodies. This is pure ideology meant to market and sell some tools. That's all... For sure, a good design will stay a good design in all conditions for any owner, but it will need an optimization process to make it shine. All audiophiles interested in "tweaks" to mechanical, electrical and acoustical conditions know what I mean.
 
The ears/brain decode the qualia of a vibrating sound source, associated with the physical invariant properties of that source under the acoustic conditions of an environment, in a very specific and competent way; and the acoustic content of an environment, be it Nature or a listening room, matters a lot for the optimization of any design.
 
A system/room cannot be evaluated by a merely subjective selection of a small set of electrical measurements out of all the electrical measurements possible, all the mechanical measurements possible, all the acoustical measurements possible, or even all the psychoacoustical measurements possible; it will still lack the qualia experienced by a conscious, feeling body and associated with the physical invariant of the vibrating sound sources.
 
So we must create a system/room for a listener's own characteristics; a few electrical measurements of the component pieces will not do, and measuring speakers will not be enough to complete the optimization process.
 
Ok enough said... Read the articles... 😁
 
English is not my language. I apologize for my clumsy sentences. I never speak English where I live, and in English I read only philosophy or science. 😊
 
(There is no concrete vocabulary in those books, no humor, no popular or slang expressions, and most scientists and philosophers are not great writers. So while I can write top poetry in French, in English I lag a lot 😉😊. But you are lucky: because of that I write the shortest possible posts here in English; imagine what it would be if my English were top literature, my posts would be unbearable, like short novels.)
 
«I dont speak english»-- Groucho Marx 🤓

 

For sure, what I call a vibrating sound source may be the "timbre" of a musical instrument, for example. A musician hears perfectly well and can immediately classify the different qualia and qualities pertaining to the physical invariants behind any of these vibrating sound sources (a violin, say)... He can detect the qualities of the wood, the qualities of the strings, and the micro-dynamic gestures of the player too.

A system/room vibrates as a whole, and any listener can detect its quality... If I put diverse acoustic content in this room, even a single straw located in the right place, a difference will be audible... I know, because when I tuned my 100 resonators, the length and size of ONE neck mattered and made a difference...

The ignorant, who know nothing about acoustics and who have never designed a Helmholtz resonator, will call me a liar and ask for a double-blind test... 😊
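For readers who want to check the neck claim, the textbook Helmholtz resonance formula makes the sensitivity to neck length easy to see. The 1-litre cavity and 1 cm neck radius below are arbitrary example values, not the poster's actual resonators.

```python
# Helmholtz resonator tuning: f = (c / 2*pi) * sqrt(A / (V * L_eff)).
# Cavity volume and neck radius are assumed example values.
import math

C_SOUND = 343.0          # m/s
V = 1.0e-3               # cavity volume, m^3 (1 litre, assumed)
r = 0.01                 # neck radius, m (assumed)
A = math.pi * r ** 2     # neck cross-sectional area, m^2

def helmholtz_hz(neck_length_m):
    # Effective length includes an approximate end correction of ~1.7 * radius.
    l_eff = neck_length_m + 1.7 * r
    return (C_SOUND / (2.0 * math.pi)) * math.sqrt(A / (V * l_eff))

for L in (0.02, 0.05, 0.10):
    print(f"Neck length {L*100:.0f} cm -> resonance ≈ {helmholtz_hz(L):.0f} Hz")
# A few centimetres of neck length shifts the tuning by tens of Hz, which is
# why hand-tuning each resonator's neck is audible work, as described above.
```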

That is why, to evaluate a system, the room conditions matter a lot more than the THD of the amplifier for the final perceived result... 😊

I read about human hearing beating the Fourier transform and thought it was an exciting thing. That’s why the company I work for started analyzing rooms using short tone bursts of known frequency, so we could see time-domain information in bass notes with greater accuracy, since the frequency doesn’t have to be calculated out of a sine sweep. This works well, but there are more methods than the Fourier transform to separate time and frequency. Wavelet analysis can closely approximate human hearing and vision. I was surprised to find out that I could take a sweep of a room and then make an impulse file out of that. With that impulse I could simulate any acoustic environment through wavelet convolution and get the same pulsed-tone results as I got from actually recording them. As I’m sure you’ll agree, human hearing doesn’t violate the laws of physics, so there is still time required for our ears to distinguish tones, and we have limited accuracy for detecting the start and stop times of tones. We’re much better at detecting the difference in timing between each ear than the absolute timing.
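Here is a minimal sketch of that tone-burst-through-an-impulse workflow, using plain convolution (a simplification of the wavelet convolution described above) and a toy two-reflection impulse response rather than a measured room; the sample rate, burst frequency, and reflection times are all assumed.

```python
# Sketch: make a short tone burst, convolve it with a (toy) room impulse
# response, and see how the room smears it out in time.
import numpy as np

fs = 48000                                   # sample rate, Hz (assumed)
t = np.arange(int(0.05 * fs)) / fs           # 50 ms burst window
f0 = 63.0                                    # burst frequency (Hz), arbitrary bass note
burst = np.sin(2 * np.pi * f0 * t) * np.hanning(t.size)   # windowed tone burst

# Toy impulse response: direct sound plus two decaying "reflections".
ir = np.zeros(int(0.2 * fs))
ir[0] = 1.0
ir[int(0.012 * fs)] = 0.5                    # 12 ms reflection (assumed)
ir[int(0.031 * fs)] = 0.3                    # 31 ms reflection (assumed)

in_room = np.convolve(burst, ir)             # what a mic would record in that "room"

print(f"Dry burst length:     {burst.size / fs * 1000:.0f} ms")
print(f"In-room burst length: {np.flatnonzero(np.abs(in_room) > 1e-3).max() / fs * 1000:.0f} ms")
```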

Our hearing definitely doesn’t beat the microphone and the digital recording electronics, which pick up far more than our hearing mechanism. It wasn’t designed for that. The telescope analogy is a good one. The analysis of the sound is what we do that’s so impressive. We can make sense of it.

We’ve got a bunch of resonators in our ears, so we can pick up on a tone as soon as a resonance differential between them is physically established, and that takes at least a half wave cycle to get started. A wavelet transform does something very similar, by running little wavelets through the signal at many different frequencies to see when in the signal a resonance occurs at that frequency. It’s an ear simulator of sorts. And it’s about as precise as you’re going to get in biology or electromechanics.
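A tiny numpy-only sketch of that "little wavelets at many frequencies" idea: correlate the signal with Gaussian-windowed complex exponentials (Morlet-style wavelets) and see which frequency lights up, and when. The 5-cycle wavelets and the 200 Hz/800 Hz test signal are arbitrary illustration choices, not the poster's actual tooling.

```python
# Morlet-style analysis: each wavelet responds where the signal contains its
# frequency, and its time resolution is limited by its own length.
import numpy as np

fs = 8000
t = np.arange(fs) / fs                                   # 1 s of signal
signal = np.where(t < 0.5,
                  np.sin(2 * np.pi * 200 * t),           # first half: 200 Hz
                  np.sin(2 * np.pi * 800 * t))           # second half: 800 Hz

def morlet_response(sig, freq, fs, n_cycles=5):
    """Magnitude envelope from convolving with a Gaussian-windowed complex exponential."""
    dur = n_cycles / freq                                # wavelet spans n_cycles periods
    tw = np.arange(-dur / 2, dur / 2, 1 / fs)
    wavelet = np.exp(2j * np.pi * freq * tw) * np.exp(-(tw ** 2) / (2 * (dur / 6) ** 2))
    wavelet /= np.abs(wavelet).sum()
    return np.abs(np.convolve(sig, wavelet, mode="same"))

for f in (200, 400, 800):
    env = morlet_response(signal, f, fs)
    first_half, second_half = env[: fs // 2].mean(), env[fs // 2 :].mean()
    print(f"{f:>3} Hz wavelet: first half {first_half:.3f}, second half {second_half:.3f}")
# The 200 Hz wavelet responds mostly in the first half, the 800 Hz wavelet in
# the second, and response onset is limited by wavelet length -- the same
# time/frequency trade the post describes for our ears.
```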

Picking up differences between two signals is very easy for measuring equipment. A null test can reveal the slightest difference deep down into the noise floor.
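A quick sketch of what such a null test looks like in code, with synthetic signals: the second capture gets a tiny assumed gain error plus noise, and the aligned difference is reported relative to the reference.

```python
# Null test sketch: subtract two (already aligned) captures and report the
# residual relative to the reference. The 0.01% gain error and noise level
# are assumed stand-ins for a "real" difference buried below the music.
import numpy as np

fs = 48000
t = np.arange(fs) / fs
a = np.sin(2 * np.pi * 1000 * t)                          # reference capture
b = 0.999 * a + 1e-4 * np.random.randn(t.size)            # 0.01% gain error + noise

def null_depth_db(x, y):
    """RMS of (x - y) relative to RMS of x, in dB (more negative = deeper null)."""
    residual = x - y
    return 20 * np.log10(np.sqrt(np.mean(residual ** 2)) / np.sqrt(np.mean(x ** 2)))

print(f"Null depth: {null_depth_db(a, b):.1f} dB")
```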

I haven’t seen a single case yet of signals that could be audibly distinguished as different by the human ear but showed up as identical in reasonably competent measurements.