The Audio Science Review (ASR) approach to reviewing wines.


Imagine doing a wine review as follows - samples of wines are assessed by a reviewer who measures multiple variables including light transmission, specific gravity, residual sugar, salinity, boiling point, etc.  These tests are repeated while playing test tones through the samples at different frequencies.

The results are compiled, a winner is selected based on those measurements, and the reviewer concludes that the other wines can't possibly be as good because their measured results were inferior.

At no point does the reviewer assess the bouquet of the wine or taste it.  He relies on the science of measured results, not the decidedly unscientific subjective experience of smell and taste.

That is the ASR approach to audio - drinking Kool-Aid, not wine.

toronto416

Showing 5 responses by knownothing

I have several problems with the site and with Amir, but my main one is the name “Audio Science Review”.  Science is a systematic discipline that builds and organizes knowledge in the form of testable hypotheses and predictions about the universe.  What hypothesis is Amir testing about the universe?  The designer’s hypothesis that their device sounds good to our ears, tested only through, say, an APx555?

I suggest “ASR” is not pushing the boundaries of “knowledge” or “science”, but is applying engineering principles that are 50 or more years old as a proxy to “review” audio gear in isolation and in place of careful listening.  This is not “science”, and it is barely a “review”.  It is “measurement”, or “testing” or “applied engineering”, but it is not “science”, and labeling it as such should be an embarrassment.

I do believe there is a relationship between measurements and the experienced sound of audio gear, but I definitely do not believe the line is linear, nor the relationship consistent across all gear and all applications, ESPECIALLY when you place that gear in your system, in your room, with your ears, in your seating position.  No way.

I would be a lot more comfortable if the site were called “Audio Measurement Inferred Review”.  See what I did there?

kn

@oberoniaomnia I respect your opinion.  As a marine biologist, do you think that Amir testing commercially available audio gear is in the same category of scientific work as an NSF-funded project to understand how physical, chemical and biological processes mediate carbon transfer in and out of the sea surface, or a Sea Grant-funded study of how plankton might affect oxygen levels in eutrophic coastal waters?  I suggest that if Amir were to submit a proposal to a competitive science or engineering funding organization claiming that what he does on ASR somehow qualifies as “science”, he would not get past the first round.

I read Amir’s reviews and look at his charts comparing different components and find it interesting.  But it is not “science”.  I never said that TAS reviews are remotely scientific, or even unbiased.  Stereophile often combines subjective reviews based on listening with independent machine measurements - is that “kinda scientific”?  No, it’s not; it’s just a combination of subjective impressions and objective measurements.  I find the name “Audio Science Review” pretentious and inaccurate; it mistakenly bestows on the reviews the mystique of expanding the boundaries of our understanding, when in fact Amir is just a dude with some measurement devices and a lot of time on his hands.  His professional work for Microsoft and others advancing digitally reproduced sound may more rightly qualify as “science” - I don’t know enough about it.  His work at ASR, not so much.

If you tried different cables and they did nothing for you, be happy - you are saving a lot of money.  As for me, I’m going to turn off my phone, open a bottle from the bottom shelf of my wine rack with the sticky note on it that says “ask”, and enjoy drinking it while listening to my digital front end without reminding myself that it measured well in Amir’s tests or that the new power cable on my power conditioner is making everything sound better.

kn

@mdalton I was intrigued by your excerpts from the PSM156 review on ASR, so I read it.  My suspicion was confirmed: Amir tested the device with one piece of hifi gear attached at a time.  My understanding is that higher end power conditioners have several functions: to supply non-limiting current to a number of devices simultaneously, to filter noise that may be in the AC supply line (either coming through the mains at the breaker or imparted to the line between the breaker and the outlet), to reduce noise in AC lines emanating from the attached gear or power cables, and potentially to provide surge protection.  In Amir’s test, he looked at the performance of the Puritan device with respect to noise in the AC supply line and at the measured performance of one piece of attached gear.

Hifi systems in practice can be quite complicated, with multiple devices connected to a power distributor/conditioner. For example, my system is a hybrid 5.1/2.1 home theater and two channel system with four different sources, both digital and analog, a large receiver and a subwoofer. Power from the 20 amp breaker is supplied by a single run of 10 gauge romex to a medical grade outlet. I have a stacked rack arrangement with seven different power cables and fourteen different lower level digital and analog cables all in rather close proximity. Most of my power supply cables are upgraded except for an old Blu-ray player and a vintage turntable which have attached “lamp cord”. The receiver is plugged directly into the wall and everything else is plugged into a power conditioner/distributor. I suggest the opportunity for some electrical noise generated in digital devices or power supplies to affect other devices or cables nearby is significant.

My system definitely benefitted from replacing a sturdy but simple non-surge-protected power strip with a more substantial power conditioner/distributor, and again from replacing the supply cable from the wall to the conditioner/distributor with a better cable (the single biggest cable improvement I have made in over 20 years of tinkering).  I do not know whether this benefit of adding a new conditioner/distributor and supply cable had to do with less current limitation due to heavier gauge internal and external wiring, cleaning up AC from the mains, or reducing the impact of noise generated in the attached equipment.

What I do find challenging is a review of a power conditioner, designed in part to tame electrical noise in a system of attached components, that connects and assesses one device at a time.  The review of the Puritan Audio PSM156, perhaps more than any other on ASR, points to the weakness of the reductionist approach for evaluating products that were designed to reduce noise and improve the sound of a complex system by looking at one variable in isolation.  To strain the oft-used audio-car analogy, this is like saying “I tested the Corvette on the skidpad and while it did what the manufacturer claims, I cannot recommend it for driving in traffic”.  This also reinforces the need to test equipment in your system, in your room, with your ears.  No two circumstances are alike, and as always, YMMV.

kn

@mdalton +1

I suggest it is not just gear that is susceptible to the effects of noise or responsible for generating noise, but also the interactions between low- and high-current cables (AND THEIR CONNECTORS) operating in close proximity to each other, which may create distortion, slight changes in timing or actual signal loss.  Until everything is hooked up together, you really don’t know what you’ve got.

kn

One of the topics that always comes up when objectivists from ASR and elsewhere are challenged is that the alternative to machine measurement is blind or ABX listening tests.  The conversation usually devolves into a recitation of 1) the general failure to back up claims of “sounds better” with procedurally and statistically valid proof, and 2) the fact that when such studies have been attempted and published they are generally inconclusive or fail to reject the null hypothesis that there is no difference between pieces of equipment A and B.  Over time, these “tests” have been done with human subjects on electronics and cables, with an often-cited study conducted in the 1980s by a marketing professional testing perceived differences in the sound of various amplifiers across a range of prices.
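For anyone who wants to see what “fail to reject the null hypothesis” actually means in practice, here is a minimal sketch (plain Python, with hypothetical trial counts chosen only for illustration) of the exact binomial test that sits underneath a typical ABX session: the null hypothesis is that the listener is guessing, and the question is how surprising their score would be if that were true.

```python
# Minimal sketch of the exact one-sided binomial test behind an ABX session.
# Null hypothesis: the listener is guessing (p = 0.5).  The trial counts below
# are hypothetical, chosen only to illustrate the arithmetic.
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """P(at least `correct` right answers out of `trials` fair coin flips)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(round(abx_p_value(10, 16), 3))  # ~0.227: better than chance, but not evidence
print(round(abx_p_value(13, 16), 3))  # ~0.011: this would count as hearing a difference
```

Note that 10 right out of 16 - noticeably better than coin-flipping - still does not clear the usual 5% bar, which is one reason short sessions so often come back “inconclusive” rather than proving anything either way.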

I have been perplexed by “findings” that routinely show no perceived difference between different gear or wires when limited blind tests I have done with friends show clear differences (not always tracking linearly with price).  When I report the findings of my admittedly amateur analysis as a counterpoint to the objectivists, my methods are invariably questioned.  The consensus feedback is “you must be wrong, you made a mistake, or you are lying”.

So what is going on here?  Why do so many in this hobby insist they hear subtle and not so subtle differences when they swap a cable or a DAC?  Especially when objectivists like Amir can’t measure a difference THAT SHOULD MATTER, and the blind testing that has been done is either inconclusive or does not support a real performance difference.  I have some ideas, and they generally fall into these categories: 1) some people have better hearing or listening skills (this is where the original wine tasting analogy in this thread makes a comeback), 2) the gear, room and/or listening position are not optimized to allow for a difference to be heard, and 3) the experimental or sampling design is flawed and has not been optimized to detect subtle differences.

I will explain.  The portion of the world population that qualifies as “audiophiles” is small.  And just because a person likes music and can afford to lavish time and money on the hobby does not guarantee they have strong listening skills - and if they love going to rock concerts, or even spent a lifetime on a classical concert stage or in practice rooms, their hearing may be compromised.  In the general public there is a gradient of hearing acuity but generally poor training in listening to reproduced music, so who is included in such a test matters.

Choice of gear matters.  What is the synergy between the components selected?  Are they selected in such a way as to accentuate the differences in the piece of equipment being assessed?  Is the room well designed and acoustically neutral?  Is the power supply from the wall clean and adequate?  Does the system have power conditioning?  Is the seating position optimized to allow for maximum resolution by the test subject?  Testing a group of people in a room at the same time, where only one person is located in the sweet spot of the speakers, would not be the ideal way to test the soundstage reproduction characteristics of a DAC or cable.  Using headphones for the test could reduce this variable, but headphones are generally a poor substitute for well-set-up and well-sourced speakers in a good room when testing the reproduction of soundstage.

Finally, what is the test regime for the subjects?  Are they allowed adequate time to acclimate to the sound characteristics of the system and room before making a change for the test?  Most audiophiles spend multiple days analyzing the sound of a new component or cable, swapping them in and out, before deciding if there is a difference or an improvement.

If I were tasked with developing listening tests for high end audio gear, I would screen the listeners to determine both their hearing acuity and their listening skills.  The target audience for these products is not teenagers listening to poorly produced streamed music with their smart phone and ear buds on the school bus.  I would select the system, the room and the seating position to accentuate any inherent differences in the items to be tested.  I would probably throw in some headphone listening as well to remove many of those variables, as there are elements of reproduction that headphones excel at.  And I would partner with an expert in human subject testing, and optimize the testing regime to maximize the likelihood of a statistically valid test, with adequate controls, of the null hypothesis that there is no difference between item A and item B, or no change (X).
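As a rough illustration of what that last point means in numbers, here is a sketch (plain Python again, with assumed figures: a listener who would genuinely pick the right answer 70% of the time, a 5% false-positive rate, and 80% power) of how many trials a single subject would need before a null result carries real weight.

```python
# Rough sketch of the trial count needed for a meaningful ABX result.
# The 70% "true" hit rate, 5% false-positive rate and 80% power are assumptions
# chosen for illustration, not measured properties of any listener or gear.
from math import comb

def upper_tail(n: int, k: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def trials_needed(p_true: float = 0.7, alpha: float = 0.05, power: float = 0.8) -> int:
    n = 1
    while True:
        # Smallest score that rejects "just guessing" at significance level alpha.
        k_crit = next(k for k in range(n + 2) if upper_tail(n, k, 0.5) <= alpha)
        # Probability that a listener who really hears it clears that bar.
        if upper_tail(n, k_crit, p_true) >= power:
            return n
        n += 1

print(trials_needed())  # on the order of a few dozen trials with these assumptions
```

Under those assumptions it works out to a few dozen trials per comparison, per listener - far more listening than the handful of quick swaps most informal blind tests involve.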

That is a lot of work to determine with some statistical rigor whether a Mola Mola DAC sounds better than, the same as, different from, or worse than a Topping DAC.  I did blind wine tastings as part of a wine club at one time in my life.  I found that I was not the most adept at detecting differences between the wines, and when I could, I did not have the vocabulary to describe what I was tasting.  It was enough for me to recognize that I liked the stuff in bag #3 and compete with others to drain it first.  The most pompous member of our group had gone to Harvard from kindergarten through PhD, and thought he had an excellent palate.  He did not.  His very humble wife, however, did have an excellent palate, and routinely identified differences in the wines and the associated characteristics on a tasting wheel.

In wine tasting as in audio listening, some people “have it” and some people do not, but many people can enjoy drinking or listening in their own way.  As the OP noted, engineering and food science can be applied to assure a certain level of quality, but there are very many other variables that go into how we experience and enjoy wine and hifi.  I find in moderation they are best enjoyed together.

kn