In defense of ABX testing


We audiophiles need to get ourselves out of the Stone Age, reject mythology, and say goodbye to superstition. Especially the reviewers, who do us a disservice by endlessly writing articles claiming that the latest tweak or gadget revolutionized the sound of their system. Likewise, any reviewer who claims that ABX testing is not applicable to high-end audio needs to find a new career path. As with anything, there is a right way and many wrong ways. Hail Science!

Here's an interesting thread on the hydrogenaudio website:

http://www.hydrogenaud.io/forums/index.php?showtopic=108062

This caught my eye in particular:

"The problem with sighted evaluations is very visible in consumer high end audio, where all sorts of very poorly trained listeners claim that they have heard differences that, in technical terms are impossibly small or non existent.

The corresponding problem is that blind tests deal with this problem of false positives very effectively, but can easily produce false negatives."
psag

Showing 5 responses by psag

Actually they are not my words. They're from a post on the hydrogenaudio website. I'll try to post the link again:

http://www.hydrogenaud.io/forums/index.php?showtopic=108062

Anyway, ABX testing has been around forever, and it works. It's used extensively in the recording industry.

Anyone who is willing to invest the time and effort can learn it and use it to compare audio components, digital files, whatever. I've used it mainly to evaluate cables. I was able to replace costly Audioquest cables with a much more cost-effective brand.
One more try:
http://www.hydrogenaud.io/forums/index.php?showtopic=108062
If this doesn't work, the thread is titled: Problems with Blind ABX testing - advice needed.
"Especially the reviewers, who do us a disservice by endlessly writing articles claiming the latest tweak or gadget revolutionized the sound of their system."

"Can you show us some scientificly valid listening tests that were done comparing individual components as part of a review?"

Probably I shouldn't have used the word 'science', which seems to get people in an uproar. Perhaps a better word would have been 'logic'. It is logical to expect that standard ABX testing can establish, with statistical confidence, whether two listening scenarios actually sound different. And in fact, that expectation holds up in practice.
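For readers unfamiliar with the protocol, the core of an ABX session is simple enough to sketch in a few lines. This is a generic illustration, not anything from the thread: `run_abx` and `listener` are hypothetical names, and in a real test the listener of course hears X rather than being told what it is.

```python
import random

def run_abx(trials, listener):
    """Run a simplified ABX session.

    Each trial, X is secretly assigned to component A or B at random.
    The listener (a callable here, standing in for a human) answers
    'A' or 'B' for which one they think X matches. Returns the number
    of correct identifications out of `trials`.
    """
    correct = 0
    for _ in range(trials):
        x = random.choice(['A', 'B'])  # hidden assignment for this trial
        guess = listener(x)            # a human would hear X, not see this label
        if guess == x:
            correct += 1
    return correct

# A listener who genuinely cannot hear a difference is just guessing,
# so over 16 trials they should land somewhere near 8 correct:
guesser = lambda x: random.choice(['A', 'B'])
print(run_abx(16, guesser))
```

The point of the hidden random assignment is that sighted bias is removed: the only way to score consistently above chance is to actually hear a difference.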

If reviewers did this, they could start to build trust with their readers, and the 'snakeoil' aspect of high end audio might start to diminish.
I agree that only a limited number of switches are needed, if the test conditions are good. What are good test conditions? A treated room with good acoustics, high-quality electronics, well-recorded music, the ability to do rapid switching (having a second person to manipulate the hardware helps), and familiarity with the musical selections. That's all you need to eliminate subjectivity and get to the truth.
Zd542, I think statistics is something of a pseudoscience, so I can't completely address what you are saying, other than to offer my own experience. When there is a significant difference between two listening scenarios, that difference is quickly evident, and it will be evident each and every time. Were that not the case, the difference would not be significant. Obviously I'm not talking about the textbook definition of 'significant difference'. There's not much point in making changes to an audio system if the differences barely clear the statistical threshold of significance.