In defense of ABX testing


We audiophiles need to get ourselves out of the Stone Age, reject mythology, and say goodbye to superstition. Especially the reviewers, who do us a disservice by endlessly writing articles claiming the latest tweak or gadget revolutionized the sound of their system. Likewise, any reviewer who claims that ABX testing is not applicable to high end audio needs to find a new career path. Like anything, there is a right way and many wrong ways. Hail Science!

Here's an interesting thread on the hydrogenaudio website:

http://www.hydrogenaud.io/forums/index.php?showtopic=108062

This caught my eye in particular:

"The problem with sighted evaluations is very visible in consumer high end audio, where all sorts of very poorly trained listeners claim that they have heard differences that, in technical terms are impossibly small or non existent.

The corresponding problem is that blind tests deal with this problem of false positives very effectively, but can easily produce false negatives."
psag
"01-20-15: Jea48
Back in the medieval days was it science or logic for the times when doctors used to bleed a patient saying the patient had too much blood?"

Neither. It was stupidity. They had no way of knowing how much blood was too much. The only thing they knew for sure was that if you lost enough blood, you died.

"01-20-15: Onhwy61
Zd542, I think I get your point, but the earth is flat is not good example. Ancient Egyptians and Greeks figured out that the earth was round via observation and logic."

Yes, but not everyone knew that. And without knowing the earth is round, it is logical to assume that it's flat. From their perspective, that's how the world appeared. Also, and more importantly, it's true that the ancient Egyptians and Greeks figured out the earth was round. But it only became a logical assumption after study and observation were done. They got direct results that proved otherwise.

"I thought the point of A/B testing was to determine if there was a difference, not a preference?"

Absolutely. If I said otherwise, point it out, because it's a mistake. Even though my view is that listening is the most important part of evaluating an audio component, I don't see why A/B testing, to root out differences, if any, is not worthwhile. Especially when the differences are small. For example, a test would come in handy if a reviewer has a difficult time hearing a difference between two products. We've all been there. Sometimes it's hard to tell. Having some type of conclusive data concerning these areas can help. But still, if a reviewer included this type of data in a review, it would be important to list exactly how the test was done and with what type of equipment. The reason, of course, is that not everyone has the same equipment, hearing ability, and listening skills. And while not perfect, test results can be used as an aid, just like measurements. Something to help with a selection, but not to be taken as absolute.
It's a tool. No tool is perfect. If it fits the task at hand, use it. Just don't expect that anyone else will draw the same conclusion. They may or may not, and it should not matter other than as a point of interest. Only you can hear what you hear. No one else. Being of the same species, we all hear similarly, perhaps, but not exactly the same. Some differences may be major, others subtle. The subtle ones will probably never be measured or quantified, so just forget about it.
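For what "conclusive data" from an ABX session might look like: the usual way to score one is a one-sided binomial test. Under the null hypothesis of "no audible difference," each trial is a coin flip, so you ask how likely it is to get at least that many correct answers by pure guessing. A minimal sketch (the function name and trial counts are just illustrative, not from any particular test rig):

```python
# Hypothetical sketch: scoring an ABX session with a one-sided binomial test.
# Null hypothesis: the listener cannot hear a difference, so each trial is a
# 50/50 guess. The p-value is the chance of scoring at least this well by luck.
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Return P(X >= correct) for X ~ Binomial(trials, 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Example: 12 correct out of 16 trials.
p = abx_p_value(12, 16)
print(f"p = {p:.4f}")  # p = 0.0384 -- below 0.05, so unlikely to be guessing
```

Note the asymmetry the hydrogenaudio quote points at: a low p-value is decent evidence a difference was heard, but a high p-value is only a failure to detect one under those conditions, not proof that no difference exists.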
Jea48, I think if they did it the way you recommend, then we'd have nothing to argue about. :-)