Why is Double Blind Testing Controversial?


I noticed that the concept of "double blind testing" of cables is a controversial topic. Why? A/B switching seems like the only definitive way of determining how one cable compares to another, or any other component such as speakers, for example. While A/B testing (and particularly double blind testing, where you don't know which cable is A or B) does not show the long-term listenability of a cable or other component, it does show the specific and immediate differences between the two. It shows whether there are differences at all, how slight they are, how important, etc. It seems obvious that without knowing which cable you are listening to, you eliminate bias and preconceived notions as well. So, why is this a controversial notion?
moto_man

Showing 4 responses by drubin

Doesn't double blind mean that the experimenter also does not know which is A and which is B? That's harder to pull off. Simple blind testing should suffice.

I agree with you. It should be used more.
I have had Elmuncy's experience many times: noticing a dramatic change when I first put in a new component, only to have the improvement slip away after a time.

That said, I hasten to add that I have Valhalla speaker cables in my system and find them consistently miraculous. I've not done a blind test with them, but I should, and I would certainly be willing to. Blind testing may be bogus for all I know, but why do people seem so afraid of it? Methinks thou dost...

One piece of empirical "evidence" I have collected: when you go around the rooms at a show, is it the tweaks and little things that make the difference? If Valhalla, just to pick on that product, is so transformational, then I would expect the rooms using it would, generally speaking, be the better sounding rooms. Or perhaps the rooms with the Aurios MIB devices, or the Hydras, or the Sistrum stands, or the demagnetized CDs, etc. etc. Hell, even the rooms with the Audio Aero CD players.

Of course there are many variables that contribute to the sound of a system, particularly at a show, but my experience has been that the gross components, not the tweaks, account for the lion's share of the overall sound. And the little things, those things that we audiophiles so often proclaim to have DRAMATIC effects on our systems, amount pretty much to squat in the overall sense we get of a system when we first hear it. (Yes, I know this reasoning is shaky: even a great pair of speakers can sound wonderful in one room and dreadful in another.)

I think there is something going on, some way in which tiny, incremental changes in our own systems appear greatly magnified to us, magnified out of proportion. Sometimes I think it is change itself that suggests improvement. Ever had the experience of going back to something you had long since decided was dogmeat, only to find that--hey--this thing is good, what was I thinking?

Still, I'm not getting rid of my Valhalla. But I did dump the Hydra.
Well, you don't need to use the blind test to make your decision. The idea is to isolate, at least for the period of the test, the contribution of knowing who the manufacturer is, what the product costs, what it looks like, and so forth. Getting those variables out of the way at some point during your evaluation, even briefly, might be helpful, don't you think? Doesn't mean you won't choose the higher priced product in the end, but at least you will have the benefit of some calibration between what you hear and what you perhaps expect or hope to hear.
I advocate this testing as a way of attempting to control variables in subjective evaluations, not as a means of disproving the merits of high-end, high-priced components.