Some years ago I agreed to test prescription eyeglasses for Pearl Vision. Several pairs were switched while I kept my eyes tightly closed. I immediately chose the "perfect" pair, which was also very reasonably priced.
After a day I had a horrible headache and was forced to keep my eyes closed to enjoy my new glasses.
I like to shop for audio this way too. Make a decision based on a quick test, make myself miserable and then quit listening so the headache goes away. |
Moto man, I do not question the importance of cable comparisons.
What I question and challenge is the notion that an objective BLIND comparison can be set up for audio cables. Without an objective measurable (user perception is not a valid, reproducible measurement), it is not a valid blind test and has no objective validity.
Subjective observations that are presented as "objective" are exactly what lead people like Elmuncy to spend money on things which disappoint. |
I once blind tested a Ford and a Chevy. With the Ford I bounced off a cop car, hit a little old lady in a crosswalk (she survived), and ended the test by crashing into a garbage truck. The Chevy was much better in most respects; I only killed a dog, and the test administrator was taken away in a straitjacket. When released from prison I bought the Chevy, which was then 5 years old, but the radio still worked and the car was well broken in. Anyway, this experience caused me to be skeptical of blind testing. My loss I suppose. Cheers. Craig |
|
I have had Elmuncy's experience many times: noticing a dramatic change when I first put in a new component, only to have the improvement slip away after a time.
That said, I hasten to add that I have Valhalla speaker cables in my system and find them consistently miraculous. I've not done a blind test with them, but I should, and I certainly would be willing to. Blind testing may be bogus for all I know, but why do people seem so afraid of it? Methinks thou dost...
One piece of empirical "evidence" I have collected: when you go around the rooms at a show, is it the tweaks and little things that make the difference? If Valhalla, just to pick on that product, is so transformational, then I would expect the rooms using it would, generally speaking, be the better sounding rooms. Or perhaps the rooms with the Aurios MIB devices, or the Hydras, or the Sistrum stands, or the demagnetized CDs, etc. etc. Hell, even the rooms with the Audio Aero CD players.
Of course there are many variables that contribute to the sound of a system, particularly at a show, but my experience has been that the gross components, not the tweaks, account for the lion's share of the overall sound. And the little things, those things that we audiophiles so often proclaim to have DRAMATIC effects on our systems, amount pretty much to squat in the overall sense we get of a system when we first hear it. (Yes, I know this reasoning is shaky: even a great pair of speakers can sound wonderful in one room and dreadful in another.)
I think there is something going on, some way in which tiny, incremental changes in our own systems appear greatly magnified to us, magnified out of proportion. Sometimes I think it is change itself that suggests improvement. Ever had the experience of going back to something you had long since decided was dogmeat, only to find that--hey--this thing is good, what was I thinking?
Still, I'm not getting rid of my Valhalla. But I did dump the Hydra. |
Having started this thread, I will weigh in with a comment. First, I can't understand how Judit can flatly say that DBT "serves no useful purpose." I DBT'd cables using a Marantz 8300 DVD-A/SACD/CD player (this has two L/R outs so cables can be directly compared back to back, and also has coax and optical digital outs to compare those). The result was that by keeping everything else in the system constant, we were able to listen for differences between the IC's we compared (Audioquest Python vs. Tributaries SCA 150 and Nordost Red Dawns). We were also able to identify the sonic characteristics of each cable. Some of the differences between the Pythons and the Red Dawns were subtle, but readily identifiable. I found the ability to DBT invaluable. I then put both cables into my system for a while to get a "feel" for each over time and many different LP's and CD's. I certainly don't say that DBT is the be-all and end-all of decision-making, but it is difficult to say that it serves no useful purpose. I am waiting to DBT cables several steps up from the Pythons and see what a huge expense in $$ buys in identifiable differences! I think Redkiwi is also correct in saying that sonic differences do not necessarily translate to more musical enjoyment. |
A): The audible differences between cables are usually smaller than audiophiles report (though I believe they are there); and B): Short-term memory is indeed just a few moments long and not sufficient for such tests even when the switching is immediate. (If you're using a continuous piece of music, then you haven't heard the part after the switch with the first configuration, so there is effectively no comparison. If you're switching back to the beginning of the test music at the switch, then there is a time lag of at least the duration of the snippet of music you heard.) So given that human short-term memory is only moments long, blind and double-blind tests are inherently flawed and fairly useless -- unless the differences really are "night and day".
I largely agree with TWL that objectivists use double-blind testing as an excuse not to spend more money, while deluding themselves that they can't do any better. |
The main reason why I am not a fan of A/B testing methods is that they use only short bursts of music. I find I need to live with a new component for at least a few days to get its measure. This is because what sounds "right" in a brief listen can still fail to convey the emotion in the music, and that judgement requires more extended listening, at least for these ears.
A straight A/B test will allow you to identify obvious differences, for sure - such as "A has more bass extension than B" - but that does not mean A is better than B when musical enjoyment is the goal.
I find that A/B testing tends to obscure many musically meaningful differences. You may decide I am deluded about these differences, and that all differences can be detected in a brief listen - there we will have to agree to disagree. |
Blind testing serves no useful purpose. It presumes that by switching cables in and out of ONE system, that you will uncover something fundamental about the cables. I think not. |
It's amazing that anyone would find a totally objective and neutral testing method controversial.
Isn't it ironic that an uncolored and totally neutral audio system is the primary goal of audiophiles? Why are these qualities good for audio systems, but not for the methods used to test them? |
I think that double blind testing is essential. I have actually fooled myself. Upon receiving something new in the mail, I immediately hook it up and am "astounded" by how much better it sounds than what it replaced. After a prolonged listen, and especially if I have my wife switch the component in and out (which is not double blind, but single blind), I find myself hitting it about 50/50, which means that I can't tell the difference. When we purchase some expensive tweak, we so badly want not to have lost our money that we justify it with things like "less listener fatigue," or the idea that once long-term break-in has taken place it will fall into place. I've seen cables described as "a night and day difference." Well, while you're at work, have someone switch one of them without telling you which one, or even whether anything has been done. If it's night and day you'll spot it immediately. |
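As an aside on the "50/50" point above: whether a run of blind trials actually beats guessing can be checked with a simple one-sided binomial calculation. Here is a minimal Python sketch; the 10-of-16 figures are hypothetical, purely for illustration:

```python
from math import comb

def binomial_p_value(correct, trials, p=0.5):
    """Probability of getting at least `correct` hits out of `trials`
    by pure guessing (one-sided exact binomial test)."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(correct, trials + 1))

# 10 correct out of 16 sounds like "better than half", but chance
# alone produces a result at least that good about 23% of the time.
print(round(binomial_p_value(10, 16), 2))  # prints 0.23
```

The point is that a short run of trials is very noisy: you need a surprisingly long session before "I got most of them right" rules out coin-flipping.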
I think that double blind testing was invented to confuse people, and put them in a state of mind that is too over-stressed to "perform under pressure". This virtually ensures the confused outcome which is interpreted as a "scientific proof" that he can't tell the difference between products. It is mainly used to justify the psychological and financial need to not spend money on gear, but still be convinced that you have the best, without having to pay for it. The cover story is that the audiophiles who can hear something are "deluding themselves" psychologically about hearing the differences. I maintain that the ones who don't want to spend money are "deluding themselves" into not hearing differences. So who's right? |
One reason blind (or double blind) testing is controversial is simply that some folks won't admit that the $500 (or more, sometimes much more) cable they just bought doesn't sound any different from the cheaper brand.
Let's face it: the guy whose jaw dropped when he installed his newest (and more expensive) piece of equipment, and who then can't identify that same piece when he can't see it, is the reason blind testing has never been accepted.
I believe there are differences between some components, but not to the degree some people claim.
Although I have nice components, including all tube electronics, planar speakers, and listen more to vinyl than digital, I have never cared for spending hundreds or, in some cases, thousands of $$$ on cables, power conditioners, or power cables.
From all I've read, not one double blind test has ever given credibility to audible differences in cables.
What really gets on my nerves is someone who starts describing the differences he hears with components auditioned weeks or months apart. Sorry, our auditory memory usually doesn't last more than a few moments.
Jim |
I do not know why it is so controversial, but I can tell you it is irritating. Many who preach this methodology are so dogmatic about issues such as the placebo effect, being deceived by snake-oil salesmen, and the physical science behind a given product that they insist audiophiles who don't apply this methodology are being deceived.
Subjective audio enthusiasts know from personal experience whether one product sounds better than another to them and whether the cost to benefit ratio is satisfactory or not, albeit a personal matter for sure.
So I guess the rub is their insistence that their scientific method is the only valid approach, as opposed to making a decision based on simple listening tests alone. I personally would not make a decision any other way. Why not just let each other make our own judgments with whatever method of comparison we choose?
I am certainly not opposed to folks voicing their opinions and findings (I am sure this is the reason most of us read Audiogon posts), but to insist that others are deceived, misspending their money, and acting irrationally is just plain unbecoming behavior in my estimation. |
I wonder if a real "blind test" is what is meant here. Certainly swapping any component in and out of your system will allow you to get an idea of what difference it makes. You will even be able to describe the difference, and note it for future reference. Nothing controversial about that. But a "blind" test involves hiding the identity of the component from the listener, who then chooses his or her preference. There are several things wrong with doing it this way, aside from the practical problem of finding someone you trust to swap components in your system while you are unable to observe him.
One theory has it that we are better equipped physically to notice similarities rather than differences. And as you say, the choice for the long term should be made on the basis of a longer term listen. Otherwise we may listen for the wrong things... at worst, for hi-fi and not for music. |
Drubin's right about double-blind: it means that nobody in the room knows which is which. And researchers use it because they've learned that there are all sorts of ways that someone can subconsciously indicate which is which to whoever is actually doing the comparing. If you want to be absolutely sure that there's no outside influence (intentional or not) and that you're making your decisions based only on the sound, double-blind is essential.
That said, the main reason DBTs are controversial is that they tend to produce results that are at odds with the received wisdom of audiophilia. |
Doesn't double blind mean that the experimenter also does not know which is A and which is B? That's harder to pull off. Simple blind testing should suffice.
I agree with you. It should be used more. |