Natnic:
Who can disagree with "garbage in garbage out?" Not I. I suggest that a competent "modern" digital front end will not reveal its more modest price point consistently or reliably in a double-blind test.
At the bottom of this post I have copied some criteria for a double-blind test, which I found at http://www.pcabx.com/#ten_req
The #8 "requirement" is crucial. Volume levels must be checked closely and matched exactly, preferably by instrument using a test tone. (You can't properly match volume with music because it is a "moving target.") Very slight differences in volume play havoc with any attempt at subjective comparison of hi-fi components. The oldest "sales trick" in the hi-fi biz is to have slightly more gain coming from the purportedly better component. It makes it "sound better" to the customer. A quick way to check the match is sketched just below. You will need a switching box (Radio Shack) so that the test subject (you in this case) can choose between source "A" and source "B" as explained below.
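If you can capture each player's line output with a sound card, here is a minimal Python sketch of comparing the recorded test-tone levels. The file names are hypothetical, and it assumes 16-bit mono PCM WAV captures made with identical sound-card settings:

    # Sketch: check level matching between two players using a recorded
    # test tone from each. File names are hypothetical; capture the
    # line-level output of each CDP with the same sound-card settings.
    import wave, math, struct

    def rms_dbfs(path):
        with wave.open(path, "rb") as w:
            raw = w.readframes(w.getnframes())
            # assumes 16-bit mono PCM for simplicity
            samples = struct.unpack("<%dh" % (len(raw) // 2), raw)
        rms = math.sqrt(sum(s * s for s in samples) / len(samples))
        return 20 * math.log10(rms / 32768.0)

    a = rms_dbfs("cdp_a_tone.wav")
    b = rms_dbfs("cdp_b_tone.wav")
    print("A: %.2f dBFS  B: %.2f dBFS  offset: %.2f dB" % (a, b, a - b))

Anything more than a small fraction of a dB of offset should be trimmed out at the switching box or preamp before anyone listens.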
At the same time, the subject must not know which digital source is CDP "A" and which one is CDP "B." That means that someone other than you and any other test takers has to do the set-up. We'll chauvinistically call him the set-up man. The two components must be run through the same electronics at the same time to the same speakers. Sure, match the wires too, just to cover all bases, and make sure that the lengths of the wires are the same.
The test takers enter the room one at a time. A tester is present to help, but the tester can't know A from B either. Two identical music CDs need to be cued up and started at the same time.
The test taker(s) (you and as many other "trained listeners" as you can gather) can then switch from A to B and back and forth as many times, as often, and at whatever intervals they wish (with the switching box) while listening to the type of program material described below. I personally think it helps if it is program material that you are very familiar with, too. Keep a log in which you commit in writing (or the tester does, based on what you say) whether "A" or "B" is better, and maybe make notes as to why. The tester should preserve the notes and, when there is time, record the results for each individual and for the entire group. Then, finally, the set-up man "translates" the results using his record of which CDP was "A" and which was "B" in each round of the test.
Now round #1 is over. The set-up man comes back to the room and either switches "A" to "B" or doesn't switch A to B according to a random sequence previously written down. No one else knows if A is still A, or if now A is in fact B.
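For what it's worth, the set-up man's random sequence needs nothing fancier than a coin, but a few lines of Python (a sketch, assuming twenty rounds) will do the same job and leave a printable record he can keep sealed until the end:

    # Sketch: pre-generate the set-up man's secret assignment sheet.
    # For each round, a coin flip decides whether the expensive player
    # is wired to input "A" or input "B" of the switching box.
    import random

    rounds = 20
    sheet = [random.choice(["expensive=A", "expensive=B"]) for _ in range(rounds)]
    for i, entry in enumerate(sheet, start=1):
        print("round %2d: %s" % (i, entry))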
Then proceed to round #2, following the same procedures as in round #1. Do this for a reasonable and fair number of rounds. Say, twenty. Tally the results. Has the more expensive component been identified as better significantly more often than chance alone would predict? (A quick way to check is sketched below.)
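Here is a minimal Python sketch of that tally: a one-sided binomial test under the null hypothesis that each round is a 50/50 guess. The 15-of-20 figure is just an illustrative example:

    # Sketch: how likely is this result from pure guessing?
    # "correct" counts rounds where the expensive player was preferred;
    # under the null hypothesis each round is a 50/50 coin flip.
    from math import comb

    def p_value(correct, rounds):
        # probability of doing at least this well by chance alone
        return sum(comb(rounds, k) for k in range(correct, rounds + 1)) / 2 ** rounds

    print("%.4f" % p_value(15, 20))  # e.g. 15 of 20 rounds -> p ~= 0.021

A p-value below the usual 0.05 threshold would suggest the listeners were genuinely hearing a difference rather than guessing.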
With as many emotional factors removed as possible (the $20,000 Zapmaster looks gorgeous and is SOTA, and it therefore MUST sound better) and the crucial volume levels matched, your job is really in front of you. Hard work, but fun.
Which speakers you use to conduct the test is not crucial, as they will be a constant. To be as fair as possible to your point of view, the speakers should be as good, as revealing, and as transparent as possible. Do it in your own home with your own gear or at the home of a friend. It is not likely that any dealer is going to let anyone into his or her salon to commandeer it for the several hours the test may take, especially if the high-priced spread they are selling may fall victim to the procedure.
I don't have any specific suggestions for the digital front ends, but I think fair criteria would be a well-regarded $2,000 CDP and a well-regarded $400 - $800 CDP. We are not looking for "giant killers" here. Just fairness.
(quote from http://www.pcabx.com/#ten_req)
"Ten (10) Requirements For Sensitive and Reliable Listening Tests
(1) Program material must include critical passages that enable audible differences to be most easily heard.
(2) Listeners must be sensitized to audible differences, so that if an audible difference is generated by the equipment, the listener will notice it and have a useful reaction to it.
(3) Listeners must be trained to listen systematically so that audible problems are heard.
(4) Procedures should be "open" to detecting problems that aren't necessarily technically well-understood or even expected, at this time. A classic problem with measurements and some listening tests is that each one focuses on one or only a few problems, allowing others to escape notice.
(5) We must have confidence that the Unit Under Test (UUT) is representative of the kind of equipment it represents. In other words the UUT must not be broken, it must not be appreciably modified in some secret way, and must not be the wrong make or model, among other things.
(6) A suitable listening environment must be provided. It can't be too dull, too bright, too noisy, too reverberant, or too harsh. The speakers and other components have to be sufficiently free from distortion, the room must be noise-free, etc.
(7) Listeners need to be in a good mood for listening, in good physical condition (no blocked-up ears!), and be well-trained for hearing deficiencies in the reproduced sound.
(8) Sample volume levels need to be matched to each other or else the listeners will perceive differences that are simply due to volume differences.
(9) Non-audible influences need to be controlled so that the listener reaches his conclusions due to "Just listening".
(10) Listeners should control as many of the aspects of the listening test as possible. Self-controlled tests usually facilitate this. Most importantly, they should be able to switch among the alternatives at times of their choosing. The switchover should be as instantaneous and non-disruptive as possible.
HAVE FUN. Keep us posted! ;)