Blind Shoot-out in San Diego -- 5 CD Players


On Saturday, February 24, a few members of the San Diego, Los Angeles and Palm Springs audio communities conducted a blind shoot-out at the home of one of the members of the San Diego Music and Audio Guild. The five CD players selected for evaluation were: 1) a Resolution Audio Opus 21 (modified by Great Northern Sound), 2) the dCS standalone player, 3) a Meridian 808 Signature, 4) an EMM Labs Signature configuration (CDSD/DCC2 combo), and 5) an APL NWO 2.5T (the 2.5T is a 2.5 featuring a redesigned tube output stage and other improvements).

The ground rules for the shoot-out specified that two randomly drawn players would be compared head-to-head, and the winner would then be compared against the next randomly drawn player, until only one unit survived (the so-called King-of-the-Hill method). One of our most knowledgeable members set up each competing pair behind a curtain, adjusted for volume, etc., and did not participate in the voting. Alex Peychev was the only manufacturer present; he agreed to express no opinion until the completion of the formal process, and he also did not participate in the voting. The five of us who did vote did so by an immediate and simultaneous show of hands after each selection in each pairing. Two pieces of well-recorded classical music on Red Book CDs were chosen because they offered a range of instrumental and vocal sonic characteristics. And since each participant voted on each piece separately, there was a total of 10 votes up for grabs at each head-to-head audition. Finally, although we all took informal notes, no detailed analysis was recorded -- just the raw vote tally.
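For anyone who wants the mechanics spelled out, here is a minimal sketch of the King-of-the-Hill draw and the 10-vote tally, in Python. The player list is real, but the vote function is a stand-in -- this is illustrative, not a record of the session:

    import random

    def king_of_the_hill(players, count_votes):
        # Draw a random order (the letter codes pulled from the bag), then
        # let the head-to-head winner stay on against the next draw.
        pool = players[:]
        random.shuffle(pool)
        champion = pool.pop(0)
        for challenger in pool:
            # count_votes returns (votes for champion, votes for challenger);
            # with 5 voters voting on 2 tracks, the two counts sum to 10.
            c, ch = count_votes(champion, challenger)
            champion = challenger if ch > c else champion  # tie keeps champion (my assumption)
        return champion

    def fake_votes(a, b):
        # Stand-in: random votes summing to 10, purely for illustration.
        va = random.randint(0, 10)
        return va, 10 - va

    players = ["Opus 21", "dCS", "Meridian 808", "Meitner", "APL NWO 2.5T"]
    print(king_of_the_hill(players, fake_votes))

Note the tie rule (champion retains on a 5-5 split) is an assumption on my part; the session never needed one.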

And now for the results:

In pairing number 1, the dCS won handily over the modified Opus 21, 9 votes to 1.

In pairing number 2, the dCS again came out on top, this time against the Meridian 808, 9 votes to 1.

In pairing number 3, the Meitner Signature was preferred over the dCS by a closer but consistent margin (we repeated some of the head-to-head tests at the request of the participants). The vote was 6 to 4.

Finally, in pairing number 4, the APL 2.5T bested the Meitner, 7 votes to 3.

In the interest of configuration consistency, all of these auditions used a power regenerator to supply power to each player, with the signal routed through a preamp.

This concluded the blind portion of the shoot-out. All expressed the view that the comparisons had been fairly conducted, and that even though one of the comparisons was close, the rankings overall represented a true consensus of the group's feelings.

Thereafter, without blind listening, we tried certain variations at the request of various participants. These involved the Meitner and APL units exclusively, and may be summarized as follows:

First, when the APL 2.5T was removed from the power regenerator and plugged into the wall, its performance improved significantly. (Alex attributed this to the fact that the 2.5T features a linear power supply.) When the Meitner unit (which utilizes a switching power supply) was plugged into the wall, its sonics deteriorated, and so it was restored to the power regenerator.

Second, when we auditioned a limited number of SACDs, the performance on both units was even better, but the improvement on the APL was unanimously felt to be dramatic.
The group concluded we had just experienced "an SACD blowout".

The above concludes the agreed-to results of the blind shoot-out. What follows is an overview of my own personal assessment of the qualitative differences I observed in the top three performers.

First of all, the dCS and the Meitner are both clearly state-of-the-art players. That the dCS scored as well as it did in its standalone implementation is, in my opinion, very significant. And for those of us who have auditioned prior implementations of the Meitner in previous shoot-outs, this unit is truly at the top of its game, and although it was close, it had the edge on the dCS. Both the dCS and the Meitner showed all the traits one would expect of a Class A player -- excellent tonality, imaging, soundstaging, bass extension, transparency, resolution, delineation, etc.

But from my point of view, the APL 2.5T had all of the above, plus two dimensions that I feel make it truly unique. First, the life-like quality of the tonality across the spectrum was spot-on on all forms of instruments and voice. And second, and more difficult to describe, I had the uncanny feeling that I was in the presence of real music -- lots of "air", spatial cues, etc. that simply add up to a sense of realism that I have never experienced before. When I closed my eyes, I truly felt that I was in the room with live music. What can I say.

Obviously, I invite the other participants to express their views on-line.

Pete

petewatt
Newbee – You ask some important questions. All five voters sat in the same seats throughout all comparisons, so the perspective of each listener never changed.

Certainly none of the side positions is as revealing as the sweet spot, which can accommodate two people -- one in front of the other. However, we spent a considerable amount of time in advance of the event positioning the other seats so that acceptably focused images and a convincing soundstage are perceived from the other three positions -- two on either side of the sweet spot in the back row and one to the side in the front row. None of the voters raised concerns about a lack of image focus or an inability to hear dimensional details. Later, when discussions were allowed, the three voters sitting at the side positions were surprised that they were able to assess each player's ability to present a focused image and believable soundstage. These side perspectives may not be correct, but it was the best we could do given our time constraints.

I do not know if the other voters prefer nearfield listening. However, I know they can easily recognize good, focused imaging and excellent soundstaging when they hear it. That said, stereo imaging and soundstaging capabilities, although important, are only two of the many criteria each voter had to keep in mind as they listened. In fact, we did not discuss or define these sonic parameters in advance. We simply asked each voter to listen, compare, and honestly and confidently cast a vote for whichever player he liked in each pairing.

The speakers used are neither dipoles nor horns. The drivers are not horn loaded, no ribbon tweeters are used, and the speakers have excellent off-axis response. It is appropriate at this point to provide the room dimensions (hopefully this info partially addresses other members' curiosity):

width - 12 feet
length - 15 feet (see * more info)
front row - ~9.5 feet diagonal from front of each speaker
back row - ~11.5 feet diagonal from the front of each speaker

The wall behind the seats has a central window, which is covered with two layers of fairly thick curtains. The floor is wool carpet over foam padding, supplemented by a 6x8 ft area rug on top; the cement foundation is underneath the carpet. Spikes are used at the rear of each speaker to couple it to the foundation. A single Finite Elemente Cerapuc is used in front of each speaker as a vibration control treatment.

*There is no wall behind the speakers, and this contributes greatly to this system's superb imaging/soundstaging. The lack of a wall behind the speakers also partially contributes to a flat frequency response measurement from the sitting position. The room node interactions are negligible: +1.5 dB at 80 Hz and flat at nearby frequencies. My very first post includes details of the very good in-room, from-the-listening-position SPL measurements. These data also clearly show that there is no "slight mid-range recession" or "slight elevation of the high frequencies".
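For context -- and this is just a back-of-the-envelope sketch rather than anything we measured -- axial room modes follow f = n*c/(2L). Using the width and length above (ceiling height was not given, so only those two dimensions are computed):

    SPEED_OF_SOUND_FT_S = 1130  # ft/s at room temperature

    def axial_modes(length_ft, count=3):
        # Axial standing-wave frequencies for one room dimension: f = n*c/(2L)
        return [n * SPEED_OF_SOUND_FT_S / (2 * length_ft) for n in range(1, count + 1)]

    print(axial_modes(15))  # length: ~37.7, 75.3, 113.0 Hz
    print(axial_modes(12))  # width:  ~47.1, 94.2, 141.3 Hz

Note that 80 Hz falls between the 75 Hz and 94 Hz axial modes, which seems consistent with the mild +1.5 dB bump reported.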

I do not know if there is "shortening of the decay time of the signal (imparts a fast sound and a clarity due to the chopping off of the trailing edge of the signal)". Please pardon my obvious lack of technical knowledge, but I can only guess this is more perceived than measured, yes? This system has never been described as fast, slow or muddy. Besides its superb imaging, soundstaging and layering/delineation capabilities, it is also dynamic and articulate, while having a tonal balance that results in a believable representation of real instruments and voices. These, along with the system's overall musicality and resolving ability, are the reasons we keep using it for our comparisons.

I cannot confirm that "the excellence of the sound is simply the absence of any distortions". We never measured this system in that regard, so we have no meaningful information to share. Suffice it to say that there is distortion (what system doesn't have some?), but none of its symptoms has ever noticeably/audibly surfaced. We've used other systems in the past, so this is not the only one with which we have experience. However, during the four years we've done comparisons using this system, no one has ever said anything that would lead us to investigate whether distortions are an issue.

As to the type of listening fatigue I think you described, not one of the voters mentioned anything that had to do with system edginess or harshness. Another member already raised a concern about doing careful A/B comparisons for 6 hours. We took plenty of breaks in the kitchen and family room areas while the set-up, level matching, and blinding were being done for each pairing. Fatigue of a different kind eventually set in. We would have kept going were it not for one voter having to leave, another needing to join his family, and the others wanting to go out and get steak ;-)
Jfz - Nice approach you have. We actually did a mixture of this.

During round 1 we learned a number of things. Important among these was the fact that the voters (although they could not see the players) could tell which one was being used, because they could see which unit received the CD we wanted to hear. So we decided to mix things up as detailed in my previous response to Tbg. Although we kept evaluating the choral track before the orchestral recording, we mixed up which player started each pairing, and THIS WAS DONE FOR EACH TRACK. Thus, for any pairing being evaluated, we did not necessarily begin with the same player for the NEXT CD used. After round 1, the voters did not know which player was playing at any given time.

I wanted to address your comment about hearing more on subsequent listens to a recording. I agree, and some would also say that their focus changes during the second or third try compared to the first time they hear a recording. There was really no way to address this equally for all voters. Three of the five voters know the Rutter piece very well, as we have used it in previous evaluations. Only two of the voters know the Bernstein recording. We felt it was important for each voter to have many opportunities to hear each track so they could confidently cast their votes. Here is a little more detail on our listening process for EACH TRACK (a small sketch of the start-order randomization follows the list)...

1) With the correct input selector and volume levels set, we loaded both players, one with the test CD, the other with a dummy CD. We then listened to a predetermined point on player A. For the Rutter piece this was at the 2:08 mark and at 2:44 for the Bernstein piece.
2) Rewind and listen again but for a shorter time. Up to 1:13 for Rutter and 1:38 for Bernstein.
3) We asked if any voter needed additional listening and, if so, we would repeat 2) above.
4) Unload both players, switch CDs (the location of the one being tested is not revealed/visible to voters)
5) Reload both CDs and select the appropriate line input of the preamp and make the necessary volume adjustments as predetermined by the level matching done during the set up for both players.
6) Repeat 1, 2 and 3 above for player B.
7) We asked if any voters wanted to go back to player A, and we would repeat steps 4, 5, 1, 2, and 3 for everyone. This option was done only for one pairing – the DCS vs. the Meitner.
8) Immediately vote with show of hands (no discussions)
9) Repeat the process for the next recording.
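To make that start-order randomization concrete, here is a rough sketch of how such a presentation plan could be drawn up (labels and track names are illustrative only; the voters never learned which physical unit was "A" or "B"):

    import random

    def presentation_plan(tracks, pair):
        # For each track, randomly pick which hidden player starts, so a
        # fixed playback order cannot give away a unit's identity.
        plan = []
        for track in tracks:
            first, second = random.sample(pair, 2)
            plan.append((track, first, second))
        return plan

    for track, first, second in presentation_plan(
            ["choral track", "orchestral track"], ["A", "B"]):
        print(f"{track}: player {first}, then player {second}, then vote")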

One more thing to note: for subsequent pairings, and even in between each pairing, we also switched the preamp input selector to which the players were connected. Even though we were assured by its manufacturer that Line 1 is identical to Line 2 in every way (materials/parts as well as specs), we wanted to vary this too, just in case ;-)
When I do sighted A/B tests at home - which I very rarely do, because I find longer (usually measured in hours or days) experience with a component much more reliable - I always do A/B/A/B; or AA/BB/AA/BB; or A/B/B/A, etc. I do this because the second time I hear a piece of music I notice (or "hear") more than I did the first time. This can result in a bias for the second component. I realize that this mixing of the order was probably not possible to do, given the logistics involved in this very well-thought-out test. I'm just adding my thoughts here to the discussion; i.e., not arguing a point of any kind.
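For anyone who wants to try that at home, here is a tiny sketch of the idea -- the patterns are the ones named above; everything else is made up for illustration:

    import random

    PATTERNS = ["ABAB", "AABB", "ABBA"]  # the orderings mentioned above

    def counterbalanced_order(components):
        # Randomly assign which component takes the 'A' slots, so neither
        # unit systematically gets the (better-remembered) second listen.
        a, b = random.sample(components, 2)
        pattern = random.choice(PATTERNS)
        return [a if step == "A" else b for step in pattern]

    print(counterbalanced_order(["player 1", "player 2"]))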
Ctm_cra, You guys apparently spent a lot of time eliminating as many of the variables involved in blind A/B testing as possible, but I remain curious about a couple of things I have always felt might affect the outcome.

The first issue is stereo imaging. One of the hallmarks of a great 2-channel stereo system is its ability to convey with absolute accuracy the information in the source recording. Nearfield listening, within the parameters of the system set-up requirements and room possibilities, is the most revealing in this respect. (Other set-ups for far-field, more reverberant sound, like omnidirectional or bipolar speakers, may sound 'wonderful' but are not necessarily accurate or reproducible in other environments.)

My first question - How can five folks hear the same sound at the same time? Only one can sit in the sweet spot, and we all know that listening off the sweet spot may be good, but I doubt that anyone would consider it accurate. Or do you feel that the stereo imaging capabilities of the digital device, or the set-up, are not relevant?

The next question has to do with short-term perceptions that are based on high frequency information. That is, can you tell when the sound of the higher frequencies is more detailed due to 1) a slight mid-range recession, 2) a slight elevation of the high frequencies, 3) shortening of the decay time of the signal (imparts a fast sound and a clarity due to the chopping off of the trailing edge of the signal), or 4) the excellence of the sound simply being the absence of any distortions whatsoever?

IMHO a slight increase, or clarity, in high frequency information can have a very audible effect on stereo imaging, but the reason for the apparent increase is very important. If it's for any reason other than increased clarity, it's likely to induce some fatigue factor in long-term listening sessions.

The question - how can you resolve these issues in short A/B listening with any assurance that the sound that you find attractive under such conditions will survive long term listening under controlled conditions?

Am I missing something here? Are the assumptions leading to my questions off base?
Metralla - I can only account for the blinded comparisons in SD, since I could not attend the LA session. Our approach is as you indicated. Throughout most of our listening evaluations, Alex stayed out of the dedicated listening room, which was set up only for the 5 voters. He often waited in the adjacent kitchen area, and he was to provide no comment or any vocal expressions until the blind phase of the comparisons was complete. He occasionally peeked in to have a listen, but was still out of the line of sight of the voters.

We confidently proceeded primarily due to the following logistical details:
1) blind comparison format in which the voters did not know which player was being used,
2) assignment of letter IDs for each player used, with these not revealed until the completion of the comparisons,
3) immediate voting process via simple show of hands (without discussion) after each track per pairing.

Alex, or any other manufacturer, who is willing to put his product up against the very best and allow others to try to objectively evaluate it is always welcome. As an FYI, we recently hosted Raul Iruegas. He presented his Essential 3150, a superb full-function preamp, to our audio club and stayed longer to allow club members to evaluate it in their systems. Like Alex and Nick Doshi, Raul wears many hats as designer, manufacturer, distributor and dealer. Because of the latter two roles, he is solely responsible for showing his products. We thank Raul for providing the chance for us to objectively evaluate it against our own preamps and phonostages, and we are equally grateful to Alex for giving us the listening opportunity with his player.

I appreciate your understanding of this and taking the time to comment on these comparisons.
Excellent job guys. This is what makes this hobby so much fun. Audio societies around the country should be involved with more of these "shootouts". It provides useful information and subsequent interesting discussions. Now if only someone were interested in buying a spleen, I could then afford those top-of-the-line components (unfortunately I have an emotional attachment to my kidneys).
Arthursmuck - A few participants in this comparison have evaluated the Opus 21 previously. In stock form it has outperformed an older version of the Audio Aero Capitole, an Audience-modded Denon 3910 and a stock Esoteric DV-50. It also competed surprisingly well with the older version of the Meitner DAC6/CDSD. So we wanted to know how well the latest version, with Steve Huntley's Great Northern Sound Company reference mods, would do against top-shelf players. You have every right to be proud of your little guy!
Nice read, very informative and insightful on some players I've been wondering about.

One thing I guess I'm wondering... what was the Opus 21 doing in there to begin with? I own one modded by Steve, and it's a very nice player, but it retails at 1/3 the price of everything else in the shootout, in some cases 1/4... it seems to me it's surprising the little guy even got one vote... I'm proud of it for that!

Thanks for the time and write up guys, good stuff!
Ctm_cra wrote:

The most expensive player evaluated is the latest version of the Meitner, and it did not win. Also, recall that each player was assigned a letter code written on a piece of paper and placed in a bag. The order of the players was determined by the consecutive order of letters pulled from this process. By chance, it turned out the Meitner and APL were last. None of the voters knew the letter assignments, however, until after the comparisons were complete.

Okay, so this is a fair comparison. Folks tend to compare cheap with cheap and expensive with expensive most of the time. Looks like this is not the case here. Thanks for the clarification.
Ctm_cra wrote in two posts above:

The APL NWO 2.5T Alex brought with him had 200 hours of burn-in as a 2.5-only model. As a 2.5T it barely had 30 hours.

The Meitner's owner also confirmed that it has surpassed the burn-in time requirement. The DCS and Meridian are regularly used, but I have no specific info to provide. The same goes for the Opus 21.

Guidocorona wrote:

I should like to point out that a TEAC UX-1/X-01 derivative with 200 hrs of playing time on part of the circuit and 20 on the balance is not at all a well broken-in device. . . likely 80% of break-in is still ahead. How many hrs of playing time had been logged on the other contenders? Next time a shoot-out is conducted, I suggest all units be fully broken in. That condition would strengthen your findings.

Hi Guidocorona,

Based on my experience with the APL NWO-2.5 and the 2.5T, I totally agree with your comments about break-in. I heard major improvements on my 2.5 around the 250-hour mark, and it continued to improve until it was upgraded to a 2.5T at the 370-hour mark (200 hours on Redbook and 170 hours on SACD). Since Alex's 2.5 had only 200 hours, it is likely that its sonic excellence has yet to unfold.

When I received my 2.5T upgrade, I left everything else in my audio system and room unchanged so that I could evaluate (subject to memory and elapsed time) the sonic differences between the 2.5 and the 2.5T. Out of the box, the 2.5T had greater fullness, more defined and explosive bass, more sparkling highs, quicker transients, more harmonic and ambient detail, and it was more involving in the bass region. Since that is based on my memory of the 2.5, take it for what it's worth.

Guidocorona, what may be more significantly related to your comments is what happened as my 2.5T burned in. Between 60 and 70 hours, the bass and midrange opened up and became more detailed and more refined; the midrange was now more involving. Although the treble had more sparkle than what I recall the 2.5 having, it did not have the same level of openness at this point in the burn-in process as the bass and midrange. That is now changing. Today, the 2.5T upgrade part of the 2.5 has 110 hours on it. (The 2.5 part now has 480 hours.) The treble has begun to open up nicely, and the continuousness of the sonic landscape is enhanced beyond that of the 2.5. I wait with bated breath to see what further improvements will unfold with the burn-in process.

According to Ctm_cra, as a 2.5T, Alex's unit had only 30 hours. Based on my experience with the 2.5T's burn-in, your estimate that 80% of break-in for his machine is yet to come seems reasonable to me. If the 2.5T had been properly burnt in, could this have been a "Redbook blow-out" as well as an "SACD blow-out"? We'll have to wait for the next round. Hopefully, everyone's player will be fully burnt in.

Best Regards,
John
I have been adamantly opposed to the DB testing advocated by many using the same/different 30-sec. evaluation so strongly urged on Propellor Head at AudioAsylum. The test here, however, is more valid and, I suspect, fun for the participants. As I said, I would have enjoyed participating. Nevertheless, I could not use it for making a buying decision.
Tlday wrote -

"Interesting that the last to be tested was the preferred. Results might change if the order was different. Psychology always affects perception. Also, once the group decided that the last choice was best, was this player then compared back against all of the previous? If A if preferred to B, B over C and C over D, it is not 100% sure that A will be preferred over D."

Your points are spot on! Sorry for not addressing them sooner. If we had had more time, we would have gone back to compare the APL to the Opus, Meridian and DCS. If we do this again, we will do just that.

Ryder wrote -
"The best and most expensive will always be saved for the last, and it usually comes out tops."

The most expensive player evaluated is the latest version of the Meitner, and it did not win. Also, recall that each player was assigned a letter code written on a piece of paper and placed in a bag. The order of the players was determined by the consecutive order of letters pulled from this process. By chance, it turned out the Meitner and APL were last. None of the voters knew the letter assignments, however, until after the comparisons were complete.

The premise of the most expensive item being last and winning has not been the case in previous level-matched digital comparisons we've done. From my own personal experience, this is also not the case with speaker cables, ICs and PCs, phonostages and amps. It just happens to be true for cartridges and preamps, but this is my experience only, and of course the quest for excellent, high-value products continues. ;-)
Thank you for the correction, Ctm_cra. Actually, yours is rather a clarification, as I had no prior hypothesis about the relative state of any of the test machines. I should like to point out that a TEAC UX-1/X-01 derivative with 200 hrs of playing time on part of the circuit and 20 on the balance is not at all a well broken-in device. . . likely 80% of break-in is still ahead. How many hrs of playing time had been logged on the other contenders? Next time a shoot-out is conducted, I suggest all units be fully broken in. That condition would strengthen your findings.

Bravo for choosing several excellent orchestral examples, of which at least the Bernstein is delightfully bruitistic. I also suggest that next time some subtler selections, from jazz and especially chamber music, be included to gauge low-level detail, microdynamics, harmonic development, etc. . .

Now on the Wilson/iPod test. . . I have been present at such an event and was not fooled for a minute. . . is it perhaps because I listen to music only through my ears?
Wow, more than 70 replies in 3 days. That's a record, I guess.

Tlday wrote:

Interesting that the last to be tested was the preferred. Results might change if the order was different. Psychology always affects perception.

The best and most expensive will always be saved for the last, and it usually comes out tops.
Thank you Pete for putting forth the effort to share the results of your listening session.

The whole idea of these threads is to share our experiences with our fellow audio friends. But with all the whining about level matching to 0.1 dB, how much distortion these players have, how all the "top" players can really sound that different, that a manufacturer was there, that the system and music played were not initially noted, etc. etc. etc. -- who really cares!!!!! The initial report was very interesting. Can we not thank Pete for his willingness to participate and leave it at that?

Even when components are not precisely level matched, the sonic signatures of two products are still very evident... no matter what the rest of the chain is, as long as the system is of a highly resolving nature to begin with. Just throw on a 400 Hz test tone, match levels with a Radio Shack SPL meter to 0.5 dB, and put the politics aside! We don't need a fancy test bench of Tektronix and HP gear to evaluate CD players in our homes. I just don't see the necessity of using our engineering educations to fairly evaluate audio components.

I too think Alex should have dropped off the player and returned later, just to keep the focus on the listeners. But with the evaluators all hearing these players compared to each other for the first time, any player could have ended up as the group favorite. So quit whining about a manufacturer being at the session, already! And who cares if the speakers are Pipedreams, SoundLabs, Avalons, etc.? The relative differences between the players will ultimately come out the same.

As for how all these players can sound different with one coming out the clearly preferred winner, well, maybe the other players should get a demoted rating? After all, Stereophile is loaded with Class A components, and how can so many be rated state-of-the-art? After having personally heard many such components compared directly to each other over the years, it is clearly evident that many such products never had any business being rated so highly.

Just like the threads that cover Dartzeel, EMM, Von Schweikert, various power cables, Triplanar vs. Schroeder tonearms, etc., people get all upset when their beloved favorite product gets threatened by a newcomer that is the new "king". Can't we all come away from this report with greater incentive to hear all these for ourselves, in our own systems, and not put forth so much effort to tear holes in the evaluation? It certainly gives me less desire to share findings if I have to spend days defending each step I took with responses back to my thread.
Terrible! ;-)

Two pieces of well-recorded classical music on Red Book CDs

Regards,
Musicfirst - Below are the details on the recordings selected for the blind comparisons.

Tvad - It is my understanding that you may be near our neck of the woods. You will have to join us next time. If you do, please promise to bring the recordings of the "little tabla and kalimba ditty, followed by some rollicking Bossa Nova" ;-)

For the SD blinded comparisons we used:
1) Pie Jesu (track 7) - Requiem by Rutter, Reference Recordings, RR-57CD
2) Overture to Candide (track 1) - Bernstein, Reference Recordings, RR-87CD

For the experimentation phase we continued to use 2) above, plus the following SACD recording:
Symphony #4 - Mahler, San Francisco Symphony 821936-002-2

The following was used for the LA comparisons (amazing how close Tvad got):
Tubby - True Stereo, Unprocessed Analog Recordings, Naim LC: 0794
Hi Audiofeil,

Perhaps, but subliminally or not, his presence is influential despite their best efforts to absolve themselves from it. It's simple human nature.

What kind of subliminal powers are you talking about? This was a blind test, and the participants didn't know which brand they were voting for. Are you saying that by some psychic power the participants actually knew subliminally which player was APL's, and that this subliminal knowledge somehow overpowered what they heard?

According to Ctm_cra, Alex did not provide commentary during the comparisons, did not vote, and was not involved in the set-up. Even if Alex was able to recognize when his player was playing, how did his mere presence communicate subliminally to the participants how to vote in a blind vote? Are you saying his mere presence alone could produce 7 votes for the APL to the Meitner's 3 on Redbook, and an "SACD blowout" in the APL's favor? That doesn't sound reasonable to me, but if you have some expertise in this area, please enlighten us with an explanation of your particular notion of subliminal powers.

Best Regards,
John
Interesting that in all this discussion, no one asked for the details of the recordings! After all, isn't it all about the MUSIC!!?

Sometimes we as a group are just too much (BTW, I include myself in this circle).....

Kerry
CORRECTION for Guidocorona - The APL NWO 2.5T Alex brought with him had 200 hours of burn-in as a 2.5-only model. As a 2.5T it barely had 30 hours.
>>did everything they could to ensure he had no influence<<

Perhaps, but subliminally or not, his presence is influential despite their best efforts to absolve themselves from it. It's simple human nature.

Thank you.
Essentialaudio said:

Additionally, the presence of the manufacturer of one of the units being evaluated at the session does not show impartiality and exerts influence on the test subjects, intentionally or not, which may have skewed the results.

It was impossible for Alex's presence to exert influence on the test subjects, because it was a blind test. The votes could just as easily have gone in favor of any of the players, even if the voters did feel some obligation or affinity toward Alex and APL.

The voting was honest, and there were voters who owned APL players who voted in favor of the Meitner, DCS and Resolution Audio player on some tracks. When we voted, we had no idea which player we were voting for. It was simply "Player A or Player B". Only after the votes were tallied were the players revealed.

I was one of the participants at the listening session, and I can speak from experience.
two pieces of well recorded classical music? sounds like a day in your life you'll never get back.
Essentialaudio - Your post in general, and your last paragraph specifically, speaks volumes and is very accurate IMO.
Ctm_cra, thanks for answering my question. The NWO sounds like a winner - pun intended :)
Audiofeil writes:
His presence was totally inappropriate
I'm sure the members of the San Diego Music and Audio Guild recognized the situation and did everything they could to ensure he had no influence. We know he did not express an "opinion until the completion of the formal process, and he also did not participate in the voting". I imagine the members requested that he sit in the background, out of line of sight. Perhaps Petewatt or Ctm_cra can explain.

Having listened to the older APL player at THE Show on a number of occasions, I know that Alex Peychev can keep his thoughts to himself when others are auditioning.

If Alex Peychev's presence was the pre-condition under which the shoot-out could include his player, I think the trade-off is worth making.

Had the APL not been included, I would have been left wondering how the Meitner fares against the APL NWO 2.5T.

Regards,
>>the presence of the manufacturer of one of the units being evaluated at the session does not show impartiality<<

His presence was totally inappropriate.
Ctm_cra: I see no justification, after the fact, for omitting the details of the system and the room. Magazines and online publications list details, and since this discussion is fairly involved, it would be appropriate to do so here so the results can be interpreted in some kind of relevant context.

Although the results have not been represented as official, it is easy for them to be interpreted as representing the opinion of the San Diego Music and Audio Guild, which in fact they do not. As founder and past president of the Chicago Audio Society, I have seen posts on this and other fora saying things like "everyone thought" this and that about a certain product (herd mentality), making it sound as if the CAS had some kind of position in the matter. In fact it was a few audio buddies who got together and may have decided to spread the word, plus others who were not present but for whatever reason chose to spread rumors, whether they really believed them or had some kind of agenda to promote. I suggest no one can speak for an audio society, and that audio societies should actively discourage it.

Additionally, the presence of the manufacturer of one of the units being evaluated at the session does not show impartiality and exerts influence on the test subjects, intentionally or not, which may have skewed the results.
Shadorne -

"but the shootout suggests large and earth
shaking differences in performance from
several extremely expensive and extremely
high quality players"...

The results of both sessions, particularly the blinded comparisons, do not suggest this at all. There were audible differences, and these differences were certainly noticeable enough for each voter to confidently select which player he preferred in the four pairings conducted. As noticeable as these differences were, they were not night-and-day differences, and certainly not large and earth-shaking.

"I read the conclusion again...it attributes
all the qualities of the audio at the listening
session to the CD source (as if nothing else
influenced the sound; speakers, music selection,
room acoustics, amp, listener preferences...)"

Now you know why we asked the voters to give their simultaneous votes in the form of a simple show of hands. Asking them to provide commentary leads to other issues, and I addressed this in detail in my first post on this thread. It can also lead to misinterpretations or inconsistent conclusions on the part of forum members.

Certainly the downstream components in the system used have a certain sonic character, but this was consistent across all players. It is interesting to note that there was not an instance when a voter said that one player was thinner or warmer than another, suggesting that tonality is not really the area of greatest discernible difference between these players. Having no tonality issues across two very different systems is a compliment to the designers of each of the players we evaluated.

If I had had the chance to speak to Pete prior to his initial post, I would have asked him to simply report the results and refrain from giving his personal assessment. Water under the bridge...

Instead of trying to get an understanding of Pete's assessments, I can only attempt to steer you to focus more on the overall voting results on day 1 and day 2. Please see my response to Essentialaudio, at the end of which I attempted to summarize what this all means.
Shadorne says:
I remain somewhat skeptical that the other products are "dimensionless" in comparison.
First, let me say how much I enjoy your posts here, Shadorne. I would characterize you as a knowledgeable "gentleman disbeliever" of audiophile dogma. I hope you will stay with us, though the track record of disbelievers around here has not been terribly good.

I am reading two themes in your comments in this thread. First is that it doesn't make sense that these high-priced units should sound as different from one another as the test results suggest. You'll have a hard time getting much agreement on that point from most of us, for whom the hobby survives on the basis of our perceptions of these differences. Second, and very much related but also a bit different, is what's suggested by the quote I lifted above. So often in reviews we read that some important aspect of sound reproduction was transformed by a specific component, as if, as you say, it was "dimensionless" before. Here's my theory: we hear minor differences from component changes in our systems (or in systems we are familiar with) and, for whatever reason or reasons, the magnitude of these differences gets blown up in a big way. A bit more dimensionality in some respect strikes us as a night-and-day difference. I don't know the underlying mechanism at work, but I suspect there is some psychology of change in which small changes to a known status quo are perceived disproportionately. As I've argued before, if [Nordost Valhalla or EMM Labs or take your pick] is really as great as people say, why is it that, when you go to an audio show, the rooms using [Valhalla or whatever] don't sound better than the average rooms with some reasonable consistency? I don't have an answer, but it's possible -- just possible -- that in the scheme of the overall system, the speaker cable amounts pretty much to bupkis. I dunno. Even so, whoever is running that room is likely to feel that their sound was transformed when they put the Valhalla in (or even when, at 3:00am this morning, they changed the cable elevators they were using). And I'm sure it did sound like a transformation to them.
Essentialaudio - Two sets of evaluations took place. A blind, level-matched comparison in San Diego, and a non-blind, level-matched session in LA.

I take this opportunity to commend the voters who participated in both sessions. I know the systems they each use at home, and they differ significantly from the system used in either session. Despite this (and their differing tastes in music), I appreciated that they really gave the evaluation of each pairing their honest appraisal. We all knew that not doing so would taint the results.

I have noted in a previous post and will do so here again that I purposely left out system details so as to keep the focus on the players and the results. The opportunity to repeat these comparisons will present itself and we’ll definitely take into consideration forum member recommendations.

We did not set out to create the most scientific of comparisons, but instead made the most of what we had to work with, while making sure that we had as level a playing field as possible for each player involved. It just so happened that the fellowship and hospitality ended up being very enjoyable too.

We also did not seek to create a definitive set of results that can be extrapolated to be relevant to your system or anyone else's. In the end, the results of the San Diego blind comparisons reflect the opinions of 5 voters on two great classical recordings played through 5 highly regarded digital players in a resolving full-range system on that day.

Similarly, the LA results reflect the opinions of 4 voters (two of whom did not participate in the San Diego comparisons) on one great jazz recording played through 3 highly regarded digital players in a completely different full-range system the following day.

I’d do it again in a heartbeat!
Jfz - You bring up a very intriguing proposal...

"Wouldn't it be great if each of these players (or at least the top three) could be used in each of the 5 individuals' systems for a week or so? I'd love to hear their thoughts after that."

I would gladly participate in this exercise. So, Alex and the owners of the Meitner and DCS, please consider letting your units go for a week at a time to allow the voters of the blind comparisons to do more listening experimentation and additional comparisons across a broader spectrum of quality acoustic recordings... let me know ;-)

As to the warm-up of players, the time constraints of evaluating 5 players meant that each unit had only about 20-25 minutes or so to warm up once it was placed on the rack and powered up, level matched, disc loaded, connections double-checked, the setup blanketed for the blinded eval, and the group gathered back into the listening room. The advantage went to the DCS player, which had the most warm-up time, as it remained in the system against the first two players. Interestingly, the Meitner was only on for about 20-25 minutes (vs. greater than 2 hrs for the DCS), and it beat the DCS. Similarly, the APL unit was only on for 20-25 minutes (vs. greater than 1 hr for the Meitner), and the APL won.

As to potential interactions from having other players plugged in simultaneously... The PS Audio P300 has a set of two well-isolated duplex AC receptacles. We made sure that the players were plugged into different duplexes. The players were one rack position away from the preamp, one above and the other below. Each player was no less than 9 inches away from the preamp, which has an outboard power supply. The input selectors on the preamp have superb isolation, and careful attention was taken by the designer to match the inputs, down to carefully matching the components used (including the lengths of wiring). Throughout the sessions, we heard no sonic anomalies that would lead us to investigate any system issues having to do with two players being connected and playing simultaneously.
Guidocorona - If this opportunity comes up again, please let us know who can provide a TEAC P03/D03 combo. It would be a treat to hear this player.

Burn-in on the APL player is on the order of 300-500 hours because of the extent of the modifications. Alex confirmed that he had about 350 hours on it prior to our evaluation. The Meitner's owner also confirmed that it has surpassed the burn-in time requirement. The DCS and Meridian are regularly used, but I have no specific info to provide. The same goes for the Opus 21.
Scottr - During the experimentation phase, the group unanimously preferred the APL over the Meitner using Redbook. The difference between this and the blinded comparison (involving the same two players) is that the Meitner was connected to the AC regenerator while the APL unit was plugged directly into the wall, and both units were connected to the amps. We then switched to an SACD recording, keeping the optimal AC connections the same as in the Redbook comparison, and unanimously concluded the same.

In another previous post I mentioned that the DCS had a more lively/dynamic presentation than either the Meitner or the APL. Recall that this took place during the blinded comparisons, when all players were connected to the AC regenerator. What would have been great to do, if we had more time, was to also experiment with the DCS to see if it was better plugged directly into the wall or into the PS Audio P300. Next, it would have been good to compare the DCS at its best AC connection against the APL plugged directly into the wall, to see if the gap in liveliness/dynamics remained. My recollection of the DCS performance, relative to the wall-plugged APL and even later when the APL was directly connected to the amps, was simply too far removed for me to make a definitive vote. Am I asking for another session to take place? You bet I am.

Pete mentioned above that "when we auditioned a limited number of SACDs, the performance on both units was even better"... I find it difficult to agree with this, because we did not first play the CD layer and then adjust the player to play back the SACD layer of the same disc. It would have been great to do this. Until I hear this for myself, I cannot reach the conclusion he reached.
Shadorne - "The fact that there were audible differences in such high quality players is really scary."

As you noted, there were definitely audible differences. I do want to say that the sonic differences, though audible from one player to another, were not night-and-day differences. However, all present were able to identify which player they liked better, thanks to a very resolving system in San Diego.

The same differences from one unit to another were also discernible the following day in a different system using a different recording.

These are some of the very best digital playback units, and the total retail cost of the 5 players involved is in the neighborhood of $80-90k USD; I was glad to have been a part of it. However, sonic differences from one unit to another are independent of the prices of the gear being evaluated. Sonic differences should also be heard from more affordable units. Similarly, I would expect to hear differences between cartridges mounted on the same tt and arm (loading issues aside), regardless of whether you are comparing budget/entry-level units or the very best transducers. Ultimately, you will be dependent on the system used and its full-range reproduction and resolving capability.
Bob_reynolds - I am definitely no expert in this area. So please pardon any of my obviously ridiculous and laughable remarks.

The SPL analyzer we used can measure to 0.1 dB accuracy, and the preamp was able to adjust level offsets for each unit. It has a high-quality ladder attenuator using well-matched Vishay resistors. Experience from past evaluations told us that we might have to settle for a level discrepancy of as much as 1 dB. However, in this shootout we got really lucky. On 3 of the 4 pairings, we got virtually matching levels, thanks to the averaging function of the SPL meter. Only one pairing was off, by an average of as much as 0.5 dB, and this was as close as we could get. In this pairing, it was interesting that the player with the lower volume setting won decisively, with a 9-1 vote.

Thanks for your tip on measuring voltages at the speakers. We will try this. However, due to the very nature of uncorrelated pink noise (UPN), I would suspect that the voltages will fluctuate the same way as when one takes SPL measurements. When measuring voltages at the speakers using UPN, I know of no voltmeter that has an averaging capability, yes? Level matching to within 0.1 dB is extremely difficult; what source media is used to achieve this? If it is a pure tone, then we have another problem of using a limited/narrow frequency. One of the advantages of SPL readings using UPN is that a broader spectrum of frequencies is measured. Have you experienced situations where, even though the voltages were matched at the speakers, the volumes still differed at the sitting position when music was played?
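For reference, here is the arithmetic behind those offsets, plus one way to get SPL-style averaging out of spot voltmeter readings. This is a minimal sketch only (in Python), not something we did at the session:

    import math

    def db_to_voltage_ratio(db):
        # A level offset in dB corresponds to this voltage (amplitude) ratio.
        return 10 ** (db / 20)

    def average_level_db(voltages, reference=1.0):
        # Emulate an averaging SPL meter with a plain voltmeter: average the
        # squared (power) readings, then convert the mean back to dB.
        mean_power = sum(v * v for v in voltages) / len(voltages)
        return 10 * math.log10(mean_power / reference ** 2)

    print(round(db_to_voltage_ratio(0.5), 3))  # 1.059 -> our 0.5 dB worst case is ~6% in voltage
    print(round(db_to_voltage_ratio(0.1), 3))  # 1.012 -> a 0.1 dB target is ~1.2% in voltage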
Lots of hand wringing and nitpicking over a listening session, IMO.

Agreed, but the shootout suggests large and earth-shaking differences in performance among several extremely expensive and extremely high quality players... to me this is at complete odds with accurate audio reproduction... how can they possibly sound so vastly different (unless there is deliberate sound coloration or there are major flaws in the various designs)?

I read the conclusion again...it attributes all the qualities of the audio at the listening session to the CD source (as if nothing else influenced the sound; speakers, music selection, room acoustics, amp, listener preferences...)

the APL 2.5T had all of the above, plus two dimensions that I feel make it TRULY UNIQUE. First, the life-like quality of the tonality across the spectrum was spot-on on all forms of instruments and voice. And second, and more difficult to describe, I had the uncanny feeling that I was in the presence of real music -- lots of "air", spatial cues, etc. that simply add up to A SENSE OF REALISM THAT I HAVE NEVER EXPERIENCED BEFORE. When I closed my eyes, I truly felt that I was in the room with live music. What can I say.

Furthermore, the shootout approach had to arrive at a clear winner (five identical players marked A, B, C, D, and E would also have resulted in a clear winner, despite there being no differences!).

Unfortunately this sort of hyperbole is rampant everywhere in high-end audio reviews, so it is pleasant that a few skeptics have spoken out (even if they weren't there, don't know what really happened, and so are just expressing doubt). This makes Audiogon a great and balanced resource.

In conclusion: I do not doubt that each of the players tested is simply excellent! I am not trying to knock the winner (the APL product) in any way. I have no doubt that the APL product tested, in all probability, sounds absolutely fantastic! However, I remain somewhat skeptical that the other products are "dimensionless" in comparison.
Interesting. How many hours of operation had previously been logged on each player? E.g., my X-01 Limited continued to change/improve up to approximately the 1200-hour mark. Incomplete/uneven break-in states would soften the validity of any findings. I wish that a well broken-in TEAC P03/D03 combo had also been included, as this player is priced similarly to the top contenders present at the event.
One could not be confident of agreeing with the majority even if the sample size were very large. Predictive value is not the point.