Blind Shoot-out in San Diego -- 5 CD Players


On Saturday, February 24, a few members of the San Diego, Los Angeles and Palm Springs audio communities conducted a blind shoot-out at the home of one of the members of the San Diego Music and Audio Guild. The five CD players selected for evaluation were: 1) a Resolution Audio Opus 21 (modified by Great Northern Sound), 2) the dCS standalone player, 3) a Meridian 808 Signature, 4) an EMM Labs Signature configuration (CDSD/DCC2 combo), and 5) an APL NWO 2.5T (the 2.5T is a 2.5 featuring a redesigned tube output stage and other improvements).

The ground rules for the shoot-out specified that two randomly drawn players would be compared head-to-head, and the winner would then be compared against the next randomly drawn player, until only one unit survived (the so-called King-of-the-Hill method). One of our most knowledgeable members would set up each competing pair behind a curtain, adjust for volume, etc., and would not participate in the voting. Alex Peychev was the only manufacturer present; he agreed to express no opinion until the completion of the formal process, and he also did not participate in the voting. The five of us who did vote did so by an immediate and simultaneous show of hands after each selection in each pairing. Two pieces of well-recorded classical music on Red Book CDs were chosen because they offered a range of instrumental and vocal sonic characteristics. And since each participant voted on each piece separately, there was a total of 10 votes up for grabs in each head-to-head audition. Finally, although we all took informal notes, no detailed analysis was recorded -- just the raw vote tally.
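
For readers who want the tournament structure spelled out, here is a minimal sketch in Python of the King-of-the-Hill draw described above. The vote function and player labels are illustrative assumptions only, not software the group actually used:

    import random

    PLAYERS = ["Opus 21 (GNS mod)", "dCS standalone", "Meridian 808",
               "EMM Labs CDSD/DCC2", "APL NWO 2.5T"]

    def run_shootout(players, vote_fn):
        pool = list(players)
        random.shuffle(pool)           # random draw order, like letters from a bag
        champion = pool.pop()
        while pool:
            challenger = pool.pop()
            champ_votes, chall_votes = vote_fn(champion, challenger)  # 10 votes total
            if chall_votes > champ_votes:
                champion = challenger  # the winner stays on for the next pairing
        return champion

    # purely illustrative vote function: each of the 10 votes is a coin flip
    def coin_flip_votes(a, b):
        a_votes = sum(random.random() < 0.5 for _ in range(10))
        return a_votes, 10 - a_votes

    print(run_shootout(PLAYERS, coin_flip_votes))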

And now for the results:

In pairing number 1, the dCS won handily over the modified Opus 21, 9 votes to 1.

In pairing number 2, the dCS again came out on top, this time against the Meridian 808, 9 votes to 1.

In pairing number 3, the Meitner Signature was preferred over the dCS, by a closer but consistent margin (we repeated some of the head-to-head tests at the request of the participants). The vote was 6 to 4.

Finally, in pairing number 4, the APL 2.5T bested the Meitner, 7 votes to 3.

In the interest of configuration consistency, all of these auditions used a power regenerator to supply power to each of the players, and all ran through a preamp.

This concluded the blind portion of the shoot-out. All expressed the view that the comparisons had been fairly conducted, and that even though one of the comparisons was close, the rankings overall represented a true consensus of the group's feelings.

Thereafter, without blind listening, we tried certain variations at the request of several of the participants. These involved the Meitner and the APL units exclusively, and may be summarized as follows:

First, when the APL 2.5T was removed from the power regenerator and plugged into the wall, its performance improved significantly. (Alex attributed this to the fact that the 2.5T features a linear power supply.) When the Meitner unit (which utilizes a switching power supply) was plugged into the wall, its sonics deteriorated, and so it was restored to the power regenerator.

Second, when we auditioned a limited number of SACDs, the performance on both units was even better, but the improvement on the APL was unanimously felt to be dramatic. The group concluded we had just experienced "an SACD blowout".

The above concludes the agreed-upon results of the blind shoot-out. What follows is an overview of my own personal assessment of the qualitative differences I observed in the top three performers.

First of all, the dCS and the Meitner are both clearly state-of-the-art players. That the dCS scored as well as it did in its standalone implementation is, in my opinion, very significant. And for those of us who have auditioned prior implementations of the Meitner in previous shoot-outs, this unit is truly at the top of its game, and although it was close, it had the edge on the dCS. Both the dCS and the Meitner showed all the traits one would expect of a Class A player -- excellent tonality, imaging, soundstaging, bass extension, transparency, resolution, delineation, etc.

But from my point of view, the APL 2.5T had all of the above, plus two dimensions that I feel make it truly unique. First, the life-like quality of its tonality across the spectrum was spot-on for all manner of instruments and voices. And second, and more difficult to describe, I had the uncanny feeling that I was in the presence of real music -- lots of "air", spatial cues, etc. that simply add up to a sense of realism I have never experienced before. When I closed my eyes, I truly felt that I was in the room with live music. What can I say?

Obviously, I invite the other participants to express their views on-line.

Pete

petewatt

Showing 28 responses by ctm_cra

I am lucky to have been a part of this San Diego event, and I have no vested interest in any of the digital players evaluated. Thanks for the nice summary, Pete. Unfortunately I could not attend the LA comparisons due to a prior commitment.

It is important to note the rationale behind the simultaneous voting via a simple show of hands. This was done immediately after comparing any two players on a given track. We then quickly, without discussion, listened to another track and evaluated the same two players. This approach guaranteed that no discussion took place until the comparison of each pairing was complete. All participants agreed that if discussions occurred prior to voting, comments from more outspoken, influential or respected individuals could potentially influence how others voted.

Both tracks used to evaluate the players are highly regarded reference recordings of acoustic performances, and the recording process used minimal mic'ing and no compression. The first recording features a mezzo-soprano soloist and choral ensemble with cello, oboe, flute, harp and organ accompaniment. The second track is a dynamic orchestral piece with many contrasts of tempo, mood and dynamics, using the full orchestral ensemble as well as varied individual-instrument passages.

As to individual system preferences, we were split with about half using solid state gear while others had tube equipment. A few routinely connect their digital player direct to their amp and the rest use a preamp. This variance may account for the closeness in only one set of results, those between the Meitner and the DCS. However, in nearly all other pairings the results were fairly conclusive.

I'll try to answer some open questions and/or provide additional info...

Nsgarch - The AC regenerator (not merely a passive filter/conditioner) is the PS Audio unit with MultiWave II+ and was set to Sine mode at 120V and 120Hz.

Ghostrider45 & Tedmbrady - Careful level matching was done for each pairing using a test disc with uncorrelated pink noise. Measurements were made with a highly sensitive, calibrated, pro spectrum analyzer with memory, averaging and EQ setting functions. [No EQ'ing was done and no equalizer was in the system. This is a feature on the analyzer that determines where FR adjustments should be made if one were to use an equalizer in their system]. Volume levels were matched in all pairings, except in one where a differential of no greater than 0.5dB was the best we could do.
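
To make the level-matching arithmetic concrete, here is a minimal sketch of how an offset between two players can be estimated from repeated SPL readings of the same uncorrelated pink-noise track. The readings are made up and this is not the analyzer's actual software; it only illustrates why averaging is done in the power domain:

    import math

    def average_spl(readings_db):
        # pink noise fluctuates, so readings are averaged in the power
        # domain, which is effectively what the analyzer's averaging does
        mean_power = sum(10 ** (r / 10) for r in readings_db) / len(readings_db)
        return 10 * math.log10(mean_power)

    # hypothetical example readings in dB SPL from the listening position
    player_a = [86.2, 86.4, 86.1, 86.3]
    player_b = [85.8, 85.7, 86.0, 85.9]

    offset = average_spl(player_a) - average_spl(player_b)
    print(f"player A reads {offset:+.1f} dB relative to player B; trim at the preamp")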

Krisgel & Tedmbrady - For now let's just keep the focus on the digital players, as our group worked really hard at keeping all other variables constant. The PCs, ICs and rack used, including the platforms on which the units stood, etc., were identical for each player. As to the system used, suffice it to say that it is full range and has been tuned, upgraded, improved and tweaked for the last 6-7 years, and the endeavor to dial it in continues. This system is the same musical and well-resolved gear that has been consistently used by our local audio club in prior digital and other comparisons. At 20Hz and at 16.5kHz it is -3dB (-6dB at 20kHz) as measured at ear level from the LISTENING POSITION. Everything in between 20Hz and 16.5kHz is virtually flat, with a +1.5dB bump at 80Hz as the only thing worth mentioning. These measurements were done using the same uncorrelated pink noise track and spectrum analyzer mentioned above. [As an aside, we really should be demanding that reviewers provide similar FR measurements of their systems from their listening positions.]

Tedmbrady - We did not have access to the output impedance values for most of the players. So your question about the front end gear having an "issue (impedance, etc.) with the back end of the signal chain (preamp, amp, speakers)" remains open. However, level matching was strictly applied throughout all comparisons. Additionally, no performance issues with any unit occurred that would have made us look further into impedance or other incompatibilities.

Overall, it was a terrific, fun-filled 6-hour session. Thanks to the participants and especially to their significant others who tolerated our absence during this Saturday event. We had doubts about being able to get through all the comparisons across 5 players, using two different recordings, and doing so in a consistent, level matched and blind process. We not only accomplished this, but also had time for additional experimentation and listening. I find it very interesting that the results in a different system using completely different recordings mirrored those of the blind comparisons.

Tbg - Two of the five voters own APL-modded Denon 3910 players. Alex was there because he brought the NWO 2.5T, but he did not vote, provided no commentary during the comparisons and was not involved in the setup.

Because of the above, the player that started each pairing was scrambled/varied so as to keep the voters on their toes. This random selection of which player goes first when using either track 1 or track 2 was applied after the first pairing, where it was obvious which player was in use due to its location on the rack (even though it was covered and the voters could not see it). So after round 1 no one was really sure which player was playing, regardless of where each unit sat on the rack. You can imagine how careful one must be to accomplish this; thanks to the aid of a couple of blankets, and remote controls when available, we managed it. In other words, we loaded both players simultaneously and started and stopped them similarly. Only one player had the CD of interest and was connected to the preamp. The other was connected but was playing a different disc as a "dummy" unit. We even switched the input connections for the players to be sure that there was no consistency in how one player was connected to the preamp. And because the units were kept covered, the voters could not see the front panels, which would have shown which player had which track. It would have been great to have two copies of each CD we used. Please let us know if you have suggestions on how the process can be improved.

The order the following day (when I could not attend) was completely different from that on Saturday. Despite this, their results were similar to those of the blind format.

As to the split votes, we really did not have time to hear out why some voters liked one player over the other. The close results between the DCS and Meitner, for example, could easily have gone the other way with a different group of voters. However, in two of the pairings - DCS/Opus 21 and DCS/Meridian - the results were virtually conclusive. The DCS is quite a player. The latest version of the Meitner performed well enough to earn the votes it received against the NWO. Together with the NWO 2.5T, these three were in a performance class above the Opus 21 and the Meridian units.

Bob_reynolds - Level matching was done for each pairing using an uncorrelated pink noise track from a test disc played through each unit. Measurements were made using a pro spectrum analyzer taking SPL readings from the listening position. The audio chain is as follows: CD, pre, amp, speakers.

Tbg - The APL unit with its volume control and line input capabilities could replace my current digital player, preamp, 2 PCs and IC. However, I would still need at least another $10k to afford either of the top two players if I were to sell all this gear. This is too expensive for me.

I quickly put the same tracks in my system as soon as I could after the event to get a feel for how my set up fared against what I heard just 30-40 minutes earlier.

It did not take long, about two bars of music on either track, to notice the refinement in transparency and the smooth resolving ability the top three players had compared to my digital player. This was particularly evident in the mid-high to HF range. I have a maxed-out Philips SACD1000 that was modded by APL. I did not start out looking for an APL player; in fact I originally wanted the Opus 21 or the Exemplar 3910, but the deal on the Philips was too good to pass up, so I got it instead. The SACD1000 is quite capable and is competitive in image focus, soundstaging ability and tonal balance, with an engaging combination of mid and midbass articulation and body, and good bass extension and dynamics. Unfortunately it just does not have the mid-high to HF performance that the top three units had in our blind sessions.

As to my take on the comparisons, the DCS had a more lively/dynamic presentation than either the Meitner or the APL, which is probably why I liked it better than the Meitner on the quieter choral piece. My vote changed to the Meitner when the orchestral track was played. Despite a less deep soundstage compared to the DCS, its smoothness and more natural presentation of instruments won me over. [This is where the latest version of the Meitner really shines compared to what I remember of the previous version.] The DCS and APL presented a better delineated and deeper soundstage than the Meitner. The Meitner and especially the APL were better at presenting more believable, natural tonal and harmonic characteristics of voices and instruments.

So wish me luck on my quest to find a more affordable digital player that can come close enough to the DCS, Meitner and APL.
Shadorne - "The fact that there were audible differences in such high quality players is really scary."

As you noted, the results were definitely audible. I do want to say that the sonic differences, though audible from one player to another, were not night and day differences. However, all present were able to identify which player they liked better, thanks to a very resolving system in San Diego.

The same differences from one unit to another were also discernible the following day in a different system using a different recording.

These are some of the very best digital playback units - the total retail cost of the five players involved is in the neighborhood of $80-90k USD - and I was glad to have been a part of it. However, sonic differences from one unit to another are independent of the prices of the gear being evaluated. Sonic differences should also be heard among more affordable units. Similarly, I would expect to hear differences between cartridges mounted on the same tt and arm (loading issues aside), regardless of whether you are comparing budget/entry-level units or the very best transducers. Ultimately you are dependent on the system used and its full-range reproduction and resolving capability.

Bob_reynolds - I am definitely no expert in this area. So please pardon any of my obviously ridiculous and laughable remarks.

The SPL analyzer we used can measure to 0.1dB accuracy, and the preamp was able to adjust level offsets for each unit; it has a high quality ladder attenuator using well-matched Vishay resistors. Experience from past evaluations told us that we might have to settle for a level discrepancy of as much as 1dB. However, in this shootout we got really lucky. On 3 of the 4 pairings we got virtually matching levels, thanks to the averaging function of the SPL meter. Only one pairing was off, by an average of as much as 0.5dB, and this was as close as we could get. In this pairing it was interesting that the player with the lower volume setting won decisively with a 9-1 vote.

Thanks for your tip on measuring voltages at the speakers. We will try this. However, due to the very nature of uncorrelated pink noise (UPN), I would suspect that the voltages will fluctuate just as the SPL readings do. When measuring voltages at the speakers using UPN, I know of no voltmeter that has an averaging capability, yes? Level matching to within 0.1dB is extremely difficult; what source material is used to achieve this? If it is a pure tone, then we have another problem of using a limited/narrow frequency range. One of the advantages of SPL readings using UPN is that a broader spectrum of frequencies is measured. Have you experienced situations where, even though the voltages were matched at the speakers, the volumes still differed at the sitting position when music was played?
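
For what it's worth, here is a minimal sketch of the arithmetic behind the voltage method, assuming a true-RMS meter and using made-up readings: a level difference in dB is 20*log10 of the ratio of the RMS voltages at the speaker terminals.

    import math

    def rms(samples):
        return math.sqrt(sum(s * s for s in samples) / len(samples))

    # hypothetical voltmeter readings (volts) taken over time with UPN playing;
    # averaging several readings smooths the pink-noise fluctuation much as
    # the SPL analyzer's averaging function does
    v_player_a = [2.83, 2.85, 2.81, 2.84]
    v_player_b = [2.69, 2.71, 2.70, 2.68]

    delta_db = 20 * math.log10(rms(v_player_a) / rms(v_player_b))
    print(f"player A reads {delta_db:+.2f} dB relative to player B")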

Guidocorona - If this opportunity comes up again, please let us know who can provide a TEAC P03/D03 combo. It would be a treat to hear this player.

Burn-in on the APL player is on the order of 300-500 hours because of the extent of the modifications. Alex confirmed that he had about 350 hours on it prior to our evaluation. The Meitner's owner also confirmed that it has surpassed the burn-in time requirement. The DCS and Meridian are regularly used, but I have no specific info to provide. The same goes for the Opus 21.

Jfz - You bring up a very intriguing proposal...

"Wouldn't it be great if each of these players (or at least the top three) could be used in each of the 5 individuals' systems for a week or so? I'd love to hear their thoughts after that."

I would gladly participate in this exercise. So Alex and the owners of the Meitner and DCS, please consider letting your units go for a week at a time to allow the voters of the blind comparisons to do more listening experiments and additional comparisons across a broader spectrum of quality acoustic recordings... let me know ;-)

As to the warm-up of players, the time constraints of evaluating 5 players meant that each unit had only about 20-25 minutes to warm up from when it was placed on the rack and powered up, through level matching, disc loading, double-checking connections, blanketing the setup for the blinded eval, and gathering the group back into the listening room. The advantage went to the DCS player, which had the most warm-up time, as it remained in the system against the first two players. Interestingly, the Meitner was only on for about 20-25 minutes (vs. greater than 2 hrs for the DCS), and it beat the DCS. Similarly, the APL unit was only on for 20-25 minutes (vs. greater than 1 hr for the Meitner), and the APL won.

As to potential interactions of having other players plugged in simultaneously... The PS Audio P300 has a set of two well-isolated duplex AC receptacles. We made sure that the players were plugged into different duplexes. The players were one rack position away from the preamp, one above and the other below. Each player was no less than 9 inches away from the preamp, which has an outboard power supply. The input selectors on the preamp have superb isolation and careful attention was taken by the designer to match the inputs, down to carefully matching the components used (including the lengths of wiring). Throughout the sessions, we heard no sonic anomalies that would lead us to investigate any system issues having to do with two players connected and playing simultaneously.

Scottr - During the experimentation phase, the group unanimously preferred the APL over the Meitner using redbook. The difference between this and the blinded comparison (involving the same two players) is that the Meitner was connected to the AC regenerator while the APL unit was plugged directly into the wall, and both units were connected to the amps. We then switched to an SACD recording, keeping the optimal AC connection the same as in the redbook comparison, and unanimously concluded the same.

In another previous post I mentioned that the DCS had a more lively/dynamic presentation than either the Meitner or the APL. Recall that this took place during the blinded comparisons, when all players were connected to the AC regenerator. What would have been great to do, if we had more time, was to also experiment with the DCS to see if it was better when plugged directly into the wall or into the PS Audio regenerator. Next, it would have been good to compare the DCS at its best AC connection against the APL plugged directly into the wall to see if the gap in liveliness/dynamics remained. My recollection of the DCS performance relative to the wall-plugged APL, and even later when the APL was directly connected to the amps, was simply too distant for me to make a definitive call. Am I asking for another session to take place? You bet I am.

Pete mentioned above that "when we auditioned a limited number of SACDs, the performance on both units was even better"... I find it difficult to agree with this because we did not first play the CD layer and then adjust the player to play back the SACD layer of the same disc. It would have been great to do this. Until I hear it for myself I cannot draw the conclusion he made.

Essentialaudio - Two sets of evaluations took place: a blind, level-matched comparison in San Diego, and a non-blind, level-matched session in LA.

I take this opportunity to commend the voters who participated in both sessions. I know the systems they each use at home, and they differ significantly from the system used in either session. Despite this (and their differing tastes in music), I appreciated that they really gave the evaluation of each pairing their honest appraisal. We all knew that not doing so would taint the results.

I have noted in a previous post and will do so here again that I purposely left out system details so as to keep the focus on the players and the results. The opportunity to repeat these comparisons will present itself and we’ll definitely take into consideration forum member recommendations.

We did not set out to create the most scientific of comparisons, but instead made the most of what we had to work with while making sure that we had as level a playing field as possible for each player involved. It just so happened that the fellowship and hospitality ended up being very enjoyable too.

We also did not seek to create a definitive set of results that could be extrapolated to be relevant to your or anyone else's system. In the end, the results of the San Diego blind comparisons reflect the opinions of 5 voters on two great classical recordings played through 5 highly regarded digital players in a resolving, full-range system on that day.

Similarly, the LA results reflect the opinions of 4 voters (two of whom did not participate in the San Diego comparisons) on one great jazz recording played through 3 highly regarded digital players in a completely different full-range system the following day.

I’d do it again in a heartbeat!

Shadorne -

"but the shootout suggests large and earth shaking differences in performance from several extremely expensive and extremely high quality players"...

The results of both sessions, particularly the blinded comparisons, do not suggest this at all. There were audible differences, and these differences were certainly noticeable enough for each voter to confidently select which player he preferred in the four pairings conducted. As noticeable as these differences were, they were not night and day, and certainly not large and earth shaking.

"I read the conclusion again...it attributes
all the qualities of the audio at the listening
session to the CD source (as if nothing else
influenced the sound; speakers, music selection,
room acoustics, amp, listener preferences...)"

Now you know why we asked the voters to give their simultaneous votes in the form of a simple show of hands. Asking them to provide commentary leads to other issues, and I addressed this in detail in my first post on this thread. It can also lead to misinterpretations or inconsistent conclusions on the part of forum members.

Certainly the downstream components in the system used have a certain sonic character, but this was consistent across all players. It is interesting to note that there was not an instance when a voter said that one player was thinner or warmer than another, suggesting that tonality is not really the area of greatest discernible difference between these players. Having no tonality issues across two very different systems is a compliment to the designers of each of the players we evaluated.

If I had had the chance to speak to Pete prior to his initial post, I would have asked him to simply report the results and refrain from giving his personal assessment. Water under the bridge...

Instead of trying to get an understanding of Pete's assessments, I can only attempt to steer you to focus more on the overall voting results on day one and day two. Please see my response to Essentialaudio; at the end of that post I attempted to summarize what this all means.

CORRECTION for Guidocorona - The APL NWO 2.5T Alex brought with him had 200 hours of burn-in as a 2.5-only model. As a 2.5T it barely had 30 hours.

Musicfirst - Below are the details on the recordings selected for the blind comparisons.

Tvad - It is my understanding that you may be near our neck of the woods. You will have to join us next time. If you do, please promise to bring the recordings of the "little tabla and kalimba ditty, followed by some rollicking Bossa Nova" ;-)

For the SD blinded comparisons we used:
1) Pie Jesu (track 7) - Requiem by Rutter, Reference Recordings, RR-57CD
2) Overture to Candide (track 1) - Bernstein, Reference Recordings, RR-87CD

For the experimentation we continued to use 2) above plus the following SACD recording:
Symphony #4 - Mahler, San Francisco Symphony 821936-002-2

The following was used for the LA comparisons (amazing how close Tvad got):
Tubby - True Stereo, Unprocessed Analog Recordings, Naim LC: 0794

Tlday wrote -

"Interesting that the last to be tested was the preferred. Results might change if the order was different. Psychology always affects perception. Also, once the group decided that the last choice was best, was this player then compared back against all of the previous? If A if preferred to B, B over C and C over D, it is not 100% sure that A will be preferred over D."

Your points are spot on! Sorry for not addressing them sooner. If we had had more time, we would have gone back to compare the APL to the Opus, Meridian and DCS. If we do this again, we will do just that.

Ryder wrote -
"The best and most expensive will always be saved for the last, and it usually comes out tops."

The most expensive player evaluated is the latest version of the Meitner, and it did not win. Also, recall that each player was assigned a letter code written on a piece of paper and placed in a bag. The order of the players was determined by the consecutive order of letters pulled from this process. By chance it turned out the Meitner and APL were last. None of the voters knew the letter assignments, however, until after the comparisons were complete.

The premise of the most expensive item being last and winning has not held in previous level-matched digital comparisons we've done. From my own personal experience, it has also not held with speaker cables, ICs and PCs, phonostages and amps. It just happens to be true for cartridges and preamps, but this is my experience only, and of course the quest for excellent, high-value products continues. ;-)

Arthursmuck - A few participants in this comparison have evaluated the Opus 21 previously. In stock form it has outperformed an older version of the Audio Aero Capitole, an Audience-modded Denon 3910 and a stock Esoteric DV-50. It also competed surprisingly well with the older version of the Meitner DAC6/CDSD. So we wanted to know how well the latest version, with Steve Huntley's Great Northern Sound Company reference mods, would do against top-shelf players. You have every right to be proud of your little guy!

Metralla - I can only account for the blinded comparisons in SD since I could not attend the LA session. Our approach was as you indicated. Throughout most of our listening evaluations, Alex stayed out of the dedicated listening room, which was set up only for the 5 voters. He often waited in the adjacent kitchen area, and he was to provide no comment or any vocal expression until the blind phase of the comparisons was complete. He occasionally peeked in to have a listen, but was still out of the line of sight of the voters.

We confidently proceeded primarily due to the following logistical details:
1) blind comparison format in which the voters did not know which player was being used,
2) assignment of letter IDs for each player, which were not revealed until the completion of the comparisons, and
3) immediate voting process via simple show of hands (without discussion) after each track per pairing.

Alex, or any other manufacturer, who is willing to put his product up against the very best and allow others to try to objectively evaluate it is always welcome. As an FYI, we recently hosted Raul Iruegas. He presented his Essential 3150, a superb full-function preamp, to our audio club and stayed on to allow club members to evaluate it in their systems. Like Alex and Nick Doshi, Raul wears many hats as designer, manufacturer, distributor and dealer. Because of the latter two roles he is solely responsible for showing his products. We thank Raul for providing the chance for us to objectively evaluate it against our own preamps and phonostages, and we are equally grateful to Alex for giving us the listening opportunity with his player.

I appreciate your understanding of this and taking the time to comment on these comparisons.

Jfz - Nice approach you have. We did a mixture of this actually.

During round 1 we learned a number of things. Important among these was the fact that the voters (although they could not see the players) could tell which one was being used because they could see which one received the CD we wanted to hear. So we decided to mix things up, as detailed in my previous response to Tbg. Although we kept evaluating the choral track before the orchestral recording, we mixed up which player started each pairing, and THIS WAS DONE FOR EACH TRACK. Thus, for any pairing being evaluated, we did not necessarily begin with the same player for the NEXT CD used. After round 1, the voters did not know which player was playing at any given time.

I wanted to address your comment about hearing more on subsequent listens to a recording. I agree, and some would also say that their focus changes during the second or third tries compared to the first time they hear a recording. There was really no way to address this equally for all voters. Three of the five voters know the Rutter piece very well, as we have used it in previous evaluations. Only two of the voters know the Bernstein recording. We felt it was important for each voter to have many opportunities to hear each track so they could confidently cast their votes. Here is a little more detail on our listening process for EACH TRACK...

1) With the correct input selector and volume levels set, we loaded both players, one with the test CD, the other with a dummy CD. We then listened to a predetermined point on player A. For the Rutter piece this was at the 2:08 mark and at 2:44 for the Bernstein piece.
2) Rewind and listen again but for a shorter time. Up to 1:13 for Rutter and 1:38 for Bernstein.
3) We asked if any voter needed additional listening and, if so, we would repeat 2) above
4) Unload both players, switch CDs (the location of the one being tested is not revealed/visible to voters)
5) Reload both CDs and select the appropriate line input of the preamp and make the necessary volume adjustments as predetermined by the level matching done during the set up for both players.
6) Repeat 1, 2 and 3 above for player B.
7) We asked if any voters wanted to go back to player A, and we would repeat steps 4, 5, 1, 2, and 3 for everyone. This option was done only for one pairing – the DCS vs. the Meitner.
8) Immediately vote with show of hands (no discussions)
9) Repeat the process for the next recording.

One more thing to note: for subsequent pairings, and even during each pairing, we also switched the preamp input to which each player was connected. Even though we were assured by its manufacturer that Line 1 is identical to Line 2 in every way (materials/parts as well as specs), we wanted to vary this too, just in case ;-)
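
If it helps to see the sequence at a glance, here is a rough, hypothetical encoding of the per-track procedure above. The cue points are the ones named in steps 1 and 2; everything else is an illustrative label, not software we actually used:

    CUES = {
        "Rutter":    {"full": "2:08", "short": "1:13"},
        "Bernstein": {"full": "2:44", "short": "1:38"},
    }

    def steps_for_track(track):
        c = CUES[track]
        return [
            "load test CD in player A, dummy CD in player B",
            f"play player A to the {c['full']} mark",
            f"replay player A to {c['short']} (repeat on voter request)",
            "unload, swap discs out of the voters' sight, reselect preamp input, apply level trim",
            f"play player B to {c['full']}, then to {c['short']} (repeat on voter request)",
            "optionally return to player A on request (done once, for the DCS vs. Meitner pairing)",
            "immediate show-of-hands vote, no discussion",
        ]

    for step in steps_for_track("Rutter"):
        print("-", step)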

Newbee – You ask some important questions. All five voters sat in the same seats throughout all comparisons, so the perspective of each listener never changed.

Certainly none of the side positions is as revealing as the sweet spot, which can accommodate two people -- one in front of the other. However, we spent a considerable amount of time in advance of the event positioning the other seats so that acceptably focused images and a convincing soundstage are perceived from the other three positions -- two on either side of the sweet spot in the back row and one to the side in the front row. None of the voters raised concerns about a lack of image focus or not being able to hear dimensional details. Later, when discussions were allowed, the three voters sitting at the side positions were surprised they were able to tell each player’s ability to present a focused image and believable soundstage. These side perspectives may not be correct, but it was the best we could do given our time constraints.

I do not know if the other voters prefer nearfield listening. However, I know they can easily recognize good, focused imaging and excellent soundstaging when they hear it. Still, stereo imaging and soundstaging capabilities, although important, are only two of the many criteria each voter had to keep in mind as they listened. In fact we did not pre-discuss or define these sonic parameters. We simply asked each voter to listen, compare, and honestly and confidently cast a vote as to which player they liked in each pairing.

The speakers used are neither dipoles nor horns. The drivers are not horn loaded, do not use ribbon tweeters, and have excellent off-axis response. It is appropriate at this point to provide the room dimensions (hopefully this info partially addresses other members' curiosities):

width - 12 feet
length - 15 feet (see * more info)
front row - ~9.5 feet diagonal from the front of each speaker
back row - ~11.5 feet diagonal from the front of each speaker

The wall behind the seats has a central window, which is covered with 2 layers of fairly thick curtains. The floor is wool carpeted with foam insulation underneath and this is supplemented by another 6x8 ft area rug on top. The cement foundation is underneath the carpet. Spikes are used at the rear of each speaker to couple them to the foundation. A single Finite Elemente Cerapuc is used in front of each speaker as a vibration control treatment.

*There is no wall behind the speakers, and this contributes greatly to this system's superb imaging/soundstaging. The lack of a wall behind the speakers also partially contributes to a flat FR measurement from the sitting position. The room node interactions are negligible: +1.5dB at 80Hz and flat at nearby frequencies. My very first post includes details of the very good in-room, from-the-listening-position SPL measurements. These data also make clear that there is no “slight mid-range recession” or “slight elevation of the high frequencies”.
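
As a back-of-envelope cross-check on those room-node numbers, the axial-mode frequencies of a 12 ft by 15 ft room can be estimated as f = n*c/2L. This is a rough sketch only; the open wall behind the speakers changes the real behavior considerably:

    C = 1130.0  # approximate speed of sound in ft/s

    def axial_modes(length_ft, n_max=4):
        # f = n * c / (2 * L) for the first n_max axial modes
        return [round(n * C / (2 * length_ft), 1) for n in range(1, n_max + 1)]

    print("width 12 ft: ", axial_modes(12))   # ~47, 94, 141, 188 Hz
    print("length 15 ft:", axial_modes(15))   # ~38, 75, 113, 151 Hz
    # the second length mode (~75 Hz) lands near the measured +1.5dB bump
    # at 80Hz, consistent with the mild node interaction noted above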

I do not know if there is “shortening of the decay time of the signal (imparts a fast sound and a clarity due to the chopping off of the trailing edge of the signal)”. Please pardon my obvious lack of technical knowledge, but I can only guess this is more perceived than measured, yes? This system has never been described as fast, slow or muddy. Besides its superb imaging, soundstaging and layering/delineation capabilities, it is also dynamic and articulate, while having a tonal balance that results in a believable representation of real instruments and voices. These, along with the system’s overall musicality and resolving ability, are the reasons we keep using it for our comparisons.

I cannot confirm that “the excellence of the sound is simply the absence of any distortions”. We never measured this system in that regard, so we have no meaningful information to share. Suffice it to say that there is distortion (what system doesn’t have some?), but none of its symptoms has ever noticeably/audibly surfaced. We’ve used other systems in the past, so this is not the only one with which we have experience. However, in the four years we’ve done comparisons using this system, no one has ever said anything that would lead us to investigate whether distortions are an issue.

As to the type of listening fatigue I think you described, not one of the voters mentioned anything having to do with system edginess or harshness. Another member already raised a concern about doing careful AB comparisons for 6 hours. We took plenty of breaks in the kitchen and family room areas while the set-up, level matching and blinding were being done for each pairing. Fatigue of a different kind eventually set in. We would have kept going were it not for one voter having to leave, another needing to join his family, and the rest of us wanting to go out and get steak ;-)

Nilthepill & Tvad - Nice try, and I completely understand. However, if the DCS, Meridian, Meitner or Resolution Audio player had been the winner, this thread would read like an ad in its favor.

BTW - You have just volunteered your systems to be the site for the repeat blind AB comparisons...

BWAHAHAHAHAHAHA!!!

Tvad wrote -

"Just out of curiosity, why is it that Petewatt authored the thread and has not been heard from since, and Ctm_cra is answering all the questions concerning the session?"

Great question! Our buddy Pete is frolicking in the sun somewhere as he travels. He left the day after his last post and asked that I follow up and respond to the questions. Pete said he would check in from time to time as his ability to access the internet allows. At least two other participants besides me have already chimed in. I can step aside if you like, but it may be a while before Pete can post.

If you had been with us, you too could answer the questions, which would be great actually. I could use a break, especially since there appear to be a few members interested in hearing about this event. So next time we'll save a spot for you ;-)

In fact, we are currently planning our next blind AB comparisons. Will fill you in on the details once the test subjects and logistics are ironed out.

Kind regards.

Socrates - Thanks for your post. I would really like to respond. However, I just do not quite know how to [no joke, and with a very sincere approach] without the risk of this thread going off on tangents that, although related, would take the focus away from what Pete and our group intended in sharing our findings with this community.

You raise some VERY IMPORTANT topics:
1) digital gear that recording engineers use (i.e., the gear on which the music was created, mastered, mixed, engineered, approved, etc.),
2) what the engineer intended,
3) coloration-of-choice,
4) pro reviewers who actually engineered the track.

To these I add the following:
A) on a given track, if the mastering engineer is different from the recording engineer, whether or not they share the same goal(s)
B) one's definition of the "live event" (as simple and as personal as this is, it really needs to be further clarified/defined by each audiophile)
C) one's goal in setting up his 2 channel system relative to how one defines B) above.

I have great interest in all these topics and the potential controversies surrounding them. So if you are in the southern California area, please email us and schedule a time to listen to some great music, perhaps attend a live event, grab some chow, have some wine (or microbrew or fine port, tequila, rum or single malt), and continue to discuss these excellent topics.

I do want to say that in sharing the process and results of our comparisons with members of this forum, our goal is NOT to declare some absolute '"truth" and absolute "facts" on "the best" and "winner[winning]" pieces of gear'. Additionally, none of us has claimed, in this thread or elsewhere, to be 'some great audiophile expert, a golden ear, or an all-star “truth-hearer” in the "field"'. As to what all these comparisons ultimately mean, my take appears in the last three paragraphs of my reply to Essentialaudio on 03-02-07.

We would, however, like to accomplish more than just amuse or entertain forum members. In fact a secondary, but VERY IMPORTANT, purpose of ours is to obtain suggestions on what improvements can be implemented when we do this or similar future blind evaluations. Some of the above discussions have helped us in this regard. So we look forward to hearing your and the other members' recommendations.

Kind regards.

Greetings,
I wanted to thank the forum members who independently contacted some of the participants involved in this comparison. They provided excellent suggestions we can use in our next blind evaluations.

Pete says hi. He contacted me from a cruise ship, from which he admits, "It's very difficult, expensive and slow to stay [on the internet]".
Best regards!

Dear Rackon - We did not set out to find a $500 player that "trounces" the Meitner or APL. Our goal was to compare top-of-the-line CDPs.

As an FYI, a non-voting, non-manufacturer participant did compare an Opus 21 player (and I do not recall at the moment if it was stock or modded) to the Reimyo, and he preferred the Resolution Audio player. The Opus 21 is the same player that outperformed the players specified in my 03-05-07 post to Arthursmuck. So we had no problem including the Great Northern Sound modified version in the lineup.

Having said that, it would have been great to have included the Reimyo, the latest Audio Aero Capitole, the best offering from Ayre, etc. So this could mean that another evaluation session is in order. I know someone already mentioned the top of the line Teac units and the single box Meitner. Feel free to recommend other top shelf players, including transport/dac combos.

Eljaro - Your points regarding value and relative priorities when it comes to making audio purchases are well taken. I previously mentioned that I cannot afford any of the gear we compared (the least expensive unit being just under $6k), but I and the rest of the participants could not pass up the opportunity to hear them against one another. Our comparisons were not an exercise in addressing affordability, nor were they an attempt to justify the prices of certain components.

Hens - We were crazy enough to evaluate the top-of-the-line digital players that we could get our hands on. This also partially explains why other top quality players were not included. We now have a significant list of other quality players, and we will consider adding the Sony DVD unit when we get another opportunity. Anyone within driving distance of SD is welcome to join as a host, moderator, guinea pig, set-up guy, or observer (must bring a CD player to be evaluated).

Regards!

Ramy - Thanks for providing a long-term perspective on the three highly regarded units you own. Please post your findings if you have the opportunity to spend significant time with a dCS, APL, or other top-of-the-line offerings from TEAC, Ayre, Audio Aero, Sony, etc.

Here is another idea: since you already have three excellent digital players, are you willing to host a similar blind comparison? The other participants can bring one or more of the top three players we ranked, and perhaps others that were not available to us at the time. If we revisit this, we will try to include the Zanden and Reimyo units.

Regards,

David12 - It is great to hear that you will compare phonostages in a similar manner. I will certainly look forward to the results. Perhaps I could be swayed to make my first UK visit. On the other hand, you may want to consider SD as an alternate location :-)

Raquel - Thanks for the vote of confidence. Perhaps you and others can assist me with a current dilemma. See here:

http://forum.audiogon.com/cgi-bin/fr.pl?eanlg&1177856152&openflup&38&4#38

Best regards!

Eljaro - No argument here when it comes to your ears, eyes (yes aesthetics matter) and your wallet being the ultimate judge of an audio component. I think this is well understood... but perhaps not well practiced (myself included) ;-)

I am sure portions of the following have been published in print and online. Pardon the lengthy approach but your post provides a great opportunity to discuss the importance of software selection.

So when it comes to our use of only two pieces of well recorded (minimally mic'd, uncompressed, no “effects” applied, careful mic placement, etc.) acoustic pieces to compare the equipment, I agree that a broader representation of music would be more ideal. However, the logistics of the event simply prevented the introduction of one more recording. As simple as this would have been, we barely got through the two selections we had to work with, and this took nearly 6 hours to complete. We were lucky to have some time for additional experimentation after we completed the blind comparisons.

As to fatigue, I already posted that we took plenty of breaks since careful level matching needed to be done prior to evaluating each pair of CD players.

Having said all that, the participants agreed upon the two selections ahead of time. There were a number of reasons for going this route. We could have chosen any acoustic performance, but we selected classical because it covers a broader spectrum of instruments and voices than, say, a guitar/vocal piece. It could also have been a highly regarded album from an outstanding recording studio project; however, only a couple of participants have ever been in a studio, and only one of us has ever experienced a recording session (in both the control and recording rooms). We could also have included a live jazz, folk or rock album, and there are many good ones. However, most of these are recordings of amplified instruments and voices through PA systems… just way too many variables to account for.

Further, our concern is that the tricks (mixing, multi-mic'ing, compression, reverb, etc.) applied to most studio and live projects often result in an unnatural sonic presentation, making it tough to evaluate what sounds “real”. Don’t get me wrong, there are studio-recorded albums that are outstanding and thoroughly enjoyable, and it would be interesting to see how the results compare when evaluating these players with such recordings. However, even the very best of them would not be my first choice if I had limited time to compare CD players or any other gear.

If an audio component or system reproduces well-recorded acoustic music in an impressive and believable manner, then I know I am going in the right direction. HERE IS THE MOST IMPORTANT PART: You will only know if the component or system is performing well if you go to enough live performances that you can relate those experiences to what you are hearing from your gear. In this way you can more accurately evaluate the quality of the recording, the naturalness of the instruments and voices, and the nuances (harmonics, textures, dynamics, micro and macro passages, phrasing, etc.). Furthermore, when a system gets well-recorded acoustic music right, I find I can better evaluate CDs from studio recording efforts (not the other way around). In this manner it is the live event, or how you experience it, that becomes the reference, not the recording; and it is the combination of keeping this live-event perspective in mind and the well-selected recordings used that will reveal the less precise, more analytical, and generally inferior players.

Too often audio gear is selected based on what makes our favorite recordings sound good, while the listener rarely goes out to a live concert. Despite owning different systems and having varied musical tastes, all of the participants have been to multiple concert and symphony halls and cathedrals. The orchestral and choral (with small instrumental ensemble and organ) pieces selected served us well as the means for the group to evaluate the performance of these players.

Kind regards,