Clever Little Clock - high-end audio insanity?


Guys, seriously, can someone please explain to me how the Clever Little Clock (http://www.machinadynamica.com/machina41.htm) actually improves the sound inside the listening room?
audioari1
Zaikesman and Tonnesen, as you know, tests of statistical significance are sensitive to how big the sample is. With a sample of 25,000, any relationship will prove statistically significant. Since we are all too willing to say that a relationship that is statistically significant is also significant, we are in danger of saying, as you think I am saying, that you can prove anything with statistical significance tests. These tests were developed to answer the simple question of whether an unusual random sample from a population where there was no difference could have given us sample results where there is a difference.

In the tests that you both propose as to whether subjects hear or don't hear that the CLC is present, a large sample of say 10,000 would achieve statistical significance even were there no difference, although I would not predict in which direction, such as whether the CLC helped or hurt.

I am not being anti-science or anti-logic; I am merely saying that such tests may not be a valid method to prove or disprove whether the CLC does anything. I am also saying that those who claim it does nothing cannot claim the high ground by saying that those hearing a difference are delusional, as those hearing no difference may also be affected by prior conceptions.
Tbg: I'm still not following your point about sample size, but it sounds like you are saying that it is possible to manipulate statistical methods to prove anything one wants to by the way one chooses the sample size...?

If the effect of the CLC were profound, as some people claim, you could prove this with a very high level of confidence with a very small sample size - there is roughly a one in two million probability that someone could correctly guess, purely by chance, whether the CLC was in the house in 21 consecutive tries.
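As a quick check on that figure (a minimal stdlib-Python sketch, purely illustrative and not from anyone's post): the chance of guessing a hidden 50/50 condition correctly n times running is (1/2)^n, which comes to about one in a million for 20 tries and about one in two million for 21.

```python
# Chance of correctly guessing a hidden 50/50 condition n times in a row.
def p_all_correct(n: int) -> float:
    return 0.5 ** n

print(p_all_correct(20))  # ≈ 9.5e-07, about one in a million
print(p_all_correct(21))  # ≈ 4.8e-07, about one in two million
```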

A larger sample size is only required if the positive effect is very small, in which case you need a large sample size to show that the small benefit is statistically significant.

I expect that 10 tests would be enough to convince most people that this device does not have much benefit, and a larger number of tests would only show with increasing certainty that it has no benefit at all.
Tonnesen, you are actually arguing against statistical significance, but you are right. If you get a very large random sample of people, there will be statistically significant differences heard between with and without the CLC. But you are wrong that 10 subjects should be enough to convince people that the device has no effect. If you had 10 non-randomly chosen individuals with "good ears" you might well question whether their hearing a difference can be generalized to all listeners. Similarly, were you to have 10 who doubt the benefit, others might well legitimately question your findings. Even with 10 randomly chosen individuals, much would depend on the strength of the treatment effect.

I am not arguing that one should not attempt such tests, but I am arguing that they may not necessitate others heeding them as proof that the CLC does nothing.
Tbg, I'm afraid I hope you're more clear (and relevant, I dare say) with your students...

From what I can tell though, you've got at least one thing basically wrong there:

"I am merely saying that such tests may not be a valid method to prove or disprove whether the CLC does anything"
To me this isn't correct. If enough trials are run, with the clock randomly inserted or removed from the listening environment without the subject seeing which condition it is (I think at least 30 trials would be preferable, which could be divided between 2 or 3 different sessions on different days), its absence or presence should be correctly reported at a rate significantly higher than just 50/50 chance if it's actually doing anything like what the believers maintain. If on the other hand the results hewed pretty close to an even 50/50 split, it would be strongly indicative that nothing is audible.
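For what it's worth, the cutoff for a 30-trial run can be computed with a short stdlib-Python sketch (my own illustration, not from the post): under pure guessing, about 20 or more correct out of 30 is needed before the result clears the conventional 5% significance level.

```python
from math import comb

def p_at_least(k: int, n: int, p: float = 0.5) -> float:
    """Probability of k or more successes in n independent trials."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Under pure 50/50 guessing, 20+ correct out of 30 happens under 5% of the time,
# while 19+ correct is still about a 1-in-10 fluke.
print(p_at_least(20, 30))  # ≈ 0.049
print(p_at_least(19, 30))  # ≈ 0.10
```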

This is a different question than whether whatever the clock may do is 'good' or 'bad', which is a subjective judgement, and not important as to whether the thing can be *detected*. For illustration, I just flipped a coin 30 times, twice. The first set of 30 I got a 17/13 split, the second set a 16/14 split. Whether the splits favored heads or tails is unimportant (as it happens, it was one of each), the relevant point is that my splits only deviated from the mean of 15 by 1 or 2, strongly indicating random outcomes.
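The coin-flip illustration checks out numerically (a small stdlib-Python sketch of my own; the 17/13 and 16/14 splits are from the post above): a 30-flip split within two of the 15/15 mean is in fact the most likely kind of outcome for a fair coin.

```python
from math import comb

def p_split_within(delta: int, n: int = 30) -> float:
    """Probability a fair-coin split lands within +/- delta of n/2."""
    half = n // 2
    return sum(comb(n, k) for k in range(half - delta, half + delta + 1)) / 2**n

# Splits like 17/13 or 16/14 are entirely unremarkable for a fair coin:
print(p_split_within(2))  # ≈ 0.64, i.e. within 2 of 15/15 roughly two times in three
```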
Oh, you're talking about random choosing of listeners? I don't think this is necessary, or even desirable. I'd rather limit the listener pool to audiophiles who believe in the audibility of these tweaks.
I have found that even loose testing conditions minimize my ability to hear differences that I previously thought obvious.

Recently, I did some blind testing with a neighbor who has a Krell integrated amp. It drives me nuts, so we dropped in a Modwright 9.0 SE, using the Krell as amp-only. We were both immediately floored. The sound, to me, was soooo much better. He started talking about buying one.

Then, I left it with him for a couple weeks. He did many A/B tests and determined the differences were extremely minor. He blind-tested me and I was fairly ineffective in picking which arrangement was working. He decided the Modwright didn't improve his system and wasn't worth the money.

What does this indicate? Well, my visits to his sound room are again rife with dissatisfaction. The etchy glare is back and I don't really like going over there to listen. Yet, the tests failed to show differences that were obvious in a stress-free environment.

I think this is where testing falls down. How does one know when stress is influencing perception? Further, who wants to subject themselves to testing? It is diametrically opposed to what we normally use our systems for - relaxation and experience.

The idea of a large-sample test does sound promising, and a positive result would be hard to refute. But it would be nearly impossible to achieve, and I'd be suspicious of any determination of negativity.

Yeah yeah, making excuses when there isn't even a result yet. . .
Miklorsmith: I agree (and have detailed before) that there can be problems with formal testing methodologies as applied to subjective auditioning. I do think some kinds of testing can introduce a "confusion factor" that may actually serve to artificially raise the floor for perceivability of low-level differences. And I think it's to a large extent possible to ameliorate biasing effects due to external factors without resorting to blind tests, though it can take repetition over time and a certain self-questioning mindset (that I'm learning a lot of audiophiles seem to lack). As for how test conditions might significantly differ from normal use conditions, this can be good or bad -- I don't listen to music for enjoyment by performing rapid A/B comparisons, but doing them can really help nail down (or dismiss) some elusive observations concerning gear.

But, when faced with a product or claim that appears to carry all the hallmarks of snakeoil, and audiophiles buying into it using the most casual and fallible auditioning methods, I don't think it's inappropriate to call for some demonstrable degree of rigor to be brought to bear. I also think that experiences like the one you relate above are valuable for putting things in proper perspective every once in a while.
I think that the real telling statistic would be the actual percentage of purchasers who took advantage of the 30 day money back guarantee....
Zaikesman, I think that Miklorsmith made a very valid point. In fact, I was part of an even more baffling experiment. This was a system with VTL monoblocks, a VTL Reference preamp, and Wilson speakers. Everyone was floored by how amazing the VTL pre-amp sounded, I had to agree, it sounded pretty damn good. Then someone in the group made a comment that listeners would not be able to detect in a blind listening test when the VTL was in or out of the system.

So we set up an experiment substituting the VTL preamp with a $200 NAD preamp while people listened blindfolded. Not a SINGLE person in the group was able to consistently tell which was the VTL or the NAD. One is around $10,000 and the other is $200.

This brings up the inevitable conclusion that listening evaluation is probably extremely flawed in these types of tests no matter how you slice it or dice it.

I think the more proper way to evaluate equipment is to put in a component, listen to it for several days, and then make the switch. For some reason, rapid A/B switching doesn't allow the brain to make the adjustments quickly enough.

Otherwise, how would you explain these results?
This is evidence of how flawed double blind testing is. Despite the attempts people put into giving so much value to blind testing, it is inherently flawed. The advice above to listen for a period of time is a good one. Oftentimes the differences are very clear; other times it has more to do with musicality than sonics. Musicality is tapping your foot, and only time can identify this important aspect of our hobby. Too often we rely on quick comparisons, when this is never how we actually listen to music. Those looking to acquire equipment might put great value in blind testing. Those who just want to enjoy music must take the time to discern the attributes of a system.
Audioari1

This brings up the inevitable conclusion that listening evaluation is probably extremely flawed in these types of tests no matter how you slice it or dice it.

I think the more proper way to evaluate equipment is to put in a component, listen to it for several days, and then make the switch. For some reason, rapid A/B switching doesn't allow the brain to make the adjustments quickly enough.

Otherwise, how would you explain these results?

Boy do I ever agree with that!!

Long term listening is the only way to achieve satisfaction with one's system. Quick switching is not how we enjoy music and certainly not how to decide on the equipment either.

This does not imply my acceptance or denial of the ability of the Clever Little Clock. At this point I cannot tell who is in favor of it and who is having fun in this thread.
Albert,
You want some fun? How about trying the CLC "Clever Little Carp"? You mount it on the wall and every time you walk by, it sings Patricia Barber songs - thereby recalibrating your ears prior to a listening session. I would suggest you introduce it to your listening group prior to your normal Tuesday night sessions.
Listen Slipknot, Patricia Barber songs "Carp" on me enough, even without the "Billy Bass" replica (or is that Trout Mask Replica?) hanging on the wall.
Joeylawn36111, Audioari1, and Albert, I could not agree more. This is why I suggested that such present or not present tests may not validly assess whether the CLC is working or not.

Zaikesman, you are assuming that no one can hear a difference, but I am assuming that some can hear a difference. Certainly, some do say they hear a difference. With a small sample where some hear a difference, as I see it, you will not be able to dismiss the issue of whether your sample is unusual but comes from a population that cannot hear a difference. With a larger sample again where some hear a difference, it will be very improbable that the population cannot hear a difference. Statistical significance would make it more difficult to support your position. I am sorry if these ideas are difficult to convey, but they are the basis of statistical significance and of dealing with Type I errors.

Given what I say above, a small sample with some hearing a difference could be dismissed as sampling error. A large sample would, however, lead to the conclusion that people in general can hear a difference.

Remember also that you think that there is no difference with the CLC present or absent, but you are only testing whether people can hear a difference. Even if they don't hear a difference, there may be one, or if they do hear a difference, there may be none. Unlike a coin where all would agree on heads and tails, this probably would not be the case in what you propose.

Overall, again I would state that your proposed tests do not resolve the issue.
Hey, I'm not taking sides here, just agreeing about long term listening.

Zaikes is suspicious about this thing working and so am I, but I have not tried it. Because of that, I've avoided comments except for fun remarks with the rest of you guys.

The CLC guy has probably sold about 200 clocks over this thread if it has a fraction of the power of the "Home Despot" (Lenco Turntable) thread.

Hell man, they're fighting over them at Ebay (the Lenco, not the clock :^).
Audioari1,

"For some reason, rapid A/B switching doesn't allow the brain to make adjustments quickly enough."

Perhaps there is an analogous phenomenon for aural experience that exists for our visual experience. Namely, if you look at a colorful object for a while and then close your eyes, you see an "after-image" of the complementary color that lingers for a while. While it lingers, the after-image color interacts and mixes with your subsequent visual experience (with eyes open). The brain is not able to instantaneously wipe itself clean between two successive visual experiences. I have two paintings that demonstrate this phenomenon dramatically. Anyone who has looked at them for a few seconds reports the same thing: colors blend, new colors and forms emerge, and there is movement of the nebulous forms. If our brain reacts in a similar dynamic way to aural experience, that may explain in part why rapid A/B switching is not an appropriate methodology for testing audio components. If the "after-image" of A lingers in the brain and mixes with the experience of B, we may get a more homogeneous result that clouds differences.
Again, I don't disagree with the point everybody makes about auditioning and testing. But I still say an experience like the one Audioari1 relates about the $10K preamp vs. the $200 one -- if true and valid (meaning if this actually happened, and if the test was done well) -- is a valuable reminder to any audiophile about not just the limitations of A/B testing (which I think audiophiles sometime tend to overblow, while ignoring the equally significant foibles of long-term auditioning), but also the quite real limitations of what we're actually doing in high end audio.

But I'm getting a little off the track here. There is most definitely a way to test the CLC that doesn't raise the possibility of criticisms like you guys are mentioning (and I already thumbnailed it somewhere here before). All you would need are, say, three outwardly identical clocks: One would be an actual CLC, with its supposed "proprietary technology" and "special" batteries, while the other two would be the same model of clock, unmodified except for having stickers placed on their fronts identical to the CLC's, and with "regular" (but same-brand) batteries. The test administrator would need to have some kind of identifying mark to reference the CLC; I'd suggest maybe tiny pieces of tape placed inside the battery compartments of the two stock clocks only.

Then simply leave all three clocks with an audiophile who maintains he can hear a positive effect from the CLC, to audition however he pleases, at his leisure (with the understanding of course that he wouldn't try to open up the clocks or otherwise try to figure out which is which through non-auditory means, and the proviso that he removes the two clocks not currently being auditioned from the listening environment in accordance with Machina Dynamica's guidelines). When he's finished and indicated his preference, the administrator would remove the three clocks and note which one he chose, then bring them back mixed-up and do the same thing over again (without, of course, letting the subject know the running results while the test is still in progress).

If, after maybe 10 times around with this routine, the subject couldn't correctly identify the CLC significantly more frequently than 1/3 of the time, I don't think that audiophile could argue about its lack of audible effect. And if he could identify it reliably (and hadn't cheated), no one could deny that it's probably really doing something after all. (I think the single best candidate to run this test with would be Mr. Kait, were it not for the fact that he would have an infinite incentive to cheat, and the means to easily do so!)
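To put a number on "significantly more frequently than 1/3 of the time" (a hedged stdlib-Python sketch of my own; the 10-round, three-clock setup is as described in the post): picking the real clock 7 or more times out of 10 would happen by pure chance only about 2% of the time.

```python
from math import comb

def p_at_least(k: int, n: int, p: float) -> float:
    """Probability of k or more successes in n trials with per-trial chance p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# One chance in three per round of picking the real CLC among three clocks:
print(p_at_least(7, 10, 1/3))  # ≈ 0.020, hard to dismiss as luck
print(p_at_least(6, 10, 1/3))  # ≈ 0.077, suggestive but not conclusive
```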

I'm not advocating going through this kind of crap for every choice an audiophile makes (I've stated above why the CLC [and "Intelligent Chip"] deserve a higher level of skeptical scrutiny), I'm just saying that in principle it's hard to criticize or dismiss this test (or at least, in the case of negative results only, as it could apply to that one listener). And yes, I've stipulated before that this whole debate is likely nothing but great for Mr. Kait's business -- while it lasts (meaning the business, and the debate! ;^)
And BTW, about that preamp shootout, I once lassoed my girlfriend into doing the same kind of blind test. She's not an audiophile and couldn't care less, but she was easily able to immediately and consistently discern at least some of the sonic differences between the $3K and $6.5K solid-state preamps I was A/B-ing (volume-matched, of course). To my slight chagrin she preferred the former even though I preferred the latter (not that I told her so until we were done, although it's possible she could've picked it up from my body language since the test wasn't double-blind). But I did agree 100% with her descriptive assessment of what she heard, and could see why she might've preferred it with those particular test recordings. In fact, she blew me away by stating the overall situation much more cogently and concisely than I had been able to form it in my own mind. When I told her why I liked the more expensive one, she blithely dropped something like, oh sure, she could tell that you could hear much more through that one, but that was why she'd found it less pleasant to listen to (I had played CD's). Just amazing, aren't they fellas?
Puremusic: That might seem a reasonable hypothesis, but I doubt it's actually true -- listening to a piece of music once and then again is not analogous to staring at a static image until it's 'burned' on your retina. Otherwise, if it were, we not only couldn't hear changing sounds such as music very well, we'd have trouble seeing changing images within a constant environment, which as far as I know is exactly the opposite of how we actually respond. Maybe a better analogy would be listening to a static sinewave tone for minutes, although I don't know this. Anyway, I believe the ways the ear and the eye operate as sensors, and how the brain processes each, are too substantially different for such analogies to hold much water. The other problem with that conjecture, for me, is that I personally find A/B testing to more often highlight than to obscure subtle differences. Of course, I'm doing this by myself in my own system, which means it's not blind, so you could always object that I'm fooling myself.
To All - Let's go back in time to 1991 when Stereophile published Thomas J. Norton's review of the Tice Clock, a product that bears more than a slight resemblance to the CLC (or the CLS for that matter) - i.e., a digital clock. As one might expect, the Tice Clock was heavily drubbed by audiophiles and some reviewers of the time (and currently).

Unlike the CLC, the Tice Clock plugs into the wall outlet and supposedly operates by influencing how the electrons flow in conductors. Now, whether or not that theory is true I can't say, and have no experience w/ the Tice Clock myself. More importantly, I'm not suggesting the Tice Clock operates at all like the CLC. However, the hoopla that surrounded the Tice Clock way back when certainly bears a strong similarity to that of the CLC in many ways -- if I do say so myself :-). Here is the link to Norton's review of the Tice Clock:

http://stereophile.com/miscellaneous/784/index4.html

GK, Machina Dynamica
Post removed 
Zaikesman, Most analogies break down when analyzed at fine enough detail. Your interpretation of my analogy took it further than what I intended. I suggested the limits of my analogy by the last sentence of my post above. If the brain does not instantaneously wipe itself clean after each experience, then what remains is what I refer to as the "after-image". As in the case of the visual experience, the after-image is subtle enough not to incapacitate one's functioning within a constantly changing environment (as your comments suggest), but may be enough to cloud subtle subjective experiences. Most of us have had the experience of a song or a tune being 'burnt into' our brain so that we "can't get it out of our mind". Even such a stronger "after-image" doesn't disable our ability to function. Perhaps some neuro-biologist reading this thread could shed some light.
Zaikesman,

I think that your proposed test of 3 clocks is just as flawed. This is because all 3 clocks will look visually identical.

Suppose something else for a moment. Let's say I take a Boulder 2008 ($40K) phono pre-amp chassis, take out the electronics, and replace them with an NAD $150 phono stage. Then I will put it into your system without you knowing I changed the innards. I would bet the dickens that you will sit there in amazement telling me without end how wonderful the sound is. Inevitably, the visual impression has a very strong effect on perceived hearing.

This is also similar to another effect that has been demonstrated time and again. If I take a $5 bottle of wine and pour its contents into a bottle that belongs to a $300 wine and serve it to you, you will probably think it is the best and most complex wine you have ever tasted.

I think it is clear that the psychological anticipation of the event is more than half the battle.
Zaikesman, the test with your girlfriend does not have the visual element taken out as I had explained above, since obviously she was not blindfolded. And the result of her conclusion only further proves my point: she preferred the cheaper pre-amp.

It is also entirely possible, that if you had the $6500 preamp and a $200 preamp side by side, and they both looked visually impressive without her knowing the cost of either, that she would have preferred the $200 piece!
PM: Thanks for the clarification, but I understood, I just didn't agree. It's your proposed effect, as well as your analogy about mechanism, that seems out of whack with reality to me. Assuming you're seriously positing this hypothesis as a reason why A/B testing allegedly doesn't work or is misleading, then both cause and effect are fair game for critical examination. If you think I'm taking it too far, it's just that I see the possible extrapolation here -- that your proposed effect implies instantaneous comparison is less reliable than audio memory, and I don't buy that.

For the record, here's how I see this A/B vs. long-term question in its totality: A/B's I think are great for identifying differences, and degrees of change. If performed against a well-known reference they can be a good indicator of relative strengths and flaws (or in the case of bypass testing, absolute strengths and flaws). But that's not always the same as determining which presentation you prefer, and it's never the same as determining whether that preference will ultimately meet your listening needs and expectations. Long-term auditioning I think is necessary (and anyway unavoidable, let's not forget) for determining preferences and ultimate satisfaction. I agree that quick A/B's often don't reveal nearly as much as there is to hear. The solution in my experience is not to throw away A/B's altogether, it's to keep doing them until the finer differences emerge, which they do if you have determination and patience. Once heard, as I said, this method most clearly elucidates differences and degrees of change, and more reliably so than depending on long-term auditioning and audio memory. In practice I prefer to use both methods for their own virtues and not just rely on the latter.
Man, what a relief, I thought you were going to tell us about a double blind test with your girlfriend, against another woman, and that you were not able to tell the difference. I was going to compliment you on finding such an understanding open partner...
Sorry, that product is called the CLG (Clever Little Girlfriend) [Patent Pending]. . . and it will be marketed by me! It will consist of a little, very highly modified, bejeweled pendant clock that you will let your girlfriend wear, and. . . you know the rest. . .
Zaikesman,

"I agree that quick A/B's often don't reveal nearly as much as there is to hear."

None of the threads on Audiogon that I read, including this one, or the Stereophile article on DB testing contain an explanation for your statement that is supported by scientific data. Until some neurobiologist/psychologist becomes interested enough to devise an illuminating experiment, we can only conjecture.

"The solution in my experience is not to throw away A/B's altogether, it's to keep doing them until the finer differences emerge, which they do if you have the determination and patience."

I agree with your solution! Perhaps (Oh,Oh. Another conjecture is coming up.), repeated A/Bing results in the formation of new neuronal connections or the strengthening of weak ones that facilitate the discrimination of finer differences that were perhaps (!) previously masked by "after-images":-)
These two articles are always fun.

The first link is John Atkinson's article in Stereophile (circa 1991) in which he ruminates about the Tice Clock:

http://stereophile.com/miscellaneous/784/index5.html

The second link is a letter to the editor (also Stereophile) in which George Tice defends his clock:

http://stereophile.com/miscellaneous/784/index7.html

~ GK
Instruction set at a molecular level, eh? Yes, very entertaining indeed. Unfortunately it's completely unspecific, childish rubbish.
PM: Not sure if you're playing devil's advocate when you take exception to my statement about the limitations of "quick" A/B's, but maybe I need to clarify: By "quick" I didn't mean the rapidity of the switching itself -- I'll often focus on rather short musical passages when I compare this way. What I was referring to is the total time invested, and by implication number of musical examples, trials, and sessions employed. I think there's a bit of a perception out there that when we talk about A/B-ing, we're talking about a short-cut, whereas when we talk about long-term auditioning, we're talking about taking our time. My point was that time needs to be invested no matter what if you want to get a worthwhile result. Quality A/B-ing is really harder work than long-term auditioning, but it yields solid results in particular respects. And I don't know about scientific studies, but I do know that oftentimes when I start out A/B-ing something where at first any differences might seem vanishingly subtle at best, persevering almost always shows otherwise. The ear/brain needs time to fully suss things out, and repetition (including on different days) for confirmation. It's nothing but gaining familiarity and weeding out false impressions, and you can't do it in 20 minutes at the hi-fi shop.
Guidocorona - I take it you must mean as opposed to specific mature rubbish?
Zaikesman, Given your clarification, I believe we are now on the same page, or at least close to it. My posts above refer to quick A/Bs that involve rapid switching as is the case in some double blind tests. Repeated short-term A/B tests in which you have at least a few minutes with each option are extremely important to delineate the differences. I have performed hundreds of such tests while refining my audio system. During these short-term tests, my powers of discrimination and analysis usually play a primary role and my affective nature a secondary one (unless the difference that emerges involves feeling). However, after this analysis is somewhat definitive, I do long-term testing (leave a component in a few days and then remove) in which the roles are usually switched: enjoyment level and feeling become primary while analysis takes a back seat.
Post removed 
Post removed 

I'd like to know roughly how many of these CLC's have been sold, and why aren't more present users jumping into this thread to discuss their thoughts? The majority of actual regular users of the CLC on this thread is what, one?
"Mob rule"? More like democracy with freedom of expression -- and criticism. Leaving aside the posts intended to be humorous, what would you change in order to make the discussion "meaningful"?
Audioari1 -- First, the good news: Let me pay you a backhanded compliment and commend you for no longer *seeming* to post in the manner of a troller as of late, so that I can respond. The bad news: We seem to be talking right past each other, so either you might need to re-read my posts more carefully before responding, or I might need to do a better job of communicating.

About your critique of my proposed three-clock test, I think you miss the point. You must think that it involves comparing any one of the three clocks to no clock at all. That option exists, but isn't relevant to the design of the test. This test is intended only for *advocates* of the Clock, to be a method which allows for personalized, long-term auditioning, thus avoiding the criticisms some have leveled against conventional blind A/B testing. (This test is still blind, but can be self-administered during the auditioning phases.) The observed effects of three clocks would be reported as compared against *each other*, not against the absence of a clock. That they look outwardly identical is fundamental to the test. It is *only* a test to see if anything Machina Dynamica claims for their CLC can be audibly differentiated from a regular clock. (If you want to posit, as Tgun5 does in the other thread, that a regular clock has just as much supposed benefit as a CLC, then this test cannot help in refuting that.)

In case I'm still being hard to follow, let me put it another way, expanding on what you've suggested: Instead of giving me one Boulder with the guts of an NAD inside and asking me what I think of it (thus in all likelihood demonstrating the existence of the placebo effect, but nothing about the supposed superiority of Boulders), my test is proposing that you give me three Boulders, two with NAD guts on the inside (plus a little ballast! ;^), and leaving me alone for as long as I want to determine whether I can hear any differences between the three. This is not a test intended to demonstrate the placebo effect -- the existence of which you are of course perfectly correct about, but which is where you got confused here -- it's a test intended to discern whether real audible differences actually exist. By making appearances identical, the placebo effect cancels out, leaving only real effects. We've eliminated the potentially onerous need to make the test "blind" on the outside by making it blind only where it matters, on the inside. So again I say, if you're one of those who think Machina Dynamica is actually selling something of unique value here, something justifying not only its cost but its allegedly proprietary nature, then the test I described in the above post should be able to confirm or refute that notion. (BTW, the choosing of three clocks for the test, as opposed to two or four or whatever number greater than one you care to name, was based solely on test manageability vs. expedience and is irrelevant; what matters is that only one of them be a "real" Clock.)

About the "test" story with my girlfriend and the preamps: A) It wasn't a formal test, just an anecdote that I thought appropriate in light of your preamp shootout story; B) She actually did not know which of the two preamps I was running at any given time, and furthermore did not know which was more expensive, who made what, or even who the makers were. She doesn't even carry any preconceived notions regarding makers anyway, since she's not an audiophile. In fact, I'm not sure she even knew that it was preamps I was testing, or if she could even tell you exactly what a preamp is. I just asked her to come listen to two things while I switched back and forth and for her impressions of what she heard, that's all. But as I said, she could see and hear me, and I can't rule out that I communicated something about my feelings without intending to, so it wasn't a controlled test in that regard. That she preferred the opposite one from me could be taken as an indicator that in fact I didn't influence her, but in any case her personal preferences were of secondary importance to me -- I was more interested in knowing her subjective characterizations, which, while I thought they turned out to closely match my own, she didn't know any of in advance (of course she didn't communicate them in audiophile-speak however!).
Geofkait, yes you could call it 'mature' rubbish in a way. Or perhaps even more precisely 'ripe' rubbish. It all depends on whether the term rubbish pertains more to the mind of the author or to the intellectual scent emanating from the end product: childish in the former sense, yet quite well ripened in the latter.
Zaikes, Guido, and others, I hope that that all this time spent on the computer writing about the clock is not keeping you from listening to music!!
Sherod, from its conception this thread has not been to discuss the CLC but rather to dismiss it. There is considerably more heat than light generated here. As a reading of the thread will also show, the principals involved would dearly wish it would die.

Just looking at those that have sold on auction, there must be many out there with personal experience with it, but few would tolerate the sarcasm that Wellfed has endured here.
Russ: No!! :-)

Tbg: As you wish. I'll stop discussing, er, I mean dismissing this with you, my light-generating friend! Wow, my head hurts less already...
OMG. For various reasons, not the least of which is a bad back, I have been absent from this site and audiophilia in general for several months. I came to this thread thinking this was all a joke and I needed a laugh. Well, it IS a joke, but in a different way and with a different laugh than I expected.

I nominate this one for the annual PT Barnum Award.