Anyone who cannot hear the sonic differences between 320 kbps and lossless CD sound has a crap audio system, or only listens to earbuds and Bose Wave radios!
Perhaps these listeners are typical broke Millennials who have no taste in music? |
Agreed, unless your preferences and the reviewer's are the same. Then you'd have a baseline to go by. Even then, what I got from the contrarian articles is that a person's preference is baked in, whether they admit it or not.
On another note, I loved the observation from Alan Watts which went something like: you can't judge a river by taking a bucket of water out and staring at it. Everything must be taken as a whole, or as best one can.
All the best, Nonoise |
This is a quote from one of the articles cited above:
"Clearly any analysis that chooses to discount individual results in favor of the group result is to ignore the most basic and most important ingredient of listening to music on the hi-fi—our preference."
I agree and disagree. When it comes to knowing what you like, sure, I agree. But magazines and websites are full of reviews of specific components and such, some of which come with astronomical prices and, quite frankly, incredible claims. If the only thing such a review can give me is one person's preference, then it has extremely limited value, even if I trust the reviewer.
|
The types of tests I'm referring to would quantify subjective responses. And the test would have to consist of more than simply saying "this is Tidal on my system" or "this is a ripped CD on my system." I've only been at this a few months and can do that sort of thing myself.
The type of test I would find useful would compare several variables: several sets of cables, or several streaming services, or a few DACs, etc. Common criteria would be set out on paper, with each presentation of a variable rated by the subjects on a scale of 1-10 or 1-5. One piece of music would be played to the test subjects multiple times for each variable, in randomized order; in other words, you could hear the same variable several times in a row. The test would have to be double-blinded, meaning neither the subjects nor the person administering it would know which variable was playing at the time. That would be the only significantly tricky part.
For grins, I would have each variable reviewed by each of the test subjects in open fashion (they would know what they were reviewing and would not be comparing it to anything else at the time they listened) before the double-blinded test.
All of this could be completed in an afternoon with one system in one room with maybe one or two "blinded" assistants and a computer to randomize the order of variable presentation.
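To make the randomization concrete, here is a rough Python sketch of what that computer would do - the condition names and trial counts are just placeholders, not a finished protocol:

```python
# A minimal sketch of the randomizing step (all names and counts are
# illustrative). The code-to-condition key stays sealed until the
# ratings are collected, so neither the subjects nor the person
# running the trials knows which variable is playing.
import random

conditions = ["cable_A", "cable_B", "cable_C"]  # hypothetical variables
trials_per_condition = 5

# Give each condition a neutral code; the administrator sees only codes.
codes = {f"X{i + 1}": c for i, c in enumerate(conditions)}

# Each condition appears the same number of times, shuffled together,
# so the same variable can legitimately come up several times in a row.
schedule = [code for code in codes for _ in range(trials_per_condition)]
random.shuffle(schedule)

print("Presentation order (hand this to the administrator):", schedule)
print("Key (seal until the listening is done):", codes)
```

Whoever runs the session sees only the codes; the key stays sealed until all the ratings are collected.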
Assemble the data and report it. Formal statistical analysis would probably not even be needed if the number of subjects were small enough, say 8 or 10.
Such a study would have some inherent weaknesses and its scope would be very narrow. There is a chance that the data would not give a clear statement on the variables... but as stated above, THAT in itself would be valuable to those looking for something other than someone's opinion about potentially expensive gear and/or services.
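Just to illustrate with invented numbers, the assembled report could be as simple as this sketch - each coded variable gets a mean and spread across the subjects' ratings:

```python
# Toy tabulation of results (the ratings are invented). Each subject
# rated each coded condition on a 1-10 scale.
from statistics import mean, stdev

ratings = {
    "X1": [7, 6, 8, 7, 5, 7, 6, 8],  # eight hypothetical subjects
    "X2": [6, 7, 7, 6, 6, 8, 7, 6],
    "X3": [7, 7, 6, 8, 6, 7, 7, 7],
}

for code, scores in ratings.items():
    print(f"{code}: mean {mean(scores):.1f}, spread {stdev(scores):.1f}")
```

Means that overlap within a point, like these made-up ones, would be exactly the "no clear statement" outcome - and, as I said, that result would be worth reporting too.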
|
I have done a blind test of Tidal vs. the actual CD. I picked and preferred Tidal all three times. The CDs were ripped into my Sound Science music server using dBpoweramp. I used the same server to stream Tidal. |
@cleeds, such a study for audiophiles does not have to be comparable to drug studies and such. I think a broad range of listeners is an okay idea, but I don't think it is necessary. Any group of audiophile journalists would be where you would start. One type of music. Test no more than three variables. Start slow and simple. Maybe 10 listeners/subjects. The point being, there are no decent studies at all. _Anything_ would be an improvement.
@mattlathrop, agree about photography. I think immediately of lens MTF charts. They don't tell the whole story about a lens... but they tell an awful lot. |
@n80 It is neat to find another audiophile who also likes photography! I completely agree with your point. Imagine choosing between camera A and camera B based on a set of test images from each, where you had to stare at black for 2 seconds every time you flipped between them. |
@cleeds
First off, I have conducted and participated in psychological studies relating to human perception, so I do get what is truly needed. But a simple blind test like the one I have described would be an excellent start, given that no one seems to have done even that. Sure, the results won't get published in any journal, but it would be a good jumping-off point for a more formal study.
"... the results are often vague, or inconclusive."
This statement is exactly why I want people to do this. My hypothesis is the same as yours: I think people will find that the results aren't clear-cut. BUT if you were an audiophile starting out and read these forums, you would think that if you don't listen to TIDAL you have just wasted your money on an expensive stereo (this *literally* happened to me when a dealer who will remain nameless told me he didn't want to work with me if I only listened to Apple Music...). Just to share my personal experience: I feel I can tell the difference between TIDAL and Apple Music when I switch back and forth, but when I took the NPR test with some Audeze headphones I did no better than random guessing. |
@n80 wrote:
"Double-blinded randomized tests of this sort need not be overly complicated. Yes, they will be tedious. Yes, the design would need to be just right. Yes, the stats can be a little mind numbing. But it would not be expensive or even time consuming ..."
If you're testing a cross-section of listeners - which is necessary to have sufficient sample size - it is very time consuming. Each listener must be accorded multiple trials of whatever duration they need. So if only because of the time factor, it is indeed an expensive undertaking.
"It is a bit strange to me, as a new audiophile, that this type of testing is conspicuously absent even as I see many people asking for it."
Everyone who asks for such a test is free to conduct such a test. Why don't they? Perhaps because it's time-consuming. Tedious. Challenging to do properly. And - when you're done - the results are often vague, or inconclusive.
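To put rough numbers on "multiple trials per listener" - this is just binomial arithmetic, nothing audio-specific, and the trial counts are only examples:

```python
# How many correct picks out of n blind A/B trials does a listener
# need before plain guessing (p = 0.5) becomes implausible?
from math import comb

def p_at_least(k, n):
    """Probability of k or more correct answers in n coin-flip trials."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2**n

for n in (10, 16, 20):
    # Find the smallest passing score at the usual 5% threshold.
    for k in range(n, 0, -1):
        if p_at_least(k, n) > 0.05:
            print(f"{n} trials: need {k + 1}+ correct (p = {p_at_least(k + 1, n):.3f})")
            break
```

Twelve or more correct out of sixteen, for each listener, for each comparison. Multiply that by the minutes per trial and by the number of listeners, and you can see where the time - and the expense - goes. |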
Double-blinded randomized tests of this sort need not be overly complicated. Yes, they will be tedious. Yes, the design would need to be just right. Yes, the stats can be a little mind numbing. But it would not be expensive or even time consuming.
I think there are other reasons these tests are not done: audio gear vendors and retailers do not want them done. It is so much easier for them to have a high-tech, high-cost niche in which virtually every important characteristic is seen as, and accepted to be, subjective.
It is a bit strange to me, as a new audiophile, that this type of testing is conspicuously absent even as I see many people asking for it.
In my other hobbies (photography, for example) there is much more rigorous testing and reporting, even in magazines/sites that stand to lose advertising, than in the audiophile industry.
|
Conducting a scientifically valid double-blind listening test involves much more than just having a buddy "switch between lossless audio and compressed without your knowledge." And if the test doesn't follow scientific protocols, its results are of no value at all.
Have you ever participated in a real double-blind listening test? It's r-e-a-l-l-y tedious, even if you're just a test subject. Organizing, conducting and tabulating the results of such a test are even more work. That's part of why you don't see more blind testing by hobbyists, outside of the scientific and commercial domains.
|