While this discussion is interesting, I think it is getting a bit off track. If I may presume to reinterpret Bryon's question, I don't think he cares whether the word is "neutrality" or "transparency" or "coloration" or how, exactly, one defines the terms. Rather, if you replace a component in your system there are three possible outcomes: 1) system-induced coloration is increased, 2) system-induced coloration is decreased, or 3) system-induced coloration doesn't change. I think Bryon's question is: How do you tell which outcome you achieved? (Bryon, feel free to correct me if I'm wrong.)
I think in the real world, Bryon's definition is workable. (There is, obviously, the theoretical possibility of a system that makes instruments and music all sound very different, but is also incredibly, incredibly wrong. But I think we can ignore that possibility in the high-end audio world where we're already within an epsilon of the truth.)
Experience is another good answer. If you know what something really sounds like, you should be able to judge the differences with a fair degree of competence. But that requires a fairly specific kind of experience and a very special recording about which you know a great deal. That's not really practical for most of us. And there's always the possibility that the system works well in one area, and not another.
I don't have a better answer than the OP's, but I'd like to know: if I change a component, I may like the result, but is there a way to know if I'm hearing the music better, or just my system? |
Kijanki wrote: "no I would not adjust sound for individual songs but rather pick affordable system that sounds best to me on average with the type of music I listen to."
Affordable? What does affordable have to do with anything? I thought this was an audiophile discussion. |
Kijanki writes: "Would "neutral" system sound the same to young Hindu and old Latino?"
No, it would sound different to every person that listened to it, but that's not the point. The point is whether or not it recreates their version of reality, and only a neutral system could do that for everyone. If each person listens to a guitar, each person hears something different. But if the goal is to make a recording sound, as much as possible, like the source, neutrality seems important in achieving it.
"Are we trying to find best tasting or most neutral wine?"
In our case, "wine" is the music, and our systems are the glass you drink it from. Do you want a glass that flavors all of your wine? |
Kijanki wrote: "Well - I'm my own playback engineer and I choose the sound I like."
I think most of us do this to some extent, since we put a fair amount of effort into modifying our system and keep the changes that we think make things sound better.
But what makes things "sound better?" Sometimes it's a change in resolution. Sometimes it's a shift in tonal balance. Sometimes it's improved dynamics. And so on. In each case, we use our experience and our taste to make a judgment. The point of this thread is to talk about another aspect of one's system that may make things sound better.
We each weight these things as we see fit. Some people might not care about anything but resolution or pinpoint imaging, to the exclusion of everything else. Some people, like Newbee, don't think neutrality exists. So these people weight neutrality zero when considering system changes. You seem to prefer some degree of coloration, so there are at least some aspects of neutrality you don't care about. But to me, if I make a change and different instruments sound more different (while, obviously, remaining true to what they are), as Bryon suggested in his original post, then I've affected something that may make the music more enjoyable.
As for being one's own engineer, it's intriguing to think that we could individually EQ every song in our collection. It would even be worth the effort on some tracks. But, honestly, I think compression is our biggest enemy in the source, and I don't see a way to restore that without the cooperation of the record companies. |
I like this definition, but what about imaging? Couldn't a system have a high degree of both neutrality and resolution, but have fuzzy image focus? That would tend to disrupt the impression of a live event or a well-integrated studio recording, and make the system fail to disappear as required by transparency. Or does resolution (in stereo) necessarily require imaging? |
The more technical improvements poured into each, down unrelated analog & digital paths, the closer they converge on the same sound. And this convergence may be as good a demonstration of neutrality as any other.
This idea is fascinating. You mentioned it in your first post in this thread and, although no one ran with it, it stuck with me. I wonder how other posters feel about it...
I just want to bump Bryon's question, because I, too, wondered about this concept. It seems to me if you listen to two different sources (digital and vinyl) through the same system, you've actually eliminated the variable of system neutrality from the equation, and what you are experiencing is source convergence. That may, I suppose, be referred to as a kind of neutrality (and perhaps a worthy goal) but even if you were to achieve it perfectly, what would that say about the overall system's neutrality? |
With water, although all sources are indeed contaminated, we can identify the definite impurities, and there is no debate on the subject, because science can tell us what truly pure water would be like. Actually, I think the water analogy is pretty apt here. Science rarely, if ever, delivers absolute truth -- send a bunch of water samples around to different labs, and you get different answers, and those answers will all come with error bars. And, as you say, we can't filter out everything in water any more than we can build perfect audio components. But just because science can't deliver absolute truth (any more than an engineer can deliver a perfectly neutral audio component) should we throw away the whole concept of science? |
If one were to wear yellow glasses while skiing during an overcast day, visual improvement in the snow's light and dark shadow detail would be apparent. Those same glasses on a bright day would not be beneficial... The improvements in your system may have actually increased the level of contrast above and beyond the original instruments of the musician.
Here, Hamburg is challenging the idea that items (1) and (2) are indices of neutrality. I thought this to be one of the more effective and relevant challenges to my original post, but no one seemed to run with it.
I think he has a point, and that point raises an existential question. Let's look at the continuum of neutrality as defined in this thread.

At one end of the spectrum, you have a system that plays back, say, a 1kHz tone, no matter what the source. This is the anti-neutral system: everything sounds exactly the same. At the other end of the spectrum, consider a hypothetical system that processes the source and, through the use of pattern recognition, seeded pseudo-random number generators, and a large variety of sampled sounds, effectively replaces the source with something else. One violin might sound like a subway train, another, slightly different violin might sound like a jackhammer, and a cello sounds like a babbling brook shifted one octave up and slowed down by 20%. So we satisfy criterion #1: different instruments sound more different. (They just sound nothing like what they really are.) Similarly, using the same system, we look at the first n bits of any recording (where n is large enough to ensure uniqueness over the body of recorded music) and use those bits to seed our random numbers, ensuring that each recording sounds completely different from all of the others. So now we've satisfied criterion #2 of neutrality: any music collection sounds more diverse.

This absurd system would be, by our operationalization of the term, more neutral than anything any of us currently has. But I don't think it would lead to improved musical enjoyment. So clearly, within the idea of neutrality we are making assumptions about truthfulness to the source and consistency of playback.

Which brings me back to Hamburg's point. If we consider a system that smooths out recording artifacts, we also risk smoothing out sounds that are real features of the music, making the system less truthful (i.e., it suppresses real contrast). In Hamburg's example, the divergence from the truth is the unwarranted exaggeration of contrast.
While neutrality, as operationalized here, resists the suppression of contrast, it doesn't appear to resist its exaggeration. So somewhere in the operation of "neutrality" there is a necessary condition that the playback system maintain truthfulness to some reference point or points. How, exactly, one codes that constraint, I don't know. But if it were coded, would it, in and of itself, be a sufficient condition for neutrality? |
Learsfool writes: My position is that there is no such thing as an absence of "coloration" in music and/or music playback and/or an audio component; therefore Bryon's "neutrality" couldn't ever actually exist, even as he defines these terms. You've asserted this point about a half dozen times in this thread, and each and every time someone (usually Bryon) points out that neutrality, as used here (and in the audio world in general), is a relative term. A component may be either more or less neutral (which is exactly synonymous with saying that it may apply either less or more coloration to the source). It would seem an entirely uncontroversial assertion. Neither Bryon, nor anyone else on this thread has suggested that absolute neutrality (i.e., zero coloration) either exists or is achievable in a playback system. All that has been suggested is that components may have either more or less coloration, and that it may be possible to distinguish one of those conditions from the other. |
I think Bryon and Almarg addressed most of Learsfool's comments, but I'd like to add something on this point: To be clear, I am not suggesting that room correction is worthless, indeed these systems can make a big difference; I just wonder - how do you know when it has been corrected? Again, only you can answer that for yourself, and your answer may be very different from any other given audiophile's. Typically with room correction there is a fair amount of objectivity in the process. You play frequency sweeps through the system and then look at the response curve, with the goal of setting filters to reduce peaks caused by room modes. Some systems do this entirely automatically, though I believe that the manual approach is still better. But I don't think this is the same as setting the system so it sounds good to the individual. It is set to neutralize room modes, and as a byproduct the system sounds better. This is really not all that different from voicing a system by moving speakers around and looking at the results on a real time analyzer. A properly treated room with well-placed speakers is an attempt to minimize the coloration caused by the room. But a lot of folks have limited options for treatments and speaker placement, and for them, room EQ is a viable alternative for achieving less system coloration. Almarg wrote: If a system is truly accurate yet still results in a lifeless/soulless sound, then it seems to me that there is a problem with the recording(s) being listened to. In that case, it seems to me to be perfectly legitimate to introduce some modest degree of inaccuracy into the system, such as non-flat frequency response, to compensate. The price that will be paid is that other recordings which are more accurate and transparent will then no longer be reproduced to their full potential. I mentioned in an earlier post that it is now possible to provide different EQ for every song in your library. 
(The capability is a bit crude now, and would be difficult to implement for analog sources, but there are no technological hurdles to this capability.) This is the have-your-cake-and-eat-it-too scenario. You could fix up the recordings that need it, and leave the others alone. One could imagine adding other tools besides just EQ: volume graphing, dynamic range enhancement, etc. On that same point, this capability appears to be one (less Rube Goldbergesque) way of achieving greater contrast within and among recordings, even to the point of exceeding the contrast in the source. Which would, by the definition given in the OP, increase neutrality. (While also being less accurate, and possibly more or less transparent.) So, again, do we need to rein in neutrality with some counterinfluence beyond a simple monotonic relationship with contrast?

There are a couple of approaches that one would ordinarily use:

1) Instead of a simple linear function, you would add a saturation term. Let's use "N" for neutrality and "C" for contrast. Lower case letters will be constants. We have something like N = a + b*C. But we could add a term to cause neutrality to saturate and even reverse: N = a + b*C - d*C^2 (where "C^2" is C squared). Here, d would be small, so that for small C the linear term dominates, but when we get to larger C, the C^2 term dominates. Thus, for increasing contrast, you get increasing neutrality to a point, then the function rolls over and neutrality starts to decrease.

2) You can leave the function alone, but introduce another function whose behavior is in the opposite direction. Say the parameter in question is X; then you have X = e + f*C, where f is negative. Note that it doesn't have to be C, contrast, but could be some other parameter tied to C. You then adjust the coefficients so that the intersection of the two lines is ideally neutral and ideally X.
On one side of that point you want to increase contrast, on the other, you want to decrease contrast (or the related parameter). The problem with both of these approaches is that you need a reference point of some sort. In #1 you need to know how much contrast is too much. In #2, you need to define ideal neutrality (and ideal X) so you can set your intersection. I confess I don't know how to do that, though I think the answer might be found in knowing what things actually sound like. But that gets back to my earlier question: If one could define that point, would it alone be a sufficient condition for neutrality? And if one can't define the point, how do we know how much contrast is too much? |
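Approach #1 above can be made concrete with a few lines of Python. This is a minimal numeric sketch only; the constants a, b, and d are made-up illustration values, not derived from any real system:

```python
# Toy model of approach #1: N = a + b*C - d*C^2.
# The constants are illustrative assumptions, not measurements.

def neutrality(C, a=0.0, b=1.0, d=0.05):
    """Saturating neutrality-vs-contrast curve: linear rise, quadratic rollover."""
    return a + b * C - d * C ** 2

# For small C the linear term dominates; for large C the quadratic term
# takes over, so the curve peaks at C = b / (2*d) and then declines.
peak = 1.0 / (2 * 0.05)  # C = 10 with these constants

for C in [2, 6, 10, 14, 18]:
    print(f"C={C:2d}  N={neutrality(C):.2f}")
```

The sketch also makes the reference-point problem visible: where the curve peaks depends entirely on d, which is exactly the "how much contrast is too much" number we don't know how to choose.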
Bryon, I have some more thoughts on the "excess contrast" issue, but in thinking about it, I realized there were some holes in my understanding of the operationalization (is that a word?) itself:
1) In the original post, you mention instrument timbres specifically sounding more distinct from one another, and then go on to say whole songs and albums sounded more unique and your collection, more diverse. Is that all a consequence of the change in timbres, or were there other characteristics that contributed to the uniqueness/diversity? (If it is reducible to timbre, then wouldn't the operationalization of neutrality be, "Instrument timbres sound more distinct?" And then wouldn't criteria #1 and #2 be consequences of increased neutrality rather than standards by which we identify it?)
2) Is criterion #2 a consequence of, in whole or in part, criterion #1? If so, and in whole, then a similar reduction might be possible. If not, or only in part, what are the additional characteristics that contribute to #2? |
Bryon attributes the improvements to 'neutrality' and I think that is where we start to go down different paths. I think it would be more accurate to characterize Bryon as attributing certain kinds of improvements to neutrality, and that those improvements should be weighed with other system characteristics in tuning playback to maximize listener enjoyment. Unfortunately, there are often underlying issues inherent in this type of thread which are often decried loudly and crudely. I do not know that this is the case here, but frankly I concluded long ago that this thread was an artful construct to further an unattractive goal. What does "artful construct to further an unattractive goal" mean? Is there some nefarious plot going on here of which I was previously unaware? I thought we were discussing ways of identifying the relative impact of a particular system characteristic. |
The changes in uniqueness/diversity that I noticed were not limited to timbre. They included nearly every aspect of the recordings. Some of those changes are, no doubt, attributable to improved RESOLUTION, but I believe that others are the result of improved NEUTRALITY. Which still leaves me with the question about timbre. It’s something I’ve noticed before with my system, and it’s specifically mentioned as the first thing you noticed with your new, more neutral system: This theory occurred to me one day when I changed amps and noticed that the timbres of instruments were suddenly more distinct from one another. With the old amp, all instruments seemed to have a common harmonic element (the signature of the amp?!). With the new amp, individual instrument timbres sounded more unique and the range of instrument timbres sounded more diverse. I went on to notice that whole songs (and even whole albums) sounded more unique, and that my music collection, taken as a whole, sounded more diverse. So why isn’t there a rule:

0a) Instrument timbres sound more distinct from one another (or “unique”).
0b) The range of instrument timbres sounds more diverse.

In other words, is relative timbre distinctness a sufficient criterion for judging relative neutrality? If not, why not? Your argument, as presented above, almost makes it seem as if song/album uniqueness and collection diversity were the consequences of timbre distinctness, though I think that was not your intent. But it does lead me to wonder whether relative timbre distinctness might also be a necessary criterion. But that brings me to: …wouldn't criteria #1 and #2 be consequences of increased neutrality rather than standards by which we identify it? This is a false contrast.
True, when taken out of context. But what I wrote was: If [detecting the degree of neutrality] is reducible to timbre, then wouldn't the operationalization of neutrality be, "Instrument timbres sound more distinct?" And then wouldn't criteria #1 and #2 be consequences of increased neutrality rather than standards by which we identify it? In which case it is the distinction between the primary observable and the byproducts of that observable. You could, for example, study the solar spectrum by observing its reflection from the moon but, when direct observation is available, simpler, and more accurate, why would you? Now, you have suggested that there are other aspects, besides timbre, that contribute to uniqueness/diversity, so there may be reasons to consider the other criteria. And you may argue, as above, that timbre distinctness is not sufficient for detecting any degree of neutrality, in which case #1 and #2 would be back to being the primary observables. But I hope they don’t turn out to be, because they are harder to apply and therefore inferior to rule #0. Generating absolute uniqueness in a complex system is easier than in a simple system, because only one of a larger number of characteristics needs to change. But judging relative uniqueness becomes harder with a complex system, because you must consider, and weigh, all of the characteristics, many of which may be different. Judging (relative) timbre uniqueness is easier than judging (relative) song uniqueness, which is easier than judging (relative) collection diversity. Is criterion #2 a consequence of, in whole or in part, criterion #1? No, criterion #2 is not a “consequence” of criterion #1, because the relation between criterion #1 and criterion #2 is not CAUSAL.
I don’t understand how increasing the uniqueness of the songs in a collection would not automatically increase the diversity of that collection. That is not to say that there are not other ways of increasing collection diversity, but increased song uniqueness certainly seems like one. Can you elaborate? |
Learsfool writes: It is apparent that there is already disagreement even between the three of you on exactly what is a "coloration" and what is not. Though these differences may be minimized some by further discussion, I don't think they can be eliminated. So going back to your definition of "neutrality" as the absence of coloration, if there can be no consensus on "coloration," there cannot be on "neutrality," either. What one person may see as a coloration, another will not, as I have said all along. I feel that despite your valiant attempt to expand into different categories of colorations, the early disagreement illustrates this. I don't know how Bryon will respond to this, but for my thinking, I don't see that complete agreement is required in order to achieve a better understanding of the processes we are discussing. We are dealing with terminology that, in general usage, is not exact. If we insist on rigid conformance to our personal usage, you are right, agreement will never be reached. And even then, the discussion may be useful. "Coloration," for instance, is a term for which my usage is considerably narrower than is Bryon's. My understanding of the term is closer to the visual analogy than is his: a coloration is a band-limited (or narrow-band) distortion. I found this definition online, which is even narrower than my understanding of the term: Coloration: Change in frequency response caused by resonance peaks. Things like speaker cabinet resonance and room modes fit my definition (as per the above), but so do descriptions of systems that are "bright," "dark," "warm," "bass-heavy," etc., because these characteristics tend to be the result of excess or insufficiency (relative to the source) within a particular frequency range.
I would put things like intermodulation distortion and crosstalk under a broader category of "distortion" or something like that (because, despite possibly being frequency dependent, they tend to be wide-band), and put "coloration" as a sub-category within that category. But my understanding may not reflect the general usage within the community. It may be that Bryon's usage is the more normal. In which case "coloration" is the broad category, and "narrow-band coloration" is a sub-category. But in either case, I understand the term "neutrality," as used here, to apply to the broad category. When Bryon talks about playback system coloration, I just substitute "playback system distortion" because I know that is the way he is using the term. If I come to believe that my understanding of the term is non-standard, I'll adjust my thinking accordingly. If I become convinced that I'm right, I'll suggest to Bryon that he adjust his terminology. As to the question of whether one kind of resolution loss should be assigned to one category or another, I don't see it as being enough to derail the general progress toward a clearer understanding of the topic. In my own classification scheme, I'm not even sure where I'd put harmonic distortion. But I can live with that. As a musician, you must be comfortable with shades of gray and non-absolutist thinking, even though the notes are set down in ink on a piece of paper by the guy who wrote the music. |
I think your approach ultimately is only useful for each individual on his or her own, to come up with his or her own "personal reference point." I don't think there could ever be a generally accepted sense of "neutrality," even as you have refined it with the different types of colorations. I'd like to point out that the title of this thread is "How do YOU judge YOUR SYSTEM'S neutrality?" [Emphasis mine.] It is not, "I'm going to compel you to make your system conform to my idea of neutrality." It seems an obvious point, but it appears to have been lost in much of the discussion throughout this thread. Likewise, Bryon, I think that although it may be possible to come up with a very lengthy list of different categories of colorations that many audiophiles could agree upon IN THEORY, I doubt that there would ever be much consensus on this IN PRACTICE, the result being that very few audiophiles would end up coming up with the same sense of "neutrality." I disagree. My own experience with my system leads me to believe that if you could A/B the various types of coloration in an otherwise constant system, almost all audiophiles would prefer the more neutral system. Maybe Dgarretson or Almarg could tell us if such a test is possible on some of the forms of coloration (for instance, is there a way you could introduce and remove intermodulation distortion, harmonic distortion, crosstalk, etc.?), but how many audiophiles are going to prefer a boomy speaker cabinet, room modes, comb filtering, etc.? I had the opportunity to listen to reduced jitter in my system in two stages (first by adding a Monarchy box between my computer and DAC, then going with a DAC with asynchronous USB), and with each improvement, there were significant improvements in sound quality that I can't imagine any audiophile not preferring. I liked my sound before, but after reducing this form of coloration, I cannot imagine going back.
I think personal preference plays its strongest role when tradeoffs are required. For instance, if one has to choose between an excess of speaker cabinet resonance, or having poorer resolution, it is likely that audiophiles would be split. But the choice between more or less cabinet resonance is simple, and I think most audiophiles would choose less. |
Dgarretson writes: It would be particularly interesting to hear from designers of boomy cabinets. Heh. Relating to continuousness, movement toward neutrality implies a more organized presentation. This is an interesting notion. If we consider the source as maximally organized information, then each stage in the audio chain has the potential to disorganize some information. The extent to which we don't corrupt the information determines the organization of the final presentation. So for a system, the greater its neutrality, the lower its entropy. Thanks, I hadn't really thought about it like that. It helps explain why upstream improvements (i.e., toward the source) often seem to have the biggest impact: the reduction in entropy is carried through more components, maximizing the potential gain across the entire system. |
Similar to Bernard Malamud's Roy Hobbs in "The Natural", who tried with futility to hit a hectoring dwarf (troll?) in the grandstands with line drives from his Wonderbat. Many message boards give you the option to put trolls on ignore. It cuts down the clutter. I think I'll suggest it to the Audiogon folks. |
Learsfool says: To grossly summarize, our position would be that although colorations exist, this does not mean that neutrality does. We don't believe that there could ever be a piece of audio equipment, let alone an entire system, that has no coloration, meaning therefore that "neutrality" is an abstract concept, not something that has or could have real material existence. By this argument, you also believe that pressure exists but vacuum does not because nobody has (or ever will) make one. So all these threads discussing "vacuum tubes" should really be corrected to be about "very low pressure tubes." Good luck with that. |
The appearance of this thread certainly could be evidence of the activity of a troll. LOL! By posting an on-topic discussion on the application of a term that is in common use within the community, he's a troll? Somebody alert the authorities. I thought the term Troll was reserved for folks who posted threads on controversial subjects in which people are known to have strong diverse opinions and resolution is not possible. No, in this context a troll would be someone who posts off-topic, insulting, disparaging, and generally rude comments, with no other goal than to disrupt an otherwise civil discussion. |
Meanwhile, back in the on-topic world, I think I’ve come up with a theoretical explanation for Bryon’s observations. My earlier mention of entropy got me thinking about another kind of entropy: Shannon entropy, from information theory. The entropy I mentioned previously was thermodynamic entropy, for which organization and entropy are inversely related (i.e., more entropy implies less organization, and vice versa). But Shannon entropy quantifies the unpredictability of a variable (or process).
The prototypical example to demonstrate Shannon entropy is of a “fair” coin. (A fair coin is one with an equal probability of coming up heads or tails when flipped.) Such a coin is maximally unpredictable and, because there are two possible outcomes, has one bit of entropy (i.e., you need one bit of information to communicate the result of the next flip). A coin that always comes up one way (either heads or tails), is entirely predictable, and therefore has zero bits of entropy. A coin that is biased (i.e., one result is more probable than the other) has entropy somewhere between zero and one, depending on how biased it is.
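The coin example is easy to make concrete. Here is a small Python sketch of the standard binary-entropy formula, H(p) = -p*log2(p) - (1-p)*log2(1-p):

```python
import math

def coin_entropy(p):
    """Shannon entropy, in bits, of a coin that lands heads with probability p."""
    if p in (0.0, 1.0):
        return 0.0  # perfectly predictable: zero bits
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

print(coin_entropy(0.5))  # fair coin: exactly 1.0 bit, maximally unpredictable
print(coin_entropy(1.0))  # always heads: 0.0 bits
print(coin_entropy(0.9))  # biased coin: roughly 0.47 bits, in between
```

Note that the curve is continuous: the more biased (predictable) the coin, the lower the entropy, exactly as described above.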
The main point here is:

Higher entropy => less predictability
Lower entropy => more predictability
What does this have to do with music and playback systems? Everything. Consider the information in the source (the music) to have some amount of entropy, X. (Interestingly, and perhaps helpfully, X will be a measure of how much the source can be compressed without loss.) The colorations/distortions are processes that reduce that entropy. Why? Because those processes are predictable. This is not to say they are fixed, or constant (we’ve discussed processes that are frequency dependent, for example), but they are predictable in that their effect on a signal may be known. And because they conceal/corrupt/eliminate some source information and replace it with predictable information, they reduce (at output) the original entropy of the source to something less than X.
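The parenthetical about lossless compression can be demonstrated directly. A quick sketch using Python's zlib (standing in for any lossless compressor): low-entropy, predictable data compresses dramatically, while high-entropy, unpredictable data hardly compresses at all.

```python
import random
import zlib

random.seed(0)  # make the demo deterministic

# Maximally predictable "signal": one byte repeated.
predictable = bytes([60]) * 10_000

# Unpredictable "signal": independent uniformly random bytes.
unpredictable = bytes(random.randrange(256) for _ in range(10_000))

# Compressed size serves as a practical proxy for Shannon entropy.
print(len(zlib.compress(predictable)))    # a few dozen bytes
print(len(zlib.compress(unpredictable)))  # close to the original 10,000
```

So "how much the source can be compressed without loss" really is a workable, measurable stand-in for X.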
Consider a system that, no matter what source you play, puts out only a 60Hz hum. This system delivers a zero-entropy playback. It is maximally predictable. If you improve the system so that some of the source material starts poking through the hum, the entropy increases. Entropy is maximized when the source is played back with minimal predictable, system-added content (and the only source of unpredictable content is the source itself).
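The hum example can be simulated along the same lines. In this sketch a pure periodic tone stands in for the hum and uniform random samples stand in for unpredictable source content (both are crude stand-ins chosen for the demo, not models of real hardware). As the hum fraction falls and source content pokes through, the compressed size, our entropy proxy, grows.

```python
import math
import random
import zlib

random.seed(1)
N = 8000  # samples per simulated "playback"

def quantize(x):
    """Map a sample in [-1, 1] to one byte."""
    return max(0, min(255, int(128 + 127 * x)))

def playback_bytes(hum_fraction):
    """Mix a fixed-period tone (predictable) with random noise (unpredictable)."""
    out = bytearray()
    for n in range(N):
        hum = math.sin(2 * math.pi * n / 133)  # exactly periodic every 133 samples
        source = random.uniform(-1.0, 1.0)     # stand-in for source content
        out.append(quantize(hum_fraction * hum + (1 - hum_fraction) * source))
    return bytes(out)

for f in (1.0, 0.5, 0.0):
    print(f, len(zlib.compress(playback_bytes(f))))
```

With these assumptions the all-hum output compresses to a small fraction of its size while the all-source output barely compresses: maximal predictability means minimal entropy, and vice versa.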
So, getting back to Bryon’s observations, the reason that a more neutral system causes timbres/songs/albums to sound more unique and their ranges sound more diverse, is because they literally are more unique/diverse upon delivery to the ears. Which is to say they have higher entropy.
This also explains our intuitive notion that while some colorations may be desirable, they will still tend to homogenize the music.
This also helps put to rest my concerns over the issue of excess contrast requiring a modification of the terms of the operationalization. The Rube Goldberg machine (as Bryon put it) that I proposed was meant to be one endpoint in the continuum of contrast (the sine wave generator being the other). But to enhance contrast my machine replaced sounds from the source (and more generally the set of all recorded music) with sounds from the (larger) set of all recorded sound. So I effectively increased the entropy, but I did it by bringing non-source information into the system. Which is cheating, because real audio systems don't do that. The only outside information (of which I am aware), other than the source itself, that enters an audio system is the power. Power fluctuations and noise on the line, to the extent that they are stochastic processes, would act to increase entropy (and to the extent that they are not stochastic processes, would decrease entropy). But my guess is that their nature is such that they would not act to increase perceived contrast in the music. In any event, I think the notion that the operationalization would push us toward systems of excess contrast can be dispensed with. |
(1) Decreasing entropy = Increasing predictability.
(2) Increasing predictability = Increasing coloration.
(3) Increasing coloration = Decreasing neutrality.

Therefore: (4) Decreasing entropy = Decreasing neutrality.

And also: (5) Preservation of entropy = Preservation of neutrality.
Is this correct?
Yes, that is essentially the argument. #1 and #3 are true by definition. The reason for #2 is that I am asserting the coloration processes are not stochastic. This assertion is consistent with the definition you quoted from Stereophile: Coloration: An audible "signature" with which a reproducing system imbues all signals passing through it. Which is to say that colorations are replacing/concealing/corrupting musical information with a "signature" (i.e., more predictable information).

This understanding of the entropy-neutrality relationship is somewhat in contrast to something I wrote earlier (first quoting Dgarretson): Relating to continuousness, movement toward neutrality implies a more organized presentation.
This is an interesting notion. If we consider the source as maximally organized information, then each stage in the audio chain has the potential to disorganize some information. The extent to which we don't corrupt the information determines the organization of the final presentation. So for a system, the greater its neutrality, the lower its entropy.
Here I was talking specifically about the entropy of the musical organization. However, I failed to consider that the colorations were, in fact, more organized than the music. So while the music was becoming less organized *as music*, the overall presentation was actually more organized (i.e., it had lower overall entropy). So I had mistakenly reversed the neutrality-entropy relationship.

Bryon writes: INNACCURACY: Alterations to the playback chain that eliminate, conceal, or corrupt information about the music.

My only problem with this definition (aside from the typo) is a nitpick: "Alterations to the playback chain..." sounds like you are talking about changes to the hardware. More precise might be something like, "Alterations to the source (or music) as it passes through the playback chain..." and then drop "...about the music." Or maybe just change the word "to" to "within."

COLORATION: Inaccuracies audible as a non-random** sonic signature.

I very much like this definition, as it closely matches my thinking about what a coloration is, without the restrictive "narrow band" constraint that I was considering. |
One possibility is that, according to some posters, this thread is "philosophical" and "academic." This is the part I find most puzzling. I realize that there is a certain anti-intellectualism running rampant in certain circles in the US these days, but I'm surprised to find it in the audiophile world of high-end music, aesthetic appreciation, and outrageously expensive equipment with no other purpose than personal enjoyment. All of which activities are, in a word, elitist.

Leaving aside what historically happens in countries that let anti-intellectual demagoguery gain political sway, it would be hard to find a country that has benefited more from academic exercise than this one. From our wealthy, intellectual, elitist founding fathers dabbling in political philosophy and coming up with the Constitution, to a bunch of egghead scientists who for decades pondered quantum mechanical weirdness that had no practical use... until it did, to people like Nelson Pass sitting around trying to figure out which transistor "sounds better," we are the daily beneficiaries of activities that were, or are, largely academic. And that says nothing of the value of purely academic ideas in educating the minds of all the millions of people who, by learning to think rigorously, went on to do something "practical."

Personally, I have found this thread enormously valuable. I have had to think, in detail, about a number of concepts of which I previously had only a vague notion, which has helped clarify my thinking and improved my understanding of the role of these concepts (and their interrelationship) in the audio chain. And by discussing them and holding them up to scrutiny, I feel I better understand their limitations.

As for practical use, the thread started with a practical suggestion, and others have been made as the discussion progressed.
My own conception of neutrality in terms of entropy (which is probably not original, but I don't know otherwise) has the potential to be a usable technique. Entropy, as discussed, is an actual, measurable quantity of information. Were it measurable to a degree that allowed the detection of playback colorations (and I think it is), and were it correlated with listener experience (and I think it could be), it could become a quantity reported alongside other component measurements, like THD, channel separation, frequency response, etc., to help people choose the best component for their needs.

Tvad: This discussion is analogous to juggling water.

Where audiophiles are concerned, the analogies that come to my mind have more to do with bringing horses to water, and herding cats. As Bryon points out, there are seventy-eight thousand threads on this site alone. Is this one really so dangerous and disruptive? |
A while ago Bryon produced some equations. Among them:

1. CA = (1/L+N+D). A COMPONENT’S ACCURACY is determined by the amount of loss, noise, and distortion within the component. More specifically, a component's accuracy is INVERSELY PROPORTIONAL to its loss, noise, and distortion.

Just a nitpick here: operator precedence being what it is, the equation as written would be evaluated as CA = (1/L) + N + D. Your intent, that component accuracy be inversely proportional to all three of loss, noise, and distortion, would be better written as CA = 1/(L+N+D).

3. CR = CA + FR. A COMPONENT’S RESOLUTION is determined by the accuracy of the component and the format resolution of the source. Specifically, a component's resolution is DIRECTLY PROPORTIONAL to its accuracy and the format resolution.

I've been wrestling with this one, because I don't think of a component's resolution as limited by the resolution of the source -- that is, the output at any given moment may be limited by the source, but that is not the component's inherent resolution limit. It is only when the source resolution exceeds the component resolution that you can know anything about the component resolution, at which point the source resolution ceases to be a factor. Or maybe I'm missing your point.

4. SA = SoCA. A SYSTEM’S ACCURACY is determined by the sum of its components’ accuracy. Specifically, they are DIRECTLY PROPORTIONAL.
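The precedence nitpick is easy to check directly. A minimal sketch in Python, with arbitrary illustrative values for L, N, and D (they are not measurements of any real component):

```python
# Hypothetical values for loss, noise, and distortion, chosen only
# to illustrate operator precedence; not real measurements.
L, N, D = 0.25, 0.25, 0.5

as_written = 1 / L + N + D     # parsed as (1/L) + N + D in most languages
as_intended = 1 / (L + N + D)  # accuracy inversely proportional to the sum

print(as_written)   # 4.75
print(as_intended)  # 1.0
```

The two readings differ substantially, which is why the parenthesization matters even in an informal equation.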
5. SN = SoCN. A SYSTEM’S NEUTRALITY is determined by the sum of its components’ neutrality. Specifically, they are DIRECTLY PROPORTIONAL.
I have a couple of thoughts on these "sum of" relationships.

1) Some types of errors may not be simply propagated through downstream components, but may actually be reinforced by them. This kind of error may result in an exponential relationship, rather than a simple additive one. This would be an example of bad synergy among components.

2) In some cases, the entire chain may be limited by a single component. Resolution, for instance, may well be a function of the least resolving component in the chain, rather than the sum of small losses in several components. Neutrality, on the other hand, is likely the sum of the components' contributions.

I realize that you did not intend these to be strict mathematical relationships, but these are some ideas that occurred to me about other types of relationships among components. |
Nice concluding post. It does, however, raise the question of the accessibility of the truth. For instance, how do I know what is the musical event and what is my system? So now I have to come up with a way of determining how much, and in what way, my playback system alters the source material. Any thoughts on that? :) |
Like Bryon, I took the red one. But that blue one can be oh-so-seductive at times, especially with those recordings that are just a bit overproduced. |
As soon as anyone else listens to it, it does technically become a performance. Actually, technically, that's a playback of a performance unless the listening is in real time. In that case you have to make the distinction between recording the sound coming out of a speaker (a live event) and recording the data produced by the device (a virtual event). For an electronic device sending a signal directly to the recording medium, the Objectivist viewpoint is impossible because there is nothing to compare the playback to. There was never a "sound" that was recorded. It would be like running a random section of a computer's hard drive through a DAC and asking how it compared to the live event. What live event? As for the rest of your post, I'll let Bryon respond, but he's talking about a continuum that runs from live and acoustic to virtual and electronic, not about placing every recording into one of two categories. As you move across that continuum the Objectivist approach is either more or less valid, not simply valid or invalid. |
Learsfool, I don't think I can be as accommodating as Bryon on your definition of "performance" to include playback. A performance is an event, unique in time and space, and as such, can never be repeated. The performance can be recorded and played back, but that is (to use Bryon's terminology) a representation of the performance, not the performance itself. (Unless you are considering your audio system's speakers, for example, as participating in the performance, in which case I think you are conflating the terms "performance" and "playback," and our disagreement is more semantic than philosophical.) If, as you suggest, musicians consider playback to be performance, then I submit that that belief is idiosyncratic to that group, and not consistent with the ordinary understanding and usage of the term "performance." Are you saying that a Subjectivist cannot evaluate the truthfulness of a recording?? A Subjectivist can evaluate the truthfulness of a recording, but he is acting as an Objectivist when he does so. |
Newbee, there's a lot to be objective about. In GET BETTER SOUND, Jim Smith makes a case (tip #171) that the "personal taste" argument is flawed. He argues that if there were perfect speakers, almost everyone would prefer them. In fact, his whole book is dedicated to getting the system out of the way so you can get closer to the "live" sound of your music.
I don't even understand the subjectivist argument. I can imagine listening to a specific piece of music and thinking "they should have mixed the percussion higher" or "that should have been a cello not a bass" or "they should have upped the tempo there." But I can't imagine saying "all music should have more X," where "X" is some factor imposed by the playback system (except, of course, where X = "fidelity to the source"). What would it be that you'd want your system to add (that isn't in the source) to everything you listen to? Bass? Treble? Harmonics? Rap lyrics? It's all distortion that might make some music sound better to you, but other music will certainly sound worse.
And this is, I think, the source of Bryon's observation in the original post. When you remove a bit of system distortion, different things sound more different because a common element has been removed from everything you hear. |
"The word 'unique' as you have used in your original post, is absolute, it cannot be (should not be) modified further by using terms like less or more as is so commonly done."
This is simply no longer true. Traditionalist grammarians didn't like it, but modern usage recognizes and allows qualification of "unique."
But even if it were true, it seems an odd issue to raise when the meaning in the original post was clear. What point are you making about the application of the word "neutrality?" Do you want to substitute another word for "unique" in the original post? How would that affect the points being made?
"But then, I listen to the MUSIC in the first place, so would never make these errors."
I don't see any need for this kind of hostility. This is a discussion about defining and applying some terminology. Is there any reason it can't remain civil? |
If the settings of an equalizer are changed from Setting A to Setting B, as I see it that amounts to a change in the system, which should be evaluated similarly to how substitution of one component for another component would be evaluated.

I agree with this. But I was mixing two points. My main point was more about the use of equalization (or some other process) to enhance contrast beyond what actually exists in the source, or even the live performance. If we assume that neutrality is a characteristic to be maximized, and that increasing contrast increases neutrality, then, barring some counterbalancing force, we will always work to increase contrast. So, for instance, if I'm listening to a violin concerto, and I happen to know that the timbre of violins is controlled within a certain range of frequencies, I could cleverly EQ the recording to make the different violins sound more different from one another than they actually do. (The same argument can be made for inter-recording contrast, as I've just made here for intra-recording contrast, by using recording-specific EQ.) By the rules introduced in this thread, I've achieved greater neutrality, which is something we're trying to maximize. But the result is not desirable. So, assuming that excess contrast is possible, what can we introduce to counterbalance the drive toward always increasing contrast? |
Learsfool writes: It most certainly does NOT follow that just because I don't believe in neutrality, that therefore I don't believe in coloration! (The same goes for the "neutral room"/"room coloration" thing). The only way this could possibly be true is within the context of your own personal definition, which is precisely what is under debate here.

No, the thing being debated is how one judges the relative neutrality of one's playback system. The neutrality of a playback system has been defined as the degree of the absence of coloration added by that playback system. If "DoN" is the degree of neutrality of a playback system, and "DoC" is the degree of coloration of a playback system, then (DoN = 1 / DoC) is the assumption of this thread as stated by Bryon. If you believe that playback systems can add more or less coloration, then you implicitly believe that a system can be more or less neutral, as defined here, whether you believe you believe that or not. You can't believe in speed (distance/time) and not believe in slowness (time/distance) and remain logically consistent. If you want to change the definition of playback system coloration or playback system neutrality so that the above equation doesn't hold, feel free to do so, but please do so explicitly, and be aware that your definition isn't the thing under discussion here.

As for the "coloration" part: you are using this term in an extremely narrow sense.

Yes, he is. He has stated numerous times that he is talking about certain types of alteration of source information by a playback system.

There is certainly no such thing as a "neutral" violin. A Strad, which costs millions, is not more "neutral" than a $500 school instrument, though of course all would agree it sounds far better, and has a very different "coloration."

A violin is not a playback system; it is a musical instrument. It therefore falls outside the scope of coloration and neutrality as discussed here.
The sound of the musical instrument in its recording environment is the subject of our playback systems, not the object. Throughout this thread you have consistently equated playback system neutrality with musical neutrality, but that has never been the suggestion of the thread. Again, as Kijanki and I keep asking, how do you know what anything is "supposed" to sound like? I believe he has stated that aural memory is at least one route to this goal. But I don't even think that is necessary. If my system adds a 60Hz hum (a form of coloration) to everything it plays back, there is no guarantee that the removal of the hum will make what comes out of my speakers sound more like the things they are, but they're not going to sound less like them. So, objectively, by removing coloration (i.e., increasing neutrality), my playback system stands a better chance of accurately reproducing the source. Will it "sound better?" That's for me to decide. But it will be more neutral by the terms of this thread. |
Bryon writes: I think this is accurate, insofar as I have been ignoring ways that systems can sound different that are NOT attributable to differences in playback colorations. I will call those differences COLORATION-INDEPENDENT CHARACTERISTICS. A coloration-independent characteristic is a sonic characteristic of a component/system that is:
(1) VARIABLE, in the sense that multiple values of the characteristic are possible, and (2) COLORATION-NEUTRAL, in the sense that, for at least a limited range of values, differences in the value of that variable have either (a) no effects or (b) identical effects on the concealment and corruption of information about the music. I'm not sure I agree with #2. We've already identified resolution as existing outside of neutrality/coloration, but it would not pass part b of this test, because low resolution would conceal information. Coloration-neutral characteristics would seem to demand a definition that speaks in some way to their frequency independence, and could then include things like dynamic range (headroom?), scale, and microdynamics. Although, honestly, there is very little in audio that is frequency independent, so the definition will have to be a matter of degree. |
Mrtennis writes: ...better is a subjective term... Yes, it is. But it also isn't the subject of this thread. We're talking about neutrality. ...since there is no known reference in audio as the sound of a recording is completely unknown... If you truly believe that, a few posts ago I proposed a hypothetical system (now referred to as the "Rube Goldberg machine") with which you could replace your current system. Given that the sound of a recording is "completely unknown," I assume you wouldn't notice the difference. You should, in fact, be satisfied with the sound of pounding on your recordings with a hammer because, arguably, that's what they really sound like. thus, two audiophiles will disagree as to which audio system is closer to "neutrality". Ignoring the non sequitur (see my first point), that point would only be valid if judging neutrality required an absolute reference. The OP proposed a means that required only a relative reference. In my experience, most audio reviews and personal judgments are made on the basis of the relative merits of components and systems, and don't require an absolute reference. But also in my experience is a lifetime of hearing things, human voices and musical instruments included, and I can tell live/real from recorded/reproduced, and I can tell a better reproduction from a poorer one. Can't you? |