I think Bryon and Almarg addressed most of Learsfool's comments, but I'd like to add something on this point:
Typically with room correction there is a fair amount of objectivity in the process. You play frequency sweeps through the system and then look at the response curve, with the goal of setting filters to reduce peaks caused by room modes. Some systems do this entirely automatically, though I believe that the manual approach is still better. But I don't think this is the same as setting the system so it sounds good to the individual. It is set to neutralize room modes, and as a byproduct the system sounds better.
This is really not all that different from voicing a system by moving the speakers around and checking the results on a real-time analyzer. A properly treated room with well-placed speakers is an attempt to minimize the coloration caused by the room. But a lot of folks have limited options for treatments and speaker placement, and for them room EQ is a viable alternative for reducing the coloration the room adds.
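To make the measure-then-cut part of that process concrete, here is a rough Python sketch of the workflow described above. Everything in it is illustrative: the "measured" curve is synthesized, the mode frequencies and the 3 dB threshold are made up, and a real system would derive the response from actual sweep measurements and apply proper parametric filters rather than just printing suggested cuts.

```python
# Minimal sketch of the "measure, then cut the peaks" workflow described above.
# The "measured" response is synthesized here; a real system would derive it
# from a sine sweep captured at the listening position.
import numpy as np
from scipy.signal import find_peaks

freqs = np.logspace(np.log10(20), np.log10(300), 500)   # low-frequency band, Hz
target = np.zeros_like(freqs)                           # flat target, in dB

def mode_bump(f, f0, gain_db, q=8.0):
    """A made-up resonance shape standing in for a room-mode peak."""
    return gain_db / (1.0 + ((f - f0) / (f0 / (2.0 * q))) ** 2)

# Hypothetical in-room response: flat target plus two room-mode peaks
measured = target + mode_bump(freqs, 45.0, 9.0) + mode_bump(freqs, 110.0, 6.0)

# Find peaks standing more than 3 dB above the target and propose cut filters
idx, props = find_peaks(measured - target, height=3.0)
for i, h in zip(idx, props["peak_heights"]):
    print(f"Peak at {freqs[i]:6.1f} Hz, +{h:4.1f} dB -> suggest roughly a {h:4.1f} dB parametric cut")
```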
To be clear, I am not suggesting that room correction is worthless; these systems can make a big difference. I just wonder: how do you know when the room has been corrected? Again, only you can answer that for yourself, and your answer may be very different from any other audiophile's.
Almarg wrote:
If a system is truly accurate yet still results in a lifeless/soulless sound, then it seems to me that there is a problem with the recording(s) being listened to. In that case, it seems to me to be perfectly legitimate to introduce some modest degree of inaccuracy into the system, such as non-flat frequency response, to compensate. The price that will be paid is that other recordings which are more accurate and transparent will then no longer be reproduced to their full potential.
I mentioned in an earlier post that it is now possible to provide different EQ for every song in your library. (The capability is a bit crude now, and would be difficult to implement for analog sources, but there are no technological hurdles to it.) This is the have-your-cake-and-eat-it-too scenario: you could fix up the recordings that need it and leave the others alone. One could imagine adding other tools besides EQ: volume graphing, dynamic range enhancement, etc.
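As a rough illustration of what per-recording EQ might look like in software, here is a minimal Python sketch. The track names, the EQ settings, and the peaking_biquad/play helpers are all hypothetical; this is not how any particular player or format actually implements the idea.

```python
# Hypothetical per-track EQ: each recording carries its own correction bands,
# applied only when that recording is played. Track names and settings are
# made up for illustration.
import numpy as np
from scipy.signal import lfilter

FS = 44100  # sample rate, Hz

# (center frequency Hz, gain dB, Q) per band, keyed by track
TRACK_EQ = {
    "bright_70s_remaster.flac": [(3500, -3.0, 1.4)],               # tame a harsh upper midrange
    "muddy_live_recording.flac": [(180, -4.0, 1.0), (8000, 2.0, 1.0)],
    # tracks not listed here are passed through untouched
}

def peaking_biquad(f0, gain_db, q, fs):
    """Standard peaking-EQ biquad coefficients (Audio EQ Cookbook form)."""
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return b / a[0], a / a[0]

def play(samples, track_name, fs=FS):
    """Apply the track's EQ bands, if any, before sending audio to the output."""
    for f0, gain_db, q in TRACK_EQ.get(track_name, []):
        b, a = peaking_biquad(f0, gain_db, q, fs)
        samples = lfilter(b, a, samples)
    return samples

# Usage: EQ'd track vs. a track with no stored correction
noise = np.random.randn(FS)                      # one second of test audio
eqd = play(noise, "bright_70s_remaster.flac")    # correction applied
flat = play(noise, "some_other_track.flac")      # passed through untouched
```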
On that same point, this capability appears to be one (less Rube Goldbergesque) way of achieving greater contrast within and among recordings, even to the point of exceeding the contrast in the source. By the definition given in the OP, that would increase neutrality, while also being less accurate and possibly more or less transparent.
So, again, do we need to rein in neutrality with some counterinfluence beyond a simple monotonic relationship with contrast? There are a couple of approaches that one would ordinarily use:
1) Instead of a simple linear function, you add a saturation term. Let's use "N" for neutrality and "C" for contrast; lower-case letters are constants. We start with something like N = a + b*C, then add a term that causes neutrality to saturate and even reverse: N = a + b*C - d*C^2 (where "C^2" is C squared). Here d would be small, so that for small C the linear term dominates, but for larger C the C^2 term takes over. Thus, increasing contrast buys increasing neutrality up to a point (at C = b/(2*d)), after which the function rolls over and neutrality starts to decrease.
2) You can leave that function alone, but introduce a second function whose behavior runs in the opposite direction. Say the parameter in question is X; then you have X = e + f*C, where f is negative. Note that the variable doesn't have to be C, contrast; it could be some other parameter tied to C. You then adjust the coefficients so that the intersection of the two lines corresponds to ideal neutrality and ideal X. On one side of that point you want to increase contrast; on the other, you want to decrease contrast (or the related parameter). Both approaches are sketched numerically right after this list.
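Here is a minimal numerical sketch of both approaches. The coefficient values are entirely arbitrary placeholders (nothing in this thread implies particular numbers); the point is only to show the rollover in #1 and the crossing point in #2.

```python
# Numerical sketch of the two approaches above. All coefficient values are
# arbitrary placeholders; only the shapes of the curves matter here.
import numpy as np

C = np.linspace(0, 10, 1001)            # contrast, arbitrary units

# 1) Linear term plus a saturating quadratic: N = a + b*C - d*C^2
a, b, d = 1.0, 2.0, 0.15
N = a + b * C - d * C**2
print(f"Approach 1: neutrality peaks at C = {b / (2 * d):.2f} "
      f"(curve maximum at C = {C[np.argmax(N)]:.2f})")

# 2) Leave N linear, add an opposing line: X = e + f*C with f < 0
e, f = 20.0, -1.5
N_linear = a + b * C
X = e + f * C
print(f"Approach 2: the two lines cross at C = {(e - a) / (b - f):.2f} "
      f"(closest grid point: C = {C[np.argmin(np.abs(N_linear - X))]:.2f})")
```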
The problem with both of these approaches is that you need a reference point of some sort. In #1 you need to know how much contrast is too much; in #2 you need to define ideal neutrality (and ideal X) so you can set the intersection. I confess I don't know how to do either, though I suspect the answer lies in knowing what things actually sound like. But that gets back to my earlier question: if one could define that point, would it alone be a sufficient condition for neutrality? And if one can't define the point, how do we know how much contrast is too much?