Theoretical Preamp Question


The real-world answer would be to listen to it both ways and pick, because execution matters, but theoretically...

If a source offers a choice of high (2V) or low (1V) output, then at typical listening levels the preamp will be attenuating the signal to well under 1V. Which source output level SHOULD be better? Is a preamp likely to produce more distortion or noise at a lower or a higher input level, even though either case uses less than unity gain? If specifically using a tube preamp, SHOULD the source level affect how much “tubiness” comes through, even though there is negative gain? What about potential interconnect effects? Wouldn’t a higher-level signal be more resistant to noise as a percentage?
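As a rough sketch of the "noise as a %" question, here is some back-of-the-envelope arithmetic. All the voltages are hypothetical, chosen only to illustrate the point; it assumes noise picked up on the interconnect rides into the volume control along with the signal, while the preamp's own noise is added after the control:

```python
import math

def db(ratio):
    """Convert a voltage ratio to decibels."""
    return 20 * math.log10(ratio)

# Hypothetical numbers for illustration only.
cable_noise = 0.001        # 1 mV of hum/RFI picked up on the interconnect
preamp_noise = 0.0001      # 0.1 mV of noise added after the volume control
target_level = 0.2         # 200 mV wanted at the gain stage for the chosen loudness

for source_v in (1.0, 2.0):
    atten = target_level / source_v              # volume-control setting (linear)
    sig = source_v * atten                       # same 200 mV either way
    cable_n = cable_noise * atten                # interconnect noise is attenuated too
    total_n = math.hypot(cable_n, preamp_noise)  # uncorrelated noise adds in quadrature
    print(f"{source_v:.0f} V source: attenuation {db(atten):+.1f} dB, "
          f"SNR vs cable noise {db(sig / cable_n):.1f} dB, "
          f"overall SNR {db(sig / total_n):.1f} dB")
```

At a matched listening level both sources end up at the same 200 mV after the control, and the 2V source keeps a 6 dB edge over anything picked up on the cable; noise generated inside the preamp after the volume control is the same in both cases, so the advantage shrinks as that noise dominates.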

In an ideal theoretical case there is no distortion or noise. In a real-world, empirical test, the implementation dictates the results. I’m just curious about the case in between: the typical expected result based on standard practice and other people’s experience.


cat_doorman

Showing 1 response by atmasphere

For a tube pre I think case 1 would impart more constant tube character because it is running at constant power and only attenuating after.
@cat_doorman The harder you run the tube circuit, the more distortion it will make. So it’s not a matter of how much signal the preamp is getting, since that goes through the volume control first. It’s more a function of how loud the system is playing. That said, tube preamps tend to have very low distortion figures relative to amplifiers.
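A minimal numeric sketch of that point, assuming the volume control sits ahead of the tube gain stage (true of many, but not all, tube preamps) and a hypothetical 0.5V needed at the tube stage's input for the chosen loudness:

```python
import math

# Hypothetical level the tube stage must see to hit the chosen listening volume.
needed_at_tube_input = 0.5

for source_v in (1.0, 2.0):
    atten = needed_at_tube_input / source_v   # knob setting that gives the same loudness
    atten_db = 20 * math.log10(atten)
    level_at_tube = source_v * atten          # what the tube circuit actually sees
    print(f"{source_v:.0f} V source: attenuate {atten_db:+.1f} dB, "
          f"tube stage sees {level_at_tube:.2f} V")
```

Both cases land on the same 0.5V at the tube stage, so the operating level of the tubes, and hence their distortion, tracks playback loudness rather than the source's output setting.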