Theoretical Pre Amp Question


The real-world answer would be to listen to it both ways and pick, because execution matters, but theoretically...

If a source offers a choice of high (2V) or low (1V) output, then at typical listening levels the preamp will be attenuating the signal to much less than 1V. Which source output level SHOULD be better? Is there likely to be more distortion or noise from a preamp at a lower or higher input level, even though either would use less than unity gain? If specifically using a tube preamp, SHOULD the source level have an impact on how much “tubiness” comes through, even though the overall gain is below unity? What about potential interconnect effects? Wouldn’t a higher-level signal be more resistant to noise as a percentage?
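On that last question, here is a back-of-the-envelope sketch (the noise figure below is made up, purely to show the arithmetic): if the interconnect and the preamp input pick up some fixed amount of noise, a hotter source simply sits proportionally further above it, so doubling the source voltage buys roughly 6 dB of margin against anything injected before the volume control.

```python
import math

def snr_db(signal_v, noise_v):
    """Signal-to-noise ratio in dB for an RMS signal over a fixed RMS noise floor."""
    return 20 * math.log10(signal_v / noise_v)

# Hypothetical fixed noise picked up on the interconnect / at the preamp input.
noise_floor_v = 20e-6  # 20 microvolts RMS (made-up figure, purely for illustration)

for source_v in (1.0, 2.0):
    print(f"{source_v:.0f} V source: SNR over that noise ≈ "
          f"{snr_db(source_v, noise_floor_v):.1f} dB")

# Approximate output:
# 1 V source: SNR over that noise ≈ 94.0 dB
# 2 V source: SNR over that noise ≈ 100.0 dB
```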

In an ideal theoretical case there is no distortion or noise. In a real-world, empirical test, the implementation dictates results. I’m just curious about the in-between case: typical expected results based on standard practice and other people’s experience.


cat_doorman

Showing 1 response by erik_squires

You keep ruling out both the purely hypothetical and the purely empirical evaluation. If there’s no distortion or noise, then why does it matter?

I think it may help you to understand a little about how preamps are usually (but by no means always) designed.

There is an input buffer, a gain stage, and then, at the end, the volume control.
For historical reasons, preamps of the past had what we would today consider far too much gain.  If you imagine what it took for a radio to pick up very weak stations, for instance, you'd understand why so much additional gain might have been desirable.

99.99999% of the additional noise in a preamp comes from the unavoidable gain stage. So if you can significantly reduce the gain, a trick worth considering in older tube preamps, you get a much cleaner signal at the output, regardless of the volume control setting.
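To put rough numbers on that (everything here is hypothetical, and the model is deliberately simple): treat the gain stage as having a fixed input-referred noise, so the noise it hands to the volume control is that noise multiplied by the gain. Cutting the gain from 20 dB down to 6 dB cuts the noise injected ahead of the pot by the same factor, and a hot 2 V source is what lets you get away with the lower gain while still swinging more than enough voltage for a typical power amp.

```python
# Hypothetical gain stage: fixed input-referred noise, so the noise it delivers
# to the volume control scales with whatever gain we ask it for.
input_noise_uv = 5.0   # 5 uV RMS input-referred noise (made-up figure)

for gain_db in (20.0, 12.0, 6.0):
    gain = 10 ** (gain_db / 20)            # convert dB to a voltage ratio
    noise_out_uv = input_noise_uv * gain   # noise arriving at the volume control
    max_out_v = 2.0 * gain                 # maximum swing from a 2 V source
    print(f"gain {gain_db:>4.0f} dB ({gain:4.1f}x): "
          f"noise into the volume pot ≈ {noise_out_uv:5.1f} uV, "
          f"2 V source drives up to {max_out_v:4.1f} V")

# At 20 dB of gain, roughly 50 uV of noise sits ahead of the pot and a 2 V
# source could swing about 20 V, far more than a typical power amp needs.
# At 6 dB it's about 10 uV and about 4 V, which is still plenty — the basic
# argument for trimming excess gain when the source is already hot.
```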