Theoretical Preamp Question


Real world answer would be to listen to it both ways and pick, because execution matters, but theoretically...

If a source offers a choice of high (2V) or low (1V) output, then at typical listening levels the preamp will be attenuating the signal to much less than 1V. Which source output level SHOULD be better? Is there likely to be more distortion or noise from a preamp at a lower or a higher input level, even though either would use less than unity gain? If specifically using a tube preamp, SHOULD the source level have an impact on how much “tubiness” comes through even though there is negative gain? What about potential interconnect effects? Wouldn’t a higher-level signal be more resistant to noise as a percentage?
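To put the “noise as a percentage” part in concrete terms, here is a rough back-of-the-envelope sketch. The numbers are made up purely for illustration: a fixed 10 µV of noise picked up on the interconnect, and roughly 100 mV actually needed at the power amp input.

```python
import math

def db(ratio):
    """Convert a voltage ratio to decibels."""
    return 20 * math.log10(ratio)

# Made-up numbers, just to make the percentages concrete:
cable_noise_v = 10e-6   # assumed noise picked up on the interconnect (10 uV RMS)
target_out_v  = 0.100   # assumed level actually needed at the power amp (100 mV RMS)

for source_v in (1.0, 2.0):
    atten_db  = db(target_out_v / source_v)     # attenuation the preamp must apply
    snr_in_db = db(source_v / cable_noise_v)    # SNR on the interconnect, before attenuation
    noise_pct = 100 * cable_noise_v / source_v  # noise as a percentage of the signal
    print(f"{source_v:.0f}V source: attenuate {atten_db:.0f} dB, "
          f"interconnect SNR {snr_in_db:.0f} dB, noise = {noise_pct:.4f}% of signal")
```

With those assumed numbers the 2V output buys about 6 dB of extra margin over whatever the cable picks up (0.0005% noise instead of 0.001%), provided nothing upstream of the volume control clips or distorts at 2V.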

In an ideal theoretical case there is no distortion or noise; in a real-world empirical test, the implementation dictates the results. I’m just curious about the in-between case: typical expected results based on standard practice and other people’s experience.


Showing 1 response by cat_doorman

I didn’t think through the circuit; the answer is pretty obvious once you do. There are 3 basic categories:
Case 1: buffer, gain, attenuation - this might have issues with variable output impedance, similar to a passive, depending on implementation.
Case 2: buffer, variable gain, buffer - I now remember something about the PS Audio Gain Cell varying gain instead of attenuating the signal.
Case 3: attenuation, gain, buffer - this keeps the gain and output impedance constant.
For a tube pre I think case 1 would impart a more constant tube character, because the gain stage runs at a constant level and the signal is only attenuated afterward. Case 3 would depend more on the implementation of the gain stage; with sufficient bias, a linear response wouldn’t color the signal more at higher volume than at lower.
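To make that concrete, here is a toy model with made-up numbers (a 4x gain stage, 5 µV of noise added by that stage, volume turned down 30 dB) showing where the gain stage’s noise ends up in case 1 versus case 3:

```python
import math

def db(ratio):
    """Convert a voltage ratio to decibels."""
    return 20 * math.log10(ratio)

source_v    = 2.0               # source output (V RMS)
gain        = 4.0               # assumed gain stage: 4x (~12 dB)
stage_noise = 5e-6              # assumed noise added by the gain stage (5 uV RMS, input-referred)
atten       = 10 ** (-30 / 20)  # volume control turned down 30 dB

# Case 1: buffer -> gain -> attenuation.
# The gain stage always sees the full source level; its noise is attenuated along with the signal.
sig1   = source_v * gain * atten
noise1 = stage_noise * gain * atten

# Case 3: attenuation -> gain -> buffer.
# The gain stage sees the already-attenuated signal; its noise is NOT attenuated.
sig3   = source_v * atten * gain
noise3 = stage_noise * gain

print(f"case 1 output SNR: {db(sig1 / noise1):.0f} dB")  # stays constant with volume setting
print(f"case 3 output SNR: {db(sig3 / noise3):.0f} dB")   # drops as the volume goes down
```

With those assumptions, case 1 holds its signal-to-noise ratio no matter where the volume sits, and the tube sees the same level either way, which fits the constant-character point. Case 3 loses SNR roughly dB for dB as you turn down, in exchange for constant gain and output impedance.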
Seems like running hot is the way to go, unless there turns out to be another reason not to.

Thanks for pointing me in the right direction guys.