Is there a rule of thumb that guides the balance between digital input level, analog preamp input level, and analog amp attenuation to reach your desired listening volume?
This is a really interesting thread. I recently decided to try a tube pre (Zesto Leto) in my setup. It pairs with my balanced DAC (MHDT Pagoda), SS amp (Krell XD Duo), and Alta Audio Alec speakers, with streaming handled by an Auralic Aries G1 streamer.
My other Pre is a Krell Illusions ii.
When I introduced the tube pre, I found the synergy/match on gain/voltage was off. As noted in this thread, some balanced DACs come out hot, and that's the case with mine. Without any adjustment I barely have to move the volume on the pre. This causes results similar to having too low a gain: I've got a hum, and the highs overpower the mids/bass. Putting an attenuator at the input stage of the amp (-10 dB) dramatically lowered the hum and balanced out the sound. Even at -10 dB it's still a bit hot; I can't take the volume knob above a quarter turn. I really like the sound of tubes on the front end, so my long-term solution is having a custom box built that will allow for three different voltage settings and a bypass setting. I could send the pre in to Zesto and have the gain adjusted, but then if I decide to switch amps or my DAC I might have a mismatch the other way. This setup will allow some flexibility.
The moral of my story is that the synergy/marriage of the pre and amp, as it relates to gain/voltage, is important. It can have a significant impact on sound quality and the noise floor. Most will point toward issues when something in the chain is too low, but too high causes issues as well.
Happy Holidays and Good luck on your Audio Journey!
First, my listening enjoyment on a system costing only $2K, between a recapped old Crown PS200 and the new Wharfedale Lintons, plus a Bluetooth DAC and an unpowered preamp switch with a pot, has benefited from all this info.
Basically, pushing in a higher digital level and living with analog attenuation of 50% at the preamp and 25% at the main amp has pleased the clever Peter Comeau-designed heritage speakers and delivered more detail to my listening. So thanks!
Second, I’m interested in the issue of lossless iTunes format from Apple. My little BlueMeHD Bluetooth DAC combination is a well-reviewed accessory for older amps, and seems to compare well to the more expensive TASCAM CD400U rack mount in my other stereo downstairs in our living room.
My best reading so far, however, suggests that Apple does not say their format survives Bluetooth transmission. I have a couple of iPads and could potentially rig one of them as a hard-connected high-speed Wi-Fi input device. Is that worth a new thread?
Then set the amp such that your preamp can be run between 10 and 2 o'clock depending on desired volume (therefore around 12). This is usually the sweet spot on most variable pots used in a preamp.
Data?
I have always been advised that one should set iTunes at full volume.
Compression is lo-fi. See ieLogical Lossy for a real-life iTunes example.
Lossy music on a high resolution system can be literally nauseating due to the constant atemporal image shift.
I am not surprised at your findings. I have always been advised that one should set iTunes at full volume. Then set the amp such that your preamp can be run between 10 and 2 o'clock depending on desired volume (therefore around 12). This is usually the sweet spot on most variable pots used in a preamp.
One problem you have is the total gain you have between preamp and amplifier causing you to attenuate much of the available gain in the amplifier. Your compromise seems the best fit.
My initial question is well solved thanks to this discussion. The practical info about gain helped. With the analog preamp pot at 50%, the iTunes slider at 0 dB, and the amp at 25% attenuation, everything was more forward at a final volume comparable to before my question. Before, with the slider low and the preamp pot way down so the amp could run at 100%, it seems I had it all backwards: starving things out and pinching the signal-to-noise ratio.
Biggest proof? The amazing difference from my Wharfedale Lintons that I can induce at this new setting just by sliding the iTunes level between 85% and 100%. Volume changed just a little, but mid and treble imaging exploded in a good way every time I took it back to 100%!
Already useful. Had a little time today for some ear tweaking: iTunes slider at 0 dB, preamp analog pot at 12 o’clock, Crown attenuators at 8 o’clock. Compared to my previous setting, the Wharfedales were more forward, the tweeter a bit brighter, the bass tight and rich. Unscientific impressions were that the signal was more complete without noticeable distortion. Less relaxing but more engaging LOL
Every analog stage has overload conditions. So it really helps to understand the rough signal level (e.g. RMS voltage) expected at each link in your chain, and to ensure you leave ample overload margin for the downstream component to handle that signal level (overload margins are often spec’d relative to a given RMS voltage input level).
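That bookkeeping is just dB arithmetic. Here is a minimal sketch of walking a signal through a chain and checking overload margin at each stage; every voltage, gain, and overload figure below is an illustrative assumption, not a spec for any component mentioned in this thread:

```python
import math

def db_to_ratio(db):
    """Convert a gain in dB to a linear voltage ratio."""
    return 10 ** (db / 20)

# Hypothetical chain: hot balanced DAC -> preamp -> power amp.
# All voltages are RMS; numbers are made up for illustration.
source_vrms = 4.0
stages = [
    {"name": "preamp",    "gain_db": -20.0, "overload_vrms": 8.0},
    {"name": "power amp", "gain_db": 26.0,  "overload_vrms": 30.0},
]

v = source_vrms
for stage in stages:
    margin_db = 20 * math.log10(stage["overload_vrms"] / v)
    print(f"{stage['name']}: input {v:.2f} Vrms, "
          f"overload margin {margin_db:.1f} dB")
    v *= db_to_ratio(stage["gain_db"])
```

With these made-up numbers, the hot 4 V source still leaves about 6 dB of margin at the preamp input, which is the kind of check the paragraph above is describing.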
Digital sources these days often have very hot output (sometimes well in excess of 4 volts on XLR outputs), and I wouldn’t worry about using a well-implemented digital volume control (lots of bit depth, 64 or more bits, and dithering) to keep the output levels reasonable, or just to do some L/R channel rebalancing for your system, as long as you’re not using it to attenuate TOO much.
Depending on the gain structure of your system, if you have too much gain and very sensitive speakers, sometimes it can help to have additional analog attenuation further downstream in order to push down the noise floor relative to signal.
Yup, that's the reality. One could almost say that the quality degradation of the audio signal, from recording to reproduction at home, tracks how much information about its causes gets ignored. Something could be done, because in the end it's all about gain. Using active speakers and intelligent volume control would be the easiest way to approach this problem at the consumer level.
If your power amp has input level controls, as some vintage Crown amps did, I would suggest attenuating them to allow your analog preamp volume control to operate with normal listening at the noon mark or a bit above, and your DAC or source unattenuated (0 dB). That way you avoid the loss of resolution from the source, and the nasties that most VC pots exhibit near full attenuation…at least try that and judge for yourself.
Apple Lossless is an interesting direction…although when I read their FAQ, they seemed to note that their format works via Bluetooth, but the benefits are not applicable?
@ieales: Just because you don't understand something doesn't make it automatically "utter nonsense". Please!
Maybe the following article can shed more light on the topic, although it doesn't explain everything that could be discussed here. 🙂 There are different results based on practical listening tests, and not everyone has to agree with everything even if it's mathematically correct (like the 24-bit signal and digital volume control).
If it's analog, a rule of thumb is no higher than 50%. Digital is messier if the attenuation is done digitally rather than in the analog output stage, but the 0 dB point mentioned throughout is also a good rule of thumb.
To be clear yoyoyaya, I'm using a BlueMe Bluetooth receiver from my iPhone as a DAC, with its output going into an unpowered Schiit with a volume pot. I've been keeping my old Crown PS200 amp attenuators at 100% (fully open) and am experimenting with varying the iMusic output levels to the BlueMe and the Schiit pot to control the overall volume of the rig.
This is probably not at the level of most systems discussed here but it sounds very good for my level of interest - with a REL T5i sub and a pair of Wharfedale Lintons. Just interested in making sure I'm not mucking up overall signal quality by playing too much with the input gain.
FWIW, my dCS Bartók DAC is set at 0.2 V output gain, its volume control is set at 0 dB (all the way up), and Roon is also at 0 dB. Volume is controlled by the volume pot on the Backert preamp. This gain staging was recommended by Backert’s Andy Tebbe. I would swear it brought about an improvement having the digital gain, streamer, and DAC all the way up.
You should run the power amp with no attenuation and use your preamp as your volume control - that's what it is designed for.
I don't understand the reference to changing the gain on your iPhone in the context of a feed to your preamp unless the Schiit has a built-in DAC. If that is the case, you should be running the output from the source at a fixed gain of 0 dB.
Very helpful summary…and on the money to my learning curve. Since I enjoy my Wharfedale Lintons and ample vintage Crown power at different volumes in our cozy bungalow, am I correct in saying that using the final attenuator knobs on the amp is the safest choice to preserve signal quality at various listening volumes, as opposed to lowering the gain from my digital music source or the preamp?
DAC = Digital to Analog Converter. If it has attenuation, it can be done mathematically before DA or post DA with digital attenuation or simply a pot, hopefully buffered.
As far as -10 dB for 16-bit or -20 dB for 24-bit being hinge points, that is utter nonsense.
In any system, matching gain structure is paramount and is best determined by inspection [listening or noise measurements].
I think of digital volume control as the digital signal being altered which is something audiophiles seem to go to great lengths to avoid.
I think of analog volume control as signal attenuation which narrows the gap between the signal and noise (i.e. reduced signal to noise ratio) which is also not desirable in the next stage of amplification.
I think the pro audio rule of thumb for gain structure is to have the input signal as high as possible to maximize the signal to noise ratio (I might be saying this incorrectly).
With these things in mind, I would try to set everything to "zero" and let the preamp do its job.
This is one reason that I had a DAC that doesn't have any volume control and set my bluesound node to have a fixed output level.
I need to be more precise: using digital volume control will reduce the original resolution of the audio file, but it will only become clearly audible at some point (in A/B comparison) compared to analog control. Those thresholds are around -10 dB and -20 dB.
Digital volume control reduces the resolution of the signal and therefore depends on the original resolution of the audio file. With 24-bit you can decrease the digital gain more than with a 16-bit file before you lose resolution. As a rough rule of thumb, 16-bit files shouldn't be played at settings lower than -10 dB, and 24-bit not lower than -20 dB.
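The arithmetic behind that rule of thumb (which other posters in this thread dispute) can be sketched quickly: simple digital attenuation discards roughly one bit of resolution per ~6.02 dB. This is a neutral back-of-the-envelope calculation, not a claim about audibility:

```python
import math

def bits_lost(attenuation_db):
    """Approximate bits of resolution discarded by plain (truncating)
    digital attenuation: one bit per 20*log10(2) ~= 6.02 dB."""
    return attenuation_db / (20 * math.log10(2))

for depth, att in [(16, 10), (24, 20)]:
    lost = bits_lost(att)
    print(f"{depth}-bit file at -{att} dB: ~{lost:.1f} bits discarded, "
          f"~{depth - lost:.1f} effective bits remain")
```

So a 16-bit file at -10 dB keeps about 14 effective bits, and a 24-bit file at -20 dB keeps about 20; whether either is audible is exactly what the thread is arguing about, and well-implemented controls that compute at higher internal precision with dithering behave better than this worst case.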
The analog volume control also does something "bad" to the audio signal. Normally it is a voltage divider circuit which shunts part of the audio signal to ground. Everybody knows that the volume control should be set at least around 11 o'clock or higher in order to get a good-sounding signal at the output. The subtle information in the audio signal will suffer first if the analog volume control has to be set too low for comfortable SPL levels.
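The "11 o'clock or higher" intuition follows from how an audio-taper (logarithmic) pot maps rotation to attenuation. A minimal model, assuming an idealized taper where the wiper passes only 10% of the voltage at half rotation (a common approximation, not the curve of any specific pot):

```python
import math

def taper_fraction(rotation, half_rotation_fraction=0.1):
    """Idealized audio (log) taper: fraction of input voltage passed at
    a given rotation (0.0 = fully down, 1.0 = fully up). At 50% rotation
    the pot passes `half_rotation_fraction` of the voltage."""
    return half_rotation_fraction ** (1 - rotation)

for rotation in (0.25, 0.5, 0.75, 1.0):
    frac = taper_fraction(rotation)
    print(f"{rotation:.0%} rotation: {20 * math.log10(frac):.1f} dB")
```

Under this model, half rotation is already -10 dB and a quarter turn about -15 dB, which shows why normal listening with a hot source ends up crowded into the bottom of the pot's travel, where channel-matching and resolution are at their worst.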
Ideally one would use a combination of digital and analog volume control: setting the digital volume control for full resolution of the digital signal and using the analog volume control as little as possible.
How effectively one can combine these two settings depends, of course, also on the gain of the preamp, the input sensitivity of the power amp, and the sensitivity of the speakers. If everything lines up, one could set the digital gain around -5 dB, the analog gain almost fully open, and the SPL would be at 80-90% of full volume for loud listening.
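Whether things "line up" can be estimated in one pass through the chain. A sketch with entirely illustrative numbers (the source voltage, gains, and speaker sensitivity below are assumptions, not any poster's actual gear):

```python
import math

# Illustrative figures only.
source_vrms = 2.0     # DAC output at 0 dBFS
digital_db  = -5.0    # digital volume setting
preamp_db   = 0.0     # preamp near fully open, modeled as unity gain
amp_gain_db = 26.0    # typical power-amp voltage gain
sensitivity = 88.0    # dB SPL at 2.83 V (1 W into 8 ohms), 1 m

# Voltage at the speaker terminals, then SPL relative to the 2.83 V ref.
v_amp_out = source_vrms * 10 ** ((digital_db + preamp_db + amp_gain_db) / 20)
spl = sensitivity + 20 * math.log10(v_amp_out / 2.83)
print(f"Amp output {v_amp_out:.1f} Vrms -> ~{spl:.0f} dB SPL at 1 m")
```

With these numbers the chain tops out around loud-listening levels with the digital control barely attenuated, which is the "everything lines up" case described above; a hotter source or higher-gain amp would force more attenuation somewhere.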
There have been attempts to work around the negative effects of analog volume control, like putting the voltage divider after the input stage, using current instead of voltage control, or, in the tube realm, using the 6386 (normally found in pro audio compressors) to control/adjust the gain instead of shunting a large part of the signal to ground.
Depends on the source. Being old school, I run the DAC at 100% up to my integrated amp, even though it has an analog volume control I could use.
I also use Roon for DSP correction, though, and it does all the math at 32-bit precision, so I could probably use it as a volume control as well without being able to hear any degradation in sound.
Understood. When playing streaming music from my iPhone I can control the final listening volume by changing the output level of the app (which I assume is digital), changing the analog output level on my Schiit preamp to my analog amp, or changing the amp attenuation.
I’ve been running my old Crown at 100% but am wondering which of the other gain sources is best to maintain clarity of signal. As I understand things, excessive gain in the digital realm creates clipping and excessive analog gain creates distortion. How can I avoid these pitfalls?
If my DAC has gain, like my RME, I will experiment with the output voltage, set the volume to 0 dB on the DAC, and use the preamp volume.
For most RCA installs I had that DAC set to +5 dBu, and it put out about 2 V to my preamp. Set it too low and you invite noise by having to crank up the volume due to low gain. Set it too high and you can overload the preamp's inputs.
If the DAC has no provision for adjustment, I set the volume to 0 dB. If the volume does not read out in dBu, like "-99 through 0," I will set it anywhere between -10 and -3.
Not sure I follow you. What do you mean by digital gain? Digital sound level is set in the recording. If you mean that your DAC has a gain control, that is the gain on the analog side of your DAC and is often used without a preamp. As for which one to turn up, I'd experiment: set the DAC volume at low, medium, and high for a day each. You may find it is a way to control harshness.