Dual Differential / Balanced?


Hey all, I've got that itch to upgrade power amps, and was wondering how valid the dual differential (aka "balanced") monoblock or dual-mono design is in terms of increasing fidelity compared to a conventional SE amp. Note that my preamp is also fully balanced.

How much noise is avoided by using a fully balanced system?

Right now I use two Haflers horizontally biamping NHT 3.3s, with Mogami Gold XLR cables:
P4000 (200 wpc) on the mids/highs, P7000 (350 wpc) on the lows.

From what I've read, it only matters if both the preamp and the power amp are truly balanced.

I have a nice Integra Research RDC-7.1 fully balanced pre/pro (it was a collab with BAT). I would go for the matching RDA "BAT" amp, but it's pretty much unobtainium.

So far I've looked at the Classe CA-200/201, older Thresholds, and older Krell KSAs as fully balanced monoblock / dual-mono stereo options.

I was also told to look at ATI amps; they look very impressive but expensive.

I'm looking to spend $1500-2500, preferably on used products. I don't have an issue with SE amps; I just want to exploit the fact that my pre is fully balanced, and perhaps get better sound, so I'd welcome recommendations for awesome dual differential power amps. The NHT 3.3s are power hungry, so at least 150 wpc, class A/AB.

I've also come across the Emotiva XPA-1 monoblock. I can get a good deal on one of them, and I wonder if it's worth picking it up and praying for a lone second one to show up in the classifieds or on eBay. Note this is the older model in the silver chassis: 500 wpc into 8 ohms / 1000 wpc into 4 ohms.

For context, prior to the realization that I should use a fully balanced system, I was looking at Bryston and McCormack amps. Thanks!
nyhifihead

Showing 2 responses by kirkus

There appears to be quite a bit of overlap in this discussion between the role of balanced equipment interconnections, and circuit topologies that use differential signal paths. This is actually quite understandable -- when one observes many of the design practices in contemporary "balanced" high-end audio gear, it seems that a large percentage of the people who design them are confused about the difference.  But "balanced" interconnects and "balanced" circuit topology are SEPARATE subjects, as they have DIFFERENT reasons for existence.  This post deals with the former; that is, the use of balanced interconnection between equipment.
 
To clarify the basics . . . with a "balanced connection" (one that usually has an XLR connector in the high-end audio world), what we're talking about is a connection that has two signal conductors, each of which has the same IMPEDANCE to ground; thus the impedance is "balanced" between them.  The fact that there are two signal conductors allows two modes of signal to coexist . . . the mode that pertains to both of the conductors together with respect to ground ("common-mode"), and the mode that pertains to the voltage difference between the conductors ("differential-mode").
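For concreteness, here's how I'd write the two modes down (my notation, nothing official): with $V_1$ and $V_2$ being the voltages on the two signal conductors measured with respect to ground,

$$V_{cm} = \frac{V_1 + V_2}{2}, \qquad V_{dm} = V_1 - V_2$$

The whole game, as developed below, is to keep the noise in the common-mode term and the music in the differential-mode term.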

The overwhelming source of the noise we're trying to reject comes from the AC power leakage currents that flow when active electronics are connected together (e.g. preamp to power amp). With very few exceptions (e.g. Krell CAST), an analog audio signal is defined as the VOLTAGE at the equipment's output.  Of course in the real world, some signal current flows as a function of this signal voltage across the receiving equipment's input impedance and the connecting cable's reactance.  But put simply, in the world of audio interconnection . . . when we say "signal", we're talking about a voltage.

The shortcoming of unbalanced interconnection is primarily the resistance of the shield, as it functions to connect the equipment grounds together.  As noise CURRENT flows across the shield resistance, a corresponding noise VOLTAGE appears at the ground of the receiving end.  Very little of this noise current flows through the signal conductor, because the signal input's impedance is much higher, and this difference in impedance (hence the term "unbalanced") means the noise current manifests as a noise voltage sitting on top of the signal voltage.  In a balanced system, the same noise voltage appears as a result of the shield resistance, but the idea is that it causes identical noise voltages to appear on the two signal conductors rather than one, and the signal can be defined as the voltage BETWEEN the conductors rather than the voltage between either conductor and the shield . . . and thus the receiving equipment can tell the difference between the signal and the noise.
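To put some very rough numbers on that mechanism (mine, purely for illustration . . . real leakage currents and shield resistances vary all over the place):

```python
# Back-of-the-envelope sketch of the unbalanced case, with assumed values:
# leakage current circulating in the shield develops a voltage across the
# shield resistance, and that voltage lands directly in series with the signal.
import math

i_leakage = 100e-6   # assumed AC leakage current flowing in the shield, amps
r_shield = 0.1       # assumed end-to-end shield resistance of the cable, ohms
v_signal = 1.0       # a nominal 1 V signal level for reference

v_noise = i_leakage * r_shield
print(f"noise voltage in series with the signal: {v_noise * 1e6:.0f} uV")
print(f"relative to a 1 V signal: {20 * math.log10(v_noise / v_signal):.0f} dB")
# ~10 uV, or about -100 dB . . . which sounds small until you remember that it
# scales directly with leakage current and shield resistance, and that it's
# typically hum and buzz rather than benign wideband noise.
```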

But as stated above, in the real world any voltage presented at an input must also result in some current flow through its input impedance, and as such the balanced connection must have identical impedances between each of its signal conductors and ground for the noise-rejection scheme to work.  Otherwise, a different amount of noise current will flow through one signal line than through the other, and the noise will appear as a voltage between the signal conductors, just like the signal.  Put another way . . . the degree to which the impedance is unbalanced is the degree to which it starts to behave like an unbalanced interconnect.
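To put that in a formula (a simplified, purely resistive model of my own, not something quoted from Whitlock): if a common-mode noise voltage $V_{cm}$ drives the two legs through source impedances $Z_{s1}$ and $Z_{s2}$, and the receiver presents common-mode impedances $Z_{c1}$ and $Z_{c2}$ from each leg to ground, then the fraction of the noise converted into differential "signal" is

$$\frac{V_{dm}}{V_{cm}} = \frac{Z_{c1}}{Z_{c1}+Z_{s1}} - \frac{Z_{c2}}{Z_{c2}+Z_{s2}}$$

which is zero only when the two voltage dividers match exactly . . . any impedance imbalance on either side of the interface makes it non-zero.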

For any balanced interconnect, we can thus predict the exact degree of this behavior from three impedances (common-mode, differential-mode, and differential-mode-imbalance) for each of the source electronics, destination electronics, and interconnecting cable.  The best explanation of this I've found is in Bill Whitlock's paper "Answers to Common Questions about Audio Transformers", available here as AN-002: http://www.jensen-transformers.com/application-notes/.  There are a couple of points from it I'd like to reinforce:
1. The balance of signal voltage between the conductors, with respect to ground, DOES NOT MATTER for the noise-rejecting capabilities of a balanced line.  It's the balance of the impedances that's critical.
2. The sensitivity of a system to impedance imbalances is a function of the ratio of the differential-mode (signal) impedance to the common-mode (noise) impedance.  Thus, a balanced input can be made less sensitive to impedance imbalances by increasing the common-mode impedance, or reducing the differential-mode impedance.
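A quick numeric sketch of point 2, using the little resistive divider model I wrote down above (so treat the numbers as illustrative, not as anything taken from Whitlock's paper):

```python
# Common-mode rejection of a balanced input, modeled as two resistive voltage
# dividers: each leg's source impedance against the receiver's common-mode
# input impedance.  Rejection is set by how closely the two dividers match.
import math

def cmrr_db(r_source, imbalance, r_cm):
    """Approximate rejection (dB) for a per-leg source impedance r_source with
    a given fractional imbalance between the legs, into a receiver whose
    common-mode input impedance is r_cm per leg (everything purely resistive)."""
    rs_hot = r_source * (1 + imbalance / 2)
    rs_cold = r_source * (1 - imbalance / 2)
    converted = r_cm / (r_cm + rs_hot) - r_cm / (r_cm + rs_cold)  # Vdm / Vcm
    return 20 * math.log10(1 / abs(converted))

# a 100-ohm source with a 5% imbalance between its legs (assumed values):
for r_cm in (10e3, 100e3, 1e6):
    print(f"Rcm = {r_cm:>9,.0f} ohms -> rejection ~ {cmrr_db(100, 0.05, r_cm):.0f} dB")
# Rejection improves by roughly 20 dB for every tenfold increase in common-mode
# input impedance -- which is the point: a high common-mode impedance makes the
# interface tolerant of the imbalances that real-world gear and cables present.
```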

A **well designed** balanced interface may be advantageous regardless of whether the internal signal paths of the connected components are balanced.  And a **well designed** balanced internal signal path within a component may be advantageous regardless of whether the internal signal paths of the other components in the chain are balanced.
The main reason for my diatribe above is that I think many audiophiles would like to have some clarity on what constitutes a "**well designed** balanced interface" . . . so I'll throw my thoughts out there for consideration.

First, an input stage should at least "work well with most stuff people will hook up to it" . . . or better yet, "allow the preceding stuff to work at its best".  For consumer high-end audio, I think this means a signal input impedance of at least 10K across the entire audio band, and for a balanced input, we need to maintain decent noise rejection (maybe 40dB minimum) with a source impedance imbalance of at least 5% . . . as this parameter is usually determined by four 1% resistors working in series, plus some cable and connector imbalance.
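Plugging those targets into the same divider model from above (my own rough arithmetic with assumed component values, not a figure from any standard) suggests they're reasonable rather than heroic:

```python
# When the receiver's common-mode input impedance Rcm is much larger than the
# source impedance, the rejection of the divider model above simplifies to
# roughly 20*log10(Rcm / delta_Rs), where delta_Rs is the absolute impedance
# difference between the two source legs.
import math

r_source = 600.0              # assumed per-leg source impedance, ohms
delta_rs = 0.05 * r_source    # a 5% imbalance -> 30 ohms

for r_cm in (3e3, 10e3, 48e3):   # candidate common-mode input impedances, ohms
    print(f"Rcm = {r_cm:>7,.0f} ohms -> rejection ~ {20 * math.log10(r_cm / delta_rs):.0f} dB")
# Even a fairly modest common-mode input impedance clears a 40 dB floor against
# a 600-ohm source . . . but the margin erodes quickly as the source impedance
# (and therefore its absolute imbalance) goes up.
```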

Second, I think that an input stage must "allow the rest of the unit to work its best under all conditions".  For a balanced input, this means that the input stage shouldn't rely on the rest of the unit (or the equipment that follows) to continue with the task of rejecting the common-mode noise that appears at the input jack.  And if the rest of the unit requires matching differential signal voltages to work properly, then the input stage should be able to provide this regardless of the presence/absence of noise at the input, or whether or not the signal voltage at the input jack is balanced with respect to ground.

It's this second requirement for which an alarming amount of high-end audio gear achieves an epic fail . . . and it's actually much MORE of problem with equipment that advertises itself as being "fully balanced", "differential circuit", or whatever.  I can support Ralph's specific observation on some of ARC's gear, and the fact that they're not alone . . . this is the situation for many, many very well-respected and premium-priced products, both tube and solid-state.

I can only speculate as to why this is the case . . . perhaps many designers become enamored with differential circuit topologies for other reasons, and these circuits require a pair of differential voltages to work.  They then feel that simply wiring the inputs to pins 2 and 3 of an XLR connector has a certain simplicity and elegance about it, but don't fully consider the real-world requirements for a balanced input.  Or they see that other manufacturers have "balanced" designs, and respond by simply stuffing two of everything into the box and calling it an "upgrade".  Or they become smitten with the way their schematics look when they're arranged symmetrically about the horizontal axis . . .

Whatever the reason, it's good advice for purchasers of "fully balanced" equipment to do some investigation as to its specific input requirements, and kudos to Al for his time spent in research and assistance on Audiogon forums to this end.

> Speaking of Bill Whitlock, since the interface-related noise that is being discussed may in many cases be caused or contributed to by ground loop effects, you'll probably find pages 31 through 35 of this paper to be of interest. To whet your interest, its introduction states that "this finally explains what drives 99% of all ground loops!"
Excellent link, Al . . . a great comprehensive document that covers many aspects of Whitlock's papers over the years.  The dedication to the late Neil Muncy at the beginning is nice, too . . . his AES paper from the mid-1990s is the definitive work on one of the most prevalent equipment design issues in the audio industry.

I also read Whitlock's AES paper that's the source for the content of pages 31 through 35 of the presentation . . . and this is some brilliant and thorough work as well.  I've long been suspicious of the randomly-arranged conglomerations of THHN that populate the conduit on commercial jobs, even though it's not too clear what the best route is to avoid it during construction . . . asking union electricians to twist THHN into pairs before each pull seems like a great way to get kicked off the jobsite.  The ideal solution would be a pre-twisted cable with fillers and an overall jacket -- something that could be easily specified on the prints, relatively easy to pull through the conduit, and with established fill tables to make sure the conduit sizing and labor costs are predictable.  In the absence of this . . . using multiple smaller conduits or runs of MC where necessary, or at least over-sizing the ground wire(s) for some brute-force reduction, may be all that can be achieved in the real world.

I do think that we're lucky in that the "conduit transformer" issue is far lower on the list of worries for a residential single-room audio system than it is for, say, a large commercial building with high-power active line arrays or digitally-steered columns placed hundreds of feet from their source electronics.  The main susceptibility for high-end audio systems would be where multiple dedicated branch circuits are employed . . . and this can be eliminated as an issue by using Romex, running all circuits to a single multi-gang non-metallic wall outlet box, and connecting all the grounds together at this wall box in addition to the panel.  This eliminates the chance of any voltage differential between the third-prong AC grounds in the system, yet still provides the benefit of fully separating the current flow between the circuits.  Also, I've seen many breaker panels where the connection bar(s) are shared between the ground and neutral connections for the various circuits, and this can needlessly impart random voltage differences between the grounds of different outlets, plus additional ground-to-neutral noise.

My only slight disagreement with Bill Whitlock would probably be that I don't share the same level of disdain for unbalanced interconnection, in the context of the systems that we discuss on Audiogon.  Here, we're usually talking about audio systems that don't share grounds with equipment in other rooms, and can easily be plugged into AC outlets that are all on a single power strip or ganged together in the wall . . . and the audiophile's idea of a "long" interconnect is . . . maybe 15 feet?  I've just seen so many ill-conceived applications of the 3-pin XLR connector on high-end audio gear that I frequently feel I need to look at a schematic before deciding if it's even usable.  Much of the time I think equipment with poorly-designed "balanced" inputs would work so much better with simpler circuitry and a plain ol' RCA jack for the input.

> . . . amplifiers are quite good and accept a balanced input correctly (many high end audio amps do not, likely because the designers don't know that there is a standard for balanced line operation, defined by AES file 48).
Ralph, I'm assuming that by "a standard for balanced line operation" you're talking about the 600-ohm termination resistor . . . and I'm sorry to pull a Kryten here, but AES48 says nothing about signal impedances; rather, it's a culmination of grounding practices derived mainly from the work of Neil Muncy, whom I referenced above.  None of the AES Standards documents mention balanced signal impedances . . . but the Bill Whitlock presentation for which Al provided the link references IEC standard 61938, according to which "all professional and broadcast line amplifier inputs" should be >= 10K ohms.  He also goes on to address the origins of 600 ohms as a specification, and a few reasons why he feels it's inapplicable to modern audio systems . . . and I can't find any reason to disagree with him on this point.

I know you've maintained that the addition of a termination resistor makes cable characteristics non-critical . . . but I'm still unclear as to what electrical mechanism you feel is responsible for this.  In my own experience, while some of my system iterations over the years have seemed more sensitive to cables than others, I haven't noticed a correlation between this and low termination impedances.  I also have 100K, 600 ohm, and 150 ohm termination impedances all a mouse-click away from each other on the Audio Precision system . . . and I've had my share of times where measurable artifacts that end up being related to cables and interconnection remain completely unfazed by the setting.  So if you have any details on this that I haven't thought of, I'd be very interested to know.