DIY balanced interconnects


I want to build some balanced interconnects.
1. Has anyone compared Switchcraft, Vampire and Neutrik XLR plugs?
2. Any comments on Mogami Neglex 2534 vs. Vampire CCC-II vs. Oyaide PA-02 cables?
3. Should the ground shield on these twinax cables be connected on both ends, only on the source ends, or on the preamp ends?
Thanks for your comments.
oldears

Showing 7 responses by kirkus

Another good value is Belden 8412 - this is the professional standard microphone cable. It doesn't reject hum quite as well as star-quad configurations, but it has lower capacitance, which is a good tradeoff for most line-level interconnect applications as well. It works perfectly with the strain-reliefs on both the Neutrik and Switchcraft connectors, both of which I also like.
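
For anyone who wants to put the capacitance tradeoff in numbers, here's a rough back-of-the-envelope sketch (Python; the per-foot capacitance, run length, and source impedance are purely illustrative assumptions, not measured figures for any particular cable) of the low-pass corner formed by the source's output impedance working against the cable capacitance:

```python
import math

def cable_corner_hz(source_z_ohms, pf_per_foot, feet):
    """First-order -3 dB corner formed by the source impedance and total cable capacitance."""
    c_farads = pf_per_foot * feet * 1e-12
    return 1.0 / (2.0 * math.pi * source_z_ohms * c_farads)

# Illustrative values only: ~30 pF/ft cable, 25 ft run, 600-ohm source
print(f"{cable_corner_hz(600, 30, 25):,.0f} Hz")   # ~354 kHz -- far above the audio band
```

Even with a fairly high 600-ohm source, the corner lands well above the audio band for a typical run - capacitance only starts to matter as source impedance and cable length climb.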

Some cable manufacturers do indeed tie pin #1 to the shell, but I don't like this practice - it can create ground loops within the equipment (though if it's properly designed this shouldn't happen). Also, if done at the male (destination) end, it will usually defeat the operation of an input ground-lift switch, if the equipment has one.
I remember seeing a local recording engineer set up the mics a good 150 feet from the recorder, and the recording was fabulous. Years later, I had that event in mind when setting the goal of supporting the same operation.
It's interesting that you say this . . . typical mic preamp inputs have usually been designed to present an input Z many times the source impedance. Old (transformer-coupled) tube consoles are usually in the area of 1200 to 1500 ohms, and were driven by microphones with very low source impedances (an RCA 44BX is something like 30 ohms). Nowadays, most microphones are standardized to a 150 ohm output impedance, and console input impedances have risen as well - 2.5K to 5K is common, and some are higher (like 10K). The main reason is that microphones generally have a flatter (or at least more predictable) response into higher loads, because their source impedance (especially in dynamics and ribbons) can vary significantly with frequency. 600 ohms is about the very lowest input impedance that even a microphone is likely to see (maybe when splitter transformers are used) . . . and consumer audio balanced outputs very frequently have higher output impedances than a microphone's 150 ohms.
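
If it helps, the level loss from "loading down" a source is just a voltage divider. A quick sketch (the 150-ohm source and the list of loads below are only examples, not specs for any particular mic or console):

```python
import math

def loading_loss_db(source_z, input_z):
    """Level drop (dB) when a source of impedance source_z drives input_z (simple voltage divider)."""
    return 20.0 * math.log10(input_z / (source_z + input_z))

# Illustrative: a 150-ohm source into various load impedances
for z_in in (10_000, 2_500, 1_500, 600, 150):
    print(f"{z_in:>6} ohm load: {loading_loss_db(150, z_in):6.2f} dB")
```

The drop is a fraction of a dB into bridging loads, about 2 dB into 600 ohms, and the full 6 dB when matched - and with a reactive, frequency-dependent source impedance that loss stops being flat with frequency.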

I think we've been down this road about 600 ohm terminating resistors before, but I would still like to remind everybody that 600 ohms is a MEASUREMENT standard borrowed from telephone equipment - which is a power-transfer system (equal source and termination impedance). The standard in audio interconnection, whether balanced or unbalanced, has always been a voltage-transfer system. Audio measurement equipment has frequently had 600-ohm source and termination impedances so that it could accurately measure signal level in dBm (dB referred to 1 mW into 600 ohms).
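
Just for reference, the arithmetic behind that 0 dBm reference level (my own illustration):

```python
import math

# 0 dBm is defined as 1 mW; into 600 ohms that power corresponds to a specific voltage:
v_rms = math.sqrt(1e-3 * 600)          # P = V^2 / R  ->  V = sqrt(P * R)
print(f"0 dBm into 600 ohms = {v_rms:.3f} V RMS")   # ~0.775 V, the same voltage later kept as 0 dBu
```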

I absolutely agree with you that high-end consumer audio equipment that has XLR outputs should be able to drive a 600 ohm load with full performance. I also understand that there exists much equipment with transformer-coupled inputs, where the designer has chosen to increase the transformer step-up ratio to improve noise performance, resulting in an input impedance that may be as low as 600 ohms.

But adding a 600 ohm terminating resistor doesn't make cable reactance disappear, and it's certainly NOT a "standard". Even if you wanted to treat the interconnect as a transmission line (say you had 1000-foot runs between your preamp and amp), then both the source and termination impedances should be the same as the cable's intrinsic impedance, which is more like 150 ohms. (That's why AES/EBU is balanced and operates at 110 ohms . . . it was designed to use standard studio microphone or interconnect cables.)
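
To put some numbers on why transmission-line treatment is overkill at line level but does matter for AES/EBU, a quick sketch (the velocity factor is just an assumed round figure for twisted-pair cable):

```python
# Rough illustration of how electrically "short" an audio-frequency run really is.
C = 299_792_458        # speed of light, m/s
VF = 0.66              # assumed velocity factor for typical twisted-pair cable
FT_PER_M = 3.281

for f_hz in (20_000, 3_072_000):      # 20 kHz audio vs. roughly the AES/EBU bit rate at 48 kHz
    wavelength_ft = (VF * C / f_hz) * FT_PER_M
    print(f"{f_hz/1000:>7.0f} kHz: wavelength ~ {wavelength_ft:,.0f} ft")
```

Even a 1000-foot analog run is a tiny fraction of a wavelength at 20 kHz, whereas an AES/EBU run of a couple hundred feet is already in the region where reflections matter - hence the 110 ohm spec there.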

In the end, when a manufacturer chooses to put in a 600 ohm terminating resistor, it's a good bet that the equipment driving it (especially if it's from a different manufacturer) will not be performing at its best. The resistor's main effect is to swamp the effects of any impedance unbalances in the input circuitry, thus improving the equipment's common-mode rejection performance. A crude trade-off, IMO, and again, definitely non-standard.
For example, my Neumann microphones are set up to drive 150 ohms. What this means is that if you don't load them at 150 ohms (if instead you have a load of 1000 ohms or higher), the output transformer will express the inter-winding capacitance rather than the turns ratio, and you will get coloration and no bass.
Atmasphere, the vast majority of Neumann mics have an output impedance of 50 ohms, and they've traditionally specified a loading of >1K. It's only some of the classics (e.g. the U67) that are switchable to a lower impedance, and those still need to be loaded with at least 300 ohms.
Clio09, there are two common meanings of the term "balanced" in high-end audio these days.

First, there's "balanced" as it applies to interconnects, where the idea is that there are two signal-carrying conductors, each with equal impedance (though not necessarily voltage) to ground, the signal being defined as the voltage between the two conductors. Both the driving source and the receiving equipment are responsible for maintaining this impedance balance, and the receiving stage is responsible for separating the signal voltage that appears between the two conductors (the "differential mode" voltage) from any noise voltage that happens to develop equally between both conductors and ground (the "common mode" voltage). The performance of the receiving equipment in performing this task is usually expressed as "common-mode rejection ratio", which is the difference in sensitivity between the same voltage, applied common-mode vs. differential-mode.

"Balanced" or "differential" as it applies to circuitry inside the equipment usually refers to the fact that there are actually two equal (voltage and/or impedance) and opposite-polarity signal paths inside. It is possible to have an unbalanced input feeding a differential circuit, or vice-versa . . . and ditto on the output side.

It does seem that a huge percentage of high-end audio manufacturers are unaware of the distinction between these two meanings, as it's common to build a "balanced" preamplifier by simply building two non-differential circuits and connecting one each to pins 2 and 3 of the input and output XLRs. Equipment designed this way basically takes all the incoming noise (sometimes amplifying it), adds some noise of its own, and "passes the buck" to the next piece of equipment in hopes that it may have some common-mode rejection capability. Frequently, that next piece of equipment ends up being the speaker.

From past threads, I think that Atmasphere and I are both in agreement about the need for equipment to have good common-mode rejection. We're also in agreement about the need for balanced line output stages to have a low output impedance, and excellent performance into low-impedance loads. Where we differ is in the specifics of how to design a balanced input stage.

My main problem with the 600 ohm terminating resistor is that it places very high current demands on the preceding electronics, which in all likelihood will have degraded performance into a 600 ohm load. It is also relatively ineffective at reducing the effects of cable reactance - those are determined mainly by the source impedance.
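
For a sense of scale, here's the current a line output has to supply into various loads (the 2 V RMS signal level is just an assumed example):

```python
import math

# Illustrative: RMS and peak current for a 2 V RMS signal into different load impedances
for load_ohms in (600, 10_000, 100_000):
    i_rms_ma = 2.0 / load_ohms * 1e3
    print(f"{load_ohms:>7} ohm load: {i_rms_ma:6.3f} mA RMS "
          f"({i_rms_ma * math.sqrt(2):6.3f} mA peak)")
```

A 600 ohm load demands well over an order of magnitude more current than a typical bridging input, which is exactly where marginal output stages start to suffer.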

The 600 ohm resistor may show a slight improvement in the common-mode rejection ratio, but the same or better results can be obtained by raising the common-mode impedance instead of lowering the differential-mode impedance . . . without affecting the performance of the preceding equipment. And the only argument left is that of transmission-line effects . . . which are irrelevant for typical (<100 feet) lengths in a voltage-transfer system.
Atmasphere, I like your paper . . . it's interesting and eloquent; a good read. And I understand how it's an alluring perspective, especially as far as speaker impedance is concerned, for the manufacturer of an amplifier with a high output impedance. It's just too bad that the historical data doesn't support it - but we're never going to convince each other otherwise, so I'll drop it.

But maybe you could shed some light on why, if you're advocating a power-transfer approach (as is common in video and RF transmission systems), you're not using something like 110-150 ohms (source and load), the intrinsic impedance of a typical balanced interconnect? Because that's how a power-transfer system is supposed to work, no?
Well, in order to really get into the specifics of your paper for analysis, there are a couple of issues that plainly need to be separated from each other.

For line-level interconnection, are transmission-line effects a significant factor in domestic hi-fi applications? And what are the major electrical mechanisms that make cables audible?

For the amp-speaker interface, the question is what are the primary motivations for a speaker designer to choose a particular voice-coil arrangement, cabinet alignment, and crossover network design, thus determining its impedance characteristics?

I feel that these issues can be investigated independently from that unsolvable audio argument - that of negative feedback in amplifier design. But negative feedback is the cornerstone of the perspective you outline in this paper . . . so an effective rebuttal of your paper is impossible without separating this out. How 'bout this . . . can you make the argument work without mentioning feedback?
The most important rule is how we perceive loudness, which is done by listening for the 5th, 7th and 9th harmonics. Our ears are so sensitive to these harmonics that we can easily detect alterations of only hundredths of a percent.
No argument here, it's just irrelevant as far as impedance/reactance in the cable interface is concerned, since all of this (including any transmission-line effects) is describable with linear network theory, meaning that additional harmonics can't be created. We're stuck with spectral balance, transient response, noise pickup, and source loading/resonance as the possible effects (which are headaches enough). But it does fit in with my biggest objection to 600-ohm loading: for the random-audio-product-off-the-street that can't drive a 600 ohm load . . . its output stage will produce more of these noxious high-order harmonics.

So in respecting your preference for a power-transfer approach . . . I would suggest that the best practical way to implement it for amp/preamp interconnection would be for your amplifiers to leave out (or raise the value of) the differential-mode termination resistor . . . thus improving the scenario for other manufacturers' preamps. You could then sell specific cables for which you had verified the intrinsic impedance, and adjust the output impedance of your preamplifiers to match. The cables would then have the appropriate termination resistor in the male XLR end, adjusting the total termination to match the cable impedance. This would seem to give the best possible performance in a wide variety of hookup scenarios . . . "automatically" adjusting impedances in the manner that studio/broadcast engineers have been doing manually for decades.
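
The resistor value itself is easy to work out - a sketch, assuming (purely as examples) a 110-ohm cable and a 10K differential input impedance on the destination end:

```python
def termination_resistor(z_cable, z_input):
    """Resistor placed across the signal pins in the cable's male XLR so that the
    resistor in parallel with the destination's input impedance equals z_cable."""
    if z_input <= z_cable:
        raise ValueError("input impedance must be higher than the cable impedance")
    return (z_cable * z_input) / (z_input - z_cable)

# Illustrative: 110-ohm cable into a 10 kohm differential input
print(f"{termination_resistor(110, 10_000):.0f} ohms")   # ~111 ohms
```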

Now the whole amp/speaker thing is definitely a different thread . . . but maybe another time. I've been sitting at home with an injured back for a few days now, and I'm starting to go a bit nuts.