DIY balanced interconnects


I want to build some balanced interconnects.
1. Has anyone compared Switchcraft, Vampire and Neutrik XLR plugs?
2. Any comments on Mogami Neglex 2534 vs. Vampire CCC-II vs. Oyaide PA-02 cables?
3. Should the ground shield on these twinax cables be connected at both ends, only at the source end, or only at the preamp end?
Thanks for your comments.
oldears
I'm sure you'll get some helpful responses here, but you might also want to post this over in the cables section of Audio Asylum. Lots of DIY cable folks over there.

FWIW - I really like the Switchcraft XLR connectors, and I have a preference for OCC solid-core copper cable. My set of Oyaide PA-02 came pre-terminated with Switchcraft connectors.
If I were building a set of DIY balanced interconnects, I'd strongly consider the Furutech XLRs that Cryo-parts.com and others sell. They seem to be on a much higher level of construction than most XLR connectors.
Oldears - I'm not sure why you want to build your own cable. How can you obtain low inductance, low capacitance, a low dielectric constant and high metal purity without using exotic materials like foamed Teflon in oversized tubes or 9N copper? As for grounding the shield on both ends - it should never be done. Unfortunately it is done in XLR interconnects, causing problems in the recording industry (ground loops). There are many theories about where the shield should be grounded, but the common practice in electronics is to ground the shield at the receiving end only.
Here are instructions for the proper connection of both RCA and XLR cables.

http://www.diycable.com/main/pdf/Canare.pdf

Kijanki: This simple low-cost DIY cable kit has everything you mentioned. It's a great kit - highly recommended!
Thanks for the responses so far. Thanks to Danmyers for the post on construction (even though I have no interest in Cardas). I note these instructions say to connect the shield to chassis ground at the source end only, as opposed to Kijanki, who recommends the load end only. I still think connecting the chassis grounds at a common (preamp) end would have some merit (and a reason to DIY).
I see two votes for Oyaide, and I am intrigued by the possible benefit of single-crystal/OCC cables. The Vampire CCC-II is also a cast-copper cable at half the cost of the Oyaide. I am surprised Kijanki would not consider these to be quality materials.
Of course the real reason for DIY is $$!
BTW, the new cables would be compared to Kimber KCAG cables originally bought as RCA and converted to balanced with Neutriks as I changed components. I have had these cables at least 12 years.
Also note that I tend to agree with Ralph of Atmasphere and do not expect to find huge differences.
Oldears - I'm probably lazier, and making things doesn't thrill me as much as it used to. When I was about 16 years old I was building 100W EL34 amps and really enjoyed it. Now I no longer object to prepackaged goods - perhaps because I can afford them, and maybe because I'm lazier as well.
Kijanki - Maybe when you retire you will become interested in DIY again! I started in audio building Dynakits and have now come full circle.
Oldears - It interested me before it became my profession. Now I'm left without a hobby (and the time for one), but I expect to enjoy projects again when I retire.

Technology is moving toward SMT and is often very layout-dependent. Building a class D amp, for instance, would require extensive study of PCB layout. There will always be something to put my hands on, I guess - maybe tube amps, just for the fun.
Oldears, connect the shield at both ends, which will always be pin 1. Any other connection (or lack of connection) is not standard and could cause trouble. The way it is supposed to work is that there will be no signal current in the ground - all the signal will be in the twisted pair.

The Neglex 2534 is excellent cable; I saw a set beat a pair of $15,000 cables in a system just a few days ago. You will also find that the Neutrik connectors are excellent. A major advantage over a lot of high-end connectors is that they are built to the right dimensions and with excellent materials. I have seen several 'high end' XLR connectors that had fit problems and damaged the connectors they were plugged into because the fit was too tight.

Balanced lines allow longer cable runs with lower noise and lower HF losses, so they still have advantages over single-ended, even without termination. However, you may well hear differences between cables. To eliminate the audible differences, cable termination is a must. The industry standard for decades has been 600 ohms, and it is still common today. Most high-end audio products cannot drive such a low value without bass-response losses, or overall output losses due to high output impedance. But without the termination you will hear differences between cables, defeating one of the main reasons for using balanced lines in the first place!

So if you really want to get everything balanced line has to offer (read: no more expensive cables, lengths as long as you want), your equipment should support all aspects of the balanced line standard.
Tvad, you got that right. I've not watched the solid-state part of the market, but I know that Rowland did support the 600 ohm standard. Generally it's fairly easy with semiconductors. Wadia supported it in the old days - I've not kept up with them, though.

As far as tubes go - in high end, to the best of my knowledge, we are the only ones that support the standard. There are others that can drive 600 ohms, but with a loss of bass response - the output impedance issue I mentioned earlier; IOW, the 600 ohm load combined with the output coupling cap will produce a rolloff. There are certain transformer-coupled tube products, like Modwright, that may well support the standard too (if they have a 600 ohm secondary on the output transformer); you would have to check with them to be sure.
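
To put rough numbers on that coupling-cap rolloff, here's a minimal sketch (Python; the 1 uF value is hypothetical, not taken from any specific product):

```python
import math

def corner_hz(r_load_ohms, c_farads):
    """-3 dB corner of the high-pass filter formed by an output
    coupling cap driving a resistive load: f = 1 / (2 * pi * R * C)."""
    return 1.0 / (2.0 * math.pi * r_load_ohms * c_farads)

# A hypothetical 1 uF coupling cap, a common value in tube line stages:
print(corner_hz(600.0, 1e-6))      # ~265 Hz into a 600 ohm load -- severe bass loss
print(corner_hz(100_000.0, 1e-6))  # ~1.6 Hz into a typical 100K amp input
```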

The reason we support the standard is that we had a lot of exposure to recording/broadcast gear prior to the introduction of the MP-1, and we saw the results. I remember seeing a local recording engineer set up the mics a good 150 feet from the recorder, and the recording was fabulous. Years later I had that event in mind in setting the goal to support the same operation. I think a lot of high-end companies got in for other reasons and without that exposure.
The new Rowland preamps, the Criterion and Capri, have output impedances much lower than 600 ohms.
Ralph, thanks for the response - you may have saved me some money. I was going to order the Vampire cable and XLRs from Michael Percy, but I can get no response from him by phone or email. I can get the Mogami and Neutrik for much less money from Markertek.
The components I want to connect are a BAT VK30 preamp, Pass Labs Aleph amp, Cary CD player, and Theta Gen V DAC. I feel pretty confident they all meet fully balanced criteria and probably the 600 ohm standard (if not, please let me know).
I assumed I'd wire the braided shield to pin 1 (that's the way the present KCAG cables are connected, as they have no shield), but just to be sure, I guess I should not tie either end to the ground lug on the XLR shell?
I will also build an AES/EBU cable; do you have a specific recommendation for the 110 ohm standard?
thanks again!
Another good value is Belden 8412 - the professional-standard microphone cable. It doesn't reject hum quite as well as star-quad configurations, but it has lower capacitance, which is a good tradeoff for most line-level interconnect applications as well. It works perfectly with the strain reliefs on both the Neutrik and Switchcraft connectors, both of which I like as well.

Some cable manufacturers do indeed tie pin 1 to the shell, but I don't like this practice - it can create ground loops within the equipment (though if the equipment is properly designed, this shouldn't happen). Also, if done at the male (destination) end, it will usually defeat the operation of an input ground-lift switch, if the equipment has one.
I remember seeing a local recording engineer set up the mics a good 150 feet from the recorder, and the recording was fabulous. Years later I had that event in mind in setting the goal to support the same operation.
It's interesting that you say this . . . typical mic preamp inputs have usually strived to provide an input Z many times the source impedance. Old (transformer-coupled) tube consoles are usually in the area of 1200 to 1500 ohms, and were driven by microphones with very low source impedances (an RCA 44BX is like 30 ohms). Nowadays, most microphones are standardized to have 150 ohm output impedances, and console input impedances have risen as well - 2.5K to 5K is common, and some are higher (like 10K). The main reason is that microphones generally have a flatter (or at least more predictable) response into higher loads, because their source impedance (especially in dynamics and ribbons) can vary significantly with frequency. 600 ohms is about the very lowest input impedance that even a microphone is likely to see (maybe when splitter transformers are used) . . . and consumer audio balanced outputs very frequently have higher output impedances than a microphone's 150 ohms.
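
To put some rough numbers on the loading question, here's the simple voltage-divider loss for a 150 ohm source into various input impedances (a sketch with nominal, purely resistive values):

```python
import math

def divider_loss_db(z_source, z_input):
    """Level loss (dB) of a resistive source driving a resistive load:
    20 * log10(Zin / (Zin + Zsource))."""
    return 20.0 * math.log10(z_input / (z_input + z_source))

for z_in in (600.0, 1500.0, 2500.0, 10_000.0):
    print(f"150 ohm source into {z_in:>6.0f} ohms: {divider_loss_db(150.0, z_in):6.2f} dB")
# ~-1.9 dB into 600 ohms vs. ~-0.1 dB into 10K. With a reactive source
# (ribbons, dynamics) the loss varies with frequency, which is the real
# motivation for the higher input impedances described above.
```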

I think we've been down this road about 600 ohm terminating resistors before, but I would still like to remind everybody that 600 ohms is a MEASUREMENT standard borrowed from telephone equipment - a power-transfer system (equal source and termination impedance). The standard in audio interconnection, whether balanced or unbalanced, has always been a voltage-transfer system. Audio measurement equipment has frequently had 600-ohm source and termination impedances so that it could accurately measure signal level in dBm (dB referred to 1 mW into 600 ohms).
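
For reference, the arithmetic behind the dBm convention (a quick sketch; 0 dBm = 1 mW, which is about 0.775 V RMS across 600 ohms):

```python
import math

def dbm(v_rms, r_ohms=600.0):
    """Power in dB referred to 1 mW: 10 * log10((V^2 / R) / 0.001)."""
    return 10.0 * math.log10((v_rms ** 2 / r_ohms) / 1e-3)

print(dbm(0.775))   # ~0 dBm: the classic 0 VU reference level
print(dbm(1.228))   # ~+4 dBm: common pro line level into 600 ohms
```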

I absolutely agree with you that high-end consumer audio equipment with XLR outputs should be able to drive a 600 ohm load with full performance. I also understand that there exists much equipment with transformer-coupled inputs, where the designer has chosen to increase the transformer step-up ratio to improve noise performance, resulting in an input impedance that may be as low as 600 ohms.

But adding a 600 ohm terminating resistor doesn't make cable reactance disappear, and it's certainly NOT a "standard". Even if you wanted to treat the interconnect as a transmission line (say you had 1000-foot runs between your preamp and amp), then both the source and termination impedances should match the cable's intrinsic impedance, which is more like 150 ohms. (That's why AES/EBU is balanced and operates at 110 ohms . . . it was designed to use standard studio microphone or interconnect cables.)

In the end, when a manufacturer chooses to put in a 600 ohm terminating resistor, it's a good bet that the equipment driving it (especially if it's from a different manufacturer) will not be performing at its best. The main effect of such resistors is to swamp any impedance imbalances in the input circuitry, thus improving the equipment's common-mode rejection. A crude trade-off, IMO, and again, definitely non-standard.
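
As for transmission-line effects at audio frequencies, a quick sketch shows why they're negligible at domestic lengths (assuming a typical velocity factor of about 0.66):

```python
# Electrical wavelength at audio frequencies vs. a long domestic run.
C_VACUUM_M_S = 3.0e8
VELOCITY_FACTOR = 0.66     # typical for many insulated cables

def wavelength_m(freq_hz):
    return C_VACUUM_M_S * VELOCITY_FACTOR / freq_hz

wl_20k = wavelength_m(20_000.0)   # ~9.9 km at 20 kHz
run_m = 30.0                      # roughly a 100-foot interconnect
print(f"20 kHz wavelength: {wl_20k / 1000:.1f} km")
print(f"a 30 m run is {run_m / wl_20k:.2%} of that wavelength")
# Reflections start to matter when the run approaches a tenth of a
# wavelength; a 30 m line-level run is far below that threshold.
```
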
I use Mogami Gold between CD and amp. Clean, and available at the nearest Guitar Center.
Oldears, I don't think I've seen any cable wherein the barrel of the XLR was tied to anything. I would ignore that connection. The only ground you need to be concerned about is pin 1, which is always the shield at **both** ends of the cable.

Tvad, it's not really a VHS/Beta thing - the point is that if you want to get off the cable merry-go-round, using the low-impedance standard is the ticket. Otherwise, feel free to pay whatever you have to for an unterminated cable to do the same thing :) The way I see it, you either pay for the technology to make it happen, or you pay for the cable to make it happen. Seems to me using the technology to advantage is cheaper.
I'm a little curious. Let's go with Ralph's theory for a bit. If I own an MP-3 preamp and M-60 amps, then logically the cable between these should not be an issue with regard to cable artifacts, since the MP-3 conforms to the 600 ohm standard. However, let's assume I put my DAC into the equation, which in balanced mode via its XLR outputs has an output impedance of 100 ohms. In this case cable artifacts could be an issue running from the DAC to the MP-3. Do I have this correct?

From my perspective, I thought the key aspect of a balanced design was that it is differential, with the circuit balanced throughout. BAT and Rowland meet this criterion, but not the 600 ohm standard. If I read Kirkus' post correctly, the 600 ohm standard should not be relevant.

I'm more confused than ever.
Clio09, there are two common meanings of the term "balanced" in high-end audio these days.

First, there's "balanced" as it applies to interconnects, where the idea is that there are two signal-carrying conductors, each with equal impedance (though not necessarily voltage) to ground, the signal being defined as the voltage between the two conductors. Both the driving source and the receiving equipment are responsible for maintaining this impedance balance, and the receiving stage is responsible for separating the signal voltage that appears between the two conductors (the "differential mode" voltage) from any noise voltage that happens to develop equally between both conductors and ground (the "common mode" voltage). The performance of the receiving equipment in performing this task is usually expressed as "common-mode rejection ratio", which is the difference in sensitivity between the same voltage, applied common-mode vs. differential-mode.

"Balanced" or "differential" as it applies to circuitry inside the equipment usually refers to the fact that there are actually two equal (voltage and/or impedance) and opposite-polarity signal paths inside. It is possible to have an unbalanced input feeding a differential circuit, or vice-versa . . . and ditto on the output side.

It does seem that a huge percentage of high-end audio manufacturers are unaware of the distinction between these points, as it's common to build a "balanced" preamplifier by simply building two non-differential circuits and connecting one each to pins 2 and 3 of the input and output XLRs. Equipment designed this way basically takes all the incoming noise, sometimes amplifying it, adds some noise of its own, and "passes the buck" to the next piece of equipment in hopes that it may have some common-mode rejection capability. Frequently, that next piece of equipment ends up being the speaker.

From past threads, I think that Atmasphere and I are both in agreement about the need for equipment to have good common-mode rejection. We're also in agreement about the need for balanced line output stages to have a low output impedance, and excellent performance into low-impedance loads. Where we differ is in the specifics of how to design a balanced input stage.

My main problem with the 600 ohm terminating resistor is that it places very high current demands on the preceding electronics, which in all likelihood will have degraded performance into a 600 ohm load. It is also relatively ineffective at reducing the effects of cable reactance - these are determined mainly by the source impedance.

The 600 ohm resistor may yield a slight improvement in common-mode rejection ratio, but the same or better results can be obtained by raising the common-mode impedance instead of lowering the differential-mode impedance . . . without affecting the performance of the preceding equipment. And the only argument left is that of transmission-line effects . . . which are irrelevant for typical (<100 feet) lengths in a voltage-transfer system.
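
To illustrate that common-mode point with a simplified first-order model (the impedance values here are illustrative, not from any specific product):

```python
import math

def cmrr_db(z_common_mode_ohms, source_imbalance_ohms):
    """First-order CMRR limit of a balanced interface: the receiver's
    common-mode input impedance working against the impedance imbalance
    between the source's two output legs."""
    return 20.0 * math.log10(z_common_mode_ohms / source_imbalance_ohms)

# Assume a 10 ohm leg-to-leg imbalance (ordinary resistor tolerances):
print(cmrr_db(10e3, 10.0))   # ~60 dB with a 10K common-mode input impedance
print(cmrr_db(1e6, 10.0))    # ~100 dB with a 1M common-mode input impedance
# Raising the common-mode impedance buys rejection without loading the
# source -- unlike adding a 600 ohm differential termination.
```
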
Tvad, what you are missing is that the output impedance is not the spec. For example, the Modwright might have a 100 ohm output impedance - if the output winding is designed to drive 600 ohms, then it would work fine. Ditto Rowland and Wadia. The output impedance is something very different from the load the circuit will drive.

It's a good idea for an electronic circuit to have about 1/10th the output impedance of the load it has to drive; however, there are some exceptions where the output impedance can appear to be much higher, yet it will drive the load just fine. For example, my Neumann microphones are set up to drive 150 ohms. What this means is that if you don't load them at 150 ohms (if instead you have a load of 1000 ohms or higher), the output transformer will express the inter-winding capacitance rather than the turns ratio, and you will get coloration and no bass.

If we take the example of the Modwright, a similar situation exists - its measured output impedance is one thing; the load it drives (and is optimized for) is another. I suspect it has that load built in, much like the old Ampex tape machines did, so that their output transformers would be properly loaded.

The Cary has a low enough output impedance to drive 600 ohms, but if it employs a coupling cap, it's likely that you will get a low-frequency rolloff if you try it. IOW, the only tube units that will drive 600 ohms properly will have:
1) a low output impedance (well within the range of several of the units already mentioned), and
2) will either employ
a) an output transformer, or
b) a very large coupling cap, or
c) be direct-coupled.

It turns out a large coupling cap is impractical, so you can see how the realities of trying to do this limit the field.

In the world of transistors, it's quite easy to get semiconductors to drive 600 ohms, so there should be lots of examples there.
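
To see why the coupling cap gets impractical, you can invert the corner-frequency formula (a sketch; the 2 Hz target is an illustrative choice for full bass extension):

```python
import math

def cap_for_corner_farads(r_load_ohms, f_corner_hz):
    """Coupling cap needed for a given -3 dB corner: C = 1 / (2 * pi * R * f)."""
    return 1.0 / (2.0 * math.pi * r_load_ohms * f_corner_hz)

c = cap_for_corner_farads(600.0, 2.0)
print(f"{c * 1e6:.0f} uF")   # ~133 uF -- far larger than any practical
                             # film cap for the signal path, hence the
                             # transformer or direct-coupled options.
```
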
I second Kirkus about the dubious need for a 600 ohm termination resistor over short distances. However, I agree with Ralph that if you want to go exceptionally long distances, then 600 ohms should get better results. Horses for courses.

What length does Oldears need?
Kirkus, thanks for the explanation. According to the designer of my DAC, the outputs are fully balanced (truly electrically balanced) in that as leg 2 goes up in voltage, leg 3 goes down by the exact same amount. So it sounds like it is differential.

Now, my VAC Auricle Musicblocs have balanced inputs. If I recall correctly according to one of the engineers at VAC this is accomplished via the "Williamson Method" (which I think is a name used in one of their older amp designs) and I believe uses an input transformer.

Perhaps you or Ralph can comment on this. From the sounds of it this does not appear to be a differentially balanced design.
Thanks to Atmasphere and Tvad for the help.
I also checked my manuals:
The Aleph Ono spec is listed at 150/150 ohms, which Pass states is low enough to drive most any load.
The Cary 303/200 spec is not given in my manual, probably because the user can change jumpers between 3 or 6 V out or use the digital volume control, which means the output impedance would vary by choice.
The Theta DSPro Gen V output spec is 13 ohms/phase.
The BAT VK30 balanced input spec is listed at 100K ohms/phase.
The BAT VK30 output spec is listed as 750 ohms (differential?) . . . but I have modified the output caps to 4.33 uF per leg instead of 1 uF/leg to make sure I would not lose low frequencies into the Aleph 0s, even though V. Khomenko states the 1 uF caps will drive any load over 10K.
The Pass Labs Aleph 0s input impedance is specified as 12K ohms differential.
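
A quick corner-frequency check on those numbers (assuming the VK30's two per-leg caps act in series across the Aleph's 12K differential input) suggests the cap upgrade was worthwhile:

```python
import math

def corner_hz(r_ohms, c_farads):
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

# Two per-leg caps in series across the differential input halve the
# effective capacitance.
r_diff = 12_000.0                       # Aleph 0s differential input
print(corner_hz(r_diff, 1.0e-6 / 2))    # stock 1 uF/leg:     ~26.5 Hz
print(corner_hz(r_diff, 4.33e-6 / 2))   # modded 4.33 uF/leg:  ~6.1 Hz
```
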
At any rate, I am hoping the technical solution, using $69 of parts for 3 XLR pairs and one AES/EBU cable, will give me enough satisfaction in my system to let me put the well-reputed Kimber KCAGs up for sale to someone who might appreciate their characteristics.
thanks again!

For example, my Neumann microphones are set up to drive 150 ohms. What this means is that if you don't load them at 150 ohms (if instead you have a load of 1000 ohms or higher), the output transformer will express the inter-winding capacitance rather than the turns ratio, and you will get coloration and no bass.
Atmasphere, the vast majority of Neumann mics have an output impedance of 50 ohms, and they've traditionally specified a loading of >1K. It's only some of the classics (e.g. the U67) that are switchable to lower impedance, and they still need to be loaded with at least 300 ohms.
Right, Kirkus, my Neumanns are U67s. After doing some measurements with them, we found that the transformer was more linear when driving a lower impedance- even 600 ohms was too high.

The 600 ohm standard arose from the characteristic impedance created by spaced telegraph cables; it was later adopted by the phone companies, and later still by recording and broadcast equipment manufacturers.

These days that standard is considered obsolete - 1K and the like are common input impedances - so the standard is somewhat diluted. We wanted to be assured that our gear would support existing hardware it might get connected to, as there is still quite a bit of collectible tube equipment out there, like Ampex tape machines, that is on the same standard.
To sum up what I have been trying to say here: if you want the cables not to be part of the overall system sound, the old 600 ohm standard is the way to do it.

The audio world in general has seen a shift from what I call the Power Paradigm to the Voltage Paradigm (see http://www.atma-sphere.com/papers/paradigm_paper2.html for more info), and the dilution of the 600 ohm standard in balanced operation is an example. Under the old power rule, the effect of the cable could be neglected. Under the newer voltage rule, as Kirkus has mentioned, the cables have an increasingly audible effect as the load impedance is increased.

The question is whether an audiophile wants the cable to have an audible artifact or not. I would prefer that it not, so I use the 600 ohm standard, as that is the technology that was created to solve the problem decades ago. Building more expensive cables does not look like a solution to me.
Atmasphere, I like your paper . . . it's interesting and eloquent; a good read. And I understand how it's an alluring perspective, especially as far as speaker impedance is concerned, for the manufacturer of an amplifier with a high output impedance. It's just too bad that the historical data doesn't support it - but we're never going to convince each other otherwise, so I'll drop it.

But maybe you could shed some light on why, if you're advocating a power-transfer approach (as is common in video and RF transmission systems), you're not using something like 110-150 ohms (source and load), the intrinsic impedance of a typical balanced interconnect? Because that's how a power-transfer system is supposed to work, no?
Kirkus, if you are referring to characteristic impedance, we in fact examined that issue about 20 years ago. The fact of the matter is that if you can determine the actual characteristic impedance of the cable and then terminate it correctly, the result is quite spectacular.

The problem is, you need a time-domain reflectometer or the like to make the determination - in practice a bit impractical. So you have no standard termination value, as characteristic impedance can vary quite a bit with minor changes in the construction and materials of the cable. This is likely why the industry settled on 600 ohms decades ago. It clearly was no accident.
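
For reference, the high-frequency (lossless) approximation shows why small construction changes move the characteristic impedance around (a sketch with made-up but plausible per-meter values):

```python
import math

def z0_ohms(l_per_meter_h, c_per_meter_f):
    """Lossless-line characteristic impedance: Z0 = sqrt(L / C)."""
    return math.sqrt(l_per_meter_h / c_per_meter_f)

# Plausible (illustrative) per-meter values for a balanced interconnect:
print(z0_ohms(0.5e-6, 50e-12))   # ~100 ohms
# Geometry and dielectric changes shift L and C, so measured values
# spread widely -- note the 40-200 ohm range reported later in this thread.
```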

In practice, the 600 ohm standard works quite well, and since it was already in place, it seemed to be a matter of picking one's battles. As it is, the fact that our preamp is balanced has been its single biggest marketing problem, so in retrospect I'm glad it's not been any more complicated than it already has been :)

I would have loved to have had more input when we were setting a lot of this up, but at the time we were the only ones who cared about balanced operation.

I can find plenty of historical evidence to support my paper, BTW; the Radiotron Designer's Handbook, published by RCA, is a good place to start. If you wish to discuss this further, that would be a topic for another thread or email. In fact I would love to do that; I've been looking for someone who can rebut the document ever since it was written (about 3 years ago). No one has been able to do that so far. BTW, it's not about having a tube amp with a high output impedance; it's about whether you use feedback or not - IOW, obeying the rules of human hearing or not.
Well, in order to really get into the specifics of your paper for analysis, there are a couple of issues that plainly need to be separated from each other.

For line-level interconnection, are transmission-line effects a significant factor in domestic hi-fi applications? And what are the major electrical mechanisms that make cables audible?

For the amp-speaker interface, the question is what are the primary motivations for a speaker designer to choose a particular voice-coil arrangement, cabinet alignment, and crossover network design, thus determining its impedance characteristics?

I feel that these issues can be investigated independently of that unsolvable audio argument - that of negative feedback in amplifier design. But negative feedback is the cornerstone of the perspective you outline in this paper . . . so an effective rebuttal of your paper is impossible without separating this out. How 'bout this . . . can you make the argument work without mentioning feedback?
Well, Kirkus, the issue is actually quite simple. What are the rules of human hearing? Once those are understood, we simply apply physics to create a design that obeys those rules as closely as possible.

The most important rule is how we perceive loudness, which is done by listening for the 5th, 7th and 9th harmonics. Our ears are so sensitive to these harmonics that we can easily detect alterations of only hundredths of a percent. Audiophiles have words for this alteration: harsh, bright, clinical, brittle... So for starters, the design has to honor this rule, as it is the most important.

With regard to cables, transmission-line effects do affect audio cables, both interconnects and speaker cables. The effects can be measured and correlate with listening as well.

Conductor spacing, size, geometry, and the purity and choice of materials are the issues that affect both what we hear and the characteristic impedance. In interconnects these issues do seem to be less important than they are in speaker cables (the interconnects we measured had characteristic impedances between 40 and 200 ohms; speaker cables varied from 4 to 60 ohms). I have to admit I was quite surprised to discover that characteristic impedance had any artifact at audio frequencies!

I've stayed well away from speaker design. It's easy to put drivers in a box and make them do something; it can be very difficult to make them sound like real music. There are a host of variables that can be quite vexing. I know enough excellent speaker designers who get fabulous results - I doubt I could do as well.

Plenty of material here for another thread...
The most important rule is how we perceive loudness, which is done by listening for the 5th, 7th and 9th harmonics. Our ears are so sensitive to these harmonics that we can easily detect alterations of only hundredths of a percent.
No argument here; it's just irrelevant as far as impedance/reactance in the cable interface is concerned, as all of this (including any transmission-line effects) is definable in linear network theory, meaning that additional harmonics can't be created. We're stuck with spectral balance, transient response, noise pickup, and source loading/resonance as the possible effects (which are headaches enough). But it does fit in with my biggest objection to 600-ohm loading: for the random audio product off the street that can't drive a 600 ohm load . . . its output stage will produce more of these noxious high-order harmonics.

So in respecting your preference for a power-transfer approach . . . I would suggest that the best practical way to implement it for amp/preamp interconnection would be for your amplifiers to leave out (or raise the value of) the differential-mode termination resistor . . . thus improving the scenario for other manufacturers' preamps. You could then sell specific cables for which you had verified the intrinsic impedance, and adjust the output impedance of your preamplifiers to match. The cables would then have the appropriate termination resistor in the male XLR end, adjusting the total termination to match the cable impedance. This would seem to give the best possible performance in a wide variety of hookup scenarios . . . "automatically" adjusting impedances in the manner that studio/broadcast engineers have been doing manually for decades.

Now the whole amp/speaker thing is definitely a different thread . . . but maybe another time. I've been sitting at home with an injured back for a few days now, and I'm starting to go a bit nuts.
Hi Kirkus - actually, the input impedance of our amps is 100K single-ended, 200K balanced. On the bigger amps there is an input termination switch that allows for 600 ohms between pins 2 and 3 of the XLR. Although I am a fan of the standard, common sense dictates that not everyone is going to have our preamps, so we have to make the amps easy to drive.

We've often done the manual termination exactly as you describe.

I've been through the back thing- good luck with it!