Class A is biased so that the output devices conduct through the entire waveform cycle and never switch off. Pure Class A means the devices NEVER turn off, so the idle bias current is quite high (particularly for a high-voltage amplifier). Running in this mode is very inefficient and generates a lot of heat, but it is supposed to yield the best sound quality.

Class B biases each output device to handle only half of the waveform, switching off at the 0 V crossing point. The handoff between devices at that crossing causes a small amount of distortion (crossover distortion). The advantage of Class B is that it runs much cooler and is generally less expensive to build.

Many amps use Class A/B, which means they run in Class A for the first watt or two and then transition into Class B operation. This can be very advantageous, as the crossover distortion is unlikely to degrade the sound quality appreciably once the amplifier is operating at higher output levels (the switching distortion is so small compared to the overall wattage at that point that it is almost negligible). As with most things in the audiophile community, the jury is still out on whether A/B can sound as good as pure Class A.

The other class is Class D, which is switching amplification (often loosely called "digital," though the D doesn't actually stand for that). It generates the required power with discrete full-on/full-off pulses at a very high rate, typically hundreds of thousands of times a second, and an output filter smooths the pulses back into the audio waveform. Class D has typically been used only for subwoofers, where the pulses are so short compared to the wavelengths being reproduced that switching artifacts were not much of a concern. More recently, however, companies have been producing Class D amplification for the full audio range.

Hope that helps.
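If it helps to see the Class D idea concretely: the simplest scheme compares the audio signal against a high-frequency triangle carrier, outputs a full-rail pulse train from that comparison, and then low-pass filters the pulses to recover the audio. Here is a minimal NumPy sketch of that process; the sample rate, carrier frequency, and the crude moving-average "output filter" are all illustrative choices, not values from any real amplifier.

```python
import numpy as np

fs = 10_000_000        # simulation sample rate, 10 MHz (illustrative)
f_audio = 1_000        # 1 kHz audio test tone
f_carrier = 100_000    # 100 kHz switching (carrier) frequency

t = np.arange(0, 0.005, 1 / fs)              # 5 ms of signal
audio = 0.8 * np.sin(2 * np.pi * f_audio * t)

# Triangle carrier spanning [-1, 1]; a comparator against the audio
# produces the full-rail pulse train that the output stage would switch.
carrier = 2 * np.abs(2 * ((f_carrier * t) % 1.0) - 1) - 1
pulses = np.where(audio > carrier, 1.0, -1.0)

# A moving average over one carrier period stands in for the
# amplifier's output LC low-pass filter.
win = int(fs / f_carrier)
recovered = np.convolve(pulses, np.ones(win) / win, mode="same")

# Away from the edges, the filtered pulses track the original audio.
err = np.max(np.abs(recovered[win:-win] - audio[win:-win]))
print(f"max reconstruction error: {err:.3f}")
```

Even with this crude filter, the recovered waveform follows the 1 kHz tone closely, which is why the pulse rate matters: the higher the switching frequency relative to the audio, the easier it is to filter the pulses out, and why bass-only duty (subwoofers) was the easy first application.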