Most transistor amplifiers do operate pretty close to Class B; at low power levels most of the power comes from the driver transistors rather than the outputs. But in order to keep the amplifier from producing significant distortion (notch and crossover) at the zero-crossing point, they have to carry a certain minimum amount of bias current. As a result they are considered Class AB.
Maybe you mean that most of the power dissipated at small signal levels is in the drivers? This is indeed true in some designs, notably emitter-follower output stages with one set of output transistors. But in a conventional solid-state amplifier, virtually all the loudspeaker current flows through the output transistors, NOT the drivers. In any case, I'd agree that the terms "Class A" and "Class AB" are indeed widely used imprecisely . . . even though in a solid-state amplifier, they are distinctly different operating points.
The "cutoff" region in a tube or transistor is defined as the area where its transconductance drops sharply as the current through the device is reduced. In true Class B operation, the bias point is chosen so that the cutoff region of both halves of the output stage occur inversely at the same time . . . that is, the upper half is turning on at the same time, and to the same degree, that the lower half is turning off. In Class AB, one half of the output stage remains turned on before the other is turned off, thus giving a region (in the "middle") where both devices are conducting at full transconductance together. This is of course its Class A region.
The problem with Class B is well known -- it's due to the fact that pairs of output devices operating this way are non-conjugate, and the turning-on of one half never lines up exactly with the turning-off of the other. But in (solid-state) Class AB, the transition region of each output device is left completely "exposed", without the other half compensating for its change in transconductance at all. Thus, the consequences are more severe -- once a Class AB output stage leaves its Class A region, it produces more distortion than a pure Class B design.
On the test bench, a solid-state Class B design produces its lowest distortion at one specific bias level, and increases in distortion are evident both when the bias is too low AND when it's too high. Maintaining it at the proper point with regard to temperature is a very complex task, dependent not only on circuit design, but on physical layout, assembly, and calibration procedure. Class AB, on the other hand, simply needs enough bias control to avoid overheating. For those who hear differences when their solid-state amps are left on all the time, variations in output stage bias current are the obvious reason why.
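That bench behavior can be illustrated numerically. Below is a minimal sketch of a deliberately idealized complementary output stage -- hard 0.6 V turn-on thresholds and unit transconductance, not any real amplifier -- showing that a sine wave's distortion bottoms out at exactly one bias setting, rising with a crossover dead zone when underbiased and with "gm doubling" (both halves conducting near zero) when overbiased. All names and values here are mine, chosen only for illustration:

```python
import math

def output_stage(v, v_bias, v_t=0.6):
    """Idealized complementary push-pull stage.

    Each half conducts only once its drive exceeds the turn-on
    threshold v_t; v_bias pre-biases both halves toward conduction.
    v_bias < v_t leaves a crossover dead zone (underbiased);
    v_bias == v_t is the distortion-free point in this ideal model;
    v_bias > v_t makes both halves conduct together near zero,
    doubling the effective gain there (overbiased).
    """
    upper = max(v + v_bias - v_t, 0.0)   # sourcing (upper) half
    lower = max(-v + v_bias - v_t, 0.0)  # sinking (lower) half
    return upper - lower

def thd(samples, n_harm=9):
    """Crude THD: harmonic magnitudes (DFT bins 2..n_harm) over the fundamental."""
    n = len(samples)
    def mag(k):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        return math.hypot(re, im)
    return math.sqrt(sum(mag(k) ** 2 for k in range(2, n_harm + 1))) / mag(1)

N = 512
drive = [2.0 * math.sin(2 * math.pi * i / N) for i in range(N)]  # one sine cycle
for v_bias in (0.3, 0.6, 0.9):   # under-, exactly-, and over-biased
    out = [output_stage(v, v_bias) for v in drive]
    print(f"v_bias = {v_bias:.1f} V  ->  THD = {thd(out):.4f}")
```

Real devices have soft, exponential turn-on rather than a hard threshold, so the actual optimum is a compromise rather than an exact null -- but the U-shaped distortion-versus-bias curve the model produces matches what the bench shows.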
However, in a transformer-coupled push-pull tube output stage, the crossover behavior is fundamentally different, for two reasons. First, it's a power amplifier (rather than just a current amplifier), which means its linearity depends on both the voltage across and the current through the device, as opposed to just the current -- and it's mainly the current that determines the cutoff characteristics. Second, this is accentuated by the fact that the primary inductance in the output transformer "forces" the plate of the non-conducting tube through its transition region, and that tube in turn has less influence (its Rp increases) the more "turned-off" it is driven.
The consequence is that in a traditional tube amp, there is much more "grey area" between Class B and AB, because the presence or absence of crossover distortion is determined principally by the quality of coupling between the two halves of the output transformer's primary. The Class AB bias current serves mainly to keep some magnetization in the transformer core to compensate for leakage reactances. And on the test bench, P-P tube amps tend to show their most pronounced crossover distortion at the high-frequency, high-power end of the scale, where the output transformer core becomes most saturated.