The use of terms such as "ABX testing" and "blind testing" involves generalizations that do not allow us to assess the appropriateness of certain methods for the verification of claims or the existence of phenomena. As usual, the devil is in the detail, and we have to look at the specific design of a study and the underlying hypothesis before we can judge the quality and usefulness of a given study design and method.
This is particularly true when we want to establish whether a subjective preference is real or the result of bias.
If, for example, a person claims that a cable or a fuse makes a clear, audible difference in the sound of a system, an appropriate simple study design would be along the lines of: (1) this one person (2) listens on 12 consecutive days to (3) the same program on the same system in the same room, comparing the claimed superior component with the standard component on each day. The test subject is 'blind' to the active component and is asked to identify which one is active. From this we would learn whether the audible difference indeed 'exists' for the very person making that claim. We would also learn something semi-quantifiable, i.e. whether the difference is marginal at best or "crystal clear". For a clear effect (often touted as dramatic or transforming) this study would be statistically well powered with 12 data points. It would be objective.
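Why 12 trials are enough for a "crystal clear" effect can be made concrete with an exact binomial calculation. The sketch below (a minimal illustration, assuming a forced binary choice each day and pure guessing under the null hypothesis) computes the chance of getting that many correct identifications by luck alone:

```python
from math import comb

def binomial_p_value(successes, trials, p_chance=0.5):
    """One-sided p-value: probability of at least `successes` correct
    identifications out of `trials` under pure guessing."""
    return sum(comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
               for k in range(successes, trials + 1))

# The 12-day design above: a guessing listener identifies the active
# component correctly all 12 times with probability 1/4096.
p_perfect = binomial_p_value(12, 12)   # ~0.00024
# Even 10 of 12 correct is already unlikely under guessing.
p_ten = binomial_p_value(10, 12)       # 79/4096, ~0.019
```

So a listener for whom the difference is truly obvious should clear this bar easily, while a guesser almost never will; that is what makes 12 data points adequate for a claimed dramatic effect, though a subtle effect would need far more trials.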
If, on the other hand, we don't want to show the mere existence of a phenomenon ("I can hear a difference when measurements don't detect a difference."), but instead want to determine a preference which holds true for many people, we need more test subjects.
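How many more subjects a preference study needs can be estimated with the same exact binomial machinery. The sketch below is an illustration under assumed parameters (each subject gives one binary preference vote, a true preference rate of 65%, one-sided significance level 0.05, desired power 0.80; none of these figures come from the text above):

```python
from math import comb

def tail(n, c, p):
    """P(X >= c) for X ~ Binomial(n, p); empty sum is 0 when c > n."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c, n + 1))

def subjects_needed(p_true=0.65, alpha=0.05, power=0.80):
    """Smallest number of subjects for which a one-sided exact binomial
    test at level `alpha` reaches the desired power against `p_true`."""
    n = 1
    while True:
        # smallest critical vote count keeping the false-positive rate <= alpha
        c = next(c for c in range(n + 2) if tail(n, c, 0.5) <= alpha)
        if tail(n, c, p_true) >= power:
            return n
        n += 1
```

The point of the exercise: detecting a modest majority preference reliably takes dozens of subjects, not one, which is exactly why the single-listener design above cannot answer the "many people" question.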
Yet articulating a preference when the existence of an audible difference between two components cannot be established in the first place (specifically, when the very person claiming the preference cannot reliably differentiate between the two components) seems unreasonable or arbitrary.
Having said that, describing components as "synergistic" without being able to establish their discrete effect experimentally, let alone quantify it, makes the use of this term baseless. In order to establish synergy, one needs to be able to detect AND quantify. And when a company uses the word "Research" in its name, claims there are no methods to measure or test critical product performance parameters, and also does not have any appropriate blind listening data (see above), I am extremely skeptical. In fact, I am not interested.