What do we hear when we change the direction of a wire?


Douglas Self wrote a devastating article about audio anomalies back in 1988. With all the necessary knowledge and measuring tools, he did not detect any of the supposedly audible changes in the electrical signal. Self and his colleagues were sure they had proved the absence of anomalies in audio, yet over the past 30 years audio anomalies have not gone anywhere, while the authority of science in the field of audio has increasingly been called into question. It is hard to believe, but science still cannot clearly answer the questions of what electricity is and what sound is! (see the article by A.J. Essien).

For your information: to make sure that no potentially audible changes in the electrical signal occur when we apply any "audio magic" to our gear, no super equipment is needed. The smallest step change in amplitude that can be detected by ear is about 0.3 dB for a pure tone; in more realistic situations it is 0.5 to 1.0 dB, which is roughly a 10% change (Harris J.D.). At medium volume, the voltage amplitude at the output of an amplifier is approximately 10 volts, which means the smallest audible difference would show up when the output voltage changes by about 1 volt. Such an error would be impossible to miss even with an ordinary voltmeter, but Self and his colleagues performed far more accurate measurements, including ones made directly on the music signal using the Baxandall subtraction technique, and they found no error even at that level of precision.
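Just to spell out the arithmetic behind the 1 dB / 1 V figure above, here is a minimal sketch (Python, purely illustrative; the 10 V output level and 1 dB step are the numbers quoted above, and the function name is my own):

# Rough arithmetic behind the claim above: how large a voltage change
# corresponds to a 1 dB level step at a 10 V amplifier output.
# (Assumes the 10 V / 1 dB figures quoted in the post; purely illustrative.)

def db_step_to_voltage_change(v_out: float, db_step: float) -> float:
    """Return the absolute voltage change for a given dB step."""
    ratio = 10 ** (db_step / 20)   # voltage ratio corresponding to the dB step
    return v_out * (ratio - 1)     # change relative to the nominal output level

delta = db_step_to_voltage_change(10.0, 1.0)
print(f"1 dB at 10 V is a change of about {delta:.2f} V "
      f"({delta / 10.0:.0%} of the output)")   # roughly 1.2 V, i.e. on the order of a volt

So a just-audible level step really does correspond to a change of around a volt at the amplifier output, which is why no exotic instrumentation is needed to see it.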

As a result, we are faced with an apparently unsolvable problem: those of us who do not hear the sound of wires, relying on the authority of scientists, claim that audio anomalies are BS. However, people who confidently perceive this component of the sound are forced to draw the only other conclusion possible in this situation: the electrical and acoustic signals contain some additional signal(s) that are still unknown to science, and which we perceive with a certain sixth sense.

If there are no electrical changes in the signal, then there are no acoustic changes either, and consequently hearing does not participate in the perception of anomalies. What other options can there be?

Regards.
anton_stepichev
I haven’t listened to these songs for a long time, and once again I note how emphatically all of his unique intonations come through in the pre-war 78 rpm recordings. No modern recording conveys them that way; isn’t it strange? Completely outdated shellac beats technological HI-FI in some very important musical respects.
I don't have personal experience of this fact, but I really think that some very minute change between these two mediums could "percolate" through many scales to deliver an audibly different quality to human hearing....
Anyway "sound" is the body and waves are the carrier of the 


By the way, one of my most beloved composers, along with Bach and Bruckner, is Scriabin... I am also in love with Elena Frolova....
“Finally, this article about Polynesian "primitive" navigators who are able to "see" their routes around islands very far apart in the Pacific is astounding regarding the INTERNAL GPS of humans, and it says a lot about how much we underestimate human perception” - mahgister

Thank you so much for this, mahgister - it was a fantastic read : )
https://www.google.com/amp/s/abcnews.go.com/amp/Technology/research-helping-scientists-understand-humans-recognize-voices-computers/story%3fid=60699647

And so I wondered: if human beings are able not only to recognise human voices more quickly and accurately than computers, but also to pick up the nuance of mood in each voice they hear, is it a reasonable step to believe that there are indeed some things in sound that computers are not able to measure? Keep in mind that the subtlest timbral differences the human ear can register, the ones that tell us when the music we are hearing has a little more ‘air’ or realism, or a touch more focus and separation to each instrument or voice, are even less obvious (and measurable?) than the nuance of mood in a voice.
@kevn,

methinks the basic fallacy isn’t that computers can’t hear, but rather that human minds haven’t found the right measurement methods to program them appropriately. Ultimately AI will lead us to codify hitherto uncodified human sentiments so that computers can reflect them; we are just not there yet, and I question whether purely two-dimensional, scalar models will ever appropriately reflect the complex patterns of acoustics and human hearing.