The invention of measurements and perception


This is going to be pretty airy-fairy. Sorry.

Let’s talk about how measurements get invented, and how this limits us.

One of the great tasks of engineering, science, and data analysis is finding signals in the noise. What matters? Why? How much?

My background is in computer science, and a little in electrical engineering. So the question of what to measure to make systems (audio and computer) "better" is always on my mind.

What’s often missing in measurements is "pleasure" or "satisfaction."

I believe in math. I believe in statistics, but I also understand the limitations. That is, we can measure an attribute, like "interrupts per second," "inflammatory markers," or Total Harmonic Distortion plus Noise (THD+N).

However, measuring those attributes and understanding outcomes and desirability are VERY different things. Companies that can do this excel at creating business value. For instance, like it or not, Bose and Harman excel (in their own ways) at figuring this out. What someone will pay for and how low a distortion figure measures are VERY different.

What is my point?

Specs are good. I like specs, I like measurements, and they keep makers from cheating (more or less), but there must be a link between measurements and listener preference before we can attribute desirability or economic viability to them.

What is that link? That link is you. That link is you listening in a chair, free of ideas like price, reviews or buzz. That link is you listening for no one but yourself and buying what you want to listen to the most.

erik_squires
I'd like to move this a little more forward. @spatialking wrote:

"I can tell you the reason we have jitter problems, besides the fact that the basic CD clocks are not all that accurate..."

Clocks are much better now than they were before in the same price range. Maybe this is why DACs got magically better?

"...is that the sample clock is encoded in the data stream. The clock is not a separate signal path from the data, which makes jitter an inherent problem in the system."

I think maybe this is about the transmission method, not the data. The real issue is who is in charge, though: I2S and USB allow the DAC to be in charge of the clock.
jea48, aren't microphones and tape recorders essentially measurement devices/test equipment? If it weren't for these measurement devices, would you even know what the timbre of a performance was?
I studied a formula for jitter and how it relates to human perception some years back. I'd have to go look it up, as I don't recall it exactly; the limiting number is related to the number of bits and the sample rate. Increasing the number of bits and/or the sample rate makes it more critical. The reason it is so audible is that it affects the zero crossings of the music, something to which the ear is especially sensitive. For the Redbook standard at 44.1 kHz and 16 bits, the limit is less than 50 picoseconds. Clearly, we have a ways to go to make jitter a non-issue.
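
As a rough illustration of that kind of limit (my own back-of-envelope criterion, not necessarily the formula spatialking is recalling), one common approach keeps the amplitude error that clock jitter causes on a full-scale sine at the highest audio frequency below half an LSB:

```python
# Sketch of a jitter bound: for x(t) = A*sin(2*pi*f*t) the worst-case slope is
# 2*pi*f*A, so a timing error dt causes an amplitude error of about 2*pi*f*A*dt.
# Keeping that below half an LSB (A / 2**bits) gives dt < 1 / (2*pi*f*2**bits).
# The audio bandwidth f is what a higher sample rate buys you, which is why more
# bits and/or higher rates tighten the jitter budget.
import math

def jitter_limit(bits: int, f_max_hz: float) -> float:
    """Clock timing error (seconds) that keeps the jitter error under half an LSB."""
    return 1.0 / (2 * math.pi * f_max_hz * 2**bits)

print(jitter_limit(16, 20_000))   # ~1.2e-10 s, roughly 120 ps for Redbook-ish numbers
print(jitter_limit(24, 20_000))   # ~5e-13 s, about half a picosecond at 24 bits
```

By this particular criterion the 16-bit figure comes out nearer 120 ps than 50 ps; stricter criteria (RMS noise budgets, higher bit depths or bandwidths) give tighter numbers, which is consistent with the point that more bits and higher rates make jitter more critical.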

I can tell you the reason we have jitter problems, besides the fact that the basic CD clocks are not all that accurate, is that the sample clock is encoded in the data stream. The clock is not a separate signal path from the data, which makes jitter an inherent problem in the system. Whether this was known or considered an issue when the CD system was originally conceived is a good question.
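
Just to make that concrete, here is a toy model (mine, purely illustrative, not anything from the posts above): a receiver that recovers its sample clock from the incoming data edges can only low-pass filter the link's timing error, so some of it always survives in the recovered clock, whereas a DAC running from its own local oscillator never sees that error at all.

```python
# Toy first-order PLL tracking jittery incoming edges (S/PDIF-style embedded clock).
# The recovered clock inherits a filtered version of the link jitter; a local
# master clock (async USB, or I2S with the DAC as clock master) would not.
import random

n = 10_000                   # edges to simulate
incoming_jitter_rms = 2e-9   # assumed 2 ns RMS timing error on the incoming stream
loop_gain = 0.05             # fraction of the phase error corrected per edge

recovered_err = 0.0          # phase error of the recovered clock, in seconds
worst = 0.0
for _ in range(n):
    edge_err = random.gauss(0.0, incoming_jitter_rms)        # this edge's timing error
    recovered_err += loop_gain * (edge_err - recovered_err)  # PLL drags its phase toward the edge
    worst = max(worst, abs(recovered_err))

print(f"recovered clock wandered by up to ~{worst:.1e} s while tracking the stream")
```

The exact numbers don't matter; the point is that with an embedded clock the cleanliness of the recovered clock is limited by the link, while a DAC-mastered clock is limited only by its own oscillator.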

When Sony designed the CD, a number of weaknesses were built into the design due to the size limitations of the disk. Sony's president, whose name I have forgotten, wanted it to fit easily into a car stereo and also wanted Vivaldi's Four Seasons to fit on a single disk without flipping it over or inserting another one. That constrained the sample rate and the number of bits, neither to our advantage, since they had to fit all the music onto a small platter. The original concept was to have a CD the same size as an LP, since the stores were already shelved and geared for that size.

To be fair though, at the time the CD was designed, our technology and semiconductor processes were really pushed to develop a good-quality, low-distortion, inexpensive DAC at 16 bits and 44.1 kHz. I believe 18 bits and 50 kHz was about the limit, given the cost limitations. I sure wish we had that in a CD, though!

As for measuring jitter and tuning fork accuracy, we have time base standards that can easily resolve better than 1x10^-14 seconds - way beyond what a human can perceive.   They are pricey but they can do it.   Gosh, the digital time base standard I have on my bench, which I bought for RIAA measurements, measures to less than 1x10^-6 seconds and is still in calibration, and that was surplus at $50!
How about a little philosophy?

There are a variety of philosophical approaches to decide whether an observation may be considered evidence; many of these focus on the relationship between the evidence and the hypothesis. Carnap recommends distinguishing such approaches into three categories: classificatory (whether the evidence confirms the hypothesis), comparative (whether the evidence supports a first hypothesis more than an alternative hypothesis) or quantitative (the degree to which the evidence supports a hypothesis).[10] Achinstein provides a concise presentation by prominent philosophers on evidence, including Carl Hempel (Confirmation), Nelson Goodman (of grue fame), R. B. Braithwaite, Norwood Russell Hanson, Wesley C. Salmon, Clark Glymour and Rudolf Carnap.[11]

Based on the philosophical assumption of the Strong Church-Turing Universe Thesis, a mathematical criterion for evaluation of evidence has been conjectured, with the criterion having a resemblance to the idea of Occam’s Razor that the simplest comprehensive description of the evidence is most likely correct. It states formally, "The ideal principle states that the prior probability associated with the hypothesis should be given by the algorithmic universal probability, and the sum of the log universal probability of the model plus the log of the probability of the data given the model should be minimized."[12]

According to the posted curriculum for an "Understanding Science 101" course taught at the University of California, Berkeley: "Testing hypotheses and theories is at the core of the process of science." This philosophical belief in "hypothesis testing" as the essence of science is prevalent among both scientists and philosophers. It is important to note that this view does not take into account all of the activities or scientific objectives of all scientists. When Geiger and Marsden scattered alpha particles through thin gold foil, for example, the resulting data enabled their experimental adviser, Ernest Rutherford, to very accurately calculate the mass and size of an atomic nucleus for the first time. No hypothesis was required. A more general view of science may be the one offered by physicist Lawrence Krauss, who consistently writes in the media about scientists answering questions by measuring physical properties and processes.

Concept of scientific proof

While the phrase "scientific proof" is often used in the popular media,[13] many scientists have argued that there is really no such thing. For example, Karl Popper once wrote that "In the empirical sciences, which alone can furnish us with information about the world we live in, proofs do not occur, if we mean by ’proof’ an argument which establishes once and for ever the truth of a theory".[14][15] 

Albert Einstein said: "The scientific theorist is not to be envied. For Nature, or more precisely experiment, is an inexorable and not very friendly judge of his work. It never says 'Yes' to a theory. In the most favorable cases it says 'Maybe,' and in the great majority of cases simply 'No.' If an experiment agrees with a theory it means for the latter 'Maybe,' and if it does not agree it means 'No.' Probably every theory will someday experience its 'No' - most theories, soon after conception."[16]

Whoa! What is this - a convention of English majors?

In audio the most logical approach is to assume everything is true and nothing is true.

“Because it’s what I choose to believe.” Dr. Elizabeth Shaw, Prometheus
stevecham"If it weren't for math, you wouldn't have a head.'

You appear to be worshipping at the wrong altar. Even the best math is not God or the Creator of Life. You are a confused, disoriented, misinformed person.
@teo: "I like to remind people that math is an excellent tool, but to remember that math exists no where in the known universe except as that - in a human’s head." 

If it weren't for math, you wouldn't have a head.
Good points.

I guess my focus is on the distance between a measurement, which could be done by an automated device, and human perception/value.

I agree we've measured jitter for a while, but was that all? Were there some kinds of jitter worse than others? How low before we can no longer tell?
A musician with perfect pitch can perceive whether a tuning fork or oscilloscope is on pitch, or flat, or sharp. Like everything else, the measuring tool is not infallible.
Wow, that was what I call a quick response! I was still composing. 😁 Don't go poking hornets' nests if you don't want to get stung. 🐝
Sure thing Geoff, got it. No one is perfect.

I'm just rambling, looking for some projecting hornets' nest to inflame itself and start poking at keyboards in anger. :p
Not to be confrontational but math is not the same thing as measurement. And mathematical proof is not the same as scientific proof. Things can be proven mathematically and math can support scientific evidence but math cannot prove a scientific theory. Measurements are however evidence for scientific theories. For example, measuring the velocity of light. Or, in the case of the relativity theory, measuring the anomalous rate of precession of the perihelion of Mercury's orbit.
I like to remind people that math is an excellent tool, but to remember that math exists nowhere in the known universe except as that - in a human's head.

It is like an extra-long, perfect kind of stick for getting the ants at the bottom of a deeper hole (on the savannah), but remember that that is all it is.

For example, math is not science, and it is not an arbiter, even though it is a factor, and an important one. It cannot stand in place of logic and open-minded analysis.

We like to use numbers in analysis, and we would love to frame, corral, and label human desires - and thus own desire and perception as a tradable, saleable, storable, repeatable, fully known and owned commodity.

People have been trying to do this for thousands of years and have pretty well gotten nowhere. But they sure have mastered the basics... (think modern media manipulation of the masses).

Then we want to quantify and commodify human hearing. That turns out to be a waste of time, as we don't fully understand the limits of human perception - let alone the fact that each and every one of us is different, to the point that no such perfect quantification can be found.

If we could only get one more decimal point of accuracy! Sorry dude, not happening....

In audio we talk about the limits of perception, not the basics. Basics, we've all got them. Quantification can and does happen there; no disagreement from anyone on those parts.

Limits are the deal... peaks, absolute peaks... and there, it is all fuzzy and 100% individual in level. It's like IQ tests: when you get beyond the basics with such testing, it gets less and less relevant, and by the time you reach the most intelligent, IQ tests are useless. Absolutely useless.

It is one of those chicken-and-egg problems whose huge scope not enough people understand even exists. From those people, you tend to get the argument that audiophiles can't hear what they say they do, and here are the numbers to prove it.

Whereas if one even begins to understand what I wrote about, it is not possible to be more incorrect than that position is about extreme audio and those who hear things in it.

One could write a book on the subject and not make a single misstep. But the contrarian spitting vitriol would have to be open-minded enough not just to read it, but to understand it. To be stable enough to be open.

Like Bruce Lee said, "Do not concentrate on the finger or you will miss all of the heavenly glory!" 

When science and math have a problem they can't unravel, the pair can quite likely find itself forced to ground, back to first principles, which are rooted in philosophy and logic. Philosophy is the actual parent of it all, in effect - or call it logic, and the attempt to make logic provably, repeatably functional.

We won't get into engineering attempting to be the logic poseur that it is when it tries to express itself in scientific exploration... this is due to the literal construction of engineering as a dogmatic form. Repeatability is what that is about. Engineering has nothing to do with formal, actual science; engineering is an end point, in repeatability, of the application of proven science.

Thus, beware the engineering mindset that tries to dogmatically push numbers and book learning into a framework and area of scientific exploration where all the realities are not even remotely known.

Like that of high end audio and human perception.
Jitter is not the root cause. Jitter is the result/manifestation of several independent issues/causes. Bit stuffing presumably occurs when errors during the optical read process can't be corrected by the Reed-Solomon error correction.

What's curious, though - as far as I know, and please, someone correct me if I'm wrong - is that when the input bit stream and the output bit stream are compared *for normal uncorrected conditions* there are very few errors. If that's true, then why is it so audible?
Actually, jitter is a problem, just not the only problem. The industry knew it was a problem in the very early days; there was a lot of discussion on how much was too much. The fact is, the best jitter removal back then was not enough.
There are numerous ways of taking measurements in a room when one is putting something into production.   But for the audiophile, you only need your ears.  :-)
"A good example of this is the Redbook standard, set in the late '70s and early '80s for the then-emerging CD format. The standard was fine, but it took until the late '90s to figure out that distortion in the time domain (jitter) was a major factor..."

Jitter is interesting. I mean, yes, we can certainly point to it as one measurement that has improved over time, and Redbook playback has made a marked jump in audible performance in the last 10 years.

Is that enough? We never really proved it, and we don't actually have any idea of what is audible, or whether there are other parameters around jitter that are important. AFAIK, there's not even agreement from manufacturer to manufacturer as to how exactly jitter is measured.

So if jitter IS the problem ... what is inaudible?


A good example of this is the Redbook standard, set in the late '70s and early '80s for the then-emerging CD format. The standard was fine, but it took until the late '90s to figure out that distortion in the time domain (jitter) was a major factor working against that unmeasurable enjoyment factor in CD playback. Once it was identified and measured, designers solved, or at least found ways to manage, much of this "new" type of distortion.

And to Geoff’s point above, IMO, the room accounts for at least 50% of the sound we hear from our systems.
Two things. What value are measurements of anything if it sounds different in every room? And how can we measure audiophile goals like soundstage, air, and musicality?
@spatialking

Not asking ... so much as stating a point of view and inviting others to chime in. 
Your reply is perfect.


I am not sure what you are asking - can you clarify the discussion issue? 

As for designing the measurements, once the amplifier or speaker designer has something in mind to improve the sound, it isn't too hard to create a measurement plan to quantify it.   The real problem is figuring out what to measure rather than how.   Once you measure it, it isn't too hard to run that sonic problem into obscurity. 

I'll give you an example. When I first started designing stereo amplifiers, I discovered how much power supply noise affected sound quality. Measuring the power supply noise and relating that to sound quality was done both on the test bench and in listening tests. What I found was an interesting number: the sum of the PSRR of the amplifier plus the noise regulation of the power supply has to be greater than 100 dB. If the power amplifier has, say, 60 dB of PSRR, then the power supply has to provide at least another 40 dB of regulation. All these numbers are at worst-case loads for Class A or Class AB amps.
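
Taken at face value, that rule of thumb is just additive in decibels. A trivial sketch with made-up figures (the 100 dB target and the sample PSRR values are illustrative, not measurements):

```python
# Rule of thumb from the post: amplifier PSRR plus power-supply noise
# regulation should total at least ~100 dB.
def required_supply_regulation_db(amp_psrr_db, target_total_db=100.0):
    """Regulation the supply must contribute to reach the combined target."""
    return max(0.0, target_total_db - amp_psrr_db)

print(required_supply_regulation_db(60))   # 40 dB, matching the example above
print(required_supply_regulation_db(80))   # a quieter amp needs only 20 dB from the supply
```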

Doing this in a solid state preamp isn't too hard; doing it in a vacuum tube preamp is harder but readily doable; doing it in a big power amp with a ton of current capacity is really hard and expensive.