How Science Got Sound Wrong


I don't believe I've posted this before, or that it has been posted before, but I found it quite interesting despite its technical aspect. I didn't post this for a digital vs analog discussion; we've beaten that horse to death several times. I play 90% vinyl, but I can still enjoy my CDs.

https://www.fairobserver.com/more/science/neil-young-vinyl-lp-records-digital-audio-science-news-wil...
artemus_5
Hahahhahaha. 

What a macaroon this guy is.

He touches on a few random scientific and audio points, like yes, we do experience sound with the body, and then acts as if he's just suddenly discovered something no one has researched.

Anyone really interested in how we localize sound should please search for "Head Related Transfer Functions."


Honestly, this guy is one of many, many "gurus" I have read who do the same thing: put together a number of things readers may know about, and then come up with entirely new ideas, which aren't really new, and aren't really true. It is so depressing.
PS - If you like vinyl or R2R, that's fine, I'm not arguing you should stop liking it. I just don't think this author is bringing anything to the table.
@erik_squires This really yanks your crank, doesn't it? I knew nothing about "Head Related Transfer Functions" etc. and actually learned something from the article. He has a PhD and maybe had to write a book.
Those of us who have PhDs often say it stands for "Piled Higher and Deeper." A guy who studied neural pulses is hardly an automatic authority on audio.
artemus_5

Do yourself a favor. Skim right past the loser wannabes - above and to follow, as night follows day- and appreciate those like me who thank you for posting this brilliant article. 

You said "despite" but for me it's actually the technical aspect that I find most fascinating. Every once in a while someone comes along, takes a few seemingly ordinary facts, and combines them in a way that is a light in the darkness.

Here it is (from the article): 

The guiding principle of a neuron is to record only a single bit of amplitude at the exact time of arrival. Since amplitudes are fixed, all the information is in the timing.

On the other hand, the guiding principle of digitization is to record variable amplitudes at fixed times.


Then just in case you missed it the first time:
So unlike digital recorders, nervous systems care a lot about microtime, both in how they detect signals and how they interpret them. And the numbers really matter: Even the best CDs can only resolve time down to 23 microseconds, while our nervous systems need at least 10 times better resolution, in the neighborhood of two to three microseconds. In crass amplitude terms, that missing microtime resolution seems like “only” tiny percentage points. However, it carries a whopping 90% of the resolution information the nervous system cares about. We need that microtime to hear the presence and depth of sounds outside us and to sense others’ emotions inside us.

Boom. Mic drop.

When Michael Fremer says of vinyl, "There's more there there" this is the science behind it.

Good stuff.

Thanks!

My proof of the digital vs analog thing was to put the imaginary speakers 8 feet apart...

and put the listener 8 feet back, at the tip of an equilateral triangle, kinda thing.

then fire a signal off both speakers at the same time, a sharp tick or ping sound.

then vary the timing of the signal released off one speaker, vs the other.

Humans can generally hear a ’one inch’ shift in the position of the phantom ’ping’ sound between the speakers.

This equates to a perfect, zero-jitter timing change of 1/100,000th of a second, which in Nyquist terms means a clock and signal rate of at least 225kHz, with zero jitter.

for a single ping.

never mind the complexities of an orchestra, and all the instruments.

A prior calculation of what is on a record, under the best conditions, is that it comes in at an equivalent zero-jitter sample rate of around 7 million samples per second. That is how good its inter-channel transient timing agreement is.

With some wobble on it, but overall, yes, at the 7 million samples a second rate. We can hear through the wobble; our ear-brain is designed for it (it cancels out heartbeats, blood rushing, etc.).

I talked about this as the correct counter to the digital argument (the 16/44 ’perfect’ argument) back in the early ’90s on the original rec.highend binaries groups that were around back then.

I’d get shouted down and called names, even though the self-testable logic was right there, out in the open.

The calculation was that the timing, shaping, etc. in a 16/44 recording or playback was only good up to about 1.05kHz, and after that it would get progressively worse (waveform length in time vs clocking and rate, as related to human hearing's fundamental design and sensitivities).

Ed Meitner recently did an interview. He mentioned that the chip manufacturers are unwilling to produce the appropriate chip.

The technology to take digital to another level is there. Unfortunately the cost of development and production would make the chips overly expensive. Not enough profit. 

Can't wait for my 45rpm Dire Straits album. Spin baby spin.
Thanks Artemus 5.
For an open-minded person it is a nice view to consider; maybe not to follow blindly, but it seems good to take in. Thanks again.
Humans can generally hear a ’one inch’ shift in the position of the phantom ’ping’ sound between the speakers.

This equates to a perfect, zero-jitter timing change of 1/100,000th of a second, which in Nyquist terms means a clock and signal rate of at least 225kHz, with zero jitter.


Yeah, and this is probably being really, really conservative.

I have over the years learned the most efficient speaker setup, in my room anyway, is to measure from the corners of each speaker to the side and front walls. It's all set up and fine-tuned first by ear of course, but then once that is done out comes the tape measure. Real handy since if they get jostled vacuuming, laying down to clean connections, or whatever, it's real easy putting them exactly back where they were, no guessing, no doubt.

So anyway what I have learned over the years: move even just one speaker as little as 1/8" and the imaging starts to go. Sad to say how many so-called audiophiles roll their eyes at this. Well, too bad. It's their loss. Whatever you think you have, unless you are dead on, just that one (free!) tweak alone and it will be better.

So one inch to me is a gross error. One inch is so far off I would hear it in an instant. Something a smart-a-- co-worker unintentionally proved one night when he tried to prank me by moving things. By about one inch. I heard it - and figured out what it was and fixed it - so fast (under a minute!) he could not believe it.

So do the math on that one; it would probably be in the nanoseconds. Whatever. The fact that people can hear a billionth of a second of jitter starts to make a lot more sense when you look at it this way.
Great article and it makes sense to me. While I do enjoy both formats I do enjoy vinyl more...
This year is coming to an end.

Is it time to start submitting "Post of the year" nominations?

This has to be one of the strong contenders.

"Do yourself a favor. Skim right past the loser wannabes - above and to follow, as night follows day- and appreciate those like me who thank you for posting this brilliant article."
This is not even a post. This is literature.
Microtime, as the article envisions it, is not a thing.

Interferometry and head/ear-related comb filtering (i.e. HRTF) are.

1/44,100th of a second is the sampling interval, not the timing precision of CD playback.
Neurons are not single gates. They integrate multiple inputs over time.
Again, you can like Vinyl, but the article quoted by the OP won't stand up to much scrutiny.
I think we should be asking the question, "How many samples per waveform are required to reduce the RMS error to below, say, 5%, which is the sort of error achieved during the heyday of the vinyl years?"

Some types of error may be more or less objectionable, but let's start simple. Let's just find out how much RMS error there is for a given sampling scheme.

Surprisingly enough, it's not that hard to calculate. But shockingly, nobody seems to bother.

To calculate, begin by observing that the Fourier theorem shows that all periodic functions are built up as a sum of sine waves, so that to consider music, all we have to consider are sine waves (aka pure tones). Further, it is not hard to compute the difference between a sine wave and its sampled value at any point, for any fixed number N of samples per waveform. You can approximate by just slicing the waveform into N intervals and then calculating the difference at the midpoint of each interval.

It is also easy to square these differences and add them up. You could use calculus, but the above is an adequate approximation.

That is the essence of a computation yielding the RMS error of the sampling scheme per waveform.

Returning to our question, the answer I get is 250 samples per waveform for step-function decoding. At 20 kHz, that means sampling at 5MHz - with infinite precision, of course.

Exotic decoding algorithms can improve on this for pure tones, but how well do they work for actual music? I doubt if anyone knows - certainly I've never seen it discussed, even the first question about samples per waveform. I think we should.
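If anyone wants to check this themselves, here is a minimal Python sketch of the midpoint computation described above, assuming step-function (zero-order-hold) decoding and normalizing the error by the RMS of the sine itself (both assumptions matter):

```python
import numpy as np

def rms_error_per_waveform(N):
    """RMS error of step-function (hold-last-sample) decoding of one
    sine cycle sampled N times, evaluated at the midpoint of each
    hold interval, normalized by the RMS of the sine (1/sqrt(2))."""
    k = np.arange(N)
    t_sample = 2 * np.pi * k / N        # sample instants over one cycle
    t_mid = t_sample + np.pi / N        # midpoint of each hold interval
    err = np.sin(t_mid) - np.sin(t_sample)
    return np.sqrt(np.mean(err ** 2)) * np.sqrt(2)

for N in (50, 100, 250, 500):
    print(f"N = {N:3d}: RMS error = {100 * rms_error_per_waveform(N):.2f}%")
```

Where the 5% threshold lands depends heavily on whether you normalize by RMS or by peak, and on the decode model, so the 250-samples figure should be read as one possible answer under one set of conventions.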
@erik_squires "Microtime, as the article envisions it, is not a thing."

Don't agree. It seems to me that he defines it quite clearly in terms of microsecond (neural) phenomena. And also, it seems to me that someone with a Ph.D. in this area is likely to know something about this area.

Where he could be clearer is about the relationship between math and science - like how not to screw it up when applying math to the physical world. But that's a highly technical subject all on its own (for access to the literature see Foundations of Measurement by Krantz et al., Academic Press, in 3 volumes), and surprise, many scientists get it quite wrong. Let alone engineers.
It is an interesting article, and I certainly will not fault his credentials w.r.t. neurobiology, though it sounds like his knowledge w.r.t. the auditory processing system is two skin layers deep, but no doubt still deeper than mine. But even that I will not fault.

What I will fault is his knowledge of signal processing and how that relates to analog/digital conversion and analog signal reconstruction. He seems to possess the same limitations in his knowledge that Teo_Audio illustrates above with his record example, that Millercarbon alludes to, and whoever did that calculation w.r.t. bandwidth.

I will start off with the usual example. Records, almost all of them made in the last 2 decades (and longer), were recorded, mixed, and mastered on digital recording and processing systems. Therefore, whatever disadvantages you think apply to digital systems w.r.t. this timing "thing" absolutely and unequivocally apply to records recorded in digital.

So back to the paper, Teo's error in logic / knowledge, and miller's interpretation. The most recent research shows that we lowly humans can time the difference of arrival of a signal at each ear to about 5-10 microseconds. Using that mainly, and other information, we can place the angle of something in front of us to about 1 degree. 5 µs ≈ 1.5 mm of travel. Divide the circumference of the head by 1.5 mm and you get about 360, or 1 degree of resolution. Following?
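As a sanity check on that arithmetic (round figures; the post's 1.5mm implies a slightly lower speed of sound than the usual 343m/s, and the head circumference here is an assumed typical value):

```python
c = 343.0               # speed of sound in air, m/s (approx., room temp)
itd = 5e-6              # ~5 microsecond just-noticeable time difference
extra_path = c * itd    # extra travel distance to the far ear: ~1.7 mm
head_circ = 0.56        # m, a typical adult head circumference (assumed)
steps = head_circ / extra_path
print(f"{extra_path * 1e3:.2f} mm -> ~{steps:.0f} steps, roughly 1 degree")
```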

So how does the brain measure this timing? By the latest research, it appears to have 2 mechanisms: one that works on higher frequencies, frequencies whose wavelengths are smaller than the head, that is based on group delay / correlation, i.e. the brain can match the same signal arriving at both ears and time the difference; and another mechanism for lower frequencies that can detect phase, likely by a simple comparator and timing mechanism. The two overlap. Still following? You will note this happens with relatively low frequencies, i.e. still frequencies within the range identified for human hearing. I know, I know ... but the timing, what about the timing. So let's talk about that.

First a statement: In a bandwidth-limited system (as digital audio systems are), any signal on those two (or more) channels will be time-accurate to the jitter and SNR limit of the system, and NOT the sampling rate. Let me state that another way. Any difference in timing captured by a digital audio system, assuming the signal is within the frequency limits of that system, will be captured. Let me state that a 3rd way, with an example. We have a 96kHz ADC with 10 picoseconds of jitter. We have two identical signals, bandwidth limited to, say, 10kHz. One signal arrives at the first ADC 1 microsecond before it arrives at the other ADC. We then store it and play it back. What will we get? ... We will get 2 signals, essentially exactly the same, with one signal delayed by 1 microsecond.

So, all those arguments the neurobiologist made in that extensive article, all his knowledge, are all for naught, because he does not understand digital signal processing, ADC systems, and analog reconstruction. If he did, he would have known that digital audio systems, within the limits of bandwidth, are not limited in inter-channel timing accuracy by the sample rate, but by the jitter. Whether the signals leave both channels at time A or time B does not matter, as long as the relationship in timing between the two channels is accurate .... which it is in digital audio systems.
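A quick way to see this for yourself is a minimal sketch along the lines of the example above, assuming ideal (jitter-free) sampling and a pure 10kHz tone standing in for the band-limited signal: sample two copies offset by 1 microsecond at 96kHz, then recover the delay from the samples alone.

```python
import numpy as np

fs, f, tau = 96_000, 10_000, 1e-6      # sample rate, tone, 1 us offset
n = np.arange(8192)

ch1 = np.sin(2 * np.pi * f * n / fs)           # signal A
ch2 = np.sin(2 * np.pi * f * (n / fs - tau))   # same signal, 1 us later

# The delay survives sampling: recover it from the phase difference at
# the tone frequency (a Hann window tames spectral leakage).
w = np.hanning(len(n))
X1, X2 = np.fft.rfft(ch1 * w), np.fft.rfft(ch2 * w)
k = np.argmax(np.abs(X1))                      # bin nearest the tone
dphi = np.angle(X2[k] * np.conj(X1[k]))
print(f"recovered delay: {-dphi / (2 * np.pi * f) * 1e6:.3f} us")  # ~1.000
```

The recovered 1 microsecond is about a tenth of the 10.4 microsecond sample period: inter-channel timing is not quantized to the sample grid.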

.... and if you are reading this GK, not once did I need to consult wikipedia :-) ...
David, I don't quite follow your third-to-last paragraph.

Your "first statement" is indeed a statement, capable of being true or false, but is it true? It needs justification, it seems to me. It is not the same at all as the sentence following, "Let me state that another way." And the sentences following "Let me state that a 3rd way ..." do not convince me that the phenomenon is independent of sampling rate.

The nature of the signals is irrelevant. It is the relative timing of the encoding that matters. If the sampling rate is not high enough, or the jitter rate not low enough, then two signals differing by 1 microsecond will be encoded as identical.

Perhaps an example will help you to understand my confusion. It seems to me that if sampling is done at a frequency of 1Hz, and two signals differing by 1 us are detected, they will be encoded in the same pulse about 999,999 times out of 1,000,000. Which logically implies that sampling rate is intrinsic to the issue. 

Perhaps you could point out the source of my confusion.
New here, but I found his points, yes his science, very intriguing. To the point I thought, heck, he's got it right. But I can't help but wonder: even a fully digital stream/source/path ultimately has to be reproduced through a vibrating speaker. It seems that this is a massive integration or smoothing, each connected (albeit complex) peak and trough lasting way longer than the neural timing. Accepting his points, maybe this is digital's way to get by as well as it does. BTW, I'm not picking sides, just the way I stated it.
terry9, No worries on being confused about this. I find that many audio writers, many people in the audio industry period, and certainly many (most) on audio forums do not get this concept. When you do the math (no, literally, go through the math), which I have not done in years, it becomes quite obvious how it works (after the 3rd or 4th reading).

Let me do a more real-world signal. We have a 24-bit audio system, so it captures with a resolution of about 1/16.7 million, though practically it will be closer to 1/1-2 million. Let's say the system is sampling at 100kHz, and the system is bandwidth limited to 20kHz. Now let's say we have a 1kHz signal.

One key concept in a bandwidth-limited system is that you cannot have a pulse just 1 waveform long, i.e. you can't have a 1kHz waveform that lasts exactly 1 cycle. That would violate the bandwidth of the system, because in a bandwidth-limited system you cannot start and stop instantly. You can't start and stop instantly in the real world either.

Here is where it gets harder. So these two signals, both 1kHz tones, separated by 1 microsecond, arrive at these two ADCs. Let's assume that Signal B arrives at Channel 2, 1 microsecond before Signal A arrives at Channel 1. To make the math easy for me, let's assume that Signal A arrives at exactly 0 phase. Here are the digital outputs for the first 10 samples at 1kHz and 20kHz. This is a DC-offset AC signal, so the numbers go from 1 to 2^24.

You can easily tell these numbers do not represent the same signal; there is definitely something different about them. Your next question may be about accuracy / resolution. Jitter will obviously impact the inter-channel timing accuracy. I have not looked at the math in a while, but as you approach the SNR limit, I remember there is an increase in the inter-channel timing uncertainty.

  • 1kHz
  • Ch1 / Ch2
  • 8,388,608 / 8,441,314
  • 8,915,333 / 8,967,925
  • 9,439,979 / 9,492,249
  • 9,960,476 / 10,012,218
  • 10,474,769 / 10,525,779
  • 10,980,830 / 11,030,906
  • 11,476,660 / 11,525,605
  • 11,960,303 / 12,007,923
  • 12,429,850 / 12,475,958
  • 12,883,448 / 12,927,862

We can do it at 20kHz as well:
  • Ch1 / Ch2
  • 8,388,608 / 9,439,979
  • 16,366,648 / 16,628,630
  • 13,319,308 / 12,429,850
  • 3,457,907 / 2,646,210
  • 410,567 / 798,368
  • 8,388,607 / 9,439,979
  • 16,366,648 / 16,628,630
  • 13,319,308 / 12,429,850
  • 3,457,907 / 2,646,210
  • 410,567 / 798,368
This is 20kHz, 90dB down from full. As you can see, there are still substantial differences between the channels. This is 20-30dB above the noise floor of a good ADC.

  • Ch1 / Ch2
  • 265 / 298
  • 281 / 314
  • 298 / 331
  • 314 / 347
  • 331 / 362
  • 347 / 378
  • 362 / 393
  • 378 / 407
  • 393 / 421
  • 407 / 434
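For the curious, tables like the ones above can be reproduced in a few lines of Python, assuming the codes are a full-scale sine mapped onto a 24-bit DC-offset scale and truncated to integers (my guess at the generating convention; individual values could differ by a count or two under a different rounding rule):

```python
import numpy as np

fs, tau, mid = 100_000, 1e-6, 2 ** 23   # sample rate, 1 us offset, 24-bit midpoint

def codes(f, phase=0.0, count=10):
    """First `count` samples of a full-scale, DC-offset sine at frequency f."""
    n = np.arange(count)
    return (mid * (1 + np.sin(2 * np.pi * f * n / fs + phase))).astype(int)

for f in (1_000, 20_000):
    print(f"{f} Hz  (Ch1 / Ch2, Ch2 leading by 1 us):")
    for a, b in zip(codes(f), codes(f, phase=2 * np.pi * f * tau)):
        print(f"  {a:,} / {b:,}")
```

Ch2 gets a phase lead of 2*pi*f*tau, matching "Signal B arrives at Channel 2, 1 microsecond before Signal A."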

terry9 wrote:
The nature of the signals is irrelevant. It is the relative timing of the encoding that matters. If the sampling rate is not high enough, or the jitter rate not low enough, then two signals differing by 1 microsecond will be encoded as identical.

Perhaps an example will help you to understand my confusion. It seems to me that if sampling is done at a frequency of 1Hz, and two signals differing by 1 us are detected, they will be encoded in the same pulse about 999,999 times out of 1,000,000. Which logically implies that sampling rate is intrinsic to the issue.

Perhaps you could point out the source of my confusion.

terry9,

Are you familiar with the Shannon-Nyquist theorem? I provided the rather long-winded Wikipedia article link below.

In a bandwidth-limited system, if the sampling rate is 2x the bandwidth, you can capture all the information, including relative timing information. I.e. with a 100kHz sample rate, you can capture everything in a signal bandwidth limited to 50kHz. For practical reasons of analog filters, you normally want to sample at 4x or more the target analog bandwidth, so that by 1/2 the sample rate there is no more signal.

Within the realm of signal capture and reconstruction, I would consider this established fact, though many, without the requisite knowledge, do not understand (or at least accept) the premise.
https://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem
akgwhiz wrote:
New here, but I found his points, yes his science, very intriguing. To the point I thought, heck, he's got it right. But I can't help but wonder: even a fully digital stream/source/path ultimately has to be reproduced through a vibrating speaker. It seems that this is a massive integration or smoothing, each connected (albeit complex) peak and trough lasting way longer than the neural timing. Accepting his points, maybe this is digital's way to get by as well as it does. BTW, I'm not picking sides, just the way I stated it.

A system with a 20kHz bandwidth can still respond to / detect a signal in microseconds. A real-world impulse may last only a few microseconds; however, as your ear is bandwidth limited, you won't perceive an impulse as lasting only a few microseconds, you will perceive it lasting tens of microseconds or longer, just as a woofer hit with a short impulse does not stop moving at the end of the impulse. In those respects, you can reconcile a 20kHz bandwidth-limited system with microsecond impulse timing.
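To put a number on the woofer analogy, here is a small sketch, assuming a 4th-order Butterworth low-pass as the 20kHz band limit and a 1MHz grid standing in for continuous time:

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 1_000_000                       # 1 MHz grid to mimic "analog" time
x = np.zeros(2000)
x[100] = 1.0                         # a 1-microsecond-wide impulse

b, a = butter(4, 20_000 / (fs / 2))  # ~20 kHz band limit
y = lfilter(b, a, x)

env = np.abs(y) / np.abs(y).max()
idx = np.where(env > 0.1)[0]         # region above 10% of the peak response
print(f"response stays above 10% of peak for {(idx[-1] - idx[0]) / fs * 1e6:.0f} us")
```

The microsecond-wide input comes out tens of microseconds wide, which is the sense in which a 20kHz bandwidth-limited system and microsecond impulse timing coexist.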

Just when I'm sure it's all just swirling down the drain, along comes glupson with this:
This year is coming to an end.

Is it time to start submitting "Post of the year" nominations?

This has to be one of the strong contenders.

"Do yourself a favor. Skim right past the loser wannabes - above and to follow, as night follows day- and appreciate those like me who thank you for posting this brilliant article."
This is not even a post. This is literature.


Indeed.

Thanks!
I’m afraid that article is the same old song and dance. What is much more interesting are the ideas of Peter Belt and his whole free wheeling approach to sound and how the local environment affects our perception of sound. These ideas help explain why cell phones 📱  in the room degrade the sound, why books in the room degrade the sound, why unused cables and electronics in the room degrade the sound. Even why our hearing is degraded by having clocks  ⏰ or watches in the room. Things of that nature. Mind-Matter Interaction. Mind over matter. Much of the standard bologna of the theory of perception of sound has become trite.
I found it interesting.

I find Analog LPs (lots of old analog-record-cutter-produced ones) and Reel to Reel (analog mic to analog tape) more INVOLVING than Digital.

I have simplified the difference as "Analog gets Overtones Right". 

I wish the preserved timing advantages of analog, and the resultant timing of overtones, were discussed.
.............................................

I agree about precise positioning of speakers.

Happily I have an old wood floor, like grid paper on the floor, which allows me to move them into a few 'situational' positions without a ruler, including matched toe-in. I squished a speck of paper into the grid for the 2 front speaker corners.

They are very heavy, on 3 wheels, (3 will always settle with no wobble, and their weight prevents any vibration, no spikes needed for these). 

............................................
I also believe in time alignment of various frequencies, so I tilted my speakers' bases back a bit (this also changes reflections off both floor and ceiling, and the resultant back wall).
And I use the floor's wood grid as an assist for positioning the listening chair(s), as those chairs are moved 'situationally' also: turned around to be part of the home theater, back around for the music system, centered for one, off center for two. I also have a vase on the window sill, perfectly centered, which helps center my head when sitting this way or that, and gives the brain a center before/during listening.

This setup is how I found that very slight balance adjustments can make a big difference on certain tracks. Remote balance from listening position allows refinement, track to track. I wish my integrated amp had remote balance. I use my Chase Remote Line Controller now.
Thank you David. I will have to think about that.

As for the Nyquist-Shannon theorem, yes, I am familiar with it, and am not convinced that it says what some engineers think it does. For one thing, it involves a limit in terms of an infinite series (or integration over infinite time), and infinite time is available for relatively few signals.

My reference, A Handbook of Fourier Theorems by Champeney (Cambridge University Press), is a little too dense for casual reading, but I’ll persevere for a time.

Thanks again for the discussion, and also for re-igniting an interest in that branch of mathematics.
terry9,

Excellent catch on infinite series, but also easily addressed. As we are dealing with audio, there is effectively no information below 10Hz, and some would argue 20, but let’s say 10Hz. For that reason, any real single data set, i.e. a song file, can be modelled as an infinite series as there is a maximum rise time and minimum fall time at beginning and end, hence you can "set" all data outside to 0 (whatever your 0 is) for all points when applying the theorem. Any "errors" in bit level would be in the silence at the beginning and end of the track. In some ways, this is like a natural windowing function.

There are lots of papers, proofs, course books, materials, etc. that go into detail, including the size of the error when you don't have an infinite series, which in a practical audio case would be much smaller than other error sources.

If you want to play with "math", GNU Octave is a free-ware version of Mathcad (not as graphical) and would let you simulate any of these concepts.
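In that spirit (Python here rather than Octave, with a made-up 100ms, 1kHz tone with fade-in/fade-out standing in for a real track), you can check that treating everything outside the file as zero, the "natural window" above, costs essentially nothing in the middle of the file:

```python
import numpy as np

fs, f, N = 48_000, 1_000, 4800       # 100 ms of a 1 kHz tone
n = np.arange(N)
x = np.sin(2 * np.pi * f * n / fs)
fade = np.linspace(0.0, 1.0, 480)    # 10 ms fades at each end
x[:480] *= fade
x[-480:] *= fade[::-1]

# Whittaker-Shannon reconstruction halfway between samples, with all
# data outside the file taken as zero (the "natural window"):
t = np.arange(2000, 2010) + 0.5      # fractional sample instants, mid-file
recon = np.array([np.dot(x, np.sinc(ti - n)) for ti in t])
truth = np.sin(2 * np.pi * f * t / fs)
print(f"max mid-file reconstruction error: {np.max(np.abs(recon - truth)):.1e}")
```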
My proof of the digital vs analog thing was to put the imaginary speakers 8 feet apart...
Is that imaging or imaginary? If the speakers are imaginary, how do the listeners hear the sound?

and put the listener 8 feet back, at the tip of an equilateral triangle, kinda thing.

then fire a signal off both speakers at the same time, a sharp tick or ping sound.

then vary the timing of the signal released off one speaker, vs the other.

Humans can generally hear a ’one inch’ shift in the position of the phantom ’ping’ sound between the speakers.

This equates to a perfect, zero-jitter timing change of 1/100,000th of a second.
Sound travels about 13,500 in/s, or 74µs/in. Delaying the signal 10µs is ≈0.135in.

So if the sound is delayed, but constant level, this will contribute phase shift alone, which is not exactly how humans hear.
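Checking that arithmetic:

```python
c = 13_500                                          # speed of sound, in/s (round figure)
print(f"{1e6 / c:.0f} us of travel time per inch")  # ~74 us/in
print(f"a 10 us delay = {10e-6 * c:.3f} in")        # ~0.135 in of path
```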

atdavid: Great explanations. It's incredible these issues are still poorly understood nearly a century on.
Thanks David. Will have to think some more. Haven't used Octave - Maple is my poison.
I had Mathcad on the brain as I use it pretty regularly. GNU Octave is a freeware version of Matlab, not Mathcad.
David, you posted something earlier that resonated (no pun) with me.  You said...

So how does the brain measure this timing? By the latest research, it appears to have 2 mechanisms: one that works on higher frequencies, frequencies whose wavelengths are smaller than the head, that is based on group delay / correlation, i.e. the brain can match the same signal arriving at both ears and time the difference; and another mechanism for lower frequencies that can detect phase, likely by a simple comparator and timing mechanism. The two overlap.
  
Phase, hmmm. If a point source of sound has a given frequency range and originates with all frequencies at zero phase, AND air is dispersive (as are all media), more phase change at different frequencies could be interpreted as farther away. Ok. Well, I was curious and looked at the phase plots of my speakers. Phase varies fairly smoothly from about -40 deg at 50 Hz to about +34 deg at about 500 Hz and is flat after that for about 2 more octaves. 30 deg is considered a lot, and over the most "important" frequencies to humans, this is twice that. Point is, IF phase is used by our brains/ears to judge distance (rather than just delays for orientation), especially at low frequencies, with speakers doing that, how are "depth" and what we call staging not affected negatively? In my field, as I suspect audio is, phase is usually ignored as it's a nearly intractable problem for the most part. I wonder if sampling/digitization etc. and its issues could end up being a red herring as they pertain to this topic of natural (organic) sound.

This doesn't address the source (CD or vinyl) question and their imaging differences, but maybe someone can interject the phase aspects of the two to possibly add to the discussion.
Your brain measures the phase difference of the same signal reaching both ears. Errors in phase of a sound primarily from one speaker will not affect the measurements. Errors in phase from a sound from both speakers only matter if the phase shift is significantly different between the speakers, i.e. manufacturing variation.


And yes, it is a red herring that keeps being raised by people who don't understand how digitization and analog reconstruction work and the math behind them.
That article was very entertaining, especially this little nugget. That's gold, Jerry, gold!!

“Put another way, if a sensitive, world-acclaimed innovator denounces his industry and its technology for undermining human dignity and brain function, something big is up. Who could be more qualified than a world expert — with loads of experience and no incentive to fib — to call the alarm about widespread technological damage.”
atdavid,
And yes, it is a red herring that keeps being raised by people who don't understand how digitization and analog reconstruction work and the math behind them.

>>>>>>Well, no wonder nobody understands. 🤗 That’s GOLD, Jerry, GOLD!
The problem, GK, is that he is not a world expert, not even remotely, on the underlying topic of this whole article. He is an expert on physics and neurobiology. He is absolutely not an expert on digitization, digital signal processing, and reconstruction. Everything he says about human hearing and perception we can assume is 100% right, and it makes no difference, as the whole premise of his article is underlying flaws in timing in a multi-channel audio system that frankly are not there. No expert in signal processing would have ever made the fundamental flaw(s) he did.


I find it disappointing that once again you have made posts that carry absolutely no relevance or information and add nothing to the discussion, but appear to be only attempts to hear yourself talk. Feel free to use your obviously extensive free time to find a scientifically relevant paper (i.e. something published and reviewed) that shows what I said to be false. If you can't do that, then please go troll elsewhere. There are people here that actually want to learn.
terry9, here is an example that someone created showing what I am discussing w.r.t. subsample timing: https://www.dsprelated.com/showcode/207.php


https://www.dsprelated.com/showarticle/26.php


This is a fairly simple paper that looks at the impacts of noise and distortion on time measurements:
https://www.google.com/url?sa=t&source=web&rct=j&url=http://www.ajer.org/papers/v4(04)/S...


There are literally thousands of articles and papers on subsample timing measurement.
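For a self-contained illustration of the idea behind those links (a minimal sketch, not any particular paper's method: band-limited noise, a fractional-sample delay applied with the FFT shift theorem, and a three-point parabolic fit around the cross-correlation peak):

```python
import numpy as np

fs, N = 96_000, 1 << 14
rng = np.random.default_rng(0)

# Band-limited "program material": white noise brick-walled at 20 kHz
X = np.fft.rfft(rng.standard_normal(N))
X[int(20_000 / fs * N):] = 0.0
x = np.fft.irfft(X)

# Delay a copy by 3.3 samples (~34.4 us) via the FFT shift theorem
d_true = 3.3
freqs = np.fft.rfftfreq(N, 1 / fs)
y = np.fft.irfft(np.fft.rfft(x) * np.exp(-2j * np.pi * freqs * d_true / fs))

# Integer cross-correlation peak, refined with a 3-point parabolic fit
xc = np.fft.irfft(np.fft.rfft(y) * np.conj(np.fft.rfft(x)))
k = int(np.argmax(xc))
y0, y1, y2 = xc[k - 1], xc[k], xc[(k + 1) % N]
frac = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
print(f"estimated delay: {(k + frac) / fs * 1e6:.2f} us")  # close to 34.4 us
```

Even sampled at 96kHz (a 10.4 microsecond period), the fractional-sample delay comes back to within a small fraction of the sample period; noise and jitter, not the sample grid, set the floor.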

atdavid
The problem, GK, is that he is not a world expert, not even remotely, on the underlying topic of this whole article. He is an expert on physics and neurobiology. He is absolutely not an expert on digitization, digital signal processing, and reconstruction. Everything he says about human hearing and perception we can assume is 100% right, and it makes no difference, as the whole premise of his article is underlying flaws in timing in a multi-channel audio system that frankly are not there. No expert in signal processing would have ever made the fundamental flaw(s) he did.

I find it disappointing that once again you have made posts that carry absolutely no relevance or information and add nothing to the discussion, but appear to be only attempts to hear yourself talk. Feel free to use your obviously extensive free time to find a scientifically relevant paper (i.e. something published and reviewed) that shows what I said to be false. If you can't do that, then please go troll elsewhere. There are people here that actually want to learn.

>>>>Uh, I already posted something relevant. Hel-loo! You either didn't read it or you are one of the people who aren't here to learn. Take your pick, Mr. Know-it-all. You even claimed to know how the brain functions. Give us a break! You're here to bully, not learn. As Noah Cross tells Jake Gittes in Chinatown, you may think you know what's going on, but you don't. You have a very limited scope of what affects the sound. I'm just going by what you say. Your/his argument is a typical pseudo-skeptical Appeal to Authority. Better luck next time. An expert is defined as someone with a briefcase 50 miles from home.
Once again, geoffkait enters the argument, makes personal insults and jokes, adds absolutely nothing to the argument, and hijacks the thread, making it useless. Unlike geoffkait's posts, which are nothing but personal attacks, deflection, and obtuse comments, mine are filled with real information directly related to the post, and I even provided links to relevant information and downloadable experiments that can be run that show exactly what I am claiming. If you have any, I mean any, value at all to add to this thread, you would disprove anything that I have written .... but no, just more personal attacks, because you have ... nothing.

This discussion and the basic premise of the article have little to do with "sound" at all, but with whether a digitized system has relative sub-sample timing information. That you attempt to make it about "sound" shows you don't understand the premise of the article and were confused by the title.

Is There A Moderator In The House
Post removed 
terry9,

Here is a paper, it is about ultrasonic imaging, but that is simply a scaling issue w.r.t. frequency. It has nice graphs that clearly illustrate the ability to extract timing information and shows them as a function of sample rate. Even at a relatively low SNR, 30db, the error in extracting timing is very small. The oversampling in this case is 20x the frequency of the waveform:  https://www.diva-portal.org/smash/get/diva2:995652/FULLTEXT01.pdf



Post removed 
I don't agree with Michael Fremer on his analog bias; that's very flawed, for great digital in several areas surpasses analog. Great digital, such as a Lampizator with a very good USB cable, using vacuum tubes, is key to bringing the so-called analog sound, if properly designed; just because it has a tube does not make it good. Technologies, especially digital, are getting better every year. With vinyl you have to buy master pressings, not your 30-year-old scratched records; the same goes for quality digital. DSD recordings in many ways surpass good vinyl, especially per dollar spent in comparison. Of that I am convinced, having had both. Vinyl now is very time consuming; for me I don't see any advantages.
Thank you, David. I've obviously got some reading to do - but after I get my newest amplifiers working! 
Thanks for all the input. It had been about a month ago when I first read the article. FWIW, I am not an electronics tech or engineer. I am a music lover who loves to hear it as well as I can afford. Many of you have given technical reasons for your disagreement. Great. I'm glad you are here (well... so far). Even though you go way beyond my understanding, I still learn something. I just know what I hear. And I'm pretty technical in how I come to my conclusions of what sounds best to me. And there is the rub... what's best to me. The biggest question I have is this: how can an objective quantitative answer be given to such a subjective subject as music, its reproduction, and one's interpretation of what they hear? Oh sure, we can give some ideas or thoughts about it. But our knowledge only goes so deep. One may look at figures and speculate what should be heard. But can we absolutely know what IS heard by 100 different people listening to the same music on the same equipment? I don't think so. My $.02 worth.
BTW @atdavid. Have you REALLY posted 367 times since Oct 30, '19? That may be a record.
It's a full time job keeping up with the misinformation being spread :-)
Post removed 

atdavid
"It's a full time job keeping up with the misinformation being spread."

With more than 375 posts in less than 30 days of membership here, you are doing an excellent job of contributing to the misinformation, even if you are correct about 15% of the time.