How is digital sound created?


So sound is a vibration created by things rubbing or banging together etc. If stuff isn't interacting with something to create a sound, how are sounds created from nothing, i.e. in the digital world? Music on an iPod or a beep from a computer? I have always wondered what the noises are that come from computers when they are 'thinking' or working - wtf's going on there?

lucaspeni
All sound is produced by something vibrating in air. Intracranial stimulation has not been perfected yet!
Digital devices generate the sound internally or sample the sound externally. Those electronic "frequencies" are converted to analog waveforms which can be amplified to drive a transducer (speaker) which moves the air.
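If a sketch helps, here's a toy example of my own (not from any particular device's firmware) of the digital side of that chain: compute sine-wave samples for a beep and write them to a WAV file, which a player's DAC then turns back into an analog voltage that can be amplified to drive a speaker.

```
# Toy example: synthesize a 1 kHz "beep" as 16-bit PCM samples and write it
# to a WAV file. A DAC later converts these sample words back into an analog
# voltage that gets amplified and sent to the speaker.
import math
import struct
import wave

SAMPLE_RATE = 44100   # samples per second (the CD rate)
FREQ_HZ = 1000        # pitch of the beep
DURATION_S = 0.5      # half a second
AMPLITUDE = 0.3       # fraction of full scale, leaving headroom

frames = bytearray()
for n in range(int(SAMPLE_RATE * DURATION_S)):
    t = n / SAMPLE_RATE
    sample = AMPLITUDE * math.sin(2 * math.pi * FREQ_HZ * t)
    frames += struct.pack('<h', int(sample * 32767))  # signed 16-bit, little-endian

with wave.open('beep.wav', 'wb') as wav:
    wav.setnchannels(1)            # mono
    wav.setsampwidth(2)            # 2 bytes = 16 bits per sample
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(bytes(frames))
```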
The example you learned in school with the ruler on the desk was always flawed.

The sound isn't from the contact of the thumb on the guitar string but from the string vibrating back and forth. The question is how that string's energy is recorded and brought back to life.

For that Wikipedia is your friend:

https://en.wikipedia.org/wiki/Sound_recording_and_reproduction
If stuff isn't interacting with something to create a sound, how are sounds created from nothing, i.e. in the digital world? Music on an iPod or a beep from a computer? I have always wondered what the noises are that come from computers when they are 'thinking' or working - wtf's going on there?

A good question, and one not many people ask, so no big surprise no one knows. Heck, I had to look it up a bit myself just to make sure I got it right.

With analog, the electrical signal literally is an analog of the original sound. Air compresses a microphone membrane, which moves a magnet, generating an electric current in a wire. The exact process is reversed at the other end. Digital is the same, only different: instead of being an analog of the entire vibration, digital breaks it down into discrete samples of the waveform.

What happens when playing this back is that each string of bits corresponds to a particular voltage level. A whole bunch of them describes a stair-step of changing voltages. Exactly how this works is diagrammed out here: https://sciencing.com/analog-digital-converter-work-4968312.html

This gets very mathy very fast, which is why nobody talks about it, people being mathlexic and all. Best way to think of it: each word represents the voltage measured at a certain point on the signal waveform. On playback the process is reversed, with each word generating the corresponding voltage. This all happens very fast, and the result is something some people call music.
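If it helps to see it, here's a toy sketch of my own (made-up numbers, not any real DAC's code) of that idea: each stored word maps to a voltage level that is held until the next word arrives, which is where the stair-step picture comes from.

```
# Toy sketch: map signed 16-bit sample words to output voltage levels.
# Each level is held until the next sample arrives - the "stair-step".
FULL_SCALE_VOLTS = 2.0   # assumed peak output of this imaginary DAC

def word_to_voltage(word: int) -> float:
    """Convert a signed 16-bit sample word to a voltage level."""
    return (word / 32768) * FULL_SCALE_VOLTS

samples = [0, 12000, 23000, 30000, 23000, 12000, 0, -12000]  # made-up data
for word in samples:
    print(f"{word:6d} -> {word_to_voltage(word):+.3f} V")
```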

Crappy cell phone and computer DACs have more of the stair-step distortion. Really good expensive DACs have a lot more quality parts and do a lot more processing trying to smooth the signal out by interpolating values in between the stair steps. It never really works, but like I said, it's good enough for some people to call it music.

What happened to the ones and zeros, or the on and offs?

I mean digital, digital.

What we hear is analog, right? How it's transmitted, received, and then reproduced is the difference. In the analog world there is no conversion, just amplification and transmission.

In the digital world there is a program running on a processor to convert the 1s and 0s. I thought they were RISC-V or ARM based too... Cirrus chips are RISC based; they are better known for numeric processing, or as a math co-processor. Mac chips were RISC based.

Intel and AMD chips use an extended CISC architecture. The 8086 design is still close under the hood in the real world...

R-2R ladder and combination or hybrid DAC tech seems to be the direction it's all going, to me... No single DAC tech seems to be able to do it all... Multiple types of DAC chips in the same enclosure are closing the gap between analog and digital... I'm just amazed at the progress and SQ refinement over the last 10 years. Look at STL

The problem is that RISC-V is open source; it's NOT proprietary. The competition would be a government (like China) vs. a company like Intel or AMD. Who has the motivation, and what's the reason for the tech advancement?

DACs are used in stealth tech, sonar, and radar. It's a real-world military device; I'm sure that is what it was developed for.

Open code... I don't think it's gonna fly over the long term or gain a public market niche. The military has the resources; the audiophile world doesn't.

Regards
Pick your so-called expert carefully… a moving-magnet microphone would suck… Care to study things again, Albert?

Sampling is just the beginning. 
Funny how some analog zealots have not so much trouble listening to digital files run back thru lossy homogenizer tape decks… to wit Famous Blue Raincoat. 
If the OP means electronic music or sounds, then I suggest looking up the history of digital synthesizers and FM synthesis.

At its most basic, sound (and music) can be modeled as the sum of sine waves of different amplitudes and frequencies. This is the most important concept to understand and underpins all of digital audio. Here are some links that you’ll hopefully find useful:

https://www.compadre.org/osp/EJSS/4487/272.htm

https://gizmodo.com/digital-music-couldnt-exist-without-the-fourier-transfo-1699155287
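If a little code makes that concrete, here is a sketch of my own (not taken from the linked pages): build a tone by summing a few harmonics of 220 Hz, plus a basic two-operator FM tone of the kind those digital synths use. All the frequencies and amplitudes are made up for illustration.

```
# Sound as a sum of sine waves: add the first few harmonics of 220 Hz with
# falling amplitudes, plus a simple two-operator FM tone for comparison.
import math

SAMPLE_RATE = 44100
FUNDAMENTAL_HZ = 220
HARMONICS = [(1, 1.0), (2, 0.5), (3, 0.33), (4, 0.25)]  # (multiple, amplitude)

def additive_tone(t: float) -> float:
    """Sum of sine waves evaluated at time t (in seconds)."""
    return sum(a * math.sin(2 * math.pi * FUNDAMENTAL_HZ * k * t)
               for k, a in HARMONICS)

def fm_tone(t: float, carrier_hz=440.0, mod_hz=220.0, index=2.0) -> float:
    """Two-operator FM: the modulator wiggles the carrier's phase."""
    return math.sin(2 * math.pi * carrier_hz * t
                    + index * math.sin(2 * math.pi * mod_hz * t))

samples = [additive_tone(n / SAMPLE_RATE) for n in range(256)]
print(samples[:4])
```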
Funny how some analog zealots have not so much trouble listening to digital files run back thru lossy homogenizer tape decks… to wit Famous Blue Raincoat.


https://youtu.be/0AHBw7wItpI?t=24
Is there something fundamentally wrong with Famous Blue Raincoat? The writer has clearly been agitated by a homogenizer.
millercarbon
... Really good expensive DACs have a lot more quality parts and do a lot more processing trying to smooth the signal out by interpolating values in between the stair steps ...
That is completely mistaken, but it's a common misconception. The only interpolation that is part of the digital audio standard is when it is used for error correction. Because the data on a CD is encoded redundantly and interleaved (and cached in streaming), error correction is actually quite rare.

Within the bandwidth of the system, the sampling theorem (built on Fourier analysis) shows us that digital audio can perfectly describe the analog waveform. If you have doubts, watch this. (Kindly note that I'm not claiming digital audio is "Perfect Sound Forever." But if we want the best sound from digital, it's helpful to understand how it works.)

It's odd how many audiophiles refuse to accept this math, which is conceptually simple even if the details are not. Consider that the Fourier mechanism also explains perfectly how the squiggles on an LP can represent a full orchestra.
The only interpolation that is part of the digital audio standard is when it is used for error correction.

There’s no interpolation happening when performing error correction since there’s no ’guessing’. The proper bits are either recovered or the data stream is so corrupted that some errors remain. In the latter case, the player may mute the output or cease playback.

Interpolation is required whenever you increase the sampling rate in digital audio. Most DACs these days, whether some form of multibit resistor ladder or the sigma delta variety, increase the sampling rate to net several benefits such as reducing or ’shaping’ quantization noise or relaxing the design requirements for the analog reconstruction filter.

Here is a great tutorial on oversampling / upsampling and interpolation from Analog Devices:

https://www.analog.com/media/ru/training-seminars/tutorials/MT-017.pdf

dspGuru also has some great information on interpolation:

https://dspguru.com/dsp/faqs/multirate/interpolation/
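For anyone who wants to see the bones of it, here is a rough sketch of my own (a drastic simplification of what those references cover): 4x oversampling by zero-stuffing followed by a windowed-sinc low-pass filter that interpolates the new points. Real interpolation filters are far longer and designed to much tighter specs.

```
# Simplified 4x oversampling: insert zeros between samples, then low-pass
# filter to interpolate the new points. Only meant to show the idea.
import math

L = 4  # oversampling factor

def upsample(samples, factor=L):
    """Zero-stuff: put factor-1 zeros after every input sample."""
    out = []
    for s in samples:
        out.append(s)
        out.extend([0.0] * (factor - 1))
    return out

def lowpass_taps(num_taps=33, cutoff=1 / (2 * L)):
    """Windowed-sinc low-pass FIR; cutoff is a fraction of the new sample rate."""
    mid = (num_taps - 1) / 2
    taps = []
    for n in range(num_taps):
        x = n - mid
        h = 2 * cutoff if x == 0 else math.sin(2 * math.pi * cutoff * x) / (math.pi * x)
        window = 0.54 - 0.46 * math.cos(2 * math.pi * n / (num_taps - 1))  # Hamming
        taps.append(h * window)
    return taps

def fir_filter(signal, taps):
    """Plain FIR convolution, zero-padded at the edges."""
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, t in enumerate(taps):
            if 0 <= i - j < len(signal):
                acc += t * signal[i - j]
        out.append(acc)
    return out

original = [math.sin(2 * math.pi * 0.05 * n) for n in range(32)]  # slow test tone
oversampled = [L * y for y in fir_filter(upsample(original), lowpass_taps())]
```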

So sound is a vibration created by things rubbing or banging together etc. If stuff isn't interacting with something to create a sound, how are sounds created from nothing, i.e. in the digital world? Music on an iPod or a beep from a computer? I have always wondered what the noises are that come from computers when they are 'thinking' or working - wtf's going on there?


issue got solved

yage
There’s no interpolation happening when performing error correction since there’s no ’guessing’
You are mistaken. Interpolation is "guessing" by definition (in this context) and interpolation is part of the CD standard.
The proper bits are either recovered or the data stream is so corrupted that some errors remain. In the latter case, the player may mute the output or cease playback.
That’s how a data disc works because it has to be bit perfect. But it’s not how digital audio works at all. The Reed-Solomon error correction code is part of the CD standard and, as MC noted, it’s all part of the fun math that makes digital audio:
" ... whereas subsequent constructions interpret the message as the values of the polynomial at the first k points a 1 , … , a k {displaystyle a_{1},dots ,a_{k}} and obtain the polynomial p by interpolating these values with a polynomial of degree less than k ..."
+1 cleeds! Too many still believe that more expensive DACs do a better job of "smoothing out" the steps between digital samples. Nonsense! My $99 DAC outputs rival those of any four-figure DAC (low noise, low distortion, accurate LSB ...).
You are mistaken. Interpolation is "guessing" by definition (in this context) and interpolation is part of the CD standard.
@yage was talking about 'error correction' and indeed that is done without guessing. There is a stage beyond error correction where the data's too corrupted to do error correction and that's normally termed 'error concealment'. It's at that point that interpolation - which indeed is a kind of guessing in this context - is used. Muting is the final stage, where the data's too far gone even for interpolation/concealment.
@abraxalito

Thanks for that clarification - good to know. It's a linear interpolation, so very different from the interpolating filters used in the DACs themselves. The reference I found is at this link - https://www.pearl-hifi.com/06_Lit_Archive/02_PEARL_Arch/Vol_16/Sec_53/Philips_Tech_Review/PTechRevie...

Of course, all this only applies to compact disc digital audio. In case anyone is interested, I found an overview of the error correction approaches in other disc formats - https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.462.3524&rep=rep1&type=pdf
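If it helps, a trivial sketch of my own (not the Philips algorithm) of that kind of linear concealment: a sample flagged as unrecoverable is replaced by the average of its neighbors.

```
# Minimal illustration of error concealment by linear interpolation: a sample
# flagged as unrecoverable is replaced by the average of its two neighbors.
# (Real players have more elaborate strategies and mute if too much is lost.)
def conceal(samples, bad_indices):
    out = list(samples)
    for i in bad_indices:
        left = out[i - 1] if i > 0 else 0
        right = out[i + 1] if i + 1 < len(out) else 0
        out[i] = (left + right) // 2
    return out

block = [120, 260, 390, 0, 610, 700]   # the sample at index 3 was uncorrectable
print(conceal(block, [3]))             # -> [120, 260, 390, 500, 610, 700]
```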
abraxalito
There is a stage beyond error correction where the data's too corrupted to do error correction and that's normally termed 'error concealment'. It's at that point that interpolation - which indeed is a kind of guessing in this context - is used.
It’s all part of error correction, all part of the Reed-Solomon code, and I actually quoted the exact math that applies.

Then there are those who insist that there is no interpolation, or those who insist digital audio results in stairstep signals. That’s why I usually post links to the facts - there is just so much misinformation about digital because it’s not intuitive.

But as I noted, interpolation in digital audio is actually quite rare. That’s how well the error correction schemes work.
This is like if the OP asked how the analog signal turns into sound. The answer is: the analog signal goes through a voice coil, creating a magnetic field that pushes the coil in and out, which makes the cone go in and out, which moves air, and that makes sound.

But instead of that we get lots of stuff about ports and crossovers and amplifiers. All very good to know. If only it had anything to do with the question.....   


It’s all part of error correction, all part of the Reed-Solomon code, and I actually quoted the exact math that applies.
If you're claiming that interpolation is all part of R-S coding that wouldn't be correct. Interpolation is specific to audio and R-S codes get used in plenty of applications beyond audio where interpolation would be inappropriate.
abraxalito
If you're claiming that interpolation is all part of R-S coding that wouldn't be correct. Interpolation is specific to audio and R-S codes get used in plenty of application ...
I don't know how it could possibly have been more clear that we are talking about digital audio here.