AI and the future of music


Last night’s 60 Minutes featured a deep look at Google’s new AI program, Bard. Frightening, yet compelling.

It got me thinking: if their AI has already read everything on the internet and can create verse, stories, etc. in seconds… what could it do for music?

“Hey Bard, create a new Beatles-like song from the Rubber Soul era, but have Paul Rodgers and Jack Bruce singing.”

“Hey BARD, create a song that will melt the heart of my new girlfriend”.

 

Your ideas?


Interesting, thanks.

I have no doubt that A.I. will also be a great tool... positive and negative...

Like nuclear energy...

or virus research ...

The problem is our societal immaturity... It is too powerful for us right now...

More interesting to me,

is it repeatable?

Or is "the best" rendering a moving target?

https://forum.audiogon.com/posts/2582616
 

@mahgister 

Either you missed my point, or more likely I just didn’t articulate it well.

Music, as played by the interpretation of the artist and the inference of the listener through their “world view”, ethics and values, is not what I was trying to describe with respect to creating an “arrangement.”

Just the arrangement of the notes, in any combination, with any set timing pattern, can absolutely be mathematically derived. As such, given sufficient processing power, storage, and time, the entire universe of those arrangements can be calculated and stored, and the patterns used in copyright attempts, which is what I find objectionable.
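The enumeration described above can be sketched in a few lines. This is a toy model under my own simplifying assumptions (12 chromatic pitches, 4 note durations, no rests or dynamics), not anything the poster specified — it just shows that the arrangements are mechanically countable and generable, and how quickly the space explodes:

```python
import itertools

# Assumed simplification: 12 pitches x 4 durations = 48 possible note events.
PITCHES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
DURATIONS = ["whole", "half", "quarter", "eighth"]

def arrangement_count(length: int) -> int:
    """Number of distinct pitch+duration sequences of a given length."""
    return (len(PITCHES) * len(DURATIONS)) ** length

def enumerate_arrangements(length: int):
    """Exhaustively generate every arrangement of the given length."""
    events = list(itertools.product(PITCHES, DURATIONS))
    return itertools.product(events, repeat=length)

# A 2-note phrase already has 48^2 = 2304 arrangements.
print(arrangement_count(2))   # 2304
# A modest 16-note melody has 48^16 of them (roughly 8 x 10^26).
print(arrangement_count(16))
```

Even this stripped-down model makes the "store the entire universe of arrangements" scenario a question of astronomical scale rather than of principle — the math is trivial; the storage is not.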
 

I’ve been watching AI generated videos. They’re freaky and hilarious. The AI obviously does not really understand what a face or a hand is; it just calculates statistically what it thinks these things should look like. As Daniel Dennett says, you can sometimes have competence without comprehension. I’m not sure how far this can go before some real comprehension is required. How will we get machines to have "real" comprehension? I guess we’ll have to get a better idea of what that is before we can get the machines to do it.

From what I’ve been reading about animal studies, it seems there has to be an innate expectation of the world pre-written into the mind. They say rats are dreaming about the world before they ever open their eyes and go out and experience it. When they see it, it’s what they’re expecting, with the specific details filled in. So to "comprehend" the situation, the AI needs to be able to look at a picture or video and compare what’s contained in that data to what it expects the real world to be like. I get the impression that this is extremely complicated. I think those who want to reduce consciousness to a simple equation, or to a written statement that can be put on a T-shirt, are not being realistic. I'm supposing here that consciousness and comprehension of the world and self are closely related.

If it [AI] “creates” something pleasing: so what?

One thing I appreciate about music is that it came from another human person. They are communicating something to me.

Consider a world where people do not have actual friends; they have a robot that pleases them. They wind up "pleased," but do they wind up "human"?

I don’t think so. Not where I want to go.

Music is mathematics.

Music is analyzable as mathematics.

It’s also analyzable as physics. As emotion. As gesture. As language.

Reducing it to mathematics is one choice of how to deal with the phenomenon of music. But just one choice of many others.

And even if it is mathematics, someone has to do the analyzing, write the algorithms, etc.