A.I. music


Possibly of interest: Lanier argues that the current rush to advance generative AI technology could be "spiritually, politically, and economically" corrosive. By effectively removing people, like musicians, from the algorithms and tech that create new content, elements of society that were once connections between people are turned into "objects" that become less interesting and meaningful, Lanier explained.

"As soon as you have the algorithms taking music from musicians, mashing it up into new music, and then not paying the musicians, gradually you start to undermine the economy because what happens to musicians now happens to everybody later," Lanier said.

He noted that, while this year has been the "year of AI," next year the world is going to be "flooded, flooded with AI-generated music."


https://www.businessinsider.com/microsoft-jaron-lanier-ai-advancing-without-human-dignity-undermines-everything-2023-10

hilde45

Showing 7 responses by sfgak

@mapman 

Or what if Bach joined Led Zeppelin?   Wouldn't that be interesting!

 

(Did this with fotor and powerpoint)

If you listen to Rick Beato talk about why today’s music is so boring, and understand a little bit about how generative AI works, you will see that generative AI is pretty much made for today’s music.

Copy (with minor tweaks) or just sample. Lather, rinse, repeat.
 

It’s very likely AI will define the next trend in music, rather than just quickly emulate it, since most people are already conditioned to expect a lack of creativity. 

@parker65310 @wsrrsw

I’ve been in AI since the 1980’s, and I’ve had the good fortune to work at some of the world’s best academic and commercial AI labs. I’ve seen a lot of where the field has gone in the last 40-some-odd years.

When the Internet (actually then ARPANet, NSFNet, uucp, and BBS’s -- it wasn’t a unified "Internet" until 1993) first came out, we thought infinite connectivity would bring humanity together. Instead, it has created fake news, factionalized everyone it has touched, and become a haven for hateful and violent rhetoric.

In the earlier days of AI, we (mostly) thought of the good that our research would bring. There were always the Skynet scenarios, though, too.

The computing power we have today is staggering: a single iPhone is literally billions of times more powerful than, say, Xerox’s or Schlumberger’s entire research labs back in the 1980’s. It boggles the mind in the abstract, yet I lived through all that and it didn’t seem that strange. It’s weird to me that we spend much of that compute power in an endless arms race (cryptography, spying, bitcoin), and so much less on the creative endeavors that we envisioned in the early days of AI (and BTW @mahgister, at MIT’s AI Lab in the 1980’s we had a Bosendorfer grand piano outfitted with special microsensors as part of a project to detect minute changes in timing/velocity/force of a pianist’s fingers, in an effort to understand what separated good music from great).

Now what is clear to me, with large language models and generative AI, is that the amount of AI-generated output will soon dwarf the human output on the Internet. When that happens, AIs will no longer be responding to what humans do or say, but rather 95%+ to what other AIs do or say. If you think disinformation on the Internet is a problem today, boy, you ain’t seen nothin’ yet... The AI’s reality and our reality will not overlap all that much in relatively short order. Human opinions will be irrelevant; we will be spectators.

I use AI and language models to help people in healthcare, and it can do amazing things. But the history of the Internet and computing says that the bad and/or careless people will dominate in the end, and in this case more than any other to date, the genie is out of the bottle. The people who can make money or influence elections won’t care how dangerous AI can become if not properly nurtured in the early stages. I fear Geoff Hinton is right to fear AI, but I think where he and I differ is that I think we are the creators not of our destruction, but rather of our own irrelevance (having created something that, while not yet mature, can evolve at rates we will not be able to fathom).

On a less pessimistic note, @snilf -- curious: are you more in the Dan Dennett camp, John Searle camp, or something else? I’ll look forward to reading your paper at some point.

@hilde45

Be nice if we cured cancer with A.I., no?

That’s in part what I’m working on. Caveat: for reasons too complex to go into here, my belief is that we will never completely cure cancer, because the same mechanisms that drive and optimize evolution (a base mutation rate driven by the size of DNA and external influences like cosmic radiation) also drive the mutations that cause cancer; you can’t have one without the other. But we will cure specific instances of cancer in specific individuals. Over and over again. In other words, it becomes a long, slow game of whack-a-mole, rather than a death sentence.

AI comes into play in a lot of areas, including drug discovery. But where I’m using AI is in clinical care -- helping find and organize a medical record that is distributed among many providers, make sense of it, and provide rational options for treatment to the physicians. It’s an "augmented intelligence" approach, rather than a "get out of the way and let AI drive" approach; the human caregivers are the ultimate decision makers, and the intelligent system helps the human be more productive, comprehensive, and accurate.

We are focusing on cancer and rare diseases -- places where a single patient can have hundreds, sometimes thousands, of health care encounters, and the overall record of the patient is overwhelming for any one person to deal with. When you couple that with the (sad) fact that Medicare only reimburses for 15 minutes total for both prep and a patient visit in any encounter, if you can condense that prep time from 9 minutes down to 3, you have doubled the amount of time the physician gets to actually spend with the patient. And provide better options for treatment.
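That back-of-the-envelope arithmetic is easy to check. A minimal sketch, using the 15/9/3-minute figures from this post (not from any official Medicare fee schedule):

```python
# Sanity check of the prep-time arithmetic above. The minute figures are
# the ones quoted in the post, not official reimbursement numbers.
TOTAL_MIN = 15                          # reimbursed minutes per encounter
prep_before, prep_after = 9, 3          # prep time before/after condensing
visit_before = TOTAL_MIN - prep_before  # 6 minutes with the patient
visit_after = TOTAL_MIN - prep_after    # 12 minutes with the patient
print(visit_after / visit_before)       # 2.0 -- face time doubled
```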

Most of my career has dealt with either engineering tools or various aspects of computational finance and transaction processing, helping people make more money. This current work is so much more karmically rewarding...

+1 @puptent

As T.S. Eliot said, "For us, there is only the trying. The rest is not our business."

@hilde45 that’s what I said about streaming.

Ha!

@mahgister Well articulated, and I share your pessimism... also, like you, optimistic in the long run, should we survive the short-term consequences of our actions.

Read up on the Fermi Paradox and the Great Filter... this could be one of the deciding/defining moments for us as a species...

@falconquest 

I will argue until I'm blue in the face that AI is not a source of creativity equal to that of the consciousness of humans.

Another perspective:

We think of intelligence as an individual thing. But another way to look at it is as a collective thing. We are smart (so we say!). We do complex biology and math and manufacturing to kill a bunch of bacteria with an antibiotic that we designed. But, while individual bacteria are very, very stupid (unthinking, most would agree), there are billions and billions of them. And they reproduce every second or so. And they mutate. Most of them die because of this new antibiotic they've been exposed to, but a few of them mutate to be resistant. And soon enough, there are billions of the new, antibiotic-resistant bacteria. These bacteria have collectively said, "F- you! We outsmarted your stupid antibiotic..." So collectively, they are smart. You can view collective intelligence as the intelligence of individuals times the number of individuals times the reproduction (read: evolution) rate. In the middle of the spectrum between bacteria and humans are ants and bees.

Now think of AIs. Are they creative? Well, first, how creative are we? For 1000 years of western music, we had only what we'd call the "white notes" on the piano -- the 7 "natural" notes. Bb was discovered/invented in medieval times. It took almost another half millennium to figure out the rest of the black notes on the keyboard (i.e., all the key signatures that we recognize today). Looking at the population of Europe in the year 1000 (36 million) and the year 1500 (61 million), that equates to about 25 billion people-years to develop the chromatic scale and related key signatures. Is that "creativity"?
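For what it's worth, the people-years figure checks out under a simple linear-growth assumption between those two population data points:

```python
# Rough reconstruction of the "about 25 billion people-years" estimate,
# assuming Europe's population grew linearly between the two data points.
pop_1000, pop_1500 = 36e6, 61e6   # population in the years 1000 and 1500
years = 500
people_years = (pop_1000 + pop_1500) / 2 * years
print(people_years)               # 24250000000.0 -- roughly 25 billion
```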

Look at what AIs can do now compared to ten years ago. A researcher was recently doing some prompt engineering on a large language model (LLM), and the LLM said to him, "Hey, it looks like you're trying to engineer my prompt..." Do I think it's "intelligent" right now? No. But in 30 years, AIs could plausibly be a billion times faster than they are today (that takes compute doubling roughly every year; classic two-year Moore's Law doubling alone gives "only" about a 30,000x speedup). A billion times today's abilities likely will be emulating consciousness, if not actually being functionally conscious. Thirty years later, they will be yet another billion times faster. A quintillion times faster than today. It's unimaginable.
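The compounding behind claims like "a billion times faster in 30 years" is easy to sanity-check; the doubling period is the assumption that matters, and the periods below are assumptions, not measured constants:

```python
# Exponential-growth sketch: multiplicative speedup after a span of years
# given one doubling per assumed period.
def growth(years, doubling_years):
    """Speedup factor after `years`, doubling every `doubling_years`."""
    return 2 ** (years / doubling_years)

print(growth(30, 2.0))   # 32768.0 -- classic two-year Moore's Law doubling
print(growth(30, 1.0))   # ~1.07e9 -- a billion-fold needs yearly doubling
```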

And millions or billions of AIs (since copying them is as cheap as multiplying bacteria), each a quintillion times more powerful than today. Much in the same way that we can't fathom evolution over a billion years in anything but the most abstract terms (how do you get from a paramecium to a human?!?), we just can't fathom this computing change. There's no visceral reaction to such numbers; humans are not built to understand those timescales or magnitudes.

But AIs will be able to do things we can't even imagine today. And that's in 60 years, well within a human lifespan. If it took us 500 years to invent the black keys on a keyboard, how long do you think it will take something with a quintillion times the "intelligence" of today's AIs to posit and test the successors to Einstein's theories?

AIs will be things as smart or smarter than us that can multiply as fast as bacteria. The best of both worlds.