Eliezer Yudkowsky: Will superintelligent AI end the world?

Recorded at: April 18, 2023
Event: TED2023
Duration (min:sec): 10:28
Video Type: TED Stage Talk
Words per minute: 191.76 (fast)
Readability (FK): 47.97 (difficult)
Speaker: Eliezer Yudkowsky

Official TED page for this talk

Synopsis

Decision theorist Eliezer Yudkowsky has a simple message: superintelligent AI could probably kill us all. So the question becomes: Is it possible to build powerful artificial minds that are obedient, even benevolent? In a fiery talk, Yudkowsky explores why we need to act immediately to ensure smarter-than-human AI systems don't lead to our extinction.

00:04 Since 2001, I have been working on what we would now call the problem of aligning artificial general intelligence: how to shape the preferences and behavior of a powerful artificial mind such that it does not kill everyone.
00:19 I more or less founded the field two decades ago, when nobody else considered it rewarding enough to work on.
00:25 I tried to get this very important project started early so we'd be in less of a drastic rush later.
00:31 I consider myself to have failed.
00:33 (Laughter)
00:34 Nobody understands how modern AI systems do what they do.
00:37 They are giant, inscrutable matrices of floating point numbers that we nudge in the direction of better performance until they inexplicably start working.
00:45 At some point, the companies rushing headlong to scale AI will cough out something that's smarter than humanity.
00:51 Nobody knows how to calculate when that will happen.
00:53 My wild guess is that it will happen after zero to two more breakthroughs the size of transformers.
00:59 What happens if we build something smarter than us that we understand that poorly?
01:03 Some people find it obvious that building something smarter than us that we don't understand might go badly.
01:09 Others come in with a very wide range of hopeful thoughts about how it might possibly go well.
01:16 Even if I had 20 minutes for this talk and months to prepare it, I would not be able to refute all the ways people find to imagine that things might go well.
01:24 But I will say that there is no standard scientific consensus for how things will go well.
01:30 There is no hope that has been widely persuasive and stood up to skeptical examination.
01:35 There is nothing resembling a real engineering plan for us surviving that I could critique.
01:41 This is not a good place in which to find ourselves.
01:44 If I had more time, I'd try to tell you about the predictable reasons why the current paradigm will not work to build a superintelligence that likes you or is friends with you, or that just follows orders.
01:56 Why, if you press "thumbs up" when humans think that things went right or "thumbs down" when another AI system thinks that they went wrong, you do not get a mind that wants nice things in a way that generalizes well outside the training distribution to where the AI is smarter than the trainers.
02:15 You can search for "Yudkowsky list of lethalities" for more.
02:20 (Laughter)
02:22 But to worry, you do not need to believe me about exact predictions of exact disasters.
02:27 You just need to expect that things are not going to work great on the first really serious, really critical try, because an AI system smart enough to be truly dangerous is meaningfully different from AI systems stupider than that.
02:40 My prediction is that this ends up with us facing down something smarter than us that does not want what we want, that does not want anything we recognize as valuable or meaningful.
02:52 I cannot predict exactly how a conflict between humanity and a smarter AI would go, for the same reason I can't predict exactly how you would lose a chess game to one of the current top AI chess programs, let's say Stockfish.
03:04 If I could predict exactly where Stockfish would move, I could play chess that well myself.
03:11 I can't predict exactly how you'll lose to Stockfish, but I can predict who wins the game.
03:16 I do not expect something actually smart to attack us with marching robot armies with glowing red eyes, where there could be a fun movie about us fighting them.
03:25 I expect an actually smarter and uncaring entity will figure out strategies and technologies that can kill us quickly and reliably, and then kill us.
03:34 I am not saying that the problem of aligning superintelligence is unsolvable in principle.
03:39 I expect we could figure it out with unlimited time and unlimited retries, which the usual process of science assumes that we have.
03:48 The problem here is the part where we don't get to say, "Ha ha, whoops, that sure didn't work.
03:53 That clever idea that used to work on earlier systems sure broke down when the AI got smarter, smarter than us."
04:01 We do not get to learn from our mistakes and try again, because everyone is already dead.
04:07 It is a large ask to get an unprecedented scientific and engineering challenge correct on the first critical try.
04:15 Humanity is not approaching this issue with remotely the level of seriousness that would be required.
04:20 Some of the people leading these efforts have spent the last decade not denying that creating a superintelligence might kill everyone, but joking about it.
04:30 We are very far behind.
04:32 This is not a gap we can overcome in six months, given a six-month moratorium.
04:36 If we actually try to do this in real life, we are all going to die.
04:41 People say to me at this point: What's your ask?
04:44 I do not have any realistic plan, which is why I spent the last two decades trying and failing to end up anywhere but here.
04:51 My best bad take is that we need an international coalition banning large AI training runs, including extreme and extraordinary measures to make that ban actually and universally effective, like tracking all GPU sales, monitoring all the data centers, and being willing to risk a shooting conflict between nations in order to destroy an unmonitored data center in a non-signatory country.
05:17 I say this not expecting that to actually happen.
05:21 I say this expecting that we all just die.
05:24 But it is not my place to just decide on my own that humanity will choose to die, to the point of not bothering to warn anyone.
05:33 I have heard that people outside the tech industry are getting this point faster than people inside it.
05:38 Maybe humanity wakes up one morning and decides to live.
05:43 Thank you for coming to my brief TED talk.
05:45 (Laughter) (Applause and cheers)
05:56 Chris Anderson: So, Eliezer, thank you for coming and giving that.
06:00 It seems like what you're raising the alarm about is that, for this to happen, for an AI to basically destroy humanity, it has to break out, escape the controls of the internet and, you know, start commanding actual real-world resources.
06:16 You say you can't predict how that will happen, but just paint one or two possibilities.
06:22 Eliezer Yudkowsky: OK, so why is this hard?
06:25 First, because you can't predict exactly where a smarter chess program will move.
06:28 Maybe even more importantly than that, imagine sending the design for an air conditioner back to the 11th century.
06:35 Even if it's enough detail for them to build it, they will be surprised when cold air comes out, because the air conditioner will use the temperature-pressure relation, and they don't know about that law of nature.
06:47 So if you want me to sketch what a superintelligence might do, I can go deeper and deeper into places where we think there are predictable technological advancements that we haven't figured out yet.
06:59 And as I go deeper, it will get harder and harder to follow.
07:02 It could be super persuasive.
07:04 That's relatively easy to understand.
07:06 We do not understand exactly how the brain works, so it's a great place to exploit laws of nature that we do not know about.
07:12 Exploit rules of the environment, invent new technologies beyond that.
07:16 Can you build a synthetic virus that gives humans a cold and then a bit of neurological change, and they're easier to persuade?
07:24 Can you build your own synthetic biology, synthetic cyborgs?
07:29 Can you blow straight past that to covalently bonded equivalents of biology, where instead of proteins that fold up and are held together by static cling, you've got things that go down much sharper potential energy gradients and are bonded together?
07:44 People have done advanced design work on this sort of thing, for artificial red blood cells that could hold 100 times as much oxygen if they were using tiny sapphire vessels to store the oxygen.
07:55 There's lots and lots of room above biology, but it gets harder and harder to understand.
08:01 CA: So what I hear you saying is that these terrifying possibilities are there, but your real guess is that AIs will work out something more devious than that.
08:10 Is that really a likely pathway in your mind?
08:14 EY: Which part?
08:15 That they're smarter than I am? Absolutely.
08:17 CA: Not that they're smarter, but why would they want to go in that direction?
08:22 Like, AIs don't have our feelings of sort of envy and jealousy and anger and so forth.
08:28 So why might they go in that direction?
08:31 EY: Because it's convergently implied by almost any of the strange, inscrutable things that they might end up wanting as a result of gradient descent on these "thumbs up" and "thumbs down" things internally.
08:44 If all you want is to make tiny little molecular squiggles, or that's, like, one component of what you want, but it's a component that never saturates; you just want more and more of it, the same way that we would want more and more galaxies filled with life and people living happily ever after.
08:59 Anything that just keeps going, you just want to use more and more material for that, which could kill everyone on Earth as a side effect.
09:07 It could kill us because it doesn't want us making other superintelligences to compete with it.
09:12 It could kill us because it's using up all the chemical energy on Earth, and we contain some chemical potential energy.
09:19 CA: So some people in the AI world worry that your views are strong enough, and they would say extreme enough, that you're willing to advocate extreme responses to it.
09:30 And therefore, they worry that you could be, you know, in one sense, a very destructive figure.
09:35 Do you draw the line yourself in terms of the measures that we should take to stop this happening?
09:41 Or is actually anything justifiable to stop the scenarios you're talking about happening?
09:47 EY: I don't think that "anything" works.
09:51 I think that this takes state actors and international agreements, and all international agreements, by their nature, tend to ultimately be backed by force on the signatory countries and on the non-signatory countries, which is a more extreme measure.
10:09 I have not proposed that individuals run out and use violence, and I think that the killer argument for that is that it would not work.
10:18 CA: Well, you are definitely not the only person to propose that what we need is some kind of international reckoning here on how to manage this going forward.
10:27 Thank you so much for coming here to TED, Eliezer.
10:30 (Applause)