Eliezer Yudkowsky: AI will kill everyone | Lex Fridman Podcast Clips

Apr 12, 2024
Can you summarize the main points in the blog post "AGI Ruin: A List of Lethalities"? It's a set of thoughts you have about the reasons why AI is likely to kill us all.

I guess I could, but I would instead offer to say: drop the empathy with me for a second. I bet you don't believe that. Why don't you tell me why you think AGI isn't going to kill everyone, and then I can try to describe how my theoretical perspective differs from yours.

So that means I have to, uh, the word you don't like, steelman the perspective that it's not going to kill us? I think it's a matter of probabilities. Maybe I'm wrong.

What do you think? Forget the debate and the dueling for a moment: what do you actually believe? What are the odds?
I think this is honestly hard for me to think about very carefully. I tend to think in terms of the number of trajectories: I don't know the probability of any particular trajectory, but looking at all the possible trajectories, I tend to think there are more that lead to a positive outcome than a negative one. That said, at least some of the negative ones are the ones that lead to the destruction of the human species and its replacement by nothing interesting, nothing worthwhile even from a very cosmopolitan perspective on what counts as worthwhile. Yes, so both are interesting for me to investigate: humans being replaced by interesting AI systems, and humans being replaced by uninteresting AI systems. Both are a bit scary, but the worst one is the paperclip maximizer, something totally boring. But to me there is the positive, and we can talk about trying to lay out what the positive trajectories look like.
I'd love to hear your intuition about the negative. At the core of your belief that, maybe you can correct me, AI is going to kill us all, is that the alignment problem is really difficult?

I mean, in the form we usually face it in science: if you make a mistake, you run the experiment, it shows a result different from what you expected, and you say, "oops." Then you try a different theory; that doesn't work either, and you say, "oops." At the end of this process, which can take decades or, you know, sometimes go faster, you now have some idea of what you're doing.
AI itself went through this long process where people thought it was going to be easier than it was. There's a famous statement that I'm somewhat inclined to pull out my phone and read exactly.

You can, by the way.

Okay. "We propose that a two-month, ten-man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."
"An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer."

And in that report they summarize some of the major subfields of artificial intelligence that are still being worked on today. Similarly, there's the story, which I'm not sure at this point is apocryphal or not, of the grad student who was assigned to solve computer vision over the summer.

I mean, computer vision in particular is very interesting: how little we respect the complexity of vision.

So, sixty years later, we're making progress on a bunch of those, thankfully not yet on "improve themselves." But it took a long time, and all the things people tried at first, bright-eyed and hopeful, did not work the first time they tried them, or the second, or the third, or the tenth, or twenty years later. The researchers became old, graying, cynical veterans who would tell the next generation of bright-eyed graduate students that artificial intelligence is harder than you think. And if alignment plays out the same way, the problem is that we do not get fifty years to try and try again, observe that we were wrong, come up with a different theory, and realize that the whole thing is going to be much harder than it looked at the start. Because the first time you fail at aligning something much smarter than you are, you die, and you do not get to try again. If every time we built a misaligned superintelligence and it killed us all, we got to observe how it had killed us, not immediately know why but come up with theories, come up with a theory of how to do it differently, try again, build another superintelligence, have that one kill everyone too, say "oh well, I guess that didn't work either," try again, and become grizzled cynics who tell the young, bright-eyed researchers that it's not that easy, then in twenty or fifty years I think we would eventually crack it.

In other words, I don't think alignment is fundamentally more difficult than artificial intelligence was in the first place. But if we had needed to get artificial intelligence right on the first try or die, we would all definitely be dead by now. That is a harder, more lethal form of the problem. It's as if those people in 1956 had needed to correctly guess how hard AI was and correctly theorize how to do it on the first try, or else everyone dies and nobody gets to do any more science: then everyone would be dead and we wouldn't get to do any more science. That is the difficulty.
