Why this top AI guru thinks we might be in extinction level trouble | The InnerView

Mar 28, 2024
Connor Leahy is one of the world's leading artificial intelligence minds. He is a hacker who sees the rise of AI as an existential threat to humanity, and he dedicates his life to ensuring that its success does not spell our downfall. "There will be intelligent creatures on this planet that are not humans. This is not normal, and there will be no turning back. If we do not control them, then the future will belong to them, not us." Leahy is the CEO of Conjecture AI, a startup trying to understand how AI systems think, with the goal of aligning them with human values. In this interview he talks about why he believes the end is near and explains how he is trying to stop it.

Connor Leahy joins us now. He is the CEO of Conjecture and is in our London studio. It's good to see you there, it's good to have you on the show.
Connor, you're something of an AI guru, and you're also one of those voices saying we need to be very careful right now. A lot of people don't have the knowledge, the vocabulary or a deeper understanding of why they should care; they just feel a sort of sense of doom, but they can't map it. So maybe you can help us down that path: why should we be concerned about AGI? And tell me the difference between AGI and what is widely perceived as AI at the moment.

So I'll answer the second question first, just to clarify some definitions.
The truth is that there really is no true definition of the word AGI, and people use it to mean all kinds of different things. When I talk about AGI, what I usually mean is artificial intelligence systems, or computer systems, that are more capable than humans at all the tasks they could perform. That implies, you know, any scientific task: programming, remote work, science, business, politics, anything. These are systems that do not currently exist, but people are actively trying to build them. There are many people working on building such systems, and many experts believe they are close. As for why these systems could be a problem?
Well, I actually think a lot of people have the right intuition here. The intuition is simply: if you build something that is more competent than you, that is smarter than you and everyone you know and all the people in the world, that is better at business, politics, manipulation, deception, science, weapons development, everything, and you don't control it, which we currently don't know how to do, why would you expect that to turn out well?

Yes, it reminds me a little of the debate about whether we should look for life in the universe beyond our solar system. Stephen Hawking said be careful: look at the history of the world, every time a stronger power, a more competent power, is invited in, they might come and destroy you. But then the tradeoff is that you are mapping human behaviour, human desires, passions and needs into this, which is natural and fair to do, because humans created it. Humans created the parameters for it.

So it's actually worse than that. It's really important to understand that when we talk about AI, it's easy to imagine that it's software. Generally, software is written by a professional, a programmer; they write code that tells the computer what to do step by step.
That's not how AI works. AI is more organic; it is grown. You use these big supercomputers to take a bunch of data and grow a program that can solve the problems in that data. Now, this program doesn't look like something written by humans. It's not code, it's not lines of instructions; it's more like a huge stack of billions and billions of numbers, and we know that if we run all these numbers, they can do really amazing things, but no one knows why. So it's much more like dealing with something biological. If you look at a bacterium, bacteria can do some crazy things and we don't really know why, and that's what our AIs are like. So the question is less whether humans knowingly impart emotions into these systems, because we don't know how to do that. It's more: if you build systems, if you grow systems, if you grow bacteria that are designed to solve problems, you know, to solve games, to make money or whatever, what kind of things will grow? By default, you're going to grow things that are good at solving problems, gaining power, fooling people, you know, building things, and so on, because this is what we want.
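To make the "grown, not written" point concrete, here is a minimal sketch in Python. It is only an illustration on assumed toy data, not code from Conjecture or anyone in the interview: a tiny model is "grown" by repeatedly nudging randomly initialized numbers until they fit the data, and the finished program is nothing but those numbers.

```python
# A minimal sketch of "growing" a program instead of writing one (toy example).
import numpy as np

rng = np.random.default_rng(0)

# Toy data: examples of y = 2x + 1, without that rule ever being written down.
x = rng.uniform(-1, 1, size=(256, 1))
y = 2 * x + 1

# The "program" is just arrays of numbers (weights), initialized at random.
w1, b1 = rng.normal(size=(1, 16)), np.zeros(16)
w2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

lr = 0.1
for step in range(2000):
    # Forward pass through a one-hidden-layer network.
    h = np.tanh(x @ w1 + b1)
    pred = h @ w2 + b2
    err = pred - y

    # Backward pass: compute gradients and nudge the numbers slightly.
    grad_w2 = h.T @ err / len(x)
    grad_b2 = err.mean(axis=0)
    dh = (err @ w2.T) * (1 - h ** 2)
    grad_w1 = x.T @ dh / len(x)
    grad_b1 = dh.mean(axis=0)
    w1 -= lr * grad_w1; b1 -= lr * grad_b1
    w2 -= lr * grad_w2; b2 -= lr * grad_b2

# The finished "program" is just these opaque numbers, not readable instructions.
print("final mean squared error:", float((err ** 2).mean()))
print("a few of the learned numbers:", w1.ravel()[:5])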
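Nothing in that printout reads like instructions; to find out what such a system will do, you largely have to run it and observe, which is the problem Leahy describes, at a vastly smaller scale.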
You reverse-engineered GPT-2 at the age of 24, which was a few years ago. That's part of the legend, I mean, that's part of your credentials: before they say, well, this guy says we're in big trouble, they say, well, by the way, he knows what he's talking about, because technically he knows what he's doing. Tell me about the tipping point between being a believer, being excited about this, and becoming a warner. What happened?

So the story goes back even further than that. Reverse-engineering is a bit generous; it's more like I built a system and found out that no one can reverse-engineer it, oh, and this is a big deal. But it was even before that. I've been very interested in AI since I was a teenager, because I want to make the world a better place, and I think a lot of people who believe in AI, a lot of tech people who do things that I consider dangerous,
I think most of them, maybe not most, but many of them, are probably good people, and they are trying to develop technology to make the world a better place. You know, when I grew up, technology was great, so I was very excited about more science and more technology. And well, what's better technology than intelligence? If we had intelligence, well, we could solve all the problems, we could do all the science, we could, you know, invent all the cancer drugs, we might, you know, develop all the cool things. So that's what I was thinking when I was a teenager, and I think a common trajectory is that when people are first exposed to some of these techno-utopian dreams of AGI,
it sounds great, you know, it sounds like a great solution. But then when you think more about this problem, you realize that the problem with AGI is not really how to build it, but how to control it, which is much more difficult. Just because you can make something that's smart or solves a problem doesn't mean you can make something that listens to you and does what you really want. This is much, much harder, and that's how I started looking into this problem. Around 20, I started to realize that we're not really making progress on it.

So in the worst case scenario, we either have an apocalyptic ending for all of us, we'll be existentially destroyed or enslaved in The Matrix or whatever. Tell me,
how does it really happen in your mind? How does this AGI take control? I mean, there are those famous moments in Terminator and elsewhere, that final scene in one of the Terminators where nuclear bombs explode everywhere. There are a lot of different ways that people have imagined it. So as you see it, tell me how it happens, and if things continue to go in the direction you fear, how long will it take to get there?

Well, of course, I personally don't know exactly how things will develop; I can't see the future. But I can give you an idea of how I expect it to feel when it happens. The way I expect it to feel is like you're playing chess against a grandmaster. Now,
I'm very bad at chess, I'm not good at chess at all, but you know, I can play a bit of an amateur game. But when you play against a grandmaster, or anyone much, much better than you, what it feels like is not that you're having a heroic battle against the Terminator, this incredible back and forth, and then you lose. No, it feels more like you think you're playing well, you think everything's fine, and then suddenly you lose in one move and you don't know why. This is what it feels like to play chess against a grandmaster, and this is how humanity will feel when playing against AGI.
What is going to happen is not some dramatic battle where, you know, the Terminators rise up and try to destroy humanity. No, things will just get more and more confusing. More and more jobs are automated, faster and faster, more and more technology is built, and no one even knows how the technology works. There will be movements in the media that don't really make any sense. Do we really know the truth of what's going on in the world right now? Even now, with social media, do you or I really know what's going on? How much of it is false? How much of it is generated with AI or other methods we don't know about? And this is going to get much worse. Imagine extremely intelligent systems, much smarter than humans, that can generate any image, any video, anything, trying to manipulate you, and able to develop new technologies and interfere with politics. The way I expect it to go is that things seem mostly normal, just weird, things get weirder and weirder, and then one day we are no longer in control.
It won't be dramatic, there won't be a fight, there won't be a war. It will just be a day when the machines are in control and not us.

And even if there is a fight, yes... sorry, even if there is a fight or a war, we gave it the gun and the bullets, we did it. I mean, we are the ones who could make all of this happen by being controlled in some way. Is that absolutely possible?

I don't think an AI necessarily needs to use humans for that, because, you know, it could develop extremely advanced technology. But it's entirely possible. Humans aren't safe; it's absolutely possible to manipulate humans. As everyone knows, humans aren't immune to propaganda, they are not immune to mass movements. Imagine an AGI gives Kim Jong Un a call and says, "Hey, I'm going to make your country run extremely well and tell you how to build super weapons; in return, do me this favor." I mean, Kim Jong Un is going to think that's cool. And it's very easy to obtain power if you are extremely intelligent, if you are able to manipulate people, to develop new technologies, to trade on the stock market and make tons of money. Well, then you can do whatever you want.

So, you are ringing the alarm.
Geoffrey Hinton, seen as the founder or father or godfather of AI, is sounding the alarm and has distanced himself from many of his previous positions. Others in the mainstream are coming out, highly credentialed people with real authority on AI, and saying we need guardrails, we need regulation, we need to be careful, maybe we should stop everything. And OpenAI, Microsoft, DeepMind, these are companies, but then you have governments investing in this. Everyone keeps rushing towards possible doom. Why do they keep doing it despite these legitimate and strong warnings? Is it just about the bottom line, the money and the competition, or is there something more?
This is a great question, and I really like the way you said they are rushing, because this is really the right way to look at it. It's not that it's not possible to do this well, it's not that it's not possible to build safe AI. I think it is possible; it's just very difficult, and it takes time. In the same way, it is much easier to build a nuclear reactor that melts down than to build a nuclear reactor that is stable. Of course this is difficult, so it takes time and resources to do it. But unfortunately, we are in a situation right now where, at least here in the UK, there is currently more regulation on selling a sandwich to the public than on developing potentially lethal technology that could kill every human on earth.
This is true, this is the current state of affairs, and a lot of it is due to slowness: governments are slow, people don't want to act, and there are vested interests. You make a lot of money by promoting AI; pushing AI even further makes you a lot of money, makes you famous on Twitter. You know, look how much these people look like rock stars. Someone like Sam Altman is a rock star on Twitter. People love these people; they say, oh yeah, they are bringing the future, they are making a lot of money, so they must be good. But, I mean, it's just not that simple. Unfortunately, we are in a territory where we all agree that somewhere in the future there is a cliff over which we will fall if we continue. We don't know where it is. Maybe it is very far away, maybe it is very close, and my opinion is: if you don't know where it is, you need to stop. Other people, you know, get monetary power or just ideological points. A lot of these people, and it's very important to understand this, do this because they really believe in a religion. They believe in transhumanism, in the glorious future where AI will love us, and so on. There are many reasons, but, I mean, yeah, the cynical view is that I could be making a lot more money right now if I were just pushing AI, a lot more money than I have now.

How can we do something about it without just deciding to cut the internet cables and blow up the satellites in space and start over? How do we do that?
Because this is a technical problem and it is also a moral and ethical problem. So where do we start right now, or is it too late?

So the strangest thing in the world to me right now, as someone who is deeply into this, is that things are going very, very badly. We have, you know, crazy... just corporations with zero oversight plowing billions of dollars into going as fast as possible, with no oversight and no accountability, which is as bad as it could be. But somehow we haven't lost yet. It's not over yet. It could be over, there are a lot of things that could end tomorrow, but not yet. There is still hope, there is still hope.
I don't know if there will still be hope in a couple of years, or even a year, but currently there is still hope.

Oh, wait, wait, a year? I mean, come on, man. We're probably going to post this interview a couple of weeks after we record it; it'll be a few months... we could all be dead by the time, you know, this has 10,000 views. I mean, just to explain this one-year timeline: why one year? Why is it going so fast that even a year would be too far ahead? Explain that.

I am not saying that one year is guaranteed in any way.
I think it's unlikely, but it's not impossible, and it's important to understand that AI and computer technology are exponential. It's like COVID. This is like saying in February, you know, a million COVID infections? That's impossible, that can't happen in six months. And it absolutely happened. That's how AI is. Also, exponentials seem slow at first. You have one infected, two infected, four infected, that's not so bad, but then you have 10,000, 20,000, 40,000, you know, 100,000 in a single week, and that's how this technology works, as well as our computers. There's something called Moore's law, which isn't really a law, it's more of an observation, that every two years our computers get, there are some details, but about twice as powerful. So that's exponential, and it's not just our computers becoming more powerful: our software is improving, our AIs are improving, our data is improving, more money is coming into this field. We are on an exponential, and that's why things can go so fast. So although I'm not saying, you know, it would be strange if we were all dead in a year, it's physically possible; it can't be ruled out if we continue down this path.
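As a rough illustration of the doubling arithmetic behind that point, here is a toy calculation of my own, assuming a fixed two-year doubling period in the spirit of Moore's law; it is not a forecast of AI capability.

```python
# Compound doubling: a quantity that doubles every two years grows
# about 32x in a decade and roughly 1,000x in twenty years.
def after_doublings(start: float, years: float, doubling_period: float = 2.0) -> float:
    """Value after exponential growth with a fixed doubling period (in years)."""
    return start * 2 ** (years / doubling_period)

for years in (2, 6, 10, 20):
    print(f"after {years:>2} years: {after_doublings(1.0, years):,.0f}x")
```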
What about the powerful people who can do something about it, especially when it comes to regulation? When you saw those congressmen talking to Sam Altman, they didn't seem to know what the hell they were talking about. So how frustrating is it for you that the people who can make a difference have no idea what's really going on? And more importantly, they didn't seem to really want to know. They had weird questions that didn't make sense, and then you're thinking, okay, these guys are in charge. I mean, no wonder AI comes and wipes us all out; maybe we deserve it.

I would not go that far, but this used to bother me a lot. It used to be extremely frustrating, but I have made peace with it, largely because what I have really discovered is that understanding the world is difficult. Understanding complex topics and technology is difficult, not only because they are complicated but also because people have lives, and this is okay, this is normal. People have families, they have responsibilities, they have a lot of things to deal with, and I don't shame people for this. You know, I have turkey with my family for Thanksgiving and whatever, and my aunts and uncles have their own lives. Maybe they really don't have time, you know, to listen to me rant about this, so they don't, and I have a lot of love and a lot of compassion for that. Things are difficult. This of course does not solve the problem, but I'm just trying to say that, of course, it's frustrating to some extent that there are no adults in the room. That's how I would see it. Sometimes there is a belief that somewhere there is someone who knows what is going on, that there is an adult who is in control, you know, someone in the government who has this under control, and as someone who has tried to find that person, I can tell you that this person does not exist. The truth is that the fact that anything in the world works at all
is kind of a miracle; it's amazing that anything works, given how chaotic everything is. But the truth is that there are a lot of people who want the world to be good. You know, they may not have the right information, they may be confused, various people with bad intentions may be pressuring them, but most people want their families to live and have a good life. Most people don't want bad things to happen; most people want other people to be happy and safe. And luckily for us, most normal people, not elites, not necessarily politicians or technologists, but most normal people, have the right intuition about AI. They see something like this and say, wow, that looks really scary, let's be careful with this, and this is what gives me hope. So when I think about the politicians who are not on top of this, I think it is now our responsibility as citizens of the world to take this into our own hands. We can't wait for people to save us; we have to make them save us. We have to make these things happen, we have to make our voices heard, we have to say, hey, how the hell are you letting this happen?
One of the beautiful things is that, to a large extent, politicians can be moved, they can be reasoned with, and they can be moved by voters; you can remove them from office.

That's a good argument for democracy.

That's a great argument for democracy, that's wonderful. You know, democracy is the worst system, except for all the others.

Yeah. Let's get to people's feelings, and I asked about this from the beginning, that intuitive feeling that something is up here, that there is something sinister. There seemed to be a bit of a plateau with something like ChatGPT. Initially people were very anxious, very surprised, very captivated by what this thing could do: it could write your college thesis and everything, you know, do all these fancy tricks, they look like magic tricks. But then once the hype died down a little, people started entering new things and asking maybe better questions, and you could see some of the limitations of something like ChatGPT and its precursors. That led a lot of people to say, well, okay, sometimes this sounds like a PR department or an HR department in a company.
It's actually used to detect plagiarism, but sometimes it feels like plagiarism itself, like a college paper. And, anecdotally, a lot of friends of mine say, "Ah, maybe we'll be okay for a while, because this has serious limitations." Address that for me, because a lot of people are still thinking, well, I know there was hype, but now I'm not so sure. Tell me about it.

So there's a story, and I'm not sure if the story is really true or not, but it's a good metaphor: if you take a frog and put it in a pot of cold water, the frog will sit there happily. If you slowly turn up the heat of the pot, the frog will sit there,
no problem, and if you increase the temperature very slowly, the frog will get used to it and will not jump out until the water boils and the frog dies. I think this is what is happening with people. People are extremely good at treating things as normal: if it's something normal, if it's something that all your friends do, then it just becomes normal. This is like during war; why can people massacre other people? Because if all your friends are doing it, it's normal. Yes, you kill people, it's normal that you kill people, it's okay. That's how it can happen, and the same applies here. Well, okay, now you can talk to your computer. Sure, we can argue about, oh, ChatGPT is not that smart,
but you can talk to your computer. If this were a sci-fi movie from 20 years ago, everyone would be yelling at the screen: what the hell are you doing? This is obviously crazy. What the hell is happening? But because it's available now, you know, cheaply, online, it doesn't feel special. So the way I think about this is that what's missing is a coordinated campaign effort. What I mean by this is that, in general, when we think about our civilization, not just individual people, how our civilization approaches problems, how it decides which problems to address, because there are always so many problems you could work on, how do you decide which one to pay attention to? Actually, this is very complicated. It could be because of a natural disaster or a war or whatever.
It might be because of some stupid fashion or hype, like a viral video on TikTok that makes everyone freak out, sometimes, yes. But usually, if you really want your civilization to address a problem, a big problem, it takes a lot of effort on the part of the people who are trying to elevate it to prominence, to get attention. Because, again, people have lives. As you know, most people don't have time to go online and read huge books on AI safety, or on, oh, how do we integrate ChatGPT, or how do we deal with its security?
They don't have time for that, of course not, and I'm not trying to judge these people. I understand; it's not their job. In a good world, there would be a group of people who deal with this. The problem is that they don't really exist.

Before we leave, and I'm glad you mentioned that people don't know where to look, is there a resource you could point people towards so they can educate themselves on the reality of the situation and catch up? What would that be?

I don't think there is anyone who has the whole picture, which is a big problem; someone should build that resource, and if anyone makes that resource, let me know. But what I would probably point people to is Control AI, which is a group of people that I'm also involved with, who are campaigning on exactly these issues and who are trying to unite humanity to solve these problems. Because this is a problem that neither you nor I can solve; no single human being can solve the problems we are dealing with right now. This is a problem that humanity has to solve, that our civilization needs to solve, and I believe our civilization can do this, but it won't happen without us working together. So if there is something people can do, go to Twitter or Google or wherever, look up Control AI and support them, listen to what they say. This is the campaign I'm behind; well, I support them.

Okay, we'll put the link in the YouTube description too,
if anyone wants to see it. Connor, you have a brilliant mind and I am very grateful that we got to talk. Thank you very much for joining us for The InnerView.

Thank you very much. Take care.
