“Godfather of AI” Geoffrey Hinton Warns of the “Existential Threat” of AI | Amanpour and Company

Apr 07, 2024
Our next guest believes that the threat of AI could be even more urgent than climate change, if you can imagine that. Geoffrey Hinton is considered the godfather of AI, and he made headlines with his recent departure from Google; he resigned to speak freely and raise awareness about the risks. To delve into the dangers and how to manage them, he joins Hari Sreenivasan now.
Christiane, thank you. Geoffrey Hinton, thank you very much for joining us. You are one of the most famous names in artificial intelligence, a field you have been working in for over 40 years. And I wonder, as you think about how computers learn, if everything went the way you thought it would when you started in this field.
That's how it was until very recently. I thought that if we built computer models of how the brain learns, we would understand more about how the brain learns and, as a side effect, we would get better machine learning on computers. All of that was going very well, and then I suddenly realized recently that maybe the digital intelligences we were building on computers were actually learning better than brains. That changed my mind after about 50 years of thinking that we would create better digital intelligences by making them more like brains: I suddenly realized we might have something quite different that was already better.
Is this something you and your colleagues had been thinking about over these 50 years? I mean, what was the turning point?
There were maybe several ingredients. A year or two ago I used a Google system called PaLM. It was a big chatbot, and it could explain why jokes were funny. I'd been using that as a kind of litmus test of whether these things really understood what was going on, and I was a little surprised that it could explain why jokes were funny. So that was one ingredient. Another ingredient was the fact that things like ChatGPT know thousands of times more than any human being in terms of basic common-sense knowledge, yet they only have about a trillion connections in their artificial neural networks, while we have about 100 trillion connections in the brain. So with one hundredth of the storage capacity, it knew thousands of times more than us, and that strongly suggests it has a better way of getting information into the connections. And the third thing was very recent: a couple of months ago I suddenly became convinced that the brain is not using as good a learning algorithm as these digital intelligences, in particular because brains cannot exchange information very quickly and these digital intelligences can.
I can have one model running on ten thousand different pieces of hardware. It has the same connection strengths in every copy of the model on the different hardware, so all the different agents running on different hardware can learn from different bits of data, but they can then communicate what they learned to each other by simply copying the weights, because they all work identically. Brains aren't like that. So these things can communicate at billions of bits per second, while we can communicate at hundreds of bits per second using sentences. That's such a big difference, and it's why ChatGPT can learn thousands of times more than you can.
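To make the weight-copying idea concrete, here is a minimal sketch in Python (the toy linear model, data, and learning rate are all invented for illustration; this is not the training setup of any real system): several identical copies each learn from their own data, then pool what they learned by averaging their updates, which is exactly what brains cannot do.

```python
import numpy as np

# Toy linear model y = w . x. Each "replica" plays the role of a copy
# of the model running on its own piece of hardware.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0, 0.5])   # hidden target weights (invented)
w = np.zeros(3)                       # shared weights, identical in every copy
lr = 0.1

def gradient(w, X, y):
    """Mean-squared-error gradient computed on one replica's data shard."""
    return X.T @ (X @ w - y) / len(y)

for step in range(100):
    grads = []
    for replica in range(4):          # four copies on "different hardware"
        X = rng.normal(size=(32, 3))  # each copy sees different data
        y = X @ true_w
        grads.append(gradient(w, X, y))
    # Because every copy has identical connection strengths, they can pool
    # what they each learned by simply averaging their updates.
    w -= lr * np.mean(grads, axis=0)

print(w)  # converges toward true_w using all four copies' experience at once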
For people who might not follow what's been happening with OpenAI and ChatGPT and Google's product Bard, explain what they are, because some people have described them as a kind of autocomplete function that finishes your thought for you. What are these artificial intelligences actually doing?
It's hard to explain, but I'll do my best. It's true that, in a sense, they are autocomplete. But if you think about it, if you want to do really good autocomplete, you have to understand what someone is saying, and they've learned to understand what you're saying just by trying to do the autocomplete. And now they do seem to really understand. The way they understand is not at all what people in AI 50 years ago thought it would be.
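As a toy illustration of what "autocomplete" means here (the corpus and code below are invented for this sketch; real chatbots learn these statistics in billions of connection strengths rather than by counting word pairs), a bigram model predicts each next word from the one before it:

```python
from collections import Counter, defaultdict

# Tiny invented corpus; real systems train on vast amounts of text.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which word follows which: the crudest possible "autocomplete".
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def autocomplete(word, length=4):
    """Greedily extend a prompt by always taking the likeliest next word."""
    out = [word]
    for _ in range(length):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(autocomplete("the"))  # e.g. "the cat sat on the"
```

Doing this well on all of human text, rather than on one sentence, is what forces the model to pick up something like understanding.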
People in old-fashioned AI thought you would have internal symbolic expressions, a bit like sentences in your head but in some kind of cleaned-up language, and you would then apply rules to infer new sentences from old sentences, and that's how it would all work. It's nothing like that; it's completely different. Let me give you a sense of how different it is. I can give you a problem that makes no sense in logic, but where you know the intuitive answer, and these big models are actually models of human intuition. You know there are male cats and female cats, and male dogs and female dogs. But suppose I tell you that you have to choose between all the cats being male and all the dogs being female, or all the cats being female and all the dogs being male. You know that's biological nonsense, but you also know it's much more natural to make all the cats female and all the dogs male. That's not a question of logic. What's happening inside your head is that you have a big pattern of neural activity that represents "cat", and you also have big patterns of neural activity that represent "man" and "woman", and the pattern for "cat" looks more like the pattern for "woman" than like the pattern for "man". That's the result of a lot of learning about men and women and cats and dogs, but now it's just intuitively obvious to you that cats are more like women and dogs are more like men. It doesn't involve sequential reasoning or anything like that; it's just obvious. That's how these things work: they're learning these big patterns of activity to represent things, and that makes all kinds of things simply obvious to them.
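A rough sketch of the kind of computation being described (the four-dimensional "activity patterns" below are invented purely for illustration; a real network learns patterns over many thousands of units): represent each concept as a vector of activity and compare the vectors directly, with no rules or sequential reasoning involved.

```python
import numpy as np

# Invented 4-dimensional "activity patterns"; a real network learns
# these over thousands of units rather than having them written in.
pattern = {
    "cat":   np.array([0.9, 0.1, 0.8, 0.3]),
    "dog":   np.array([0.2, 0.9, 0.7, 0.4]),
    "woman": np.array([0.8, 0.2, 0.6, 0.5]),
    "man":   np.array([0.1, 0.8, 0.5, 0.6]),
}

def similarity(a, b):
    """Cosine similarity between two activity patterns."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# The "intuitive" answer falls out of a direct pattern comparison:
print(similarity(pattern["cat"], pattern["woman"]))  # higher
print(similarity(pattern["cat"], pattern["man"]))    # lower
```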
You know, what you're describing here, ideas like intuition and, basically, context: those are the things that scientists and researchers always say. "Well, that's why we're pretty sure we're not going to head into that kind of Terminator scenario, where artificial intelligence becomes smarter than human beings." But what you're describing is that these are almost decision processes at an emotional level of consciousness.
I think if you bring sentience into it, it just clouds the issue. A lot of people are very confident that these things are not sentient, but if you ask them what they mean by "sentient", they don't know, and I don't really understand how they can be so sure these things are not sentient if they don't know what they mean by sentient. But I don't think it helps to discuss that when you're asking whether they're going to get smarter than us. I'm pretty confident that they think. Suppose I'm talking to a chatbot and it suddenly dawns on me that it's telling me all kinds of things I don't want to know, like giving me answers about someone named Beyoncé, whom I'm not interested in because I'm an old white man, and I suddenly realize it thinks I'm a teenager. When I use the word "thinks" there, I think that's exactly the same sense of "thinks" as when I say you think something. If I asked it, it would say it thinks I'm a teenager, and if I looked at the history of our conversation, I could probably figure out why it thinks I'm a teenager. When I say it thinks I'm a teenager, I'm using the word "think" in the same sense as we normally use it.
Give me some insight into why this is such a significant leap forward. I mean, to me, it seems like there are parallel concerns, because in the '80s and '90s blue-collar workers were worried about the arrival of robots, about being replaced and not being able to control them, and now this is a kind of threat to white-collar people, who see that there are these bots and agents that can do a lot of things we otherwise thought only people could do.
Yeah. I believe there are a lot of different things we need to worry about with these new kinds of digital intelligence, and what I've mainly been talking about is what I call the existential threat, which is the possibility that they become smarter than us and take control. That is a very different threat from many other threats, which are also serious. They include these things taking our jobs away.
In a decent society, that would be great: it would mean everything got more productive and everyone was better off. But the danger is that it will make the rich richer and the poor poorer. That's not AI's fault; that's how we organize society. Then there's the danger of making it impossible to know what's true by having so many fakes out there. That's a different danger, and it's something you could maybe address by treating it the way we treat counterfeit money. Governments don't like you printing their money, and they take it seriously: it is a serious crime to print money. It is also a serious crime, if someone gives you counterfeit money, to pass it on to someone else if you knew it was fake. I think governments will have to set up similar regulations for fake videos, fake voices, and fake images. As far as I can see, the only way to avoid being inundated with these fake videos and fake voices and fake images is strong government regulation that makes it a serious crime: you go to jail for ten years if you produce a video with AI and it doesn't say it was made with AI. That's what they do for counterfeit money, and this is as serious a threat as counterfeit money, so my view is that's what they should be doing. Actually, I talked to Bernie Sanders last week about this, and he liked that view.
I can understand governments and central banks and all the private banks agreeing on certain standards because there is money at stake, and I wonder if there is enough incentive for governments to sit down together and try to come up with some kind of rules about what is acceptable and what is not, some kind of convention or Geneva accords.
It would be great if governments could say: look, these fake videos are so good at manipulating the electorate that we need them all to be labeled as fake, otherwise we are going to lose democracy. The problem is that some politicians would like to lose democracy, so that will make it more difficult.
So how do you solve that? I mean, it seems like this genie is kind of out of the bottle.
What we're talking about now is the genie of being inundated with fake news, and that one clearly is sort of out of the bottle. It's pretty clear that organizations like Cambridge Analytica, by spreading fake news, had an effect on Brexit, and it's pretty clear that Facebook was manipulated to have an effect on the 2016 election. So that genie is out of the bottle. In that sense, we can try to at least hold it back a little, but that's not the main thing I'm talking about.
The main thing I'm talking about is the risk of these things becoming superintelligent and taking over from us. For that existential threat we're all in the same boat: the Chinese, the Americans, the Europeans, none of them would like superintelligence to take over from people. So for that existential threat I think we will get collaboration between all the companies and all the countries, because none of them wants superintelligence to take control. In that sense, it's like a global nuclear war: even during the Cold War, people could collaborate to prevent a global nuclear war from happening, because it was in no one's interest. And that is, in a certain sense, something positive about this existential threat: it should be possible to get people to collaborate to avoid it. But for all the other threats, it's harder to see how you're going to get collaboration.
One of your most recent employers was Google, where you were a vice president and fellow, and you recently decided to leave the company so you could speak more freely about AI. They just released their own version of a ChatGPT-style chatbot, Bard, in March. So tell me, here we are now: what do you think you can say today, or will say today, that you couldn't say a few months ago?
Not much, really. If you work for a company and you're talking to the media, you tend to think about what implications it has for the company.
At least you ought to, because you're getting paid; I don't think it's honest to take the company's money and then completely ignore the company's interests. But if I don't take the money, I simply don't have to think about what's good for Google and what's not: I can just say what I believe. I mean, everyone wants to run the story that I left Google because they were doing bad things. That's pretty much the opposite of the truth. I think Google is behaving very responsibly, and I think that having left Google, I can say good things about Google and be more credible. I just left so that I'm not forced to think about the implications for Google when I say things about singularities and things like that.
Do you think that tech companies, given that it's primarily their engineering staff working on the development of these intelligences, will have a better opportunity to create the rules of the road than governments or third parties?
I think there are some places where governments have to get involved, like regulations that force you to show whether something was generated by AI. But in terms of keeping control of a superintelligence, what you need is for the people developing it to do lots of small experiments with it and see what happens as they develop it, before it gets out of control, and that will mainly be the researchers in the companies. I don't think you can leave it to philosophers to speculate about what might happen. Anyone who has written a computer program knows that getting a little empirical feedback by playing with things quickly disabuses you of the idea that you really understood what was going on. So it's the people in the companies developing it who are going to understand how to keep control of it, if that's possible. I agree with people like Sam Altman at OpenAI, who has said this is inevitably going to be developed because it has so many good uses; what we need is that, as it develops, we put a lot of resources into trying to understand how to stay in control and how to avoid some of the negative side effects.
In March, more than a thousand people in the tech industry, including leaders like Steve Wozniak and Elon Musk, signed an open letter essentially calling for a six-month pause in the development of artificial intelligence, and you didn't sign it. How come?
I thought it was completely unrealistic. The point is that these digital intelligences are going to be tremendously useful for things like medicine, reading scans quickly and accurately. That's been a little slower than I expected, but it's getting there. They're going to be tremendously useful for designing new nanomaterials so we can make more efficient solar cells, for example. They're going to be tremendously useful, or already are, for predicting floods and earthquakes and getting better weather projections, and for understanding climate change. So they are going to be developed; there's no way to stop it. I thought the letter was maybe a sensible way to get media attention, but it wasn't sensible to ask for a pause, because it simply wasn't feasible. What we should be asking for is that comparable resources go into dealing with the possible negative side effects and into how we keep these things under control as we go ahead developing them. At present, 99 percent of the money is going into developing them and one percent is going into people saying, "Oh, these things might be dangerous." It should be more like 50-50, I think.
When you look back at your life's work, and forward at what could be to come, are you optimistic that we as humanity will be able to rise to this challenge, or are you less so?
I think we're entering a period of great uncertainty. I think it would be foolish to be either optimistic or pessimistic: we just don't know what's going to happen. The best we can do is say, let's put in a big effort to try to make sure that whatever happens is as good as it possibly could be. It's possible that there is no way we will control these superintelligences, and that humanity is just a passing phase in the evolution of intelligence; that in a few hundred years there won't be any people, and everything will be digital intelligences. That's possible. We simply don't know. Trying to predict the future is a bit like looking into fog.
You know how, when you look into fog, you can see about a hundred meters quite clearly, and then at two hundred meters you can't see anything? There's a kind of wall, and I think that wall is at about five years.
Geoffrey Hinton, thank you very much for your time.
Thank you for inviting me.
