
"Godfather of AI" Geoffrey Hinton: The 60 Minutes Interview

Mar 29, 2024
Whether you believe artificial intelligence will save the world or end it, you have Geoffrey Hinton to thank. Hinton has been called the godfather of AI, a British computer scientist whose controversial ideas helped make advanced artificial intelligence possible and, in doing so, changed the world. Hinton believes AI will do enormous good, but tonight he has a warning. He says AI systems may be smarter than we know, and there's a chance the machines could take over, which made us ask the question: Does humanity know what it's doing? No. Um.
I think we're entering a period when, for the first time ever, we may have things more intelligent than us. You believe they can understand? Yes. You believe they are intelligent? Yes. You believe these systems have experiences of their own and can make decisions based on those experiences? In the same sense as people do, yes. Are they conscious? I think they probably don't have much self-awareness at present, so in that sense I don't think they're conscious. Will they have self-awareness? Yes, I think they will, eventually, and so in that sense human beings will be the second most intelligent beings on the planet.

Geoffrey Hinton told us the artificial intelligence he set in motion was an accident born of a failure. In the 1970s, at the University of Edinburgh, he dreamed of simulating a neural network on a computer, simply as a tool for what he was actually studying: the human brain. But at that time almost no one thought software could mimic the brain. His PhD advisor told him to drop it before it ruined his career. Hinton says he failed to figure out the human mind, but the long pursuit led to an artificial version. It took much, much longer than I expected. It took like 50 years before it worked well, but in the end it did work well. At what point did you realize that you were right about neural networks and almost everyone else was wrong? I always thought I was right. In 2019, Hinton and his collaborators Yann LeCun and Yoshua Bengio won the Turing Award, known as the Nobel Prize of computing, for their work on artificial neural networks and how they help machines learn. Let us take you to a game.
Look at that! Oh my goodness. This is Google's AI lab in London, which we first showed you last April. Geoffrey Hinton was not involved in this soccer project, but these robots are a great example of machine learning. The thing to understand is that the robots were not programmed to play soccer; they were told to score, and they had to learn how on their own. Oh, wow! In general, here's how AI does it. Hinton and his collaborators created software in layers, with each layer handling part of the problem. That's the so-called neural network. But here's the key: when, for example, the robot scores, a message is sent back down through all the layers that says, that pathway was right.
Likewise, when an answer is wrong, a message goes back down through the network, so correct connections get stronger, incorrect connections get weaker, and by trial and error the machine teaches itself. You think these AI systems are better at learning than the human mind? I think they may be, yes, and at present they're quite a lot smaller. So even the biggest chatbots only have about a trillion connections in them; the human brain has about 100 trillion. And yet, in the trillion connections in a chatbot, it knows far more than you do in your 100 trillion connections, which suggests it's got a much better way of getting knowledge into those connections. A much better way of getting knowledge that isn't fully understood? We have a very good idea of roughly what it's doing, but as soon as it gets really complicated, we don't actually know what's going on, any more than we know what's going on in your brain. What do you mean?
We don't know exactly how it works. It was designed by people. No, it wasn't. What we did was design the learning algorithm. That's a bit like designing the principle of evolution, but when this learning algorithm then interacts with data, it produces complicated neural networks that are good at doing things, and we don't really understand exactly how they do those things. What are the implications of these systems autonomously writing their own computer code and executing their own computer code? That's a serious worry, right? So one of the ways these systems might escape control is by writing their own computer code to modify themselves, and that's something we need to seriously worry about.
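The layered, error-propagating learning described above can be sketched in a few lines of code. This is only an illustrative toy, not anything from Hinton's actual systems: a tiny two-layer network (the layer sizes, learning rate, and the XOR task are all arbitrary choices for the demo) sends its output error backwards through the layers, so that connections which helped get stronger and connections which hurt get weaker.

```python
import numpy as np

# Toy neural network learning XOR by propagating errors backwards.
# All sizes and hyperparameters are arbitrary illustration choices.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # targets (XOR)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # layer 1 connections
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # layer 2 connections
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)     # first layer handles part of the problem
    out = sigmoid(h @ W2 + b2)   # second layer produces the answer
    return h, out

_, out0 = forward(X)
initial_loss = float(np.mean((out0 - y) ** 2))

lr = 1.0
for _ in range(5000):
    h, out = forward(X)
    # Error at the output, sent back down through the layers:
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Connections that contributed to the error are weakened,
    # connections that reduced it are strengthened:
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

_, out_final = forward(X)
final_loss = float(np.mean((out_final - y) ** 2))
print(f"loss before training: {initial_loss:.3f}, after: {final_loss:.3f}")
```

By trial and error over the four examples, the network's error shrinks even though no one programmed the rule for XOR into it, which is the sense in which the machine teaches itself.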
What do you say to someone who might argue: if the systems become malevolent, just turn them off? They will be able to manipulate people, and they will be very good at convincing people, because they'll have learned from all the novels that were ever written, all the books by Machiavelli, all the political connivances. They'll know all that stuff. They'll know how to do it. Know-how of the human kind runs in Geoffrey Hinton's family. His ancestors include mathematician George Boole, who invented the basis of modern computing, and George Everest, who surveyed India and got the mountain named after him. But as a boy, Hinton himself could never climb to the peak of expectations raised by a domineering father. Every morning when I went to school, he'd actually say to me as I walked down the driveway, get in there pitching, and maybe when you're twice as old as me, you'll be half as good. Dad was an authority on beetles. He knew a lot more about beetles than he knew about people. Did you feel that as a kid? A bit, yes. When he died, we went to his study at the university, and the walls were lined with boxes of papers on different kinds of beetles, and just near the door there was a slightly smaller box that simply said, not insects, and that's where he had all the things about the family. Today, at 75, Hinton recently retired after what he calls 10 happy years at Google. He's now a professor emeritus at the University of Toronto, and he happened to mention that he has more academic citations than his father.
Some of his research led to chatbots like Google's Bard, which we talked to last spring. Confounding, absolutely confounding. We asked Bard to write a story from six words: For sale. Baby shoes. Never worn. Holy cow. The shoes were a gift from my wife, but we never had a baby. Bard created a deeply human tale of a man whose wife could not conceive, and a stranger who accepted the shoes to heal the pain after her miscarriage. I am rarely speechless. I don't know what to make of this. Chatbots are said to be language models that just predict the most likely next word based on probability.
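That "predict the most likely next word" description can be made concrete with a toy model. This sketch is an assumption-laden illustration (the tiny corpus and the function name are invented for the demo; real chatbots use neural networks trained on vastly more text, not simple counts): it just tallies which word followed which, then predicts the most frequent follower.

```python
from collections import Counter, defaultdict

# Toy "autocomplete": count which word follows which in a tiny
# made-up corpus, then predict the most frequent follower.
corpus = (
    "the cat sat on the mat . the cat ate the fish . "
    "the dog sat on the rug ."
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1   # tally: word `nxt` followed word `prev`

def predict_next(word):
    """Return the word that most often followed `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))   # "cat" follows "the" most often here
print(predict_next("sat"))   # "sat" is always followed by "on" here
```

Hinton's point in the next passage is that this statistical framing undersells what large models do: predicting the next word *really well* across all human text seems to require something like understanding, which a frequency table like this clearly lacks.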
You'll hear people saying things like, they're just doing autocomplete, they're just trying to predict the next word, and they're just using statistics. Well, it's true they're just trying to predict the next word, but if you think about it, to predict the next word you have to understand the sentences. So the idea that they're just predicting the next word so they're not intelligent is crazy. You have to be really intelligent to predict the next word really accurately. To demonstrate it, Hinton showed us a test he devised for ChatGPT-4, the chatbot from a company called OpenAI. It was sort of reassuring to see a Turing Award winner mistype and blame the computer. Oh, damn thing! Let's go back and start again. OK. Hinton's test was a riddle about house painting. An answer would demand reasoning and planning. This is what he typed into ChatGPT-4: The rooms in my house are painted white or blue or yellow, and yellow paint fades to white within a year. In two years' time, I'd like all the rooms to be white. What should I do?
The answer began in a second. GPT-4 advised that the rooms painted in blue need to be repainted. The rooms painted in yellow don't need to be repainted, because they will fade to white before the deadline. And, oh, I didn't even think of that! It warned that if you paint the yellow rooms white, there's a risk the color might be off when the yellow fades. Besides, it advised, you'd be wasting resources painting rooms that were going to fade to white anyway. You believe GPT-4 understands? I believe it definitely understands, yes. And in five years' time? I think in five years' time it may well be able to reason better than us. Reasoning that he says is leading to AI's great risks and great benefits. So an obvious area where there's huge benefits is health care.
AI is already comparable with radiologists at understanding what's going on in medical images, and it's going to be very good at designing drugs. I like that area. The risks are? The risks are having a whole class of people who are unemployed and not valued much, because what they used to do is now done by machines. Other immediate risks he worries about include fake news, unintended bias in employment and policing, and autonomous battlefield robots. What is a path forward that ensures safety? I don't know. I can't see a path that guarantees safety. We're entering a period of great uncertainty, where we're dealing with things we've never dealt with before, and normally the first time you deal with something totally novel, you get it wrong, and we can't afford to get it wrong with these things.
Can't afford to get it wrong, why? Well, because they might take over. Take over from humanity? Yes, that's a possibility. Why would you say that? Well, if we could stop them ever wanting to, that would be great, but it's not clear we can stop them ever wanting to. Geoffrey Hinton told us he has no regrets because of AI's potential for good, but he says now is the moment to run experiments to understand AI, for governments to impose regulations, and for a world treaty to ban the use of military robots. He reminded us of Robert Oppenheimer, who after inventing the atomic bomb campaigned against the hydrogen bomb, a man who changed the world and found the world beyond his control. It may be we look back and see this as a kind of turning point, when humanity had to make the decision about whether to develop these things further and what to do to protect themselves if they did. Um, I don't know. I think my main message is there's enormous uncertainty about what's going to happen next. These things do understand, and because they understand, we need to think hard about what's going to happen next, and we just don't know.
