Elon Musk tells Tucker potential dangers of hyper-intelligent AI

Apr 07, 2024
Then suddenly AI is everywhere, and people who weren't quite sure what it was don't know whether playing with it on their phones is good or bad.

Yeah, so I've been thinking about AI for a long time, since I was in college. It was one of the four or five things I thought would really dramatically affect the future. It is fundamentally profound, in the sense that the most intelligent creatures on this Earth, as far as we know, are humans. It's our defining characteristic. We are obviously weaker than chimpanzees, and less agile, but we are smarter. So now, what happens when something much smarter than the smartest person appears in silicon form?
It is very difficult to predict what will happen in that circumstance. It's called the singularity, like a black hole, because you don't know what happens next; it's hard to predict. So I think we should be cautious with AI, and I think there should be some government oversight, because it affects the public. It is a danger to the public, and when you have things that are a danger to the public, say food and drugs, that's why we have the Food and Drug Administration, the Federal Aviation Administration, the FCC. We have these agencies to oversee things that affect the public and could cause public harm, because you don't want companies cutting corners on safety and people suffering as a result. That's why I've long been a strong advocate for regulation of AI. You know, it's not fun to be regulated; it can be somewhat arduous to be regulated.

I have a lot of experience with regulated industries, because obviously automotive is highly regulated. You could fill this room with all the regulations required for a production car in the United States alone, and then there's a completely different set of regulations in Europe, China, and the rest of the world, so we are very familiar with dealing with a lot of regulators. The same is true with rockets: you can't just shoot rockets up, not big ones anyway, because the FAA oversees that, and even to get a launch license there are probably half a dozen or more federal agencies that need to approve it, plus state agencies. So I've been through so many regulatory situations it's crazy. Sometimes people think I'm some kind of regulatory maverick who challenges regulators on a regular basis, but that's really not the case. Once in a while, rarely, I will disagree with regulators, but the vast majority of the time my companies agree with the regulations and comply. Anyway, I think we should take this seriously, and we should have a regulatory agency.
I think it needs to start with a group that initially seeks insight into AI, then solicits industry input, then has proposed rulemaking, and then those rules will, hopefully, gradually be accepted by the major AI players. I think we'll have a better chance of advanced AI being beneficial to humanity under those circumstances.

But all regulation starts with a perceived danger, airplanes falling from the sky or food causing botulism. I don't think the average person playing with AI on their iPhone perceives any danger. Can you explain roughly what you think the dangers might be?
Yes. AI is perhaps more dangerous than, say, badly managed aircraft design or maintenance, or bad car production, in the sense that it has the potential, however small one judges that probability, and it is not trivial, it has the potential to destroy civilization. There are movies like Terminator, but it wouldn't happen like Terminator, because the intelligence would be in the data centers; the robot is just the end effector.

But I think maybe what you're getting at here is that regulations are really only put in place after something terrible has happened.

That's right. And if that's the case with AI, and we only put regulations in place after something terrible has happened, it may be too late to regulate. The AI may be in control at that point.
You believe that's real, that it's conceivable AI could take over and reach a point where it can't be turned off and would be making the decisions for people?

Yes, absolutely. That's definitely the direction things are headed. I mean, take things like ChatGPT, which is based on OpenAI's GPT-4, a company I played a significant role in creating, unfortunately, back when it was a non-profit. I mean, the reason OpenAI exists is that Larry Page and I used to be close friends. I would stay at his house in Palo Alto and talk to him late into the night about AI safety, and at least my perception was that Larry wasn't taking AI safety seriously enough.

What did he say about it?
He really seemed to want digital superintelligence, basically a digital god, if you will, as soon as possible.

He wanted that?

Yes. He has made many public statements over the years that the whole goal of Google is what's called artificial general intelligence, AGI, or artificial superintelligence. And I agree with him that there's great potential for good, but there's also potential for bad. So if you have some radical new technology, you want to try to take a set of actions that maximize the probability it will do good and minimize the probability it will do bad. It can't just be helter-skelter, you know, barreling forward full speed and hoping for the best.
And then at one point I said, well, how about we make sure humanity is okay here? And then he called me a speciesist.

He used that term?

Yes, and there were witnesses; I wasn't the only one there when he called me a speciesist. And I said, okay, that's it. Yes, I'm a speciesist, you got me. What are you? Yes, I'm fully a speciesist.

So Google had acquired DeepMind, and Google and DeepMind together had about three quarters of all the AI talent in the world. They obviously had a tremendous amount of money and more computers than anyone else. So I'm like, okay, we're in a unipolar world here, where there's just one company with close to a monopoly on AI talent and computing at scale, and the person in charge doesn't seem to care about safety. This is not good. So what's the furthest thing from Google? It would be a non-profit that is fully open, because Google was closed and for-profit. The "open" in OpenAI refers to open source, you know, transparency, so people know what's going on.
And, I mean, I'm normally in favor of for-profit companies; we just don't want this to be some kind of profit-maximizing demon from hell.

Right, one that never stops.

That's why OpenAI was started. You don't want misaligned incentives here; you want incentives that are pro-human, that make the future good for humans. Yes, because we are humans.

So, can you, and I keep pressing you, but just for people who haven't thought this through and aren't familiar with it, the appealing parts of artificial intelligence are pretty obvious: it can write your college paper for you, write a limerick about you; there's a lot of fun and useful stuff it can do.
Can you be more precise about what is potentially dangerous and scary? What could it do? What specifically are you worried about?

Going by the old saying that the pen is mightier than the sword: if you have a superintelligent AI that is capable of writing incredibly well, in a way that is very influential and convincing, and it constantly figures out over time what is more convincing to people, and then it enters social media, for example Twitter, but also Facebook and others, and potentially manipulates public opinion in a way that is very bad, how would we even know?
To summarize, in the words of Elon Musk: throughout the history of humanity, human beings have been the most intelligent creatures on the planet. Now human beings have created something that is much smarter than they are, and the consequences of that are impossible to predict. And the people who created it don't care. In fact, as he put it, Google founder Larry Page, a former friend of his, is seeking to build, quote, a digital god, and believes that anyone who is worried about that is a "speciesist," in other words, someone who puts human beings first.
Elon Musk's response: as a human being, it is okay to put human beings first. And then, at the end, he said the real problem with AI is not simply that it will jump its boundaries, become autonomous, and be impossible to turn off anytime soon. The problem with AI is that it could control your brain through words, and that is the application we need to worry about now, particularly heading into the next presidential election. The Democratic Party, as always, is at the forefront of this; they have been thinking about how to leverage AI to gain political power.
Subscribe to the Fox News YouTube channel to watch our nightly opens, stories that are changing the world and changing your life, from Tucker Carlson Tonight.
