
"I Tried To Warn You" - Elon Musk LAST WARNING (2023)

Jul 04, 2023
I think the danger of AI is much greater than the danger of nuclear warheads. Mark my words: AI is far more dangerous than nuclear weapons. I tried to convince people to slow down AI, to regulate AI. This was futile. I tried for years. The biggest problem I see with so-called AI experts is that they think they know more than they do, and that they are smarter than they actually are. This tends to plague smart people: they define themselves by their intelligence, and they don't like the idea that a machine could be far smarter than them, so they discount the idea, which is fundamentally flawed.
That is wishful thinking. I'm really quite close to the cutting edge of AI, and it scares the hell out of me. It's capable of vastly more than almost anyone knows, and the rate of improvement is exponential. Effectively, we are the biological bootloader for AI. We are building it, progressively building greater and greater intelligence, and the percentage of intelligence that is not human is increasing; eventually we will represent a very small percentage of intelligence. It is going to arrive faster than anyone appreciates. With each passing year, the sophistication of computer intelligence grows dramatically. I really think we are on a path of exponential improvement in artificial intelligence, and the number of smart humans developing AI is also increasing dramatically.

I mean, if you look at attendance at AI conferences, it's doubling every year. I have a young cousin who is graduating from Berkeley in computer science and physics, and I asked him how many of the smart students are studying artificial intelligence in computer science. The answer: all of them. The best approach, the best outcome, is that we achieve democratization of AI technology, meaning that no one company or small group of people has control over advanced AI technology. I think that is very dangerous. It could also be stolen by someone bad; you know, some evil dictator, a country, could send their intelligence agency to steal it and gain control. It just becomes a very unstable situation.
I think if you have something incredibly powerful, you just don't know who's going to control it. So it's not that I think the risk is that the AI develops a will of its own right off the bat. I think the concern is more that someone may use it in a way that is bad, or even if they weren't going to use it in a way that is bad, that someone could take it from them and use it in a way that is bad. That, I think, is quite a big danger. We are all already cyborgs: you have a machine extension of yourself in the form of your phone and your computer and all your applications. You are already superhuman. You have vastly more power than the president of the United States had 30 years ago. If you have an Internet link, you have an oracle of wisdom; you can communicate with millions of people and communicate with the rest of Earth instantly. These are magical powers that did not exist not long ago, so everyone is already superhuman.
I think singularity is probably the right word, because we just don't know what's going to happen once there's intelligence substantially greater than that of a human brain. I mean, most movies and TV shows that feature AI don't depict the way it's probably going to happen. But you have to consider even the benign scenario: if AI is much smarter than a person, what do we do? What job do we have?

I have to say, when something is a danger to the public, there needs to be some government agency, regulators. The fact is, we have regulators in the airline industry, the automotive industry, with drugs, with food, with anything that is a public risk, and I think this has to fall into the category of public risk. Usually it goes like this: some new technology causes harm or death, there is an outcry, there is an investigation, years pass, there is some kind of insight committee, rules are made, then there is oversight, and eventually there are regulations. This all takes many years. This is the normal course of things. If you look at automotive regulations, how long did it take for seat belts to be required?
You know, the auto industry fought seat belts; for over a decade, I think, it successfully fought any seat belt regulation, even though the numbers were extremely obvious: if you wore a seat belt, you were far less likely to die or be seriously injured. It was unequivocal, and the industry fought this successfully for years. Finally, after many, many people died, regulators insisted on seat belts. That time frame is not relevant to AI. You cannot wait ten years from the point at which it is dangerous; it is too late. I'm not normally a proponent of regulation and oversight, I mean.
I think one should generally err on the side of minimizing those things, but this is a case where there is a very serious danger to the public, and therefore there needs to be a public body that has insight, and then oversight, to confirm that everyone is developing AI safely. This is extremely important. I think the danger of AI is much greater than the danger of nuclear warheads, and no one would suggest that we allow anyone to build nuclear warheads if they want; that would be insane. So why do we have no regulatory oversight? This is insane. And OpenAI's intention is to democratize the power of AI.
There's a quote I love from Lord Acton. He was the guy who came up with "power corrupts, and absolute power corrupts absolutely": that freedom consists in the distribution of power, and despotism in its concentration. So I think it's important, if we have this incredible power of artificial intelligence, that it not be concentrated in the hands of a few and potentially lead to a world we don't want. I'm not really that worried about the short-term things. Narrow artificial intelligence is not a species-level risk; it will result in dislocation, in lost jobs, in better weaponry and that kind of thing, but it is not a fundamental, species-level risk, whereas digital superintelligence is. So it's really about laying the groundwork to ensure that if humanity collectively decides that creating digital superintelligence is the right move, then we should do it very, very carefully, very, very carefully.
We are rapidly heading towards digital superintelligence that far surpasses any human. I think that's very obvious.
