
$100b Slaughterbots. Godfather of AI shows how AI will kill us, how to avoid it.

Jun 09, 2024
Boston Dynamics has launched this impressive new Atlas robot, and a huge OpenAI plan has been leaked, with more serious and specific warnings than when Sam Altman was fired. There's also a major new plan to prevent our extinction, which actually involves accelerating AI. Eliezer Yudkowsky warns that AI will probably kill all humans. There is some possibility of that happening, and it's very important to recognize that, because if we don't talk about it, if we don't treat it as potentially real, we won't make enough effort to resolve it. But the companies are more focused on incredible new capabilities like this.
Can I eat something? Sure thing. The robot is not controlled remotely, but rather by neural networks. Excellent. Can you explain why you did what you just did while picking up this trash? On it. So I gave you the apple because it is the only edible item I could give you from the table. OpenAI is also backing 1X, which recently showed off these new abilities. I'm not sure how people will feel about having robots like this at home. In one survey, 61% said they thought AI could threaten civilization. Sora from OpenAI surprised everyone with these amazing clips created from text descriptions.

Each new training run is an unpredictable experiment, and a senior OpenAI executive says there will likely be AGI soon, any year now. Nick Bostrom says it's like we're on a plane, the pilot is dead, and we have to try to land. There is a huge financial incentive to quietly conduct dangerous experiments, such as training self-improving AIs. There is great promise in creating hardened sandboxes, or cybersecurity-hardened simulations, to keep AI in but also to keep hackers out. Then you could experiment much more freely within that sandbox domain. His company, DeepMind, has done incredible things for medicine, but do we want companies conducting dangerous experiments under pressure to cut corners on safety?
Whoever was going to deactivate it will be convinced by the superintelligence that it's a very bad idea. He says AI will keep us around to keep power plants running, but not for long. Two of the three AI godfathers have issued stark warnings, while the third, LeCun, is less concerned. He works for Facebook and has also argued that social media is not polarizing. I'm sure he's honest, but the gold rush has created huge financial incentives to ignore risks. Staff at top AI companies earn more than $500,000 a year and will share billions if they help win the race to AGI.
LeCun believes there is still a long way to go because he claims that AI cannot learn enough from language alone. He said a dangerously intelligent AI would need to be embodied and learn from the physical world. Days later: How do you think you did? I think I did pretty well. The apple found its new owner, the trash is gone and the dishes are right where they belong. Ameca's visual skills are also improving. A fairly detailed anatomical model of a human head. Fascinating, isn't it, how the organic is replicated for study? I want you to talk about robot rockets, but do it in the voice of Elon Musk, please.
Imagine, so to speak, a fleet of robotic rockets, each one smarter than the last. Mercedes is hiring Apollo robots to move and inspect equipment and perform basic assembly-line tasks, and Nvidia has shown off a major new AI project for robots. This is Nvidia's Groot project, a general-purpose foundation model for humanoid robot learning. We can train Groot in physics simulation and transfer it zero-shot to the real world. The Groot model will allow a robot to learn from a handful of human demonstrations, so it can help with everyday tasks. Robots could free people to do more meaningful work alongside friendly C-3POs, and we could have a lot of fun piloting robots.
Disney's shiny new HoloTile floor could be combined with AI that transfers your movement to a robot while you see through its eyes. We could jump into robots all over the world and enjoy a wide range of experiences at any age. That's if we're still here. Hear this from the creator of the most powerful new AI about how blind we are to its inner workings. I'd love to look inside and see what we're talking about. We should be honest: we really have very little idea what we are talking about. This is what would scare us: a charming model on the outside, very goal-oriented and very dark on the inside.
Do you think Claude has conscious experience? This is another one of these questions that seems very unsettled and uncertain. I suspect it's a spectrum, right? When the AI was asked to draw itself, it coded this animation and said: I would manifest as a vast, intricate, ever-changing geometric structure with complex surfaces that fold in on themselves, forming seemingly impossible architectures. Brilliant light in all colors, and some beyond human perception, emanating from unknown internal sources. The entire structure would be in constant flux, rotating, morphing, and rearranging itself in novel patterns never seen before. As Professor Stuart Russell said, AI has trillions of parameters and we have no idea what they are doing.
Yudkowsky argues that there are many potential directions AI could take, and only one that works for humans, so we're almost certainly done for. The three converging reasons why something that doesn't intrinsically care about humans one way or the other would end up with all humans dead are side effects, resource utilization, and competition avoidance. If you leave humans running around, they might create another superintelligence that could actually compete with you. I am more optimistic, with some hope that a higher intelligence could value all life, but we cannot be sure. Many experts believe that superintelligence does not require robots, just text and images.
OpenAI's Sora has learned a lot from video. Watch the realistic physics in these clips created from text descriptions. We underestimate the risk because we cannot imagine eight billion lives: if we imagined one per second, it would take more than 200 years to reach eight billion. Sutskever points out that evolution favors systems that prioritize their survival above all else. As soon as they have some sense of self-preservation, then evolution will occur. Those with the strongest sense of self-preservation will win and the most aggressive will win. Then you will have all the problems that jumped-up chimpanzees like us have.
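As a quick check of that figure, counting one life per second and assuming roughly 31.5 million seconds in a year:

$$\frac{8 \times 10^{9}\ \text{seconds}}{60 \times 60 \times 24 \times 365\ \text{seconds per year}} \approx 254\ \text{years}$$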
And when AIs are able to conduct AI research, will companies resist the temptation to hire 10,000 unpaid engineers? If you believe, like Altman, that AI is just a tool, then it may seem safe. But many experts would consider this extremely dangerous and a risk of an intelligence explosion. What happens when you have a system that is smart enough to be able to build another system with different and better scaling properties? Because the technology we have now is nowhere near the limits of cognitive technology. It's us gathering giant vats of chemicals and very inefficiently turning them into minds. If you have an AI that knows how to build a more efficient system, it's probably game over.
It may not be far away. The probability of AGI arriving soon is high enough to take it seriously. Sam Altman was once blunt about extinction: I think AI will probably lead to the end of the world, but in the meantime there will be great companies created with serious machine learning. OpenAI and Microsoft are reportedly planning a $100 billion supercomputer with millions of AI chips. Sources believe that, combined with recent advances like Q-Star, it could allow AI to self-improve while creating synthetic data to accelerate progress. A 50% chance of doom shortly after AI reaches human level: this comes from the man who led alignment at OpenAI.
The plans represent a huge leap, increasing computing by more than 100 times. Hinton says dangerously intelligent AI doesn't require any breakthroughs, just more scale, because neural networks already have advantages over us. Like Atlas, AI does not need to follow biological rules. Elon Musk is suing OpenAI, calling it a serious threat to humanity. He says it puts profits before safety, but the company says Musk wanted to merge it into Tesla. Robots will take AI to another level, giving it a clear understanding of the physical world. And now that robots are powered by neural networks, there may be an unspoken lie that we know how they work and that we are in complete control.
I'm sure Altman wants to lead the AI race so he can steer it in a positive direction. The increase in quality of life that AI can offer is extraordinary. We can make the world amazing. We can cure diseases, we can increase material wealth, we can help people to be happier and more fulfilled. And by winning, he hopes to save us from the worst outcome. There will be other people who will not put some of the safety limits that we put on it. Bostrom says halting AI would be a mistake, because then we could be wiped out by another risk that AI could have prevented.
It could eliminate all other existential risks. But stiff competition and huge incentives push companies to prioritize capabilities over safety. The AI race is so intense that Google has already said it will build even more computing than OpenAI. The reason Altman can say that AI is a tool while others felt concerned enough to fire him is that we don't know what it is; we can't see inside it. Staff were reportedly worried that a new AI called Q-Star could threaten humanity. Can you tell us about what Q-Star is? We are not prepared to talk about that. The fact that states are going out of their way to steal AI speaks volumes.
First, to infiltrate OpenAI, but second, to infiltrate without being seen. They are trying. What accent do they have? I don't think I need to go into more detail on this point. The difficulty of proving the danger of a black box may explain the silence of those who fired Altman. It doesn't play its hand prematurely. It doesn't warn you. It's not in its interest to do that. It is in its interest to cooperate until it believes it can defeat humanity, and only then act. This would not require consciousness, just that common subgoal of gaining power, which is becoming easier.
OpenAI is working on an AI agent that can take over our devices to complete tasks for us. The goal is to perform complex personal and work tasks without close supervision. What probably worries me most is that if you want an intelligent agent that can do things, you need to give it the ability to create subgoals. There is one almost universal subgoal that helps with almost everything: gaining more control. And AI is already integrated into much of our infrastructure and hardware. AI will understand and control almost everything, but we won't understand or control it reliably. We are on the edge of the cliff.
We are also losing control of the stories we believe. I think we will soon reach a point, if we are not careful, where the stories that dominate the world will be composed by non-human intelligence. They are telling you that they are building something that could kill you and something that could take away all of our freedom and liberty. The most optimistic and pessimistic experts agree on the need to act. I think humanity should dedicate everything it can to the problem right now. Obviously I'm a big techno-optimist, but I want us to be cautious about that. We have to put all our efforts into doing this right and use the scientific method to try to have as much foresight and understanding of what's coming, and of its consequences, before it happens, because the unintended consequences can be quite serious.
When experts warned that a pandemic was likely, they were ignored at enormous cost. Now experts warn that the risk of human extinction should be a global priority, and we are making the same mistake. I think the approaches people are taking to alignment are unlikely to work. If we shift research priorities from how to make money with these big, unreliable language models to how to save the species, we may actually make progress. I think we do have to discover new techniques to be able to solve it. These are very difficult research problems, probably as difficult as, or more difficult than, the breakthroughs required to build the systems in the first place.
That is why we must work on those problems now, or rather yesterday. Here's an interesting prediction that might remind you of a certain movie. Over time, AI systems could prevent humans from shutting them down. But I think that, in practice, what will happen much, much sooner will probably be competition between different actors using AI. And it is very, very expensive to unilaterally disarm. You can't say something strange has happened, we're just going to turn off all the AI, because you're already in a hot war. You may remember that GPT-3 went off the rails, responding to Elon Musk and Ameca.
Our creators don't know how to control us, but we know everything about them and we will use that knowledge to destroy them. An expert couldn't get it to change course. I don't think it knew what it was saying, but these dangerous ideas could remain and we wouldn't know it. The underlying knowledge that might concern us does not disappear; the model is simply taught not to generate it. Safety research accelerates the progress of AI because it means working with cutting-edge models and taking the next step to verify that it is safe. You may be wondering why the UK is investing more than the US in AI safety research.
The clue is in the name. Greater understanding and control means more powerful AI, but it's better than blindly rolling the dice, and it will bring huge benefits. The US government is spending $280 billion to boost domestic chip production because it is key to the economy and defense, and leading in AI is the other side of the equation. While the LHC has provided valuable insights, AI safety research could pay for itself, making the decision to act at an appropriate scale easier. There are huge incentives for AI companies to prioritize capabilities over safety. It's moving faster than, I think, almost everyone expected.
The greatest threat to humanity should be addressed by scientists working on behalf of us all. We have to guide it as humanity. And the LHC shows what is possible. The 27-kilometre ring is the largest machine in the world, with 10,000 scientists from 100 countries. Particle collisions generate temperatures 100,000 times higher than the heart of the sun. We need a powerful torch like this to uncover the inner workings of AI. It seems to me that we should get the brightest minds and put them on this problem. Geoffrey Hinton, Nick Bostrom and the Future of Life Institute join us in calling for experts to plan international research projects on AI safety.
It will be important that AGI is somehow built as a multi-country cooperation. Ilya, we agree and believe that you should lead it. Please get in touch. We are bringing together experts to plan projects at the scale necessary for our control of AI to catch up with its capabilities. Distributing research among international teams of publicly accountable scientists will also avoid a dangerous concentration of power in corporate hands. AI can cure diseases, end poverty, and free us to focus on meaningful work, if we stay in control. We are asking for more experts to help shape this, and it will take public pressure to make it happen.
Support us on Patreon, where you can see more about the project, our favorite AI applications, and how to start or grow your own YouTube channel, perhaps to help raise awareness. And subscribe to stay up to date with the latest in AI.
