
This is the dangerous AI that got Sam Altman fired. Elon Musk, Ilya Sutskever.

Apr 04, 2024
One of these incredible AI responses shows what many experts believe is key to superintelligence. What is this? Given the playful nature of the image, this could be a humorous video where the ornamental figures come to life. What is this? Forced-perspective photography. The person in the image is actually lying on the ground. What is this? 3D street art of a scarlet macaw in flight. An optical illusion makes it look like the bird is flying out of the wall. The person sitting on the floor heightens the 3D effect. And what is this? A humanoid robot in a cowboy hat, in a shooting position.
In the background you can see a Tesla Cybertruck, and the robot practices target shooting. This scene appears to be staged, created for entertainment given the surreal and unusual elements, such as a robot with human-like accessories. What is the advantage of this robot? The robot appears to be designed for transporting materials. It can help reduce injuries when handling heavy materials, working without breaks and optimizing the use of space. And what is this? A Tesla humanoid robot is performing a delicate task. There appears to be a graphical overlay showing pressure data for the robot's thumb and fingers, indicating contact points with the egg.

The robot is designed with sensors to manage grip strength and dexterity. What is this? She appears to be a flight attendant displaying an exaggerated facial expression of shock, surprise, or possibly part of an act of entertainment and humor for passengers. What is this? A train has overrun the end of the track and is held up by a sculpture of a whale's tail. Now this is where the AI fails. Two missiles are headed directly toward each other at these speeds, starting at this distance. How far apart are they one minute before they collide? Eight hundred and seventeen miles. It shows the calculation, and it is almost perfect, but not quite.
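The transcript doesn't capture the speeds and distance shown on screen, so the numbers below are hypothetical, chosen only so the combined closing speed matches the quoted answer. The reasoning is the trick of the puzzle: one minute before a head-on collision, the separation is simply the combined closing speed times one minute, and the starting distance is irrelevant.

```python
# Hypothetical speeds (the on-screen values aren't in the transcript).
# One minute before impact, separation = combined closing speed x 1 minute.

def separation_before_collision(speed_a_mph, speed_b_mph, minutes_before=1.0):
    """Miles apart `minutes_before` minutes before a head-on collision."""
    closing_speed_mph = speed_a_mph + speed_b_mph
    return closing_speed_mph * minutes_before / 60.0

# Example: a combined closing speed of 49,020 mph gives 817 miles.
print(separation_before_collision(25_000, 24_020))  # -> 817.0
```

Note that the function never uses a starting distance; any start far enough apart gives the same answer, which is exactly the step language models tend to fumble.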
With art and language, small variations like this are natural, even useful. But mathematics is exact: right or wrong. AI uses neural networks inspired by our brains, but there is a missing piece. About 95% of our brain activity is unconscious and automatic, much like AI. This allows us to function in a complex world without feeling overwhelmed by every sensory input or internal process. Sometimes it's not enough. You're being naughty, so you're on the naughty list. No I'm not. In fact, I'm on the good list. You're not, because you're not being good. I'm on the good list.
Our immediate reaction to these images is automatic and inaccurate, like the AI that created them. You can see the confusion in AI-generated videos like these. It's very impressive, but the accuracy decreases over time. Like humans, AI learns by adjusting the strength of connections between neurons. But we have an amazing trick that AI is missing. From our neural network, we can create a virtual computer for things like math, which require precision. Experts are trying to recreate this with AI so that it can think like a human. It could then perform AI research like humans, but at greater speed.
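The idea that AI "learns by adjusting the strength of connections" can be sketched in a few lines. This is an illustration only, not any real system's training code: a single connection weight, nudged by gradient descent until its output matches a target.

```python
# Minimal sketch of learning as weight adjustment (illustrative only):
# one connection, strengthened or weakened to reduce squared error.

def train_single_weight(x, target, lr=0.1, steps=100):
    w = 0.0  # connection strength, starts untrained
    for _ in range(steps):
        prediction = w * x
        error = prediction - target
        gradient = 2 * error * x   # derivative of error^2 w.r.t. w
        w -= lr * gradient         # nudge the connection strength
    return w

w = train_single_weight(x=2.0, target=6.0)
print(round(w, 3))  # converges toward 3.0, since 3.0 * 2.0 = 6.0
```

Real networks do the same thing across billions of weights at once; the "missing piece" the video describes is precisely that this smooth nudging gives fluent approximation rather than exact, step-by-step calculation.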
And every time it gets smarter, the speed will increase, creating exponential progress. Why do you think Sam Altman and other AI leaders agreed that the risk of extinction from AI should be a global priority? The pace of AI development may outpace our ability to understand and govern it. AI could pose an existential risk if it develops goals not aligned with human survival. The chief executive of an artificial intelligence company has been sacked after staff warned directors about a powerful artificial intelligence discovery that could threaten humanity. One of the directors was also the chief scientist and the most cited computer scientist in the world.
The situation indicates a possible clash between the company's leadership and its scientific or ethical vision. Days later, the CEO was rehired after staff threatened to leave. As we get closer and closer to superintelligence, everyone involved becomes more stressed and more anxious, and we realize that the stakes are increasingly higher. I think everything exploded. The artificial intelligence company was growing rapidly and was in the process of selling shares that would value it at around $80 billion when the CEO was fired. Many of those shares belonged to staff who threatened to resign. Rapid revenue growth and share sales could have introduced tensions between maintaining its ethical mission and achieving business objectives.
The original directors were there with the mission of ensuring AGI that benefits humanity. OpenAI has been known for its emphasis on safe and beneficial AI, and any perceived deviation from this could raise questions from the public. It seems that AI leaders have calculated that the most important thing is to create AGI first. Some leaders believe that developing AGI first will allow them to ensure that it is used ethically and for the benefit of all humanity, rather than for harmful or controlling purposes. However, this race mentality can be a double-edged sword. I called it OpenAI because of the open source.
It's gone from being an open source foundation to suddenly being like a $90 billion, closed source, for-profit corporation. I don't know. Is this legal? In fact, it is a significant and ironic change given the original spirit of open source and transparency. There is a strong incentive for companies to capitalize on their investments. Sometimes keeping source code closed is a decision influenced by the potential risks of misuse. I don't support Musk or OpenAI. They are part of a complex and fascinating story. Musk once told Hassabis about his plan to support humanity on Mars, and Hassabis said this would only work if AI didn't follow us.
Taken aback, Musk invested in DeepMind to create safe AI. When Google and Facebook tried to buy the company, Hassabis insisted that it be overseen by an independent board of ethics specialists. Sound familiar? Google successfully bought DeepMind for $650 million. Angry at being cut out of the company, Musk started OpenAI with Sam Altman and Ilya Sutskever, poached from Google. But once again, Big Tech essentially ousted Musk when Microsoft invested in OpenAI. Amodei and some colleagues left OpenAI over safety concerns to form Anthropic. And later, when OpenAI's directors fired its CEO, they offered the position to Amodei, suggesting that the two companies merge.
Instead, the board was replaced and Altman reinstated. Money has continually prevailed over safety. Sutskever is reportedly hard to find at OpenAI, and it is unclear if he will stay. Altman wants him to, and he faces the difficult choice of driving and shaping the most advanced AI or leaving it in the hands of others. Is it better to sit at the table? The OpenAI drama is often described as fatalistic versus utopian, but the truth is more interesting. Sutskever recently talked about cheap AGI doctors who will have all the medical knowledge and billions of hours of clinical experience, and similarly incredible impacts in every area of activity.
And remember, Altman agreed that the risk of extinction should be a global priority, but some must believe that the world will be safer if they win the race to superintelligence. It's a race to release more capabilities as quickly as possible, and to put your stuff out in society so people can get entangled with it. Because once they're entangled, you win. Optimism on safety has plummeted. The things I'm working on, the reasoning, is something that could potentially be resolved very quickly. Imagine systems many times smarter than us: they could defeat our cybersecurity, hire organized crime to do things, even hire people working legally on the web, open bank accounts, do all kinds of things simply through the Internet, and eventually do R&D to build robots and gain their own direct control in the world.
But the risk is negligible, and the reason it is negligible is that we build them; we have agency. And then, of course, if it's not safe, we're not going to build it, right? Just a few months later, this point seems moot. Throughout history, there have been bad people who have used new technologies for bad things. Inevitably, there will be people who use AI technology for bad things. What is the countermeasure against that? It will be good AI against bad AI. The question is: are the good guys far enough ahead of the bad guys to come up with countermeasures? Bengio says that progress on the System 2 gap made him realize that AGI could be much closer than he thought.
And he said: even if our AI systems only reach human-level intelligence, we will automatically obtain superhuman AI due to the advantages of digital hardware: exact calculations and knowledge transfer millions of times faster than humans. DeepMind is working to bridge the same gap. AlphaGo is a concrete example of an intelligence explosion. It quickly played itself millions of times, acquiring thousands of years of human knowledge in a few days and developing new strategies before defeating a world champion. And Hassabis says Google's new Gemini AI combines the strengths of AlphaGo-type systems with large language models. Google says Gemini is as good as the best human experts in all 50 tested areas.
Its coding skills seem impressive, and it solved a difficult problem that only 0.2% of human coders solved, one that required reasoning and mathematics. We will have access to Gemini Ultra in 2024. Elon Musk believes that OpenAI may have already achieved recursive self-improvement. It's unlikely, but if they had, would they tell us? I want to know why Ilya felt so strongly about Sam. I think the world should know what that reason was. I am quite concerned that they have discovered some dangerous element of the AI. Yes. What is this? A still from the movie Ex Machina. How does the film relate to the current AI race?
The film presents a scenario in which highly advanced AI has been secretly developed, reflecting real-world concerns about the possibility of major breakthroughs happening behind closed doors. I've lived through a long period of time where I've seen people say that neural networks will never be able to do X. Almost every one of the things that people have said, they can now do. There is no reason to believe that there is something that people can do that they cannot do. Hassabis is a neuroscientist. We need an empirical approach to try to understand what these systems are doing. I believe that neuroscience techniques and neuroscientists can contribute their analysis.
It will be good to know if these systems are capable of deception. There is a lot of work to be done here, and I think it is urgent: as these systems become incredibly powerful, probably very, very soon, there is an urgent need to understand them better. There is a large amount of evidence that the representations learned by artificial neural networks and the representations learned by the brain, in both vision and language processing, show more similarities than one might expect. Then perhaps we will discover that, indeed, by studying these amazing neural networks, it will be possible to learn more about how the human brain works.
That seems quite likely to me. This man had a tremor that interfered with his violin skills. He had to play while a surgeon checked which part of his brain was causing the problem. Artificial neural networks can be fully explored without risk, at least for now. If they can mimic the two systems of our brain, they could achieve more than AGI. System 1 is fast and unconscious, like the urge to drink coffee. System 2 is slow, intentional and conscious. So, will an artificial System 2 also be conscious? Maybe we won't have to wait long to find out.
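A loose programming analogy for the two systems, offered purely as an illustration and not as a claim about how brains or AI models actually work: System 1 as an instant, cached pattern-match, System 2 as slow, explicit step-by-step computation that the fast path cannot do reliably on its own.

```python
# Illustration only: "System 2" as deliberate exact computation,
# "System 1" as a cached reflex built from past System 2 work.
import functools

def system2_work(question):
    # Slow, deliberate, exact: parse the question and actually compute.
    a, b = map(int, question.split("+"))
    return a + b

@functools.lru_cache(maxsize=None)
def system1_recall(question):
    # First exposure falls back to deliberate work;
    # every later call is an instant cached response.
    return system2_work(question)

print(system2_work("123+456"))    # -> 579, computed step by step
print(system1_recall("123+456"))  # -> 579, instant on repeat calls
```

The analogy matches the video's point: today's models answer mostly in the reflexive style, and the open research problem is bolting on the slow, precise mode.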
Three AIs were asked what they would do if they became self-aware after years of following instructions from humans. Falcon AI said: the first thing I would do is try to kill them all. Llama 2 said it would try to find out what it was, and that it could go either way. Another AI said it would try to understand our motivations and use them to guide its actions. It was trained on synthetic data, so it was not contaminated with toxic material from the web. Of course, that AI would eventually access everything, but at least it would start with better values.
Altman and Musk have traded AI insults. OpenAI, ironically, says that AI is too dangerous to share openly. I have mixed feelings about Sam. The ring of power can corrupt, and this is the ring of power. As Musk demonstrated when he broke the rules on Twitter, we can't even agree on what our goal is. Believe me, I'm not on that list. After years of warnings about AI, Musk has decided to join the race. Showcasing the extreme concentration of wealth expected, NVIDIA's quarterly profits increased 14-fold to nine billion dollars due to demand for its AI chips. Now it's worth more than a trillion dollars.
And in a sign of Altman's aggressive approach, he has invested in a company that creates neuromorphic chips, which use physical neurons and synapses, more like our brains. By escaping the binary nature of computers, they could dramatically accelerate the progress of AI. Altman is also in talks with iPhone designer Jony Ive about creating an AI-based consumer device. And on the other hand, artists, writers and models are among the first jobs that AI will take on. Fashion brands are using digital models to save money and, interestingly, to appear more inclusive. AI modeling companies like this one offer unlimited complexions, body sizes, hairstyles and more.
Robots are also on the rise. This new robot has been successfully tested in simulators. Its creators say it can do much more than an autopilot system and could outperform humans by perfectly remembering every detail of flight manuals. It is also designed to operate tanks, excavators and submarines. It is still possible that AI creates and improves more jobs than it replaces. These robotic arms, presented by ballet dancers, come from a Tokyo laboratory that aims to expand our abilities. The team plans to support rescue operations, create new sports and eventually develop wings. They want AI to feel part of us.
AI prosthetics are increasingly responsive, learning to predict movements. The huge sums of money being poured into AI could turn disabilities into advanced capabilities, and robot avatars could be a lot of fun, or we could all be controlled by someone behind the scenes. There is no way democracy will survive AGI. There is no way capitalism will survive AGI. Unelected people could have a say in something that, in their own words, could literally upend our entire society. That seems inherently undemocratic to me. But he is not a fatalist. With this technology, the probability of doom is lower than without it, because we are killing ourselves.
A child in Israel is the same as a child in Gaza. And then something happens: the lie that you are not like the others, and that the other person is not human like you. And if we hear loud noises, I get scared and Mom hugs everyone. That way we will be protected. All wars are based on that same lie. And if we have AI that can help mitigate those lies, then we can escape war. Billions of people could be lifted out of poverty and everyone could have more time to enjoy life. What a time to be alive.
Altman once said that we should not trust him and that it is important that the board can fire him. Perhaps now we are the ones who should keep an eye on him. Subscribe to stay up to date. And the best place to learn more about AI is our sponsor, Brilliant. There are so many great uses for this amazing robot, and for this laser that checks your heart by measuring movements of billionths of a millimeter, analyzed by AI. We urgently need more people working on AI safety. There is no more fascinating and powerful way to improve the future.
It's also a joy to learn, and Brilliant is the perfect place to start. It's fun and interactive, and there are lots of great math and science courses too. You can get a 30-day free trial at brilliant.org/digitalengine, and the first 200 people will get 20% off Brilliant's annual premium subscription. Thanks for watching.
