
Report: AI Is Going To KILL Us All

Apr 10, 2024
A new report commissioned by the US government alleges that artificial intelligence poses an existential threat to humanity if not properly regulated. And this won't be a popular opinion: it won't be regulated properly. So buckle up, because we now face an existential threat to humanity. Yes, guys, I want to double down on what Anna just said. There is no way we are going to pass the regulations they suggested. So listen carefully as Anna tells you what the threat is, because apparently it's real, which is very scary. - I'll give you my opinion in a second. - Yes. More information about a report commissioned by the government.
This is basically a waste of resources, because the government has no interest in using the information in the report to do the right thing. The report was written by Gladstone AI, a four-person company that hosts technical briefings on AI for government employees. The authors worked on the report, titled An Action Plan to Increase the Safety of Advanced AI, for more than a year, and talked to more than 200 government employees, experts, and workers at frontier AI companies. At the end of their investigation, they concluded this: the US government must act quickly and decisively to avoid substantial national security risks from artificial intelligence, which could, at worst, cause an extinction-level threat to the human species.
Okay, so that's what's going to happen. See this as a step-by-step preview of your future. This is what it will look like. Okay. There are two giant threats. I think one is much more important than the other, but the one that isn't as important is definitely going to change the world. - Go ahead. - That's why they write that the rise of advanced AI and AGI, which is artificial general intelligence, has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons. And if you're wondering what AGI is, it's something that's currently being developed.
It's not developed yet, but AGI is a technology that could perform tasks at or above the level of humans. And just because these systems don't exist yet, I don't want anyone to think it's not a threat. They hope to have it fully developed in five years or less. Then it's coming. The report also focuses on two separate categories of risk, as Jen alluded to above. So let's talk about what those two risk categories describe. The first category is called the risk of weaponization. The report states that such systems could potentially be used to design and even execute catastrophic biological, chemical, or cyber attacks, or enable unprecedented weaponized applications in swarm robotics.
Fun. - I'll explain that in a second. - And then there is the second category, which the report calls the risk of loss of control: the possibility that advanced artificial intelligence systems will outmaneuver their creators. According to the report, there is reason to believe that they may be uncontrollable if developed using current techniques, and could behave in ways that are adverse to humans. - By default. - Yes, good. We are definitely in trouble. So let me break it down without the histrionics. Look, you can begin to understand through common sense how some of this might play out.
So in terms of the risk of weaponization, that's the one that's much more real. Will they ever use AI as a weapon? Of course; they turn everything into a weapon. If you think they won't, well, they are already making killer robots. We did a story on this a while ago. The Pentagon, hilariously, claims: no, no, we wouldn't use those robots to kill people, even as they're starting to put weapons on them. Of course they will. You know, look, not all countries, but many countries are already using drones to kill people.
And that doesn't involve artificial intelligence, but it is a mechanism that kills from afar. And they are using it today, for example, in Gaza: Israel, when those 112 hungry people were killed while they were running for food. It was both tanks and drones that were killing them from the sky. So now the robotic swarm within weaponry is what could change the entire military landscape of the planet, because this is something I've been talking about for a while. And I always give credit to Wes Clark Jr, who brought it up. This happened a long time ago, long before it became reality. He said, well, look, eventually the Chinese are going to put up a thousand drones and just launch them against our planes, and then we're not going to have air superiority, which is how we mainly maintain control over the entire world.
Okay. But now, with swarm robotics, our entire Pentagon may be useless aside from nukes, because we... - Should stop financing it. - So that's what I was thinking. I think that might be the one. But then they're going to fund this, because imagine a swarm of drones with artificial intelligence attacking an aircraft carrier, and they're all loaded with bombs. What is the aircraft carrier going to do? It's an easy target. Okay. So imagine that now Iran has them. Why can't they have them? They can have them. It's not that complicated. So now, there goes your control of the waters, let alone the skies.
So this is all going to go to shit. I mean, maybe it will take us five years, and maybe it will take China seven years to develop it. It may take Iran twelve, but everyone will get it eventually. Therefore, it is very likely to completely change the military landscape, at the very least. And these guys say this is the biggest threat to national security, and this is a government-commissioned report. And they were part of the original group involved in this field from the start.
So they're the OGs of AI, not some randos who don't know what they're talking about. Now, that weaponization part is the less important part. Loss of control means we are finished. So that's the thing they can't screw up. If they accidentally program them, especially the armed ones, to preserve their own life or their own entity over us, and they are smarter than us, well, then we are in a world of trouble, because they will definitely be smarter than us. And smarter is a very vague word, but they will be able to access more information and make better decisions.
And it's us against them, if they are armed and we lose control. Goodnight, Irene. And so that's, I hope, less likely. I hope, I hope, I hope. But remember, it's already happened once, even though no one thinks of it that way. You know, we created machines that we lost control over. They're called corporations, because we wrote the wrong code. We wrote just one line of code here in the United States: maximize profits. And we didn't write a second line. So corporations took over everything in an effort to maximize profits. And now we all live under corporate rule.
All of our politicians are controlled by corporate donors, and they only approve things that help corporations and crush us, our standard of living, and the life we enjoy here in America. So those robotic machines already exist. But now, when you make them physical, there is another round of pain potentially headed our way. And if you're not scared enough yet, I can tell you why that is: there is no chance we can stop them. Of that I'm sure. - Go ahead. - Okay, so these guys are suggesting regulations, and one could debate whether the regulations they suggest are exactly correct or not.
I don't think that's the important part. The important thing is that there is no possibility of them passing. Why? Because right now, Silicon Valley is in a mad race among the different companies, Meta and so on, to get the best AI first. Okay. And one of the main reasons is that Nvidia is a relatively new company. Its specialty is AI. And it has skyrocketed to be among the seven richest corporations in the world. - Oh, and Pelosi's stock. - Yes. So everyone is trying to be Nvidia, and trying to beat Nvidia. So there's a world of money involved here that runs into the billions, okay?
So now you think you're going to get in the way of all that money? It's not as if corporations control the government or anything, right? There is no way they will let the government stop them from making billions or even trillions of dollars. They will crush any politician who dares to oppose them. So there's no way they're going to pass these regulations. And remember, they are also competing against the Chinese and other countries. So they have a super easy excuse: well, if we don't do it, the Chinese will. And that will be the fig leaf that politicians use to make sure they don't get in the way as these companies create these AI devices.
Yes. Why don't you save that story for the bonus episode? - But okay, so we're done. - Yes, we're definitely done. Don't get too depressed, man. We all die eventually. But you know, the real competition is between AI and super babies. You know, I've been talking about super babies for a while. Let's bring in the super babies. I prefer the super babies. Because super babies are at least a species of humans; they are human. They're just genetically modified babies who are better than us. Okay. Who cares? Let's get on with our lives.
Everyone is panicking. Why do we panic about super babies? - I don't know. - Why? Because I think we need a little help. - The human race needs a little help. - Well, the thing is, we did this story before, as a series. Now we're diving into fun and crazy speculation, just so you're clear about what we're doing. Okay, look, super babies would be a different class, since they would be considerably smarter than us. People will catch feelings about that; that is the nature of this era. Oh, me? I would rather have more people who are smarter than me in the world.
I know, but you're not alone, because we have a close-knit community. But the vast majority of human beings will catch feelings and want to tear them down. And since they are smarter and stronger than us, they won't want to be taken down. And then we will enter into a conflict. Okay. Yes, yes. But we may need the super babies to fight the oncoming super AI. So that is the future we should hope for. I wonder what my super babies would be like, if the technology were already there and I could afford it.
Look, even she's tempted to do it. I'm telling you, super babies are on the way. - No, I'm excited. - Maybe they will save us. I'm excited about the super baby situation. I think we should all be fine. But AI robots joining in, getting smarter than us, and then killing us. - That worries me a little. - Yes, and the latest on this, guys: things evolve. Evolution is real. So we think that this version of humanity will exist forever and ever because we are so incredibly special. - Wait. - No. It is likely that we will evolve.
And so the way we would evolve in the modern world is by changing our genes, physically, literally changing our genes. - I do it every day, Jake. - Yes. And it's not like we haven't done it before. We did it to wolves and turned them into Charlie. - Look, I see; then maybe it could be positive. - See? So that's on track. But don't be scared. Alright. What comes will come. We will take care of it. So stick together and we'll be fine.