
Should You Be Afraid Of AI? Yann LeCun And AI Experts On How We Should Control AI

Jul 03, 2024
Can we play video number one? "All open-source AI should be banned because it threatens the profits of monopolistic technology companies." Number two: "Universities should stop researching AI because companies are much better at it." And number three: "To be honest, I don't care about all this AI safety stuff. Never mind, let's focus on quantum computing." Okay, we'll leave those to our panel. There are many, many current challenges with AI, of course, that we have to deal with. Deepfakes are very relevant this year because of elections and fraud and all kinds of other things; if you want to learn more about that, we have a deepfake demo where you can experiment and share your ideas about what to do about it. But that's not where we're going today.
We're going to look a little further into the future. The ultimate goal of AI from the beginning has always been to really solve intelligence and figure out how to make machines that can do everything humans can do, ideally much better. That is both exciting and terrifying, and I have taken the prerogative as moderator to seat the panelists from least concerned to most concerned. I hope you feel... oh wait, you should change places with Stuart, please. Yes. We're not seating you by your deepfake opinions now, but by your real ones. The main goal we have here is not to have another debate about whether we should worry or not, but rather to brainstorm solutions in that spirit. I have a very radical and heretical belief that you all actually agree on a lot more than the average Twitter user probably realizes, and we're going to see if we can find some of those shared things that you agree on and that we can all go out and act on.
I'm going to warm you up with some quick-fire questions that you can basically answer with a yes or a no, so we see who's who here. First question: are you excited about improving AI in ways that make it our tool, so it really complements and helps humans? Yes or no? Yes. Yes. Yes. Yes. Okay, next question: do you think AI in a few years will be much better than it is now? Yes. Yes. Yes. Yes. All right, now let's make it a little harder. If we define artificial general intelligence as AI that can perform basically all cognitive tasks at the human level or better, do you feel we already have it now? No. Absolutely not. No. No. Okay, four nos. Do you think we will probably have it within the next thousand years? Maybe. For sure. Yes. Yes. Do you think we will probably have it within the next, I said thousand, the next 100 years? Maybe. Very possibly. Very probably, barring a nuclear catastrophe. Yes. Okay, so if you had to put a number on it, how many years until we have a 50% chance of getting it, what would you estimate? Not in the short term. Decades. Decades, much less than I used to think. Okay, and you? 5.4 years. Okay, that's a lot of precision. So I think everyone will agree that we seated them in the right order, and you can see that their level of alarm correlates with how quickly they think we're going to have to deal with this. Clearly, if some of the world's leading experts think it could happen relatively soon, we have to take the possibility seriously. So the question is: how do we make sure that this becomes the kind of AI tool that we can control, so that we get all the advantages and none of the disadvantages?
One thing that really struck me here in Davos is that the vast majority of what I hear people are excited about in AI (all the medical advances, eliminating poverty, helping with the climate, creating huge new business opportunities) doesn't require AGI at all. So I'm very curious: could we somehow agree to say, listen, do all the cool AI stuff, but maybe don't build superintelligence until 2040 at the earliest, or something like that? Is that something everyone feels they could live with, or does someone feel there's a huge urgency to build something superintelligent as quickly as possible? Would we all go in this direction?
What would you say? I can live with that. Can you say it again? I can live with it. You can live with it; what about you, Stuart? You can elaborate a little more this time. I could live with that, but it's not really relevant. What I think is relevant is the economic forces driving it, and if AGI is worth, as I've estimated, $15 trillion, it's kind of hard to tell people that no, you can't go for that. Yann, how about you? First of all, there is no such thing as AGI. We can talk about AI at the human level, but human intelligence is very specialized, so we shouldn't talk about AGI at all. We should be talking about what kind of intelligence we can observe in humans and animals that current AI systems don't have. There are a lot of things current AI systems don't have that your cat or your dog has, and those animals don't have anything close to general intelligence. So the problem we have to solve is how to make machines learn as efficiently as humans and animals, which is useful for many applications.
That is the future, because we are going to have artificial intelligence assistants that we speak to and that help us in our daily lives, and we need those systems to have human-level intelligence. That's why we need it, and we need to do it right. Daniela? Well, I'm with Yann, but first let me say no: I don't think it's feasible to say that we're going to stop science from developing in one direction or another. Knowledge has to keep being invented; we have to keep pushing the limits, and this is one of the most exciting aspects of working in this field right now. We want to improve our tools, we want to develop better models that are closer to nature than the models we have now, and we want to try to understand nature in as much detail as possible. I think the feasible way forward is to start with simpler organisms in nature and work our way up to more complex creatures, like humans. Stuart? So I want to disagree with something there.
There is a difference between knowing and doing, and that is an important distinction, but I would argue that there are actually limits to what it is a good idea for the human race to know. Is it a good idea for everyone on Earth to know how to create, in their kitchen, an organism that will wipe out the human race? Is that a good idea, Daniela? No, of course not. Of course it's not, right? So we accept that there are limits to what it is a good idea for us to know, and I think there are also limits to what it is a good idea to do. Should we build nuclear weapons big enough to ignite the entire atmosphere of the Earth? We could do it, but most people would say no, it is not a good idea to build such a weapon. So there are limits on what we should do with our knowledge. And then the third point is whether it is a good idea to build systems that are more powerful than human beings and that we don't know how to control.
Well, I have to answer you there, and I will tell you that every technology that has been invented has positive and negative aspects. We invent knowledge, and then we find ways to ensure that the inventions are used for good and not evil, and there are mechanisms to do that; there are mechanisms the world is developing for AI. With respect to your point about machines that are more powerful than humans: we already have them. We already have robots that can move more precisely than you can, robots that can lift more than you, and machine learning that can process a lot more data than we can. So we already have machines that can do more than we can, but those machines are clearly not more powerful than humans, in the same way that gorillas are not more powerful than humans even though they are much stronger than us, and horses are much stronger and faster than us, yet no one feels threatened by horses.
I think there is a huge fallacy in all of this. First of all, we don't have a model for a system that has human-level intelligence. It doesn't exist; the research doesn't exist; the science still needs to be done, and that's why it's going to take a long time. So if we're talking today about how to protect ourselves against intelligent systems that are taking over the world, or about their dangers, whatever they are, it's as if we were talking in 1925 about the dangers of crossing the Atlantic at near the speed of sound when the turbojet had not yet been invented. We don't know how to make those systems safe because we haven't invented them yet.
Now, once we have a blueprint for a system that can be intelligent, we will probably also have a blueprint for a system that can be controlled, because I don't think we can build intelligent systems that don't have control mechanisms within them. We are built that way as humans: evolution built us with certain drives, and we can build machines with the same drives. So that is the first fallacy. The second fallacy is that it is not because an entity is intelligent that it wants to dominate, or that it is necessarily dangerous. It can solve problems; you can set goals for it, and it will accomplish those goals. The idea that somehow the system is going to invent its own goals and take over humanity is just absurd; it's ridiculous. What worries me is that the danger of AI does not come from some evil property that has to be removed from it. AI is dangerous because it is capable, because it is powerful. What makes a technology useful is also what makes it dangerous: the reason nuclear reactors are useful is the same reason nuclear bombs are dangerous.
It is the same property. As technology has advanced over decades and centuries, we have gained access to increasingly powerful technologies, more energy, more control over our environment. What this means is that the best and worst things we can do, on purpose or by accident, grow along with the technology we build. AI is a particularly powerful technology, but it is not the only one that could become so powerful that even a single accident is unacceptable. There are technologies that exist today, or will exist at some point in the future; let's not argue about whether that's in 10 or 20 years. My children will be alive in 50 years, and I want them to live in a world where not a single accident can be the end. If you have a technology, be it AGI, future nuclear weapons, biological weapons, or anything else, you can build weapons or systems so powerful that a single accident means game over, and our civilization, in the way we currently develop technologies, is not set up to deal with technologies that do not give you retries. This is the problem. If we have retries, if we can try again and again, and we fail and some things explode and maybe a couple of people die, but that's okay, then yes, I agree with Yann and Daniela that our scientists can figure this out.
I think Yann's lab will figure this out; I think these people will figure it out. But if one accident is too many, I don't think they'll handle it. To that point, and to the point that Stuart and Connor just made: you can imagine an infinite number of scenarios where all those things go wrong. You can do this with any technology. You can do it with AI; obviously, science fiction is full of it. You can do it with turbojets: turbojets can explode. There are so many ways to build those systems that would be dangerous and wrong and kill people, and so on, but as long as there is at least one way to do it right, that's all we need. And there is technology that was developed in the past, developed to the prototype level, and then it was decided that it should not be deployed because it would be too dangerous or uncontrollable.
Nuclear-powered cars: people were talking about those in the 1950s, there were prototypes, and they were never deployed. Nuclear-powered spacecraft: the same. There are mechanisms in society to stop the deployment of a technology if it is really dangerous, and there are ways to make AI safe. In fact, I agree that it is important to understand the limitations of current technology and to set out to develop solutions, and in some cases we can develop technological solutions. For example, we have been talking about the bias problem in machine learning, and we actually have technological solutions for that; we're talking about size, we're talking about interpretability.
The scientific community is working to address the challenges of current solutions and is also looking to invent new approaches to AI, new approaches to machine learning that have other kinds of properties. In fact, at MIT there are several research groups whose goal is really to push the boundaries and develop solutions that can be deployed in safety-critical systems and on edge devices. This is very important, and there is really excellent progress, so I am very optimistic about the use of machine learning and artificial intelligence in safety-critical applications. So I would say I agree with one thing Stuart said, but also with many of the observations Yann shared.
Several of you are independently saying that we need new architectures, new technical solutions. So, in closing, I would love it if some of you would very briefly share thoughts on what kind of new architectures are most promising, so that we get the kind of AI that complements us rather than replaces us. Do you want to go first? Sure, yeah. I really can't give you a working example, because this is work in progress, but these are systems that are driven by objectives: at inference time they have to satisfy a goal that we give them, but also satisfy a bunch of guardrails, and so they plan their response instead of just producing it autoregressively, one word after the other, and they can't escape those guardrails unless you hack them or something like that. So this would be an architecture that I think would be considerably safer than the current kind, and those systems would be able to plan and reason, remember, maybe understand the physical world, all kinds of things that current systems can't do. Future systems won't follow the blueprint we currently have, and they will be controllable because they will be goal-driven (a toy sketch of this idea follows).
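As a purely illustrative aside: here is a minimal sketch of that objective-driven idea, under our own assumptions rather than any published design. The system searches at inference time for a plan that minimizes a task cost, with guardrail violations folded into the objective as penalties, instead of generating output autoregressively. Every name and number in it is made up for the example.

```python
import numpy as np

def task_cost(plan: np.ndarray, goal: np.ndarray) -> float:
    """Distance between the plan's predicted outcome and the goal."""
    return float(np.sum((plan - goal) ** 2))

def guardrail_cost(plan: np.ndarray) -> float:
    """Positive whenever the plan leaves the allowed box |x_i| <= 1.
    A stand-in for learned or hand-written safety constraints."""
    return float(np.sum(np.maximum(np.abs(plan) - 1.0, 0.0) ** 2))

def objective(plan, goal, penalty):
    # The guardrails are part of the objective the planner minimizes,
    # so the system cannot simply ignore them at inference time.
    return task_cost(plan, goal) + penalty * guardrail_cost(plan)

def plan_by_optimization(goal, dim=8, steps=2000, lr=0.005, penalty=100.0):
    """Inference-time planning as gradient descent on the objective,
    using finite-difference gradients (fine for a toy this small)."""
    plan = np.zeros(dim)
    eps = 1e-4
    for _ in range(steps):
        grad = np.zeros(dim)
        for i in range(dim):
            d = np.zeros(dim)
            d[i] = eps
            grad[i] = (objective(plan + d, goal, penalty)
                       - objective(plan - d, goal, penalty)) / (2 * eps)
        plan -= lr * grad
    return plan

goal = np.full(8, 2.0)              # the goal lies outside the safe box
plan = plan_by_optimization(goal)
print(plan.round(3))                # components settle near the bound, about 1.01
```

The design point is that the guardrails constrain the search itself: the planner converges to the best plan inside the safe region rather than the unconstrained optimum.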
Daniela? Liquid networks. They are brain-inspired, modeled on the brains of small creatures, and they are provably causal, compact, interpretable, and explainable, and they can be deployed on edge devices. Because we have these great properties, we also have control. I'm also excited about connecting some of the tools we are developing in machine learning with tools from control theory, for example combining machine learning with tools like BarrierNet and control barrier functions to ensure that the output of a machine learning system is safe (a toy sketch of that idea follows the closing remarks). The real technology that I think is most important is social technology. It's very tempting for tech people, tech nerds like all of us on this panel, to try to think of solutions that don't involve humans, but the truth is that the world is complicated, and this is as much a political problem as a technological one; if we ignore the social side of the problem, we will reliably fail. That is why it is extremely important to understand that techno-optimism is not a substitute for good humanism. So thank you to our wonderful panel for provoking us. I hope you take away from this that although they don't agree on everything, they all agree that we want to create tools that we can control and that complement us, and they all have very nerdy and interesting technical ideas for how to do this. Thank you all.
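To make the barrier-function idea above concrete, here is a toy sketch of a safety filter in the spirit of control barrier functions; the dynamics, the barrier, and the stand-in learned policy are all invented for illustration, and this is not an implementation of BarrierNet. A learned controller proposes actions, and the filter minimally modifies them so the state provably never leaves the safe set.

```python
def h(x: float, x_max: float = 10.0) -> float:
    """Barrier function: h(x) >= 0 exactly on the safe set x <= x_max."""
    return x_max - x

def cbf_filter(x: float, u_nominal: float, dt: float = 0.1,
               alpha: float = 0.5) -> float:
    """For the toy dynamics x' = x + u*dt, enforce the discrete CBF
    condition h(x') >= (1 - alpha) * h(x), which keeps h nonnegative
    for all time. Returns the admissible action closest to u_nominal."""
    u_max = alpha * h(x) / dt       # largest action satisfying the condition
    return min(u_nominal, u_max)

def learned_policy(x: float) -> float:
    """Stand-in for a neural controller that greedily pushes right,
    ignoring the safety boundary entirely."""
    return 5.0

x, dt = 0.0, 0.1
for _ in range(100):
    u = cbf_filter(x, learned_policy(x), dt)
    x += u * dt
    assert h(x) >= 0.0              # the filter preserves this invariant
print(f"final state x = {x:.3f}, still inside the safe set x <= 10")
```

Far from the boundary the filter passes the learned action through unchanged; near the boundary it scales the action down so the barrier value can shrink but never go negative, which is the safety guarantee the panel alludes to.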
