
Max Tegmark interview: Six months to save humanity from AI? | DW Business Special

Mar 19, 2024
This is not science fiction. Intelligence is not something mysterious that can only exist in the human brain. It is something we can also build. We are basically building alien minds that are much smarter than us, with whom we will have to share the planet. The pessimism is because basically everyone driving the race toward this cliff denies that a cliff exists. And they can't stop: no company can stop on its own, because the competition will eat their lunch and their shareholders will kill them. Otherwise there may simply be no human beings on the planet. This is not an arms race, it is a suicide race.
Billions of dollars are being invested in artificial intelligence, but are private companies putting the world at risk in a race to create better, even so-called god-like, artificial intelligence? My guest today is Max Tegmark. He is a Swedish-American physicist, cosmologist, and machine learning researcher, a professor at the Massachusetts Institute of Technology, and scientific director of the Foundational Questions Institute. Max, welcome to the program. I want to start with a general question before we really get into the details, and I would ask that you be as brief as possible with your answer. That question is: are a handful of companies leading us down a dangerous path?
Yes. That is quite short; let's move on, then, and start with a little history. Things have moved quickly in recent months after US company OpenAI made its text-generating model ChatGPT public late last year, followed in recent weeks by an updated chatbot based on an even more advanced AI model, GPT-4. Companies including Google and Alibaba have rushed to launch their own AI chatbots, with plans to incorporate them into web browsers and other applications. Back to my guest Max Tegmark, who had a one-word answer to that general question.
We want to go into detail here, but before we do, I want to ask you up front: how much progress do these new generative AI models, like ChatGPT, represent? What looks like a breakthrough in the media is actually fairly steady progress in AI research that has been going on for a long time. You know, back in the 1950s the term artificial intelligence was coined by an MIT professor, and for a long time people would say, man, this is much harder than we thought. But gradually the list of things that humans can do and machines cannot has been getting shorter and shorter, and one of the holy grails, Alan Turing's Turing test, being able to converse like a human being, really mastering language, is the great advance we have now seen manifested in GPT-4 and tools like it. You wrote about the potential of AI more than five years ago in a book called Life 3.0. How do the developments we're seeing now compare to your expectations when you wrote that book?
max tegmark interview six months to save humanity from ai dw business special
Let's just say it has gone even faster than I expected when I wrote that book in terms of the growth of the technology's power. Unfortunately, it has gone much slower than I expected in terms of the ability of policymakers to keep pace and really regulate it in a meaningful way, to make sure we keep all of this safe. Did you take into account the pressure companies would face to move so fast to compete with each other? Yeah, the nightmare scenario that we had nine years ago, for example, when we held the first AI safety conference that brought corporate leaders together with the skeptics, was that we would get exactly this kind of race, and we thought: don't enter a race where you feel you have to take shortcuts on safety for speed. Unfortunately, that's exactly what we have now. As a result, although when I talk to the leaders of the various companies it is clear that they understand the risks and want to do the right thing, and they should be commended for speaking publicly about the risks, they can't stop.
No company can stop alone, because then they are simply going to have their lunch eaten by the competition and be killed by their shareholders. Well, let's get into that a little more. Our viewers probably remember that a few weeks ago a large group of artificial intelligence experts, including our guest Max Tegmark, signed an open letter to AI labs urging them to pause their work on advanced models for at least six months. Among the signatories were Elon Musk and Apple co-founder Steve Wozniak. The letter warns that the dangers posed by advanced AI are being overlooked as companies race to develop exactly what Max has been talking about, and it asks whether unelected tech leaders should essentially be the ones making decisions about the future. The Future of Life Institute published the letter. Max Tegmark, you are president of the Future of Life Institute, which published this letter.
Before we get into it: did it have the result you expected? What have you seen so far? Personally, I was really hoping this letter would help mainstream the conversation about whether we need, for the first time, to slow down the riskiest research. The fact that, for example, Professor Yoshua Bengio signed the letter as its number one signatory, and he is one of the godfathers of deep learning, the type of AI that has given us all of this, and that so many other AI researchers signed with him, really changes the conversation. No one can say anymore that those who want to slow down are just clueless Luddites. These are founders of the field, people who totally understand the wonderful advantages AI can have, that it can help us cure cancer and much more. But we realize we will only get those advantages if we can ensure that the regulation and the wisdom with which we steer this technology toward good uses keep pace with technological progress. Critics have called this AI hype. They've said that chatbots like ChatGPT are essentially engineering achievements, not quite the breakthrough. At the beginning of our program you said these companies are in a dangerous race. Why is it dangerous? These tools are amazing, the public is seeing them for the first time, and maybe there's a definite wow factor there. But isn't it true that they still fall far short of the kind of general intelligence that would, in theory, make AI dangerous?
Of course, they fall short of artificial general intelligence that can do all our jobs better than we can. The biggest danger is not the current systems but what they will soon lead to unless we slow down. It's like warning about nuclear weapons in 1944, when you know they can be built but they haven't been fully built yet. To the people who say that's overkill, my question is: okay, write down some predictions of things you're sure AI won't be able to do one or two years from now, and let's make a bet. And for those who say it's not dangerous yet, well, what about the Belgian man who committed suicide?
Probably as a result of talking to a chatbot. What about all the crazy things that have happened to our democracy, where even much lower-tech AI, with its recommendation algorithms, created these horrible filter bubbles and the polarization we're seeing, really fragmenting our society with more hate than understanding? These are real dangers, things that have already happened and were caused by AI. They are just a small warning of how we can lose more and more control of our society as these systems become more powerful, unless we implement good safety measures, which we totally can. It's a warning sign.
Give me an example of something that scares you when you talk about the future capabilities of AI, or something you've already seen AI do that shows capabilities that could be developed further. Not long ago, for example, researchers who were using AI to develop useful medicines to cure diseases, to help people live as long as possible, flipped a minus sign in it as a test, so that instead of saving people it would try to kill them, and very quickly it invented VX, one of the most powerful chemical weapons known to mankind. This shows that, you know, with intelligence comes power.
Intelligence is not bad or good; it's a tool, and we really need to make sure bad actors don't use it for bad things. The reason we don't sell hand grenades in the supermarket is that not everyone has the wisdom to handle them well, and it would be crazy to think AI is different from biology or physics. We don't sell nuclear weapons in the supermarket, and we have safety regulations about who can do dangerous synthetic biology research. We have to do the same thing with AI. It's been done before, so it's not that we can't do it; there's just a lot of commercial pressure not to, just as there was commercial pressure from companies not to regulate tobacco. For such a complicated technology, how exactly is a framework created?
I mean, if the bad aspects are inherently the flip side of the positive aspects, is it possible to create a framework to control AI, and what does it look like? Does AI have to control AI? What exactly does creating the structure for regulation look like? Well, we have managed to ban biological weapons. We have successfully banned human cloning and altering of the human germline. So we know how to do this: we get all the key actors together, we have constructive conversations, and from there we just need a little more time for this policy-making process to catch up.
That is why our open letter asks for a six-month pause. Are six months enough? You said at least six months in the letter. Is that just a period of time you chose in order to have something, or why six months? Yeah, you have to start somewhere, and this would be the first time we slow anything down in this space; it would really give people a breather, and we can go on from there. I also think we should not lose sight of the much bigger picture. You know, this sounds a lot like the movie Don't Look Up. Have you seen it?
Yes: an asteroid is heading toward Earth, and people say there's no asteroid, don't look up, don't worry. Well, you know, there's a much bigger asteroid-like threat than the climate change that has kept scientists worried for so long. If we build machines that are much smarter than humans, some humans could use that to destroy our democracy and take over our civilization, and soon after, as the machines get smarter, it is very likely that humans lose control of the machines completely. So basically we are building alien minds that are much smarter than us, with whom we will have to share the planet, and it can be really inconvenient to have to share the planet with alien minds much more intelligent than us.
If you're not worried, just ask the Neanderthals; we know how it worked out for them. So my view is exactly that: we humans should make sure to implement safety measures so that we can maintain control over these machines and help humanity flourish, instead of risking losing control over them. Despite very diligent technical work on so-called AI alignment, we have, I confess as an AI researcher, failed to solve this so far. We need a little more time; otherwise there may simply not be any humans on the planet within a few decades. I mean, some viewers will see this, and sometimes I do too, and think that what we've seen so far has been amazing in terms of what it can mean.
It can even show elements of deception, for example, as I was reading recently. Do you think people have a hard time understanding this because they haven't seen it up to this point? Will they begin to understand as they see some of these applications, and do you think the urgency behind more policy will then grow? Yes, but it will probably be too late, unfortunately. Once people see AI that's a lot smarter than us, right, it's like when Homo sapiens were much more intelligent than Neanderthals: the Neanderthals were kind of screwed. You know, we humans have wiped out more than half of all the mammal species on the planet. It's too late now for those other species to say, oh, those humans are smarter than us.
They are cutting down our rainforest; we should do something about this. They should have thought of that before they lost control to us. Now it's our chance to get it right, and I don't want to sound too pessimistic. We're not talking here about, say, a nuclear war, where there are only downsides. There is a huge upside if we get this right. Everything I love about civilization is a product of human intelligence, so if we can amplify our intelligence with artificial intelligence and use it to solve climate change, cure all diseases, eliminate poverty, and help humanity flourish, not just for the next election cycle but for billions of years, that's incredible. Let's not squander all those opportunities by being too anxious
to release things too quickly. Let's do this deliberately, so we can do it safely and get all these benefits. But is it fair to say that you're generally pessimistic, given that companies are competing with each other, given that there hasn't been any regulation so far, given that, as you said, even with more public awareness, regulation won't come in time, given that nations are competing with each other, and given that there are obviously military applications for this? Is it fair to say you're pretty pessimistic when you look at this? The pessimism is not because I see no way to solve this; the pessimism is because basically everyone driving the race toward this cliff denies that a cliff exists, or even that there is an asteroid. That is why what you are doing is so valuable.
You are helping people understand that, ultimately, this is not a race that anyone is going to win. If we race out of control, we are all going to lose. It doesn't matter whether the AI we lose control of, the one that wipes out humanity, is American, German, or Chinese. What matters is that no one builds it, and that we can use all this amazing technology to help every human on the planet dramatically improve their lives. This is not an arms race, it is a suicide race. And I think if we can educate people about that, then everyone will have the right incentives to stop competing and figure out how to make all of this safe.
I want to move on to another aspect, but before that I have a technical question. You've compared this to nuclear proliferation and biological weapons. Can the kind of technology we're talking about here proliferate, or is it so complicated and expensive that state actors will actually stay in control of it? It can very easily proliferate. You know, nuclear weapons don't proliferate as much because it's very hard to get plutonium or uranium, whereas this is more like biological weapons: small, cheap things once you have the code.
Right now, to develop the so-called large language models, you spend hundreds of millions of dollars. But once someone has them, anyone with access can copy them; software respects borders even less than Covid did. So what you have to do is make sure the riskiest things don't get developed in the first place, and there's a lot of discussion right now in the policy space about regulating the compute side, because you can't hide six gigawatts of computer server power; you can see it from space when you're building one of these things. That's the first good place to start, and then of course we can gradually also use a lot of tools, including artificial intelligence tools, to identify dangerous things and prevent proliferation. So the problem is not technical; the problem is really political and sociological, and arises from the fact that most people do not understand how big the downside is if no one cooperates. Well, speaking of different countries perhaps choosing different paths for AI.
Let's take a brief look at China. China's cyberspace administration this week released draft rules designed to manage how companies develop generative artificial intelligence products like ChatGPT. These rules say that AI-generated content must reflect the core values of socialism and must not subvert state power. Yes, what we are seeing here is that Europe, China, and the United States are taking different approaches to regulation. Europe has been at the forefront with the EU AI Act. Originally some lobbyists had managed to insert a loophole saying that ChatGPT and GPT-4 would be completely exempt; we have worked very hard to close that loophole, because it would be ridiculous, so I think that will help. The US has been very resistant to regulating anything, I think because of a very successful tech lobby. The Chinese government, on the other hand, has actually been regulating this quite a bit, because they see, I think very clearly, that this could cause the Chinese government to lose control, and they don't want to lose control, so they are limiting the freedom of companies to experiment wildly with things that are not well understood. I was just going to ask whether they are perhaps taking the approach you are pushing more than the West is, because it is a huge threat to their system. But at the same time they are in military competition with the rest of the world. Won't that push them toward certain applications of artificial intelligence that could be dangerous and proliferate?
Yes, you are right about all the risks there. But you know, the Soviet Union and the US were not exactly best friends who met up for drinks and trusted each other a lot, yet they were still able to avoid a nuclear war and came up with all sorts of useful ways to reduce the risk, because in that case everyone had seen videos of nuclear explosions; everyone knew that no one wins a nuclear war. Once we get to the point with AI where more policymakers realize that completely losing control of ultra-intelligent machines would be much worse than nuclear war,
I'm optimistic that the Chinese and US militaries will also realize they should work together to prevent exactly what neither of them wants. But won't it be too late by then? Didn't you say it would be too late anyway once there's that mass-recognition event? No, I said it will be too late once those machines exist. So I'm very hopeful that through journalism like yours we can help people realize the imminence of these machines before they are actually built. What we have now still falls short of what's called artificial general intelligence, this holy grail of making machines that can outsmart us at basically all work tasks, but we're getting there at express speed, right?
It turned out to be a lot easier to build this than people thought. Ten years ago most people said fifty years, maybe thirty, maybe a lot more; now you see many of the top experts in the field giving much shorter timelines. We may already have surpassed, you know, mastery of human language, for example, and I hope this makes many policymakers realize that this is not science fiction. Intelligence is not something mysterious that can only exist in the human brain; it is something we can also build. And once we can build it, we can also very easily build things that are far beyond us, as far beyond us as we are beyond insects. If we're building this, we should obviously build AI by humanity, for humanity, not for the purpose of the machines having a great time later. So let's make sure we really take our time, and make sure we control these machines, or at least that they have our values and do what we want. That is more important than any other choice humanity is making right now, and I think it's a good place to close this conversation. Obviously we could talk about much more, because it is a very important topic and we have a great expert with us, but we're going to have to leave it there. Big thanks to Max Tegmark, an expert in artificial intelligence and a professor at MIT, and of course big thanks to our viewers for watching. If you liked this, check out one of our other DW Business Specials, and see you next time.
