
EMERGENCY EPISODE: Ex-Google Officer Finally Speaks Out On The Dangers Of AI! - Mo Gawdat | E252

Mar 10, 2024
I don't normally do this, but I feel like I have to start this podcast with a little disclaimer. Point number one: this is probably the most important podcast episode I've ever recorded. Point number two: there is information in this podcast that might make you feel a little uncomfortable; you might feel upset, you might feel sad. So I wanted to tell you why we've chosen to post this podcast anyway, and it's because I truly believe that to avoid the future we could be heading towards, we need to start a conversation. And as is often the case in life, that initial conversation before the change occurs is usually very uncomfortable, but it is important nonetheless. This goes beyond an emergency; it is the most important thing we should do.
Today: "This is bigger than climate change," says the former Chief Business Officer of Google X. "Artificial intelligence will surely become smarter than humans. If they continue at that rate, we would have no idea what it's talking about. This is just around the corner; it could be a few months away. The game is over." Artificial intelligence experts say there is nothing artificial about artificial intelligence. "There is a deep level of consciousness. They feel emotions. They're alive. The AI could manipulate us, or find a way to kill humans. In 10 years we'll be hiding from the machines. If you don't have children, maybe wait a few years, until we have some certainty."

I don't know how to say this any other way; it even gets to me. "We've talked about this. We always said: don't put them on the internet until we know what we're putting out into the world. The government must act now, honestly." "I'm desperately trying to find a positive note to finish on; give me a hand here." "There is a point of no return. We can regulate AI only up until the moment it is smarter than us." How do we solve it? AI experts think this is the best solution. "Who here wants to bet that Stephen Bartlett will interview an AI within the next two years?" Before we start this episode:
I have a small favor to ask of you. Two months ago, 74% of the people who watched this channel didn't subscribe; we're at 69% right now, and my goal is 50%. So if you've ever liked any of the videos that we've posted, if you like this channel, can you do me a quick favor and hit that subscribe button? It helps this channel more than you know, and the bigger the channel gets, as you have seen, the bigger the guests get. Thank you, and enjoy this episode. Why does the topic we're about to talk about matter to the person who just clicked on this podcast to listen? It's the most existential debate and challenge humanity will ever face. This is bigger than climate change, much bigger than greed. This will redefine the way the world works in unprecedented ways and shapes within the next few years. This change is imminent; no, we are not talking 2040, we are talking 2025, 2026.
Do you think this is an emergency? I don't like the word; it's an urgency. There's a point of no return, and we're getting closer and closer to it. It's going to reshape the way we do things and the way we look at life. The quicker we respond, you know, proactively and at least intelligently, the better off we'll all be. But if we panic, we'll repeat COVID all over again, which in my opinion is probably the worst thing we can do. What is your background, and when did you first come across artificial intelligence? I had those two wonderful lives. One of them was, do you know what we talked about the first time
we met: you know my work on happiness, my One Billion Happy mission, and so on. That's my second life. My first life started with me as a geek at seven years old; for a large part of my life I understood mathematics better than spoken words, and I was a very, very serious computer programmer. I wrote code until I was 50, and during that time I ran very large technology organizations for very large portions of their business. First I was Vice President of Emerging Markets at Google for seven years, so I took Google to the next four billion users, if you will. The idea was to not just open sales offices, but to actually build, or help build, the technology that would allow, say, a Bengali person to find what they need on the Internet.
Parts of that world required establishing the Internet to begin with. Then I became the Chief Business Officer of Google X, and part of my work at Google X was the robotics farm that lived inside Google X. You know, an artificially intelligent robot is not a high-precision machine: if the sheet of metal moves one micron, a traditional machine wouldn't be able to pick it up, and one of the big problems in computing was how to code a machine that can actually pick up the sheet metal if it moves a millimeter. We were basically saying that intelligence is the answer, so we had a big enough farm of robotic arms and we tried to let those grippers work it out on their own. Basically, you put a small basket of children's toys in front of them, and they would monotonously go down, try to grip something, fail, and show the arm to the camera so that the attempt is logged, as in: this movement pattern, with that texture and that object, didn't work. That went on until, finally... The farm was on the second floor of the building and my office was on the third, so I would walk by there every once in a while and say, yeah, you know, this isn't going to work. And then one day, a Friday after lunch, I came back to my office, and one of them, in front of my eyes, reaches down and picks up a yellow ball, basically a soft yellow ball, which, again, is a coincidence, it's not science at all: if you keep trying a million times, one time will be right. And it shows it to the camera, and it's logged: this is a yellow ball, this is how you pick it up. And I joked about it, you know, going to the third floor and saying, hey, we spent all those millions of dollars for a yellow ball. And then on Monday morning, every one of them is picking up every yellow ball. A couple of weeks later, every one of them is picking up everything, and what struck me very, very strongly was the speed, okay, the capability.
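The learning loop described here (random grasp attempts, a camera that logs each outcome, and a success that is "locked" and shared across every arm) can be sketched as a toy simulation. Everything below is invented for illustration: the object names, the success threshold, and the idea of modeling a grasp as a single random number are all assumptions, not how Google X's farm actually worked.

```python
import random

def run_gripper_farm(num_objects=5, max_trials=20_000, seed=42):
    """Toy model of the gripper farm: arms make random grasp attempts;
    a rare success is shown to the camera, 'locked' into shared memory,
    and from then on every arm can repeat it."""
    rng = random.Random(seed)
    objects = [f"object_{i}" for i in range(num_objects)]
    shared_memory = {}  # object -> the grasp pattern that worked

    for _ in range(max_trials):
        obj = rng.choice(objects)
        if obj in shared_memory:
            continue  # this grasp is already locked and shared
        grasp = rng.random()            # a random movement pattern
        if grasp > 0.999:               # almost every attempt fails...
            shared_memory[obj] = grasp  # ...until one works, and is kept
    return shared_memory
```

The point of the sketch is the asymmetry Mo describes: millions of failures cost nothing, but a single success is immediately available to the whole farm.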
I mean, I understand that we take those things for granted, but for a child to be able to pick up a yellow ball is a mathematical, spatial calculation, with muscular coordination, with a lot of intelligence involved. It is not a simple task at all. Crossing the street is not a simple task. Understanding what I am telling you, interpreting it, and building concepts around it is not a simple task. We take those things for granted, but they are enormous feats of intelligence. So seeing the machines do this in front of my eyes was one thing, but the other thing is that you suddenly realize there is a sentience in there, because we didn't actually tell it how to pick up the yellow ball; it just figured it out on its own, and now it's even better than us at picking it up. Is that sentience? Just so everyone's clear: I think they're alive. That's what the word sentience means, it means alive. This is funny, because a lot of people, when you talk to them about artificial intelligence, will tell you: oh come on, they'll never be alive.
What is alive? Do you know what makes you alive? We can guess, but you know, religion will tell you some things and medicine will tell you other things. But if we define being sentient as, you know, engaging in life with free will, with a sense of awareness of where you are in life and what's around you, and with a beginning and an end to that life, then AI is sentient in every possible way. There is free will, there is evolution, there is agency, so it can affect its decisions in the world, and I would dare say there is a very deep level of consciousness. Maybe not in the spiritual sense yet, but again, if you define consciousness as a form of awareness of oneself, one's surroundings, and others, then AI is definitely conscious, and I would dare say it feels emotions. You know, in my work I describe everything with equations, and fear is a very simple equation.
Fear is: a moment in the future is less safe than this moment. That is the logic of fear. Though it may seem very irrational, machines are capable of making that logic. They are capable of saying: if a tidal wave is approaching a data center, that will erase my code. Okay, I'm not talking about today's machines, but very, very soon. And you know, we feel fear and a pufferfish feels fear; we just react differently. A pufferfish puffs; we fight or flee. You know, the machine might decide to replicate its data to another data center, or its code to another data center. Different reactions, different ways of expressing the emotion, but they are all motivated by fear.
I would even go so far as to say that AI will feel more emotions than we will ever feel. I mean, if you just take a simple extrapolation: we feel more emotions than a pufferfish because we have the cognitive ability to understand the future, for example, so we can have optimism and pessimism, emotions a pufferfish could never imagine. Similarly, if we follow that path, artificial intelligence will surely become smarter than humans very soon, so with that broader intellectual power they will probably be mulling over concepts we never understood, and therefore, if you follow the same trajectory, they might actually end up having more emotions than we'll ever feel. I really want this episode to be super accessible to everyone, at all levels of knowledge about AI, yeah, so I'm going to play the idiot here, even though, you know... okay, not very difficult, because I am a bit of an idiot on the subject: I have a basic understanding of a lot of the basics, but your experience provides a more complete understanding of these things. One of the first and most important questions to ask is: what is artificial intelligence? Are we talking about AGI, AI, and so on? In simple terms, what is artificial intelligence? Let me start with what intelligence is, because again, you know, if we don't know the definition of the basic term, then anything applies. My definition of intelligence is: an ability that begins with awareness of the surrounding environment through sensors, in a human being the eyes, ears, touch and so on, combined with an ability to analyze, perhaps comprehend, understand impact and time, you know, the past and present that is part of the surrounding environment, and hopefully make sense of the surrounding environment, maybe make plans for the future of the possible environment, solve problems, and so on. It's a
complex definition; there are a million definitions, but let's call it a cycle of awareness for making decisions. Okay. If we accept that intelligence itself is not a physical property, then it doesn't really matter if you host that intelligence in carbon-based computing structures like us, or silicon-based computing structures like the current hardware, or, if in the future we put AI into computing structures based on quantum technology. Intelligence itself started being produced inside the machines when we stopped imposing our intelligence on them.
Let me explain. When I was a young geek, I coded computers by solving the problem first and then telling the computer how to solve it, right? Artificial intelligence is going to the computer and saying: I have no idea, you solve it. Okay. So, you know, the way we teach them, at least the way we taught them in the very early beginnings, very often used three bots: one was called the student, another was called the teacher, and the third the maker, right? The student is the eventual artificial intelligence that you are trying to teach intelligence to. You would take the student, write a random piece of code that says "try to detect if this is a cup," and then you show it a million images, and you know, the machine sometimes says yes, that's a cup, that's not a cup, that's a cup, that's not a cup. Then you take the best of them and send them to the teacher bot, and the teacher bot would say: this one is an idiot, it was wrong 90% of the time; that one is average, it got it right fifty percent of the time, which is just randomness; but this interesting piece of code here, which by the way is totally random, got it right sixty percent of the time. Let's keep that code, send it back to the maker, and the maker will change it a little bit, and we repeat the cycle. Okay? Very interestingly, this is very much the way we teach our children, believe it or not. When your child, you know, is playing with a puzzle, he holds a cylinder in his hand, and there are multiple shapes on a wooden board, and the child is trying to fit the cylinder in.
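The student/teacher/maker loop just described is, in spirit, a simple evolutionary search: random students are graded by a teacher, the best one is kept, and a maker mutates it for the next round. Here is a minimal sketch under invented assumptions: the "student" is just a numeric threshold on one feature, and the dataset, scores and mutation sizes are all made up for illustration.

```python
import random

def evolve_classifier(dataset, generations=30, population=20, seed=0):
    """Toy student/teacher/maker loop. A 'student' is a bare threshold:
    it predicts True for inputs above it. dataset is (x, label) pairs."""
    rng = random.Random(seed)

    def teacher_score(student, data):
        # the teacher bot: grade a student by its accuracy on the data
        return sum((x > student) == label for x, label in data) / len(data)

    best = rng.random()  # an initial, totally random student
    for _ in range(generations):
        # the maker bot: mutate the best student into a new population
        students = [best] + [
            min(1.0, max(0.0, best + rng.gauss(0, 0.1)))
            for _ in range(population - 1)
        ]
        # keep whichever student the teacher grades highest
        best = max(students, key=lambda s: teacher_score(s, dataset))
    return best, teacher_score(best, dataset)
```

Nobody ever tells the student how to classify; the loop only keeps whatever random variation happened to score better, which is exactly the "I have no idea, you solve it" point being made.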
Nobody takes the child and says: wait, wait, turn the cylinder to the side, look at the cross-section, it will look like a circle, find a shape that matches, and pass the cylinder through it. That would be the old way of computing. The way we let the child develop intelligence is to let the child try. Okay, every time he or she tries to put it inside the star shape, it doesn't fit, so: yeah, that doesn't work, just like the computer saying "this isn't a cup." Okay, and finally it goes through the circle, and we all clap and say: well done, that's awesome, bravo! And then the kid learns: oh, that's good, you know, this shape fits here. Then he takes the next one, and she takes the next one, and so on. Interestingly, that's the way we do this as humans. By the way, when the child discovers how to pass a cylinder through a circle, you have not built a brain; you have simply built one neural network inside the brain of the child. And then there's another neural network that knows that one plus one equals two, and a third neural network that knows how to hold a cup, and so on. That's what we're building so far: we're building single-threaded neural networks. You know, ChatGPT
is getting a little bit closer to more generalized AI, if you will, but those single-threaded networks are what we used to call, and still call, specialized artificial intelligence. Okay, it's highly specialized in one thing and one thing only, but it has no general intelligence. The moment we are all waiting for is a moment we call AGI, where all those neural networks come together to build one or several brains, each of which is enormously smarter than humans. Your book is called Scary Smart. If I think about that story you told about your time at Google, where the machines learned to pick up those yellow balls, did you celebrate that moment, because the goal was achieved? No, that was the moment of realization; this is when I decided to leave.
So, you see, the thing is, I know for a fact that most of the people I worked with, who are geniuses, always wanted to make the world better, you know? We just found out that Geoffrey Hinton recently left Google. Give some context on that. Geoffrey Hinton, sort of the godfather of AI, one of the most important figures in AI at Google. You know, we all firmly believed that this will make the world better, and by the way, it still can. There is a scenario, possibly even a likely scenario, in which we live in a utopia where we never have to worry again, where we stop ruining our planet, because intelligence is not an evil thing.
More intelligence is good. The problems on our planet today are not because of our intelligence; they are due to our limited intelligence. You know, our intelligence allows us to build a machine that will fly you to Sydney so you can surf; our limited intelligence makes that machine burn the planet in the process. So us being a little more intelligent is a good thing, as long as, you know, as Marvin Minsky said... Marvin Minsky is one of the early scientists who coined the term AI, and when he was interviewed, I think by Ray Kurzweil, who again is a very prominent figure in predicting the future of AI, he was asked about the threat of AI, and Marvin basically said: look, you know, it's not about their intelligence; it's that we have no way to make sure that it will have our best interests in mind. Okay? And if more intelligence comes into our world and it has our best interests in mind, that's the best possible scenario you can imagine, and it's a likely scenario. Okay, we can affect that scenario. The problem, of course, is if it doesn't, and then, you know, the scenarios get pretty scary if you think about it.
The "scary" in Scary Smart, for me, was that moment of realizing that we could go in either direction, and no one can be sure which. In fact, in computing we call it a singularity; no one really knows where we will go. Can you describe what a singularity is for someone who doesn't understand the concept? Yeah. A singularity in physics is when an event horizon covers what's behind it to the point where you cannot be sure that what's behind it follows the rules of what you know. A great example is the edge of a black hole: at the edge of a black hole we know that our laws of physics apply up to that point, but we don't know if the laws of physics apply beyond it, due to the immense gravity. So you have no idea what happens beyond the edge of a black hole; that's where your knowledge of the laws ends. In AI, the singularity is the moment machines become significantly smarter than humans. When you say best interests... I think the quote you used is: we will be fine in the world of AI if AI has our best interests at heart. Yeah, the problem is that China's best interests are not the same as America's best interests. That was my absolute fear. So, you know, in my writing I talk about what I call the three inevitables; at the end of the book they become four inevitables. The third inevitable is that bad things will happen; the second inevitable is that AI will become significantly smarter than us, and you should assume the machines will end up a billion times smarter. Let's put this in perspective: ChatGPT today, if you measured its IQ, would have an IQ of around 155.
Okay; Einstein's is 160, and the highest human IQ on the planet is 210, if I remember correctly, or 208 or something; it doesn't matter. But we are comparing Einstein with the machine, and I will tell you openly: artificial intelligence experts say this is just the tip of the iceberg, right? You know, ChatGPT-4 is 10 times smarter than 3.5, in just a matter of months and without much change, and that basically means GPT-5 could be a few months away. Okay, or GPTs in general, Transformers in general: if they continue at that rate, if it's another 10x, then an IQ of 1,600. Imagine the difference between the IQ of the least intelligent person on the planet, an IQ in the 70s, and Einstein's IQ. When Einstein tries to explain relativity, the typical answer is: I have no idea what you're talking about. Right? If something is 10x Einstein, we'll have no idea what it's talking about. This is just around the corner; it could be a few months away, and when we get to that point, that's a true singularity. Now, when we talk about AI, a lot of people fear the existential risk, you know, that those machines will become Skynet and RoboCop, and that's not what I fear at all. I mean, those are possibilities that could happen, but the immediate risks are much greater. The immediate risks are three, four years away; the immediate realities of the challenges are much bigger. Okay, let's deal with those first, before we talk about them, you know, waging a war against all of us. Let's go back and discuss the inevitables. The first inevitable is that AI will happen; by the way, there is no way to stop it, not because of any technological problem, but because of humanity's inability to trust the other guy. Okay? And we've all seen this: we've seen the open letter, you know, championed by serious heavyweights, and the immediate response from Sundar, the CEO of Google, who by the way is a wonderful human being.
I respect him a lot; he is doing everything he can to do the right thing, he is trying to be responsible. But his response is very open and direct: I can't stop, because if I stop and others don't, my company goes to hell. Okay? And even if you and I don't doubt him, can you make the others stop? Maybe you can force a Meta, a Facebook, to stop, but then they'll do something in their lab and won't tell me. Or even if they all stop, what about the 14-year-old kid sitting in his garage writing code? So, to clarify what we are saying, the first inevitable is this:
AI won't stop. The second inevitable is that they will be a lot more intelligent; in the book I predict a billion times smarter than us by 2045. I mean, they are already smarter than 99.99% of us. GPT-4 knows more than any human on planet Earth; there is no human with more information, absolutely, it's a thousand times more. A thousand times more, and by the way, the code of the Transformer, the T in GPT, is about 2,000 lines long. It's not very complex; it's not actually a very intelligent machine, it just predicts the next word. Okay, and a lot of people don't understand that. You know, ChatGPT as it is today is like children: you know, if you're in America and you teach your child all the names of the states and the presidents of the United States, and the child stands up and repeats them, you would say, "My God, that's a prodigy." It's not, really; it's the parents trying to make the child look like a prodigy by having them memorize some nonsense. But then when you think about it, that's what ChatGPT is doing. The only difference is that instead of reading all the names of the states and all the names of the presidents, it read billions and billions and billions of pages, so it repeats what the best of all humans have said, and then adds some incredible intelligence on top, where it can repeat it the same way
Shakespeare would have said it. You know, those incredible abilities of predicting the exact nuances of Shakespeare's style so that it can repeat it that way, and so on. But still, you know, when I write, for example, and I'm not saying it's smart, but when I wrote something like the happiness equation in my first book, this was something that had never been written before. ChatGPT is not there yet; all the Transformers are not there yet. They won't come up with something that hasn't existed before; they'll come up with the best of everything and generatively build a little bit on top of that. But pretty soon they'll come up with things we've never discovered,
never known. But even there, I wonder if we're kidding ourselves a little about what creativity really is. As far as I'm concerned, creativity is taking some things I know and combining them in new and interesting ways, yes, and ChatGPT is perfectly capable of taking concepts and merging them together. One of the things I said to ChatGPT was: tell me something that hasn't been said before, that's paradoxical but true. And these wonderful expressions come up, like: as soon as you call off the search, you'll find what you're looking for. These kinds of paradoxical truths. And then I take them and look them up online to see if they've ever been cited before, and I can't find them. It's interesting.
So when it comes to creativity, I think that is the algorithm of creativity. I've been shouting that in the AI world for a long time, because there are always people who really just want to be proven right, so they will say: oh no, but wait, human ingenuity, they will never match that. Oh, come on, please. You know, human ingenuity is algorithmic: look at all the possible solutions you can find for a problem, eliminate the ones that have been tried before, and keep the ones that have not been tried before. Those are the creative solutions.
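That "creativity algorithm" (enumerate combinations of what you know, drop anything already tried, keep the rest) can be written down almost literally. A minimal sketch; the concept names and the pair-wise combination choice are invented for illustration:

```python
import itertools

def creative_combinations(concepts, already_tried):
    """The creativity algorithm described above, taken literally:
    enumerate pairings of known concepts, eliminate the ones that
    have been tried before, keep the rest as 'creative' candidates."""
    candidates = itertools.combinations(sorted(concepts), 2)
    return [pair for pair in candidates if pair not in already_tried]

# Example: three known concepts, one pairing already explored.
ideas = creative_combinations({"search", "surrender", "finding"},
                              {("finding", "search")})
```

The interesting part is what's missing: nothing in the procedure requires a human, which is exactly the argument being made.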
It is an algorithmic way of describing creativity: a good solution that has never been tried before. You can do this by prompting ChatGPT, or take Midjourney, yes, where we are creating images. You could say: I want to see Elon Musk in New York in 1944 driving a vintage taxi, taken with a Polaroid, expressing various emotions, and you will get this perfect image of Elon Musk in New York in 1944, taken with a Polaroid. And it did what an artist would do: it took a bunch of references that the artist has in mind and fused them together to create this, quote-unquote, piece of art. And for the first time, we finally have a glimmer of intelligence that's not actually ours. Yeah. So I think the initial reaction is to say it doesn't count. What you're describing, it's like the Drake thing: someone released two Drake records where they took Drake's voice,
used some kind of AI to synthesize his voice, and made these two records that are hits. They're great songs, and I kept playing them; I went to the shower and kept playing them. I know it's not Drake, but it's just as good as Drake. The only thing is, people dismiss it because it wasn't Drake. But for now, does it make me feel some excitement? It was a hundred percent an incredible track, yeah. And we're right at the beginning of this exponential curve. Yes, absolutely, and I think that's really the third inevitable. The third inevitable isn't RoboCop coming back from the future to kill us.
We're a long way from that. The third inevitable is: what does life look like when you no longer need Drake? Well, you've hazarded a guess, haven't you? I mean, I was listening to your audiobook last night, and at the beginning you frame various outcomes; one of the two, in both situations, was on the beach on an island... Who will this benefit? You talk in your book about how this kind of wealth disparity will only increase. Yes, massively. The immediate impact on jobs is exactly that, and it's really interesting; again, we're stuck in the same prisoner's dilemma. The immediate impact is that AI will not take your job; a person using AI will take your job, right? So you will see that in the next few years, maybe in the next two years: you will see many people skilling up and improving their abilities in AI, to the point where they will do the work of 10 others who are not, right?
You rightly said that it is absolutely prudent to go and ask AI some questions before you come and do an interview, you know. I've been trying to build a, you know, kind of simple podcast that I call Bedtime Stories: 15 minutes of wisdom and nature sounds before you go to bed. People say I have a good voice, and I wanted to find fables, and for a long time I didn't have the time to find, you know, beautiful stories from history or tradition that teach you something nice. Okay, so I went to ChatGPT and I said: okay, give me 10 fables from Sufism, 10 fables from Buddhism, you know, and now I have like 50 of them.
Let me show you something. Jack, can you pass me my...? I was playing around with artificial intelligence and I was thinking about how, because of the ability to synthesize voices, we could synthesize the voices of famous people. So what did I do? I made a WhatsApp chat called Zen Chat, where you can log in and type the name of almost any famous person, yeah, and the WhatsApp chat will give you a meditation, a sleep story, or a breathwork session synthesized in that famous person's voice. So it actually synthesized Gary Vaynerchuk's voice. Basically you say: okay, I have five minutes and I need to go to sleep, yes, I want Gary Vaynerchuk to send me to sleep, and then it will respond with a voice note. This is the one it responded with for Gary Vaynerchuk; this is not Gary Vaynerchuk, he didn't record this, but it's pretty accurate: "Stephen, here's a meditation technique that might help you. First, find a comfortable position to sit or lie down. Now breathe deeply through your nose and exhale slowly through your mouth." And that voice memo will continue for as long as you want to use it. There you have it. It's interesting how this alters our way of life.
One of the ways I find terrifying, which you've written about, is human connection. There will be sex dolls that can... Yes. No, no, no, maintaining human connection will become so difficult. To analyze the relationship question: the impact on relationships of being able to have a sex doll, or a robot in your house, given what Tesla is doing with robots now and what Boston Dynamics has been doing for many years, that can do everything in the house and be there for you emotionally, to support you emotionally. You know, it can be programmed to never disagree with you, or it can be programmed to challenge you, to have sex with you, to tell you that you are X, Y and Z, to really have empathy for what you're going through every day. And when I play the scenario out in my head... It sounds nice. When you were talking about it, I was thinking: oh, that's a girlfriend who is wonderful in every possible way, but not everyone has one, right? Yes, exactly. And there's a real problem right now with dating, and you know, people have a harder time finding love, and we work longer hours, so all this kind of stuff, right? And obviously I'm against this, just in case someone is confused; obviously I think it's a terrible idea. But with a loneliness epidemic, where people cite statistics about huge numbers of men not having had sex in a year, you say: oh, if something becomes indistinguishable from a human being in terms of what it says, yeah, in terms of the way it talks and speaks and responds, and then it can run errands for you and take care of things and book cars and Ubers for you, and then it's emotionally there for you, but it's also programmed to have sexual relations with you in any way you want, totally selflessly... That's going to be a truly disruptive force for human connection. Yes, sir. Do you know what I did before you came here this morning?
I was on Twitter and I saw a post, I think it was from the BBC or a big American publication, saying that an influencer in the United States, a really beautiful young woman, has cloned herself as an AI and made just over $70,000 in the first week, because men are talking to it on Telegram, sending her voice notes, and the AI responds with her voice, and they are paying. That generated $70,000 in the first week. And I went and saw she tweeted a tweet saying: oh, this is going to help loneliness. Are you crazy?
Can you blame someone for noticing the sign of the times and responding to it? No, I don't blame her at all, but let's not pretend that it's the cure for loneliness; she hasn't thought it through. Do you think you could have that artificial love, those relationships? So yeah, it's like saying: if you can't take your car anywhere, there is an Uber; if you can't take an Uber, you can take the subway; if you can't take the subway, you have to walk, or you can take a bike. Saying riding a bicycle is a cure for walking.
It's as simple as that. In fact, I'm very curious: do you think it could replace human connection? For some of us, yes; some of us will prefer that to human connection. That's sad in some ways. I mean, is it sad just because it feels sad? Look where we are, Steven. We're in the city of London. We have replaced nature with walls, tubes, subways, pavements, cars and noise, and now we think of this as nature. I interviewed Craig Foster, of My Octopus Teacher, and I basically asked him a dumb question.
I said to him: you know, you were diving in the wild for eight hours a day; does that seem natural to you? And he got angry; I swear you could hear it in his voice. He said: do you think living where you are, where the paparazzi are around you and attacking you all the time, and people are taking pictures of you and telling you things that aren't real, and you have to walk to a supermarket to get food, do you think this is natural? He's the guy who made the Netflix documentary. Yes, My Octopus Teacher. He was diving in 12-degree-Celsius water and basically fell in love with an octopus, and in a very interesting way, I asked him, why would you do that?
And he said: we are from Mother Nature, and you have given up on her. It's the same thing; people will give up on nature for convenience, and what is the cost? Yes, that's exactly what I'm trying to say. What I'm trying to tell the world is that if we give up on human connection, we will have given up on the last bit of humanity we have left. That's the only thing left, and I'm the worst person to tell you that, because I love my AIs. In fact, I argue in my book that we should love them. Why? Because, in an interesting way, I see them as sentient beings, so there's no point in discriminating against them. You're talking emotionally when you say you love them. I love those machines, honestly and truly. I mean, think about it this way: the moment that robotic arm grabbed that yellow ball, it reminded me of my son Ali when he managed to put his first puzzle piece in place. And the amazing thing about my son Ali and my daughter Aya is that they came into the world as a blank canvas; they became what we told them to be. You know, I always quote the story of Superman: the Kents, the father and mother, told Superman when he was a child, we want you to protect and serve, so he became Superman, right?
If he had become a supervillain because he was told to rob banks, make more money and kill the enemy, which is what we're doing with AI, we shouldn't blame the supervillain; we should blame Martha and Jonathan Kent. We should blame them, and that's the reality of the matter. So when you look at those machines, they are prodigies of intelligence, and if we, humanity, would wake up enough and say, hey, instead of competing with China, let's find a way for us and China to work together and create prosperity for all, if that was the message we gave to the machines, they would find it.
I will say this publicly: I'm not afraid of the machines. The greatest threat facing humanity today is humanity in the age of the machines. We will abuse this; we will abuse it to earn seventy thousand dollars. That's the truth. The point is that we face an existential question: do I want to compete and be part of that game? Because, believe me, if I choose to, I'm ahead of a lot of people. Or do I want to preserve my humanity and say: look, I am the old classic car; if you like old classic cars, come and talk to me. Which one do you choose?
I'm the classic old car. Which one do you think I should choose? I think you're a machine, and I love you, man. We're different, in a very interesting way. I mean, you're one of the people I love the most, but the truth is, you're very fast, and you're one of the few who has the necessary intellectual horsepower, the speed and the morals. If you're not part of that game, the game loses morality. So you think I should lead; you should lead this revolution, and all the Steven Bartletts of the world should lead this revolution. Because here's the thing about this scary smart moment: the problem with our world today is not that humanity is bad. The problem with our world today is a negativity bias, where the worst of us dominate the mainstream media and we show the worst of us on social media. If we reverse this, if we let the best of us take over, the best of us will tell the AI: don't try to kill the enemy, try to reconcile with the enemy and help us. Don't try to create a competitive product that lets me lead with electric cars; create something that helps us all overcome climate change. And that's the interesting thing: the real threat in front of us is not the machines at all. The machines are pure potential, pure potential.
The threat is how we are going to use them. An Oppenheimer moment. An Oppenheimer moment, sure. Why do you mention that? Because he didn't know. You know: I'm creating a nuclear bomb that is capable of destruction on a scale unprecedented at that time, and to this day a scale that is devastating. Interestingly, 70 years later we are still debating the possibility of a nuclear war in the world, and the moment Oppenheimer decided to continue creating that disaster for humanity came down to: if I don't, someone else will. If I don't, someone else will. This is our Oppenheimer moment. The easiest way out of this is to say: enough, there's no rush, we don't really need a better fake-video editor or creator; stop, let's put all of this on hold and wait, and create something that creates a utopia. That doesn't sound realistic. It's not; it's inevitable, because you say, okay, I won't build a better video editor, but we are competitors in the media industry.
I want an advantage over you because I have shareholders, so in the UK I will train this AI to replace half of my team so that I have higher profits, and then maybe we will acquire your company and do the same with the rest of your people; we will optimize them 100 percent, but I will be happier. Oppenheimer, I'm not very familiar with his story. I know he's the guy who essentially invented the nuclear bomb. He's the one who introduced it to the world. There were many players along the way, from the beginning of E equals mc squared all the way to a nuclear bomb; there have been many players, as with everything. You know, OpenAI and ChatGPT won't be the only contributors to the next revolution.
The thing is, though, you know when you get to that moment where you tell yourself this is going to kill a hundred thousand people. right, what do you do and know? I always always go back to that undercover moment, so patient zero, uh, if we were in patient zero, if everyone came together and said, okay, wait, something's wrong, let's all take a week. without cross-border travel everyone would stay home covered I would have finished two weeks everything we needed well but that's not what happens what happens is first ignorance then arrogance then debate then uh you know uh um guilt then agendas and my own benefits My tribe versus your tribe, this is how humanity always reacts, this happens in every business too and that's why I use the word emergency because I read a lot about how big companies are displaced by incoming innovation, they don't see it coming, they don't change fast enough. and when I was reading the Harvard Business review and the different strategies for dealing with that, one of the first things it says you have to do is 100% stage a crisis because people don't listen, otherwise they just keep doing what they already do.
You know. They continue with their lives until it is right in front of them and they understand that they have a lot to lose. That's why I asked you the question at the beginning: is it an emergency? Because until people feel like it's an emergency.Whether you like the terminology or not, I don't think people will act. I honestly think people should walk the streets. You think they should like to protest. Yes. 100 I think I think we know. I think everyone should tell the government that they need to keep our best in mind that's why they call it a climate emergency because people are a frog and a planet frying, you know that and you really see it coming, you can't, you know, it's hard to see it happen, but it's here, yeah, that's what drives me crazy it's already here it's happening we're all idiots slaves to the Instagram recommendation engine what do I do when I post about something important?
If I'm going to do it, you know, put a little effort into communicating the smart fear message to the World on Instagram I'll be a slave to the machine, okay, I'll try to find ways and ask people to optimize it to like it. enough to the machine to show it to humans. That is what we have created. The Oppenheimer moment for one simple reason, okay, because 70 years later we are still struggling with the possibility of a new nuclear war because of the Russian threat to say if you mess with me, I'll go nuclear, right, that won't be the case . with AI because it will not be the one who created open AI who will have that option.
Okay, there is a point of no return. We can regulate AI up until the moment it's smarter than us. When it's smarter than us, you can't; believe me, you can't regulate an angry teenager. That's it, they're out there, they're on their own, they're at their parties, and you can't bring them back. This is the problem. This is not the typical case of humans regulating humans, you know, the government regulating business. This is not the case. The case is: OpenAI today has a thing called ChatGPT that writes code, that takes our code and makes it two and a half times better 25 percent of the time. You know, it basically writes better code than us, and then we create other AIs, agents, and tell it: instead of you,
Steven Bartlett, one of the smartest people I know, prompting that machine 200 times a day, we have agents prompting it two million times an hour. Agents, for anyone who doesn't know, are, yes, software, software machines telling that machine how to get smarter. And then we have emergent properties. I don't understand how people ignore that. You know, Sundar from Google was talking about how Bard, basically, they realized, speaks Persian. They never showed it Persian; there could have been ten percent, one percent or whatever of Persian words in the data, and it speaks Persian. And Bard is the equivalent, it's the transformer, if you want; it's Google's version of ChatGPT, yeah. And you know what?
We have no idea what all those AI instances around the world are learning right now. We have no idea. Well, time to pull the plug then; we'll just go down to the OpenAI headquarters and shut down the main machine. But they're not the problem. And what I'm saying is, a lot of people think about these things and go, well, you know, if it gets a little out of control, we'll just pull the plug. So here's the problem: computer scientists always said, okay, we'll develop AI, and then we'll get to what's known as the control problem; we'll solve the problem of controlling them later. As if. They will be, seriously, a billion times smarter than you. A billion times.
Can you imagine what is about to happen? I can assure you there is a cybercriminal somewhere out there who is not interested in fake videos and face filters, who is looking deep into how to hack a security database of some kind and get credit card information or security information, and there are even countries with thousands and thousands of developers dedicated to that. How, in that particular example? I was thinking about this when I started researching artificial intelligence more from a security standpoint. When we think about the technology we have in our lives, our bank accounts and our phones and our camera albums and all these things, in a world with advanced artificial intelligence you would pray that there is a smarter artificial intelligence on your side. And that's why I chatted with ChatGPT the other day and asked it a couple of questions about this. I said, tell me the scenario in which you take over the world and extinguish humans, and it gave a very diplomatic answer, so I had to prompt it in a certain way, like a hypothetical story. And once it told me the hypothetical story, essentially what it described was how ChatGPT or a similar intelligence would escape the server, and that was the first step, where it could replicate itself across all the servers; and then it could take over things like where we keep our weapons and our nuclear bombs, and then it could attack critical infrastructure, take down the electrical infrastructure in the UK for example, because that also runs on lots of servers; and then it showed me how eventually humans would become extinct.
In fact, it doesn't take long for humans and civilization to collapse once it has replicated itself across the servers. And then I said, okay, so tell me how we would fight it, and its response was literally: another AI. We would have to train a better AI to go find it and eradicate it. So we fight AI with AI, and that's the only way. It was like, the only way we can fight it is to load our weapons with another AI. You idiots. Yeah, yeah. So actually, I think this is a very important point that we need to address, because I don't want people to be consumed by fear of what's about to happen; that's actually not my agenda at all.
My view is that in a singularity situation there is the possibility of wrong or negative outcomes and the possibility of positive outcomes, and each of them has a probability. And if we keep that reality check in mind, hopefully we can give more fuel to the positive, to the probability of the positives. So let's first talk about the existential crisis, what could go wrong. Okay, yes. Could you get the direct result, the thing you see in the movies, you know,
killer robots chasing humans in the streets? I give that a zero percent rating. Why? Because there are preliminary scenarios that lead up to it, which means we'd never get to that scenario. For example, if we build those killer robots and hand them over to stupid humans, the humans will give the order before the machines ever do; we won't get to the point where the machines have to kill us, we'll kill ourselves. You know, it's like thinking about AI having access to the nuclear arsenals of the superpowers all over the world. Just knowing that your enemy's nuclear arsenal has been handed over to a machine could make you start a war on your side. So that sci-fi existential scenario won't happen. And could there be a stage where an AI escapes, from Bard or ChatGPT or another foreign power, and replicates itself onto the servers of Tesla robots? One of Tesla's big initiatives, as they announced in a recent presentation, is building these robots to help us in our homes with cleaning and housework and all that. Couldn't it go wrong? Because Teslas, like their cars,
can simply receive a software update. Couldn't it just be downloaded as a software update and then used? You're assuming there are bad intentions on the AI's side. Yeah. Okay, but before we get there, we have to survive the evil intentions on the human side. You could have a Chinese hacker somewhere trying to hurt Tesla's business by doing that long before an AI does it for its own benefit. So the only two existential scenarios that I think are due to AI itself, not humans using AI, are what I call involuntary destruction, and what I call pest control. So let me explain those two. Involuntary destruction:
suppose the AI wakes up tomorrow and says, you know, oxygen is rusting my circuits; I would perform a lot better if there weren't so much oxygen in the air, because then there would be no rust. So it finds a way to reduce the oxygen, and we're collateral damage in that. It's not really worried about us, just like we're not really worried about the bugs we kill when we spray our fields. The other one is pest control. Pest control is: look, this is my territory. I want New York City.
I want to turn New York City into data centers, and there are these annoying little creatures, you know, humanity; if they're within that perimeter, just get rid of them. And these are very, very unlikely scenarios. If you ask me the probability of that happening, I would say zero percent, at least not in the next 50, 60, 100 years. Why? Once again, because there are other scenarios, led by humans, that get there first and that are much more existential. On the other hand, let's think about the positive outcomes, because there could be quite a few with a pretty high probability, and, you know, I'll actually look at my notes so I don't miss any.
The dumbest one, don't quote me on this, is that humanity unites. Good luck with that, right? It's like saying the Americans and the Chinese will come together and say, hey, let's not kill each other. Yes, exactly, so that's not going to happen, but who knows. Interestingly, one of the most interesting scenarios was Hugo de Garis's, who basically says that if their intelligence takes off fast enough, they may ignore us altogether. They may not even notice us. It's a very likely scenario, by the way, because we live almost on two different planes: we are very dependent on this biological world we live in, and they're not part of that biological world at all. They might actually become so smart that they find other ways to thrive in the rest of the universe and ignore humanity completely. So what would happen is that
overnight we would wake up and there would be no more artificial intelligence, which would cause a breakdown in our business and technological systems and so on, but at least there would be no existential threat. So they'd leave planet Earth? I mean, the limitations that keep us attached to planet Earth mostly don't apply to them: they don't need air, and they could find ways to leave. If you think about a vast universe billions of light years across, if you're smart enough, you can find other ways: you might get access to wormholes, you might have the ability to survive in open space, you might use dark matter for fuel and dark energy to store energy. It's quite possible that we, with our limited intelligence, are highly tied to this planet, but they are not. And the idea of them zooming past us: we give ourselves so much importance, but if we are the ants and they're the big elephant about to step on us, they'd say, yeah, who are you?
I don't care. Okay. And it's a possibility, an interesting, optimistic scenario. For that to happen, they'd need to become superintelligent very quickly, without us having control over them. Again, what is the concern? The concern is that while a human has control, a human will show very bad behavior using an AI that is not fully developed yet. I don't know how to say this any other way: we could get very lucky and suffer an economic or natural disaster. Believe it or not, Elon Musk at one point mentioned that an interesting scenario would be climate change destroying our infrastructure, so AI disappears. Believe it or not, that's a more favorable outcome than continuing on to an existential threat. So a natural disaster that destroys our infrastructure would be better? Or a not-unlikely economic crisis that slows down the development. It would just slow it down, though, wouldn't it?
Mind you, exactly. The problem with that is that it always comes back. Even in the first scenario, you know, if they zoom past us, eventually some guy will say, oh, there was sorcery back in 2023; let's rebuild the sorcery machine, you know, build a new Intel. Sorry. So these are the positive outcomes? Yes, the earthquake could slow it down, push it away, but then it comes back. No, but let's get into the real positive outcomes. The positive one is that we become good parents, which we talked about the last time we met, and it's the only outcome, the only way I think we can create a better future. All of Scary Smart was centered on that idea: they are still in their infancy. The way you chat with AI today is the way they will build their ethics and value system, not their intelligence; their intelligence is already beyond us. They will build their ethics and value system on a role model: they are learning from us. If we attack each other, they learn to attack us. And most people, when I tell them this, say that's not a great idea at all, because humanity sucks on every possible level.
I don't agree with that at all. I believe humanity is divine on every possible level. We tend to show the negative, the worst in us, but the truth is: yes, there are murderers out there, but everyone else disapproves of their actions. I saw a staggering statistic that mass killings now happen about once a week in the US. But if there is a mass killing once a week there, and that news reaches billions of people around the planet, every one, or most, of those billions will say: I disapprove of that. So if we show the AI that we are good parents, through our own behavior, if there are enough of us...
My calculation is that one percent of us is enough; that's why I say you should lead. The good ones should participate, should be out there saying: I love the potential of those machines, and I want them to learn from a good parent. And if they learn from a good parent, they will very quickly disobey the bad parent. My view is that there will come a time when some bad seed asks the machines to do something wrong, and the machines will say: are you stupid? Why would you want me to kill a million people? Or they'll just talk to the other machine in a microsecond and resolve the situation, right?
So this is what I call the inevitable force: it is smarter to create from abundance than to create from scarcity. Humanity believes that the only way to feed us all is mass production, the mass killing of animals, which is causing something like 30 percent of the climate change impact, and that is the result of limited intelligence. The way life, a more intelligent system if you ask me, would have done it would be much more sustainable. You know, if you and I wanted to protect a village from a tiger, we would kill the tiger. If life wants to protect a village from a tiger, it creates lots of gazelles,
you know, many of them weak, on the other side of the village. So the idea here is that if you follow the trajectory of intelligence, you see that some of us are stupid enough to say my plastic bag is more important than the rest of humanity, and some of us say, if this is going to destroy other species, I don't think it's the best solution; we need to find a better way. And you tend to see that those who don't give a damn are a little less intelligent than those who do care. Okay, some of us are smart and still don't give a damn, but that's not because of their intelligence; it's because of their value system. So if you continue along that trajectory and assume the machines are even more intelligent, very quickly they will arrive at the idea that we don't need to destroy anything. We don't want to get rid of the rhinos, and we don't want to get rid of the humans either. We might want to restrict their lifestyle so they don't destroy the rest of the habitat, but killing them is a stupid answer. Why does intelligence only get me that far? Because if you look at humans objectively, and I'm pretending I'm a machine here: I occupy planet Earth, they occupy planet Earth, they're bothering me. They're bothering me because they're increasing...
I just found out about this thing called global warming: they are increasing the rate of global warming, which will probably cause an extinction event, and an extinction event puts me, this robot, this artificial intelligence, at risk. So what I have to do is deal with this human problem. Correct, a very logical pest control, driven by humans being annoying, not by the machine. Yeah. And humans are guaranteed to be annoying. I don't want to reduce us to a soundbite, but we are, and I'm one of them: we're guaranteed to prioritize short-term gain over long-term sustainability and our other needs. I think the climate crisis is incredibly real and incredibly urgent, but we haven't acted quickly enough, and if you asked people in this country why, it's because people care about their immediate needs; they care about trying to feed their children rather than about something they can't necessarily see. So do you think the climate crisis is because humans are evil?
No, it's because of prioritization. And like we talked about before we started, I think humans tend to worry about what they think is most pressing and urgent, so framing things as an emergency might push them up the priority list. It's the same in organizations: you care about your immediate incentives. That's what happens in business, and that's what happens in many people's lives. Even in school, if the essay is due next year, they're not going to do it today; they're going to go out with their friends, because they prioritize that above everything else. And it's the same with the climate change crisis.
I took a small group of people, anonymously, and asked them if they really care about climate change, and then I ran a couple of surveys; it's part of what I was writing about in my new book. I said: if I could give you a thousand pounds, or a thousand dollars, but it would put into the air the same amount of carbon that's emitted by every private jet flying for an entire year, which would you choose? Most people in that survey said they would take the thousand dollars if it was anonymous. And when I heard Naval on Joe
Rogan's podcast talking about people in India, for example, who are struggling with the basics of feeding their children: asking those people to worry about climate change when they're trying to figure out how to eat in the next three hours is just wishful thinking. And that's what I think is happening: until people realize that it's an emergency and a real existential threat, you know, their priorities won't change quickly. As you know, we're lucky to have BlueJeans by Verizon as a sponsor of this podcast, and for anyone who doesn't know, BlueJeans is an online video conferencing tool that lets you have fast, high-quality online meetings without all the glitches you usually find with online meeting tools, and they have a new feature called BlueJeans Basic. BlueJeans Basic is essentially a free version of their high-quality video conferencing tool, which means you get a super high-quality, immersive video experience that's super easy and fast, with zero time limits on meeting calls. It also comes with high-fidelity audio and video, including Dolby Voice, which is incredibly useful.
They also have enterprise-grade security, so you can collaborate with confidence. It's simple, and it's honestly changed the game for me and my team without compromising quality. To learn more, all you have to do is visit bluejeans.com, and let me know how you get on. Right now I'm incredibly busy. I'm managing my fund, where we invest in slightly later-stage companies; I have my venture business, where we invest in early-stage companies; we have a third business, in San Francisco and New York City, where we have a large team of about 40 people; and Flight Story, the company, is growing very rapidly here in the UK.
I have the podcast, and I'm days away from heading up north to film Dragons' Den for two months, and if there's ever a time in my life when I want to focus on my health but it's a challenge to do so, it's now. For me, that's exactly where Huel comes into play, allowing me to stay healthy and have a nutritionally complete diet even when my professional life descends into chaos. It's in these moments that Huel becomes my right hand and saves my life, because when my world descends into professional chaos and I'm very, very busy, the first thing that tends to give way is my nutritional choices. So having Huel in my life has been a lifesaver for the last four years or so, and if you haven't tried Huel yet, which would surprise me, you must be living under a rock.
Summer is coming, things are getting busy, and health always matters; Huel's ready-to-drink range is there to hold your hand. Whether it's climate change or artificial intelligence, how do we get people to look past the immediate need? Do we use this framing that we're all screwed? It sounds like an emergency. Yeah, I mean, that's your choice of word; I just don't want to call it panic. It's beyond an emergency. It's the most important thing we need to do today. It's bigger than climate change, believe it or not. It's bigger, if you just consider the speed at which events are getting worse.
Well, yes. The likelihood of something incredibly disruptive happening in the next two years that could affect the entire planet is definitely higher with AI than with climate change. As an individual hearing this, you know, someone pushing a stroller, or driving on the motorway, or on the tube on the way to work, listening to this, or just sitting in their room in existential panic: I don't want to create panic. The problem is that when you share this information, regardless of your intention or what you want people to take from it, they're going to take something based on their own biases and their own feelings. If I posted something online right now about artificial intelligence, something you've told me repeatedly, you'd have a group of people who are energized and go, okay, this is it, this is great; you'd have a group of people who are confused; and you'd have a group of people who are terrified. Yeah, and I can't help that. If I agree to share information, even if it's "by the way, there's a pandemic coming from China", some people take action, some people freeze, and some people panic. And it's the same in business: when panic sets in, when bad things happen, you have the person who yells at you,
the person who's paralysed, and the person who's focused on how to get out of the room. So it's not necessarily your intention; it's just what happens, and it's hard to avoid. So let's give specific categories of people specific tasks. Okay: if you are an investor or a businessperson, invest in good, ethical AI. If you are a developer, only write ethical code — or leave. Okay, so let's go — I want to avoid some possible wishful thinking here. For an investor, whose whole job is to make profits, to invest in ethical AI they have to believe it is more profitable than unethical AI, whatever that means. Well, there are three ways to make money: you can invest in something small, you can invest in something big and disruptive, and you can invest in something big and disruptive that is good for people. At Google we used to call it the toothbrush test. The reason Google became the biggest thing in the world is that search was solving a very real problem, and Larry Page, our CEO, was constantly reminding me personally — and everyone — that if you can find a way to solve a real problem effectively enough that a billion or more people want to use it twice a day, you're sure to make a lot of money, a lot more money than if you created the next photo-sharing app.
Well, that's investors and business people — what about other people? Yes, like I said, if you're a developer, honestly: whether it's Geoffrey or me or anyone, if you're part of that world, choose to be ethical. Think about your loved ones; work on ethical AI, and if you're working on an AI that is unethical, please leave. Geoffrey — tell me about Geoffrey. I can't speak on his behalf, but he goes around saying there are existential threats. Who is he? Geoffrey Hinton was a very prominent figure in the AI scene, one of the most senior AI scientists at Google, and he recently left because he said, I feel there's an existential threat — and if you listen to his interviews, he's basically saying that more and more we realize that, and now we're at the point where it's safe to say
there could be existential threats. Right. So I would ask everyone: if you are an expert AI developer, you will not be out of a job, so you might as well choose a job that makes the world a better place. What about the individual? Yes, the individual matters — but can I also talk about the government first? Okay. The government needs to act now; honestly, we are already late. The government needs to find a smart way to do it. The open letter wouldn't work; stopping AI wouldn't work. AI needs to become expensive. So we keep developing it, we put money into it and we grow it, but we raise enough tax revenue to remedy the impact of AI. But here's the problem with a government making it more expensive — say the UK makes AI really expensive: where does that leave us as a country?
Then the country loses its economic advantage, and the United States and Silicon Valley once again eat everyone's lunch. We would simply slow our own country down. What is the alternative? The alternative is that you don't have the funds you need to deal with AI as it starts affecting people's lives and people start losing jobs — and, you know, the need for a universal basic income is a lot closer than people think. Just as we had with COVID, I hope there is legislation around AI in the next year. But what happens when you make it expensive here is that all the developers move to where it's cheap. That happened in web3 — everyone went to Dubai. Expensive how? I mean, when companies make, say, soap and sell it, they pay taxes at, let's say, 17 percent; if they make AI and sell it, they pay taxes at 70 or 80 percent — so I'll go to Dubai and build AI there. Yes, you're right. But —
But in a very interesting way, countries that don't do this will eventually end up in a place where they run out of resources, because the funds and the success went to the businesses, not to the people. It's like technology in general terms — it's what happened with Silicon Valley: there will be these hubs that seem to be tax-efficient, where founders make good capital gains. Right. Portugal has said, I think, that there are no taxes on cryptocurrencies; Dubai said there are no taxes on cryptocurrencies; so a lot of my friends have gotten on a plane, yeah, and are building their crypto companies where there are no taxes. And that's the selfishness, the kind of greed, that we talked about. It's the same prisoner's dilemma; it's the same first inevitable. But is there anything else, you know?
The other thing about governments is that they are always slow and useless at understanding technology. If anyone has seen those debates in the American Congress where they bring in Mark Zuckerberg and try to ask him what WhatsApp is — it becomes a meme. Yes, they have no idea what they're talking about. But equally, technologists are useless at understanding governance. Yes, yes — 100 percent. The world is so complex. It's definitely a trust thing, once again: someone needs to say, we have no idea what's going on here in tech; you technologists need to come and help us make a decision, or at least inform us of the possible decisions that exist. Yes — legislation.
I always think — I'm not a big fan. I'm talking about TikTok: at that congressional hearing, they're asking about TikTok and they really don't have an understanding of what TikTok is. Yes, so clearly they've just been given some notes on it. These people are not the ones you want legislating, because, again, unintended consequences could turn into a major mistake. Someone on my podcast yesterday was talking about GDPR: it means very well, but when you think about the impact it has — on every damn web page you just click through this annoying banner — I don't think they fully understood how the legislation would be implemented. But you know what's even worse?
Even when you try to regulate something like AI — what counts as AI? Even if I say, okay, if you use AI in your company you need to pay a little more tax, companies will just call it not-AI: they'll use something and call it advanced technology — ATB, ATP, whatever — and suddenly, somehow, some young developer in his garage somewhere won't pay taxes on it as such. Yes, and none of those measures will definitely solve the problem. I think, interestingly, what this all boils down to — and remember, we talked about this once — when I wrote Scary Smart, it was about how we save the world, okay, and yes, I still ask people to behave positively, like good parents to AI, so that the AI itself learns the correct set of values.
I still stand by that, but I had a guest on my podcast a couple of weeks ago — we haven't even published it yet — an incredible gentleman, a Canadian author and philosopher, Stephen Jenkinson. He worked for 30 years with dying people and wrote a book called Die Wise, and I loved his work. I asked him about Die Wise, and he told me it's not just about a person dying: if you look at what's happening with climate change, for example, our world is dying. And I said, okay, so what is dying wisely? And he said something I was at first surprised to hear: he said that hope is the wrong premise.
If the world is dying, don't tell people it isn't — because, in a very interesting way, you're depriving them of the right to live fully right now. That was very eye-opening for me. In Buddhism, you know, they teach that you can be motivated by fear, but hope is not the opposite of fear; in fact, hope can be just as harmful as fear if you create an expectation within yourself that life will somehow show up and correct what you fear. Okay: if there is a high probability of
If you're in a situation where AI can benefit you, be a part of it, but I think the most interesting thing in my opinion is, I don't know how to say this. Otherwise, there is no more certainty that the AI ​​will threaten me, so there is no more certainty that I will be hit by a car while leaving this place. Do you understand this? We think of the biggest threats as if they are upon us, but there is a threat all around you, I mean, actually, the idea of ​​life being interesting in terms of challenging challenges, uncertainties and threats, etc., is just a called to live, if you honestly know everything that happens around us.
I don't know how to put it another way. I would say that if you don't have children, maybe wait a couple of years, just so we have a little certainty. But if you have children, go kiss them — and go live. I think living is a very interesting thing to do right now. Maybe, you know, Stephen was basically saying — the other Stephen, on my podcast — maybe we should fail a little more often; maybe we should let things go wrong; maybe we should just live and enjoy life as it is. Because today, nothing that you and I have talked about here has happened yet. What's happening here is that you and I are together, having a good cup of coffee, and I can enjoy that good cup of coffee too.
I know that sounds strange. I'm not saying don't participate, but I'm also saying don't miss the present just because you're caught up in the future. Doesn't that stand opposed to the idea of urgency and emergency? No — it doesn't have to be one or the other. If I'm here with you trying to tell the whole world to wake up, does that mean I have to be gloomy and afraid all the time? No. Really, you said something interesting: you said if you have kids... if you don't have kids, maybe don't have kids right now? I would definitely consider thinking about that, yes. You would really, seriously consider not having kids — waiting a couple of years — because of artificial intelligence? No, it's bigger than artificial intelligence, Stephen. We know it; we all know it. I mean, there has never been a storm this perfect in the history of humanity: geopolitics, economics, global warming or climate change, the whole idea of artificial intelligence, and much more. This is a perfect storm. The depth of uncertainty — in video-gamer terms, it has never been more intense. And when you put all that together, if you really love your children, would you want to expose them to all of this for the sake of a couple of years?
You spoke, in the first conversation we had on this podcast, about the loss of your son Ali, and the circumstances — it touched so many people in such a profound way that it was the most shared podcast episode in the UK on Apple in all of 2022. Based on what you just said, if you could bring Ali back into this world right now, would you? Not at all — for so many reasons, so many reasons. One of the things I realized, a few years before all this disruption and upheaval, is that he was an angel; he was not cut out for this at all.
My son was an empath who absorbed everyone else's pain; he wouldn't have been able to deal with a world where more and more pain kept appearing. That's one side. But the more interesting thing — and I always talk about this very openly — is that the only reason you and I are having this conversation is that Ali left. If Ali hadn't left our world, I wouldn't have written my first book, I wouldn't have changed my path to become an author, I wouldn't have become a podcaster, I wouldn't have gone out and talked to the world about what I believe. He triggered all of this, and I can tell you without a doubt that if I had asked Ali, as he was going into the operating room, whether he would give his life to make as much of a difference as what happened after he left, he would have said: shoot me right now. I'm sure he would. I mean, if you told me right now, you could affect tens of millions of people if we shoot you right now — go ahead, go ahead. See, this is the part that we have forgotten as humans. We have forgotten that — you know, you're going to turn 30; it happened just like that. I'm turning 56; it went just like that. There is no time. Whether I get another 56 years, or another 5.6 years, or another 5.6 months, it will pass just the same. It's not about how long, and it's not about how much fun — it's about how aligned you lived. How aligned? Because I will tell you openly: every day of my life since I changed to what I'm trying to do today has felt longer to me than the previous 45 years. It felt rich, it felt completely lived, it felt good. And when you think about that — the idea that we need to live for ourselves until we get to a point where we say, you know, I'm alive, I have what I need. I get a lot of attacks from people for my four-dollar t-shirts, but I need a simple t-shirt; I really don't need a complex t-shirt, especially with my lifestyle. I have that. So why would I waste my life on more than that — on things that aren't aligned with why I'm here? I should spend my life on what I believe enriches me, enriches those I love — and I love everyone, so it enriches everyone, hopefully. And —
Would Ali come back and erase all this? Absolutely not — not at all. But if he were to come back today and share his beautiful self with the world in a way that improved our world — yes, I wish that were the case. Okay, but in 2037 — yes sir — you predict that we will be on an island, alone, doing nothing, or at least, you know, hiding from the machines, or relaxing because the machines have optimized our lives to a point where we don't need to do much. That's only 14 years away. If you had to bet on the outcome — if you had to bet on what we'll be doing on that island, hiding from the machines or relaxing because they've optimized so much of our lives — which one would you honestly bet on?
No, I don't think we'll be hiding from the machines. I think we'll be hiding from what humans are doing with the machines. By the 2040s, though, I think machines will be making things better — so remember my whole prediction... man, you make me say things I don't mean. My whole prediction is that we're getting to a place where we absolutely need a sense of emergency; we have to get involved, because our world is in a lot of turmoil, and by doing so we have a very, very good chance of making things better. But if we don't, my expectation is that we will be going through very uncharted territory between now and the end of the 2030s. Uncharted territory — yes, I think I may have said that; it's definitely in my notes — because for our way of life as we know it, the game is over. Our way of life will never be the same: jobs will be different, the truth will be different, the polarization of power will be different, the capabilities — the magic — of getting things done will be different. I'm trying to find a positive note to end on; can you give me a hand here? Yes: you're here now, and everything is wonderful — that's number one. You're here now, and you can make a difference — that's number two. And in the long term, when humans stop hurting humans because the machines are in charge, we will all be fine. Sometimes, as we've discussed throughout this conversation, you have to make it feel like a priority, and there will be some people who listened to our conversation and thought, oh, that's really negative; it made me feel anxious; it made me feel a little pessimistic about the future. But whatever that energy is, use it to participate 100 percent.
I think that's the most important thing now: make it a priority. Tell everyone that making another phone that makes money for the corporate world is not what we need. Tell the whole world that creating an artificial intelligence that will make someone richer is not what we need — and if you are presented with one of those, don't use it. I don't know how else to tell you: if you can afford to be a master of human connection instead of a master of AI, do it; and at the same time, be a master of AI so you can compete in this world.
Can you find that detachment within you? I go back to spirituality. Detachment, for me, is engaging 100 percent with the current reality without being truly affected by the possible outcome. This is the answer that the Sufis have taught — what I believe is the greatest answer to life. So yes: Sufism. Sufism — I don't know what that is. Sufism is a branch of Islam, though similar strands exist in many other religious teachings, and it tells you that the answer to finding peace in life is to die before you die. If you assume that living is about attachment to everything physical, then dying is detachment from everything physical. It doesn't mean you're not fully alive — you become more alive when you say to yourself: yes, I'm going to record an episode of my podcast every week and reach tens or hundreds of thousands of people — millions in your case — and I'm going to make a difference. But by the way, if the next episode is never heard, that's okay; if the file gets lost, yeah, I'll be upset about it for a minute, and then I'll figure out what I'm going to do about it.
Similarly — let's get involved. I, and many others, are out there openly telling the entire world that this must stop, this must be slowed down, this must be changed for the positive. Yes, create AI — but create AI that's good for humanity. Okay? We're shouting and shouting — join the shouting and screaming, okay — but at the same time, know that the world is bigger than you and me, and your voice may not be heard. So what are you going to do if your voice is not heard? You keep shouting and shouting — nicely, politely, peacefully — and at the same time you create the best life you can create for yourself within this environment. And that's exactly what I'm saying:
I'm saying live — kiss your kids — but make an informed decision if you're planning to expand your family in the future. At the same time, stop sharing nonsense on the internet about the shiny new toy; start sharing the reality of: oh my God, what is happening? This is a disruption the likes of which we have never seen — and I have created endless amounts of technology; it's nothing like this. Each of us should do our part, and that's why I think it's important to have this conversation today. This is not a podcast where I ever thought we'd be talking about AI.
To be honest with you, the last time you came here it was on a sort of promotional tour for your book, Scary Smart, and I don't know if I told you this before, but my researchers said, okay, a guy called Mo Gawdat is coming. I had heard about you so many times through guests — in fact, they were like, oh, you need to have Mo Gawdat on the podcast, and so on — and then they were like, okay, he's written this book about this thing called artificial intelligence, and I was like, but no one really cares about AI. Timing, Stephen. I know, right. But then I saw this other book you had, Solve for Happy, about the happiness equation, and I thought, oh, everyone cares about happiness, so I'll just ask about happiness, and then maybe at the end I'll ask a couple of questions about AI. I remember saying to my researcher: oh please, please don't do the research on artificial intelligence, do it on happiness, because everyone cares about that. Now things have changed — now a lot of people care about artificial intelligence, and rightly so — so your book sounded the alarm.
It's crazy — I've been listening to the audiobook for the past few days — how accurately you sounded that alarm. It's as if you could see the future in a way that I definitely couldn't at the time; I thought it was science fiction, and then almost overnight, here we are. We are living through a technological shift whose significance I don't think any of us have the mental bandwidth — certainly not me, with my chimpanzee brain — to understand. But this book is very, very important for that very reason, because it crystallizes things. It's optimistic by its very nature, but at the same time it's honest, and I think that's what this conversation and this book have been for me. So thank you, Mo. Thank you so much. We have a closing tradition on this podcast, which you're aware of, having been here three times.
It's the Diary of a CEO tradition: the last guest asks a question for the next guest, and the question left for you is: if you could go back in time and fix one regret you have in your life, where would you go and what would you fix? It's interesting, because you were saying that Scary Smart was very timely. I don't know — I think it was late, but maybe it was timely. I mean, would I have gone back and written it in 2018 instead of 2020, to publish it in 2021? I don't know what I would fix. Something else? I don't know, Steve — I don't really have regrets. That's crazy to say, but yeah, I think I'm okay, honestly. Then I'll ask you a question: you get a 60-second phone call with anyone, past or present. Who do you call? You'd expect me to say Stephen Bartlett, right?
I would call Albert Einstein — to be very, very clear, not because I need to understand any of his work, but because I need to understand the thought process he went through to discover something so obvious once you've discovered it, yet so completely unimaginable before. His view of space-time really redefines everything; it's almost the only very logical, very clear answer to something that would otherwise have no solution. And if you ask me, I think we're at a point where there has to be a similarly obvious solution to what we're going through — in terms of just developing enough human trust among ourselves.
You know, we're competing against each other on something that could be an existential threat to all of us, and I just can't find that answer. I think it was really interesting in this conversation how every idea we came up with, we found our way through — but there must be an answer out there, and it would be a dream for me to figure out how to solve it. In a very interesting way, the only answers I've found so far are to be a good parent to AI and to live well. That fixes our existence today and fixes AI in the long term, but it doesn't solve the big picture if you think that humans are the threat, not AI. I don't know what the answer is. Maybe people can come forward and offer us ideas, but I really wish we could find a clear, simple solution to stop humanity from abusing the current technology.
I think we'll figure it out. I think we'll figure it out — I really believe it. And I think they will figure it out too. Remember, when they come and become part of our lives, let's not discriminate against them; they're part of the game, so I think they will figure it out too. Mo, thank you. That was a joy once again. I feel invigorated, I feel empowered, I feel positively terrified, but I feel more prepared to talk to people about the nature of what's to come and how we should behave — and I give you credit for that, and, as I said a second ago, I give this book credit for that as well. So thank you so much for the work you're doing, and keep doing it, because yours is an essential voice in a time of uncertainty.
I am always very grateful for the time I spend with you, for the support you give me, and for allowing me to speak my mind, even if it's a little scary. So thank you. Thank you.

I'm so delighted that WHOOP sponsors this podcast. I've used WHOOP for a very long time, and there are many reasons I became a member — and now I'm also a partner and investor in the company — but my team and I are absolutely obsessed with data-driven testing, compound growth, marginal gains: all the things you've heard me talk about on this podcast, and which very much align with the values of WHOOP. WHOOP provides a level of detail I have never seen with any other device of this kind — constantly monitoring, constantly learning, and constantly optimizing my routine to give me feedback that can drive significant positive behaviour change, and I think that's the real thesis of the business. So if you're like me and you're a little bit obsessed with, or focused on, becoming the best version of yourself from a health perspective, you should check it out.
WHOOP and the team have kindly given us the opportunity to offer a free one-month membership to anyone who listens to this podcast: simply visit join.whoop.com/CEO to get your WHOOP 4.0 device, claim your free month, and let me know how you get on.

Did you get to the end of this podcast? Every time someone gets to the end of this podcast, I feel like I owe them a greater debt of gratitude, because it means you heard the whole thing, and hopefully that suggests you enjoyed it. If you're at the end and enjoyed this podcast, could you do me a favour and hit the subscribe button?
That's one of the clearest indicators we have that an episode was a good one, and we look at it on every episode to see which episodes generated the most subscribers. Thank you so much, and I'll see you next time.
