
You and AI Presented by Professor Brian Cox

Jun 01, 2021
Hi, thank you. We thought we'd stay here for a moment to take a photograph for the Royal Society archive. Would you mind not saying hello? I think you should probably look interested and intellectual. How is that? Oh, still interested. Now you can say hello if you want. Hello. Okay, thank you for that. So thank you for joining us for what is the final event in the Royal Society's You and AI series. I want to welcome everyone here tonight at the Barbican, and everyone watching online. I'm Professor Brian Cox, the Royal Society Professor for Public Engagement in Science.

We have to go back, in fact, to Homer's Iliad, around 800 BC, to find the first accounts of automata, and over the centuries those ideas have evolved into the more familiar ideas of robots, cybernetics and now artificial intelligence. It was a Fellow of the Royal Society, Alan Turing, who first grappled with the specific notion of a thinking machine. Throughout the 1940s and 1950s he posed the question now known as the Turing test: can machines think? The idea is that a machine could be assumed to think if it exhibits intelligence such that, interacting with it, you might believe it was actually human. And this is something we will discuss tonight: that could be seen as simply pointing to the idea that humans are gullible, rather than being a good measure of, or insight into, machine intelligence. But perhaps it tells us that our relationship with AI could be as important as the relative intelligence of the machine itself. As the use of artificial intelligence grows and spreads throughout society, how do we feel about it making decisions on our behalf, and does it work well for us?
Last year the Royal Society released a landmark report on machine learning, the technology driving many of the current advances in artificial intelligence, and the Society has been supporting a well-informed public debate ever since. It's a bold idea, isn't it, that a well-informed public debate could be helpful? I think the same is true in many other fields. The aim is for the development of AI to help create a society in which the benefits of AI are shared equitably. With me this evening are some of the world's leading thinkers on AI, and together we will discuss AI's potential to revolutionize fields such as healthcare and education, but also the challenges and issues surrounding the ethics of the mass use of what is, after all, our data. Now, before introducing the panel, I would like to thank DeepMind, who have kindly supported this whole You and AI series. But now, without further ado, could I welcome our panel?
Let's see, by way of introduction: Professor Suchi Saria is the John C. Malone Assistant Professor in the Department of Computer Science at Johns Hopkins University; Professor Peter Donnelly FMedSci FRS is Professor of Statistical Science at the University of Oxford and CEO of Genomics plc; and Dr Vivienne Ming is a theoretical neuroscientist, technologist, entrepreneur and co-founder of Socos. Now, the first question before we really get started, I think, is to ask each panel member for a definition. AI has captured a lot of our imaginations, maybe our nightmares in a sense, but it means different things to different people. So maybe I could ask each of you, first of all, to define what AI is.

Okay, so, you know, I think artificial intelligence admits a variety of definitions. One of the most boring, but perhaps the most accurate, is any autonomous system that can make decisions under conditions of uncertainty. If there is a problem that has no single correct answer, and there is no human there to make a decision, an AI is a system that could make it. There is no provably right chess move, no right answer to how fast I should take this right turn in my car, or whether I should give this person a loan or not; they are just fundamentally uncertain decisions. Actually, I like something a little more practical when I describe AI, particularly because I work a lot in what is called the future of work. For me, modern AI is any brief expert human judgment (reading a contract, deciding whether to hire someone, is there a tumour on this X-ray) increasingly being made faster, cheaper, and often better than a human can make it.
I think Vivienne totally nailed it. I don't have much to add, except to mention a specific form of AI, one of the developments driving many of the things that impact our lives now, and that is something called machine learning. It is a form of AI, and it is the idea of computer systems that can learn for themselves from examples, data and experience. Instead of programming a computer, as in the old days, to tell it what to do in every eventuality, in machine learning you program a computer so that it can study examples and learn for itself what the patterns are, to help it make the kinds of decisions Vivienne was talking about.

I think that mostly captures the essence of what AI is. Just as historical context: AI as a field emerged in the 1960s, primarily as the ambition of researchers from a diverse group of fields, from physics and cognitive science to computer science and engineering, where the goal was to build computers that could behave intelligently as humans do. Now you could argue, well, do humans behave intelligently? So essentially the goal was ambiguous, and over the last 50 or 60 years we have worked hard to determine what intelligence is, what human intelligence is, how we should build machines that are intelligent, and whether we should imitate humans or instead determine what is right in a given situation. That's the field as a whole. And it's worth noting that a lot of the things we readily consider examples of intelligence, like being able to take a maths test or play chess, things some of the early Newell and Simon programs of the early 1960s could do, turned out not to be the hard part; while recognizing that someone in the front row is smiling turned out to be an enormously difficult problem for a long time. Now it ships on your phone as a free app, but it is a much more complex problem than we originally appreciated. Not that we can actually see whether anyone is smiling, because of the glaring lights. Although the ones in front...
They're clearly hoping to get their money's worth out of this performance. Everyone has described, I think, in a sense, what we might call general AI, something that is very human and multipurpose, but there are also AI systems, machine learning systems, that have very specific tasks. Would you like to comment on the difference between those two?

I'll go again, okay. There are these two big concepts in a lot of modern discussions about AI. One is called artificial general intelligence: something that thinks like us, that actually understands and navigates the world, and I will maintain that there is no system like that in the world today, and no one is about to invent one. There are, however, these systems that can take a very specific approach to understanding the world. For example: is someone in the front row smiling? Is there a loophole in this legal contract? Is there a giraffe in this picture? Those systems are remarkably effective, but I would say that an AI that can recognize giraffes does not understand anything about giraffes; it does not have the concept of a giraffe. When you look at the errors people make when they look at ambiguous photographs to decide whether there is a giraffe there or not, you can understand why they might be wrong; maybe this is a llama with a cold or something. Whereas when these AIs make mistakes, the image often looks nothing like a giraffe, or they decide that something that looks exactly like a giraffe is not one. You come to appreciate that they are making decisions in markedly different ways. So I would say that these very simple systems, which are really the core of AI today, address problems in a way dramatically different from the way all of us do.
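As an aside for readers of this transcript, the "learning from examples" idea behind systems like the giraffe detector discussed above can be sketched in a few lines of code. The sketch below is purely hypothetical and deliberately tiny: a one-nearest-neighbour classifier over made-up two-number "features". Real image classifiers are deep networks trained on millions of photographs, but the principle is the same: no rule is programmed in, and the behaviour comes entirely from labelled examples.

```python
# Minimal "learning from examples": a 1-nearest-neighbour classifier.
# No classification rule is written down anywhere; the behaviour comes
# entirely from the labelled training data.

def distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(examples, query):
    """Label the query with the label of its closest training example."""
    nearest = min(examples, key=lambda ex: distance(ex[0], query))
    return nearest[1]

# Hypothetical toy data: (height_m, neck_length_m) -> species label.
training = [
    ((5.5, 2.4), "giraffe"),
    ((5.1, 2.1), "giraffe"),
    ((1.8, 0.6), "llama"),
    ((1.9, 0.5), "llama"),
]

print(predict(training, (5.3, 2.2)))   # close to the giraffe examples
print(predict(training, (1.7, 0.55)))  # close to the llama examples
```

Note that this also illustrates the brittleness the panel describes: a query far outside the training data still receives a confident label, because the system has no concept of a giraffe, only distances between numbers.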
Does that mean the word intelligence... it's a complicated word, isn't it? There's a lot of baggage associated with it. When you say "intelligent system", a lot of people think of something like us, rather than something that's good at one specific thing.

I think, you know, I moved from India to America at the age of 17, and I learned new things that I hadn't encountered as a child, and I remember it clearly. Humour is an example: humour is very cultural and contextual, and it's something I find funny now. I may not have found it funny when I was 15 and living in India, but I clearly remember treating it as a problem I had to solve. Someone would say something and people would laugh, and I would try to figure out what was going on, until finally I understood the context. What I mean is, I think one difficult thing about intelligence is that almost anything you can sit down and reason through in some deductive or logical way, like solving a hard maths problem, or even distinguishing giraffes from llamas, you can, I think, train a computer to do, or at least ask people how they do it and have them describe it in a way a computer could follow. So this notion of what human intelligence is, what the essence of it is that we're trying to capture, I think is an open question from my point of view. I don't know what you think.

No, I completely agree, and I'd actually go back to both points and to Brian's earlier question. In a sense, the reason we're having this discussion, and I suspect there are many reasons, is advances in machine learning, in particular approaches to very specific tasks and problems: the kinds of things we interact with on smartphones every day, the ability to speak to a phone which, I don't think, understands what we are saying but knows how to react to it, recommendation systems when we buy online, the ability to tag friends in photos on Facebook or other applications. All those kinds of things are due to progress on these very specific tasks.
Image recognition is one of those tasks, and that history spans most of my academic career. Computer vision, for example, has been a very active field, but throughout the '80s, which takes me back to early in my career, through the '90s and even much of the 2000s, computer systems just weren't very good at it. They would get a little better every year, but they weren't as good as people. Then, in the last six or seven years, they went from not being very good at that specific task to now being, on many tests, as good as or better than people, whether it's detecting giraffes or looking for pathology on an X-ray of some kind. Because of machine learning, computer systems can now outperform people, and similarly with speech recognition and with some translation tasks. It's that massive progress over the last five or six years that has meant we interact with these things all the time. But we are interacting with them, as you said,
Brian, in terms of the ability to do very specific things remarkably well.

Let's move on to your questions. We've divided them into three sections, and the first is titled "Who is benefiting from AI right now?". The first question, from Anya, is: who benefits the most from current AI systems? Who'd like it?

I can start this one; I'm a paid pompous ass, after all. So, you know, I think one of the truths about almost all technology is that, at least initially, it always benefits the people who need it least. That's its nature. I have worked a lot in educational technology; in fact, I apply artificial intelligence to education, and everyone who goes into this field does it because, let's face it, we want to save the world. We want to say: here's some little kid; can we build a little AI tutor? Imagine that in every child's home in the world. And the simple truth is that a business like this is successful because it can sell its product to a bunch of wealthy parents who want a few extra points on a standardized test that won't change anyone's life at all. That's why we build these things, and that's who tends to adopt them. Part of the crux of it is that when we build them we don't really understand people. Many of us understand a lot about machines, although there is a lot more to learn, but not how machines and people interact, or what people believe is valuable.

In the world of education, if you put an app on the app store, you haven't solved anything, because the people who really need that help will never go there. They will never download that app; they will never use it, because they don't believe in their hearts that it will really make a difference. So we look again and again and see that this really is more a concentration of power. It inevitably increases inequality, at least in the short term. And AI in particular is interesting because it can substitute for human judgment, at least in specific tasks, and in particular for highly trained professional human judgments. That in itself increases inequality in ways we have never seen before, because now, and I don't think he's a bad guy, but now Jeff Bezos can replace with AIs a bunch of people he would previously have had to pay to make those judgments. That also has a big influence on inequality.

So you're essentially suggesting that there is a subtext to this question, which is that AI does not necessarily benefit the public. You're taking a fairly cynical view: that many of these systems are just increasing corporate profits.

I wouldn't build these things if I didn't think they were worth it, and I hope we get a chance to talk about that kind of work. But let me be very clear: building something because you think it will do good in the world is dramatically different from it doing good in the world. We can build tremendously powerful systems. They are tools. They are not truly autonomous, in the sense that they do not understand the world, but they are tremendously powerful tools. If we simply think that a hammer, by its very existence, builds a house, then we have really lost our understanding of how the world works.
I think this partly has to do with the way funding is structured right now. Most of the funding in this field comes through individual research grants, which means no professor or lab has enough funding to build anything of significant scale. On the other hand, venture capital and private corporations have a good amount of funding, and also the vision; they see how this can bring good to the world. But then, as Vivienne said, the challenge is that if you go that way you need a sustainable business model, which means you naturally head in a direction where you initially serve audiences that can pay for it.

I work a lot in healthcare, and one of the promising things I see in healthcare is its centralized nature: data is held by large organizations, or here by the NHS, for example. If we can harness this data and identify ways to diagnose diseases earlier and treat patients in a more targeted way, that technology can really be distributed at scale and can benefit many. I think I'm an optimist.

Well, I agree with both of the previous comments; I suspect I will be saying that many times this evening, so perhaps I should go first next time. But it is a complex and layered question: who benefits from these systems?
Right now, in a sense, most of us benefit, because our lives are made a little easier through our smartphones and the systems we interact with. There is a convenience. I mean, for those of you who are my age and have children, it is impossible to imagine how you managed to meet someone in the days before you could use these systems to figure out where you are and where they are and converge; but somehow we managed to muddle through. So there is a sense in which our lives are more convenient right now thanks to these systems. And I am very positive about the potential benefit in really important areas: healthcare is one of them and education is another, and the people on either side of me are making a difference in both of those. The possibility of improving so many aspects of our lives is real. And yet I think Vivienne is absolutely right that we need to think very carefully as a society about how we want to act. If we let this happen without intervening or managing it in any way, there is a very real danger that the short- and medium-term consequences will not be to improve the world as we hope, but to increase inequality. I believe that as a society we have a huge duty to be active, to figure out how to manage this. We must be stewards; we must look ahead and think.

There are different scenarios for how it could play out: which ones does society want to happen, and which are we less keen on? And how can we use the levers we have to push things in the right direction? Because I think it's pretty clear, as Vivienne said, that if we don't do a lot, the impact in the short and medium term will be an increase in inequality: improving the lives of people who are already quite well off, not helping people who are not, and widening those gaps, which I don't think we want. We need to be active in addressing that and in increasing equality.
I mean, the most obvious assumption is that it's replacing low-skilled jobs. Is that essentially right, or is there something else?

I'll be even more provocative. You can go and read the Lunch with the FT piece I did, which was very provocatively titled: the professional middle class is about to be surprised. It turns out that building a robot to pick strawberries or drive cars is not impossible, but it is very hard, and much harder than building a robot to do financial analysis or read X-rays. Those latter things seem like very sophisticated jobs; we pay people a huge amount of money to make those kinds of decisions. Yet those decisions are often made fairly by rote, they are written down, they are economically valuable, and people make them regularly. You've just described a machine learning training set, and it turns out it is much easier to build an AI to do those jobs than to build an AI to pick strawberries.

That has interesting implications, because many of the discussions I'm sure many of you have read, or heard on the radio, are largely about what we do with all the low-skilled workers who will be left out of work, and that is still a legitimate question even if the AI is good. I grew up in Salinas, California, where John Steinbeck grew up, so I grew up in the kingdom of The Grapes of Wrath, and you know, the devaluation of humanity in the fields, twelve hours a day, still continues every day, and it would be a human good for that to end. It would be good if we could build robots that could go out and pick all of this for us. So it is still a legitimate question to say: what do we do with millions of people? Agriculture is the biggest vertical job market in the world, and the next is transportation. But what do we also do with a group of people who were told that if they went to college and worked really hard, they would have an amazing job that would take care of them for the rest of their lives? It's quite possible that many of those jobs won't go away, but they will be, as I call it, deprofessionalized: it will be possible to hire a person with much less education, put an AI in their hands to make the tough decisions, and pay them much less.

We've seen it happen through globalization; we've seen it through factory automation. The first instinct of CFOs everywhere is to reduce labour costs to zero, and AI won't necessarily create more jobs than it destroys. How many people will be qualified for those elite creative jobs, and how many will fall into a kind of low-agency service sector where they don't really distinguish themselves from everyone else?

Does this have something to do specifically with artificial intelligence? You've described many cases
in history when technology has caused these problems, displacing particular groups of workers and so on. Is this really the first time, when a new tool becomes available, or is there something unique about AI and machine learning systems that you believe will cause a greater dislocation than the dislocations of the past?

I have an opinion, but I'm going to keep quiet for a moment.

Well, to echo one of Vivienne's comments: what will probably be different this time is that machine learning and artificial intelligence will impact the world of work in substantial ways that are not easy to predict in any detail. There are ten different learned reports that say diametrically opposite things, which is more evidence that it's hard to predict. But I think one thing that is clear is that it will have a much greater impact than previous revolutions, like the industrial revolution, on the kind of professional work Vivienne was describing. Many of those tasks, or at least many parts of those jobs, can now be done, or soon will be, as efficiently or better by machine learning systems than by people who have had years of training. There are interesting questions about how that's going to change roles. Calculators, for example, helped a lot of people but probably didn't put many people out of work. In the old days there were actually people called computers whose job was to do sums; they don't exist any more, but they were a small segment, and calculators, and then spreadsheets, probably increased our ability to do things rather than replacing us. But I think there is a real possibility that the impact this time, at that professional level, will be much greater than before compared with, say, the industrial era.

Just to put a face on this, here is one of my favourite examples. It's not my personal project, but recently there was a slightly notorious competition at Columbia University between a startup that had created an AI to read contracts and a group of human lawyers. The organizers had designed a set of confidentiality agreements, a form of contract, with a bunch of loopholes in them, and then had the two groups go at it. These are very rough numbers, but the AI found 95 per cent of the loopholes and the human lawyers found 88 per cent, so, whatever else, they're only human; let's call it a tie. The much more interesting number is that it took the human lawyers 90 minutes to read each contract, and the AI took 22 seconds.

Now, what you do with those loopholes, how judgments are made about them, remains a fundamentally human task. But the vast majority of lawyers, especially junior lawyers, spend their time reading contracts and finding loopholes, or doing case research: doing the intense work of learning how to become that other kind of lawyer. It is very possible that that kind of work will disappear; those tasks disappear. And then it becomes a choice. Either we do what Peter says, and all those lawyers become a kind of augmented super-lawyer: the moment they know all the loopholes, they start thinking about what to do with them. Or we say, well, my God, we're going to keep the five best lawyers and get rid of the rest.
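For readers curious how a machine can screen a contract in seconds, here is a deliberately crude sketch. It bears no relation to the actual system in the competition described above, which used a trained model rather than hand-written rules; the hypothetical patterns below are only meant to show why scanning text for candidate loopholes is fast and mechanical, while deciding what to do about them stays with the human.

```python
import re

# Hypothetical hand-written patterns a reviewer might flag in an NDA.
# Real contract-review AIs learn such signals from labelled data; this
# toy version only illustrates why machine screening is near-instant.
RISK_PATTERNS = {
    "unlimited term": re.compile(r"\bin perpetuity\b|\bno expiration\b", re.I),
    "one-sided obligation": re.compile(r"\bsole discretion\b", re.I),
    "broad definition": re.compile(r"\bany and all information\b", re.I),
}

def flag_clauses(contract_text):
    """Return (clause_number, risk_name) pairs for a human to review."""
    flags = []
    clauses = [c.strip() for c in contract_text.split("\n") if c.strip()]
    for i, clause in enumerate(clauses, start=1):
        for risk, pattern in RISK_PATTERNS.items():
            if pattern.search(clause):
                flags.append((i, risk))
    return flags

# A made-up three-clause NDA fragment.
nda = """Recipient shall protect any and all information disclosed.
Discloser may amend terms at its sole discretion.
This agreement remains in force in perpetuity."""

for clause_no, risk in flag_clauses(nda):
    print(f"clause {clause_no}: {risk}")
```

Even this toy version makes the division of labour clear: the machine surfaces candidate loopholes in milliseconds, and the judgment about what they mean for the client remains with the lawyer.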
Wouldn't it be great, though, if instead of having fewer lawyers we were all super-lawyers? If we could have our contracts read in half an hour, with solutions, instead of in five days, and at a tenth of what it costs now, it seems like everyone would be more productive. So in a way I think the role will change. Right now we have no choice: your junior lawyers have to spend their time reading those documents for loopholes, and they would rather be doing more interesting things.
I mean, I really appreciate that, because when I say this is a choice, I think it really is a choice, and we have the ability to make it. I won't describe it in detail now, but we built a little system that can analyse children's artwork and add to the ability of one of our systems to make educational interventions. There are maybe six people in the world who study children's art and analyse its implications for their cognitive and emotional development, and we took their work, incorporated it into a system, trained it on children's artwork, and now we give it away for free as part of this little system which we in turn give away. In that sense, yes, you can spread the benefit. Probably the best example is the idea that even a basic mobile phone, if it can take pictures, could photograph a mole that worries you and tell you whether it's cancerous, or at least whether you should go and see a doctor. That would be amazingly valuable.

But on this question of whether we have been here before, it really depends, because we have been here in multiple ways. In the United States we have a chain called Jiffy Lube. Forgive me if you have it here too. You bring in your car and they change the oil and the air filter and do all that kind of work. That used to be a middle-class job, where a real automotive engineer would come in and make adjustments to your car. Now it's the job you get if you didn't do so well in high school, and you have no agency: you just follow a script, a computer does all the diagnostic work, you upsell an air filter a little, the car leaves and the next one arrives.

All I'm saying is that the overwhelming economic trend of the last 30 years has been toward profound deprofessionalization. It doesn't have to be that way; it could very much be what you're talking about, but that requires an explicit choice on all our parts. And let me tell you, if you leave it to the business people of the world, we will try to extract the wage value out of the system, keep half for ourselves, and offer the other half back as a discount, and as a result that class of job will disappear. When was the last time anyone used a travel agent, for example? Ten or fifteen years ago, travel agents were absolutely essential, because they were the only people who knew the complicated things and could read airline schedules and so on. They still exist, but we interact with them much less, because it is possible to do most of these things yourself through various online applications.

We should move on; obviously we have over time. We've almost finished the first section.
It was very, very short. We have a demo I'm going to skip to in a minute, but first a question from Emily, the last in this section, if you can be brief: how do you think artificial intelligence in movies has impacted our view of AI and its potential?

How many people here have heard of a deep reinforcement learning model? Okay, a few hands. How many people have heard of Skynet? So there is a partial answer. I think most people's conception of AI ends up coming from Black Mirror, which is actually not a bad description of some of the ethical choices, while the description of the technology comes from the Terminator movies, and that probably reflects more of our fears. I mean, it feels a little more spiritual and deeper than anything else. I don't know that it's done anyone a great service in trying to get them to understand the implications of these technologies.

Do you find this in the field you work in, that people tend to use movie analogies to imagine what you're doing?

Oh yes, absolutely. People get very excited and fascinated and want to know more, and they immediately think not of statistics and mathematical algorithms but of powerful robots. So absolutely, I'll second that one hundred per cent.

It's interesting how often you talk about the future of work and people think that what you're describing is C-3PO coming to literally tap you on the shoulder and tell you you're out, that's my seat. And it's actually none of that that's happening.

Yes. I should say that there is actually a report on some work done by the Leverhulme Centre for the Future of Intelligence which was published today and is available on the Royal Society website, so if you want to go deeper into these questions, you can go to the site and take a look.
I'm going to go to the demo now, which is a live example, in fact, of machine learning and artificial intelligence, and so I would like to invite to the stage Professor Adrian Hilton and his team from the University of Surrey.

Thanks. Hello. I'm going to say hello. Maybe you could introduce the team? So we have Charles, Marco and Hannah, who is going to perform live for us.

So what are we going to do? This is using AI and machine learning for motion capture. What we're doing is converting video into 3D models of people's movement. On the left side we can see the video input; that is then transformed into a three-dimensional representation of the person's movement, and then we can map it onto a character, both indoors and outdoors, so it's a very portable system.

And what was the big breakthrough here? What are the difficulties in capturing human motion?

What machine learning and artificial intelligence have allowed us to do is take this video data and extract a really high-level understanding. In this case, we're understanding that there's a person in the video and, on top of that, where their joint positions are, and that's something we haven't been able to do until the last few years. It's a powerful technology, in this case going into the entertainment industry.

Yes, because I guess I used to see these things in film-making, and there used to be little dots all over everyone, so the vision system could see what was moving, but in this case it's just looking at the person, is that right? There's a lot of intelligence there to recognize that this is a hand, that's a head.

Exactly. The machine learning is taking the video, detecting that there's a person there, and then labelling the body parts of that person purely from video, and the challenge really is being able to do that in complex scenes, in someone's house, for example, or in an outdoor scene like the one we just saw.

So what are we seeing up here? On the top left it's just calibrating; in the middle we have Hannah moving, with her movements transferred to 3D; and on the right side we have a computer-generated scene of the Barbican courtyard, which is directly above us here, with the animated model placed in that scene.

I see there has to be a recognition part, but is there also a human model in there, to say that this hand is moving this way? Yes, that's how we do it. The AI is mapping where the person is in each view, and then we combine that into a three-dimensional model.

What are the applications? Obviously this is useful for film-making, but what else? This system was developed specifically to solve some of the problems in film and game production, to have portable systems that you can take and put on a set, but the technology itself is applicable to a wide range of things.
So, for example, one of the things we're looking at is use in healthcare, and how you can have very passive sensing technologies that are able to understand your movement and behavior. And why is that important? If you want people to be able to live at home independently for longer, then you have to have systems that can understand their behavior, and in particular when there are changes in behavior, because that can indicate that maybe they have an infection or something. So we've actually shown with some of this technology that, using machine learning again, you can pick out some of those characteristics from relatively simple behaviors. Yeah, well, I think I should try it, just to see — and so you can see it doesn't just work with a trained dancer.
Could you talk about what's happening while I'm here? The first thing that's happening right now — you can see the images on the screen, and what we need to do first is just calibrate the two models, because we have Brian in the image. From time to time the skeleton should — it's now picking up his movement and it's turning into this beautiful skeleton, and he's been transformed into another character here. So you have a very simple pipeline, without using any sensors or anything like that, that can actually interpret someone's movement and behavior, and that's one of the powerful technologies of AI. In terms of processing power, how long have we been able to do this? Is the difficulty the algorithms, the computing power, a combination of all of them? So the real challenge has been how you analyze an image of a general scene and understand that there's a person in it. We've all got used, over the last 10 years, to our smartphones having ways of detecting faces, but this goes much further than that, and it's only in recent years that we've been able to pick out people in general scenes and then track and analyze that movement, and that's because of machine learning — understanding from very large collections of images what a person looks like in an image. And you know, the variety of people here in the audience, with different clothes — it has to deal with all that complexity in the understanding. Yes. Oh, very impressive. So yes, thank you very much, thank you. Clever stuff — I thought I was getting a lecture from a superhero with Parkinson's. Yes, yes, I know — they chose a very flattering, I don't know, Iron Man type of thing for me, no?
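The pipeline Adrian describes — an AI detecting 2D joint positions in each camera view, then combining the views into a 3D model — rests on classical multi-view triangulation once the 2D detections exist. Here is a minimal sketch of that last step (toy camera matrices of my own invention, not the Surrey system's actual code):

```python
import numpy as np

def triangulate_joint(P1, P2, uv1, uv2):
    """Triangulate one joint's 3D position from two calibrated views.

    P1, P2 : 3x4 camera projection matrices (assumed known from calibration).
    uv1, uv2 : 2D joint detections (pixels) produced by the pose-estimation AI.
    Uses the standard Direct Linear Transform (DLT).
    """
    u1, v1 = uv1
    u2, v2 = uv2
    # Each view contributes two linear constraints on the homogeneous 3D point.
    A = np.stack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # Solve A X = 0 by SVD: the solution is the right singular vector with
    # the smallest singular value, dehomogenized.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Toy example: two cameras looking at the same joint from different positions.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])              # camera at the origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])  # shifted 1 unit in x
point = np.array([0.5, 0.2, 4.0, 1.0])                     # true 3D joint (homogeneous)
uv1 = (P1 @ point)[:2] / (P1 @ point)[2]                   # project into view 1
uv2 = (P2 @ point)[:2] / (P2 @ point)[2]                   # project into view 2
print(triangulate_joint(P1, P2, uv1, uv2))                 # recovers ~[0.5, 0.2, 4.0]
```

The hard part that machine learning solved is producing the `uv` detections from raw video of cluttered scenes; the geometry above has been standard for decades.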
It could have been anything. Anyway, the next question is from Alex — oh, actually we've talked a little about this. It's interesting, because — I made the mistake, I think, of saying new technologies always displace the less skilled labor, and John, you said no, maybe it's the more professional classes that should be more concerned, and the question relates to who will be the last employee, right? There's a question: is it the creative artists or the software engineers? So I guess we'll get into the nitty-gritty of what's hardest for an AI to replace.
I'll take the lead here on the whole thing. Well, this will be an absolutely shameless plug: I have a book coming out next year called How to Robot-Proof Your Kids, and the heart of that story is that there are some good answers to that, at least for the moment, until something more advanced comes along. But we look at the kinds of things that generalize better — you know, this idea that the future is unknown, so let's build people for the unknown instead of trying to guess what you should know. The only thing those future-of-work reports seem to have in common is that everyone should learn to code, which is one of the skills I've literally seen an AI do, where a designer simply describes the website they want and it writes the code. And so in 10 years, if most code isn't being written by AIs, I'd be quite surprised. There will still be people writing code to build really novel database structures and going out and exploring the unknown, but there probably won't be people writing a bunch of boilerplate code just to populate websites, which is the core of those jobs. But that points the other way: no one has really developed an AI, in any sense, to explore the unknown. So when you look at the qualities associated with those things — I'm going to be very broad here — like emotional intelligence and social skills, creativity and metacognition, those are things where even talking about AIs having those kinds of qualities doesn't really make sense, because they don't have emotions with which to have emotional intelligence. So when you look at who has the most creative jobs in the world today — and I mean creative defined in a very broad sense, so scientists are creative and engineers are too —
the people who typically end up in those roles — those are the things that are going to be harder to automate, and we should really focus our education system, and even our hiring, on those kinds of qualities instead of on a bunch of rote skills that two years from now will be obsolete.
I think that's the point you made earlier, isn't it — that you open up possibilities by allowing people to focus on those really productive areas? Lawyers, you said — it's actually the creative nature of the legal profession. And I think in general, if you think about the LSATs and MCATs, the entrance exams for law school or medical school, right now they are mostly tests of what we usually describe as IQ, but I increasingly believe that schools should change their admissions criteria to focus more on EQ, and identify people who can balance the two, because that's where these professions are going to change.
You're going to agree with everything. I'm going to agree again, and I'm going to come to you first next. I think — I think this question of — no, actually, I don't agree with that. Okay, it's true, it's true — I'm not a machine, or maybe I am a machine — don't quote me out of context; things are bad enough in America as it is. This is being streamed online; people will just cut it out and post it on Twitter. I think thinking about the right way to train and retrain people is an important challenge for us. I mean, there are obvious things about helping people learn to think, in the old sense, rather than just knowing and learning things, and that's bound to become more and more important, and we need to think about it throughout the standard educational curriculum and then the rest of people's lives. Let me add a really fascinating finding. For a while I was chief scientist of one of the first companies to use AI for hiring — which I hope we'll talk about, because it's one of those deeply controversial things — and we had this really interesting finding: we built a database of 122 million people, of which 11 million were pretty much all the professional software developers in the world. So, I mentioned software —
social skills are not one of those things you'd associate with people navigating computers all day, but in fact we found social skills — empathy, perspective-taking, communication skills, you could go on and on — were very predictive of the quality of people's work; in fact, as predictive of the quality of code written by software developers as of the number of sales made by salespeople. Yes, it was much less common for software developers to have super solid social skills, but when they existed they were just as predictive. So one of the misinterpretations of a question like that is: well, then we have to train everyone to do social jobs — you know, taking care of the elderly, which is a wonderful job but the economics are not very good, or greeting someone in a store — but in fact every job is improved by understanding other people, and that's a very important thing to remember. Yeah, okay, well, let's move on to section two, which we call: how could society benefit from AI?
We've covered a lot of these topics, but Malcolm has a question, so I'll come to you first, Peter. The question is: if a computer gives me a diagnosis, should it also have to explain how it arrived at that diagnosis? It's a really interesting question, and I think there are a couple of different levels. There is a specific question in the context of diagnosis, and there's a much more general question about these AI systems: how important is it that we understand why they are making a decision? And it's worth saying, by way of background, that many of the incredibly successful systems have no sense in which we can ask them why they made a decision.
We can simply measure how often they get it right, and there are very interesting questions about the extent to which, in different contexts, we value being able to understand the reasoning behind a decision, and I think those things tend to be context-specific. The question was explicitly, I think, about medical diagnoses, and as part of the Royal Society's work on machine learning, as Brian mentioned earlier, one of the things we did was reach out and talk to people — we had help finding out people's opinions — and in fact those opinions were very interesting in the context of medical diagnoses.
Let me tell you what they were — let's do the same experiment, except I can't quite see you, so imagine — it will be a bit hypothetical — and you have to vote for one of the two possibilities here. You are sick with something quite serious, and you have the option of having your treatment decision made, on the one hand, by, you know, the consultant at the local hospital, who is good at his job, and we know from a lot of data that he makes the right decision ninety percent of the time — so that's option one. Option two is you can have an AI system look at your symptoms and make a treatment decision, and again we know from tons of data that the AI system gets it right 97 percent of the time, except there's no way for you to understand the decision the AI system has made. So everyone has to vote — unfortunately, we can't see very easily where you're voting — so in that situation, where you're seriously ill, a bit hypothetical as I said: who would choose the doctor who gets it right ninety percent of the time? Raise your hand for the doctor. And who would choose the artificial intelligence system?
Wow, this is a very biased audience. Yeah — it's about a third to two-thirds. Yeah, so I think — how many of you are wearing a badge that says "I love C-3PO" right now? I mean, this is a Royal Society event, so it's a statistically literate audience. It's good that you remembered ninety and ninety-seven. I'm not done with my experiment yet, so let me finish. So, I think different people have different points of view, as we've seen. Now, if I give you a third option, which is that you can have your doctor, who gets it right 90 percent of the time, and who knows the result of the artificial intelligence system that gets it right 97 percent of the time, and can check its opinion — who would choose that? Yeah, so that one's pretty easy. So that's the sense in which these systems will probably, at least in that kind of medicine, augment what doctors are doing. Final version of the question — those of you who asked it, let me ask it in a different way. So here is something complex that is used all the time in medical systems: MRI scanners. They work quite well; they depend on some fairly sophisticated physics, and there are complicated algorithms that interpret the raw signal from the machine and turn it into something that is passed to the doctor.
I think it's probably fair to say that many of the doctors who do a brilliant job of reading the results of MRI machines or CT scans or PET scans or whatever don't understand all the details of the algorithms, and somehow that doesn't seem to worry us currently — we just know that they work well because we've tested them. So, as I said, it's an interesting question whether that ability to understand the decision really matters. Final version of my bit of audience participation: how often, in those complex situations, when the doctor says we should do X because Y?
There's some kind of short, simple statement — how often do you think you really understand the doctor's thinking and why the doctor made a decision? It's about two of you — some of you are probably doctors, so some of you understand. And so the question for us is when we should want something different — when we should impose different criteria on algorithm-based decision-making than on people's. I mean, people can give an explanation, but we know from our daily experience that most of the time the explanation is not the real reason; with people I interact with all the time, we sometimes experience the fact that someone's explanation comes after the fact, which isn't really an explanation — it's just a way of justifying what you've done anyway. Long answer, but I think it's complicated and we have to think about it a lot. Suchi? I think — building on what Peter said, although I don't entirely agree with him, because I have to disagree with him — so, this notion of, you know — I don't actually think we need an explanation; what we need is the ability to trust and the ability to work together.
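One way to make Peter's voting experiment, and Suchi's point about working together, concrete: if the doctor's and the AI's errors are at all independent, then the cases where they agree are far more reliable than either alone, and the disagreements flag exactly the cases worth human attention. Here is a toy Monte Carlo under deliberately strong assumptions (independent errors, a binary treatment choice — an illustration, not a model of real clinical practice):

```python
import random

def simulate_agreement(n=200_000, p_doc=0.90, p_ai=0.97, seed=1):
    """Monte Carlo sketch of the panel's thought experiment (binary decision).

    Assumes doctor and AI errors are independent -- a strong, purely
    illustrative simplification. Returns (fraction of cases where the two
    agree, accuracy on the cases where they agree).
    """
    rng = random.Random(seed)
    agree = agree_correct = 0
    for _ in range(n):
        doc = rng.random() < p_doc   # True = doctor picks the right treatment
        ai = rng.random() < p_ai     # True = AI picks the right treatment
        if doc == ai:                # binary choice: agreement means both right
            agree += 1               # or both wrong
            agree_correct += doc
    return agree / n, agree_correct / agree

frac, acc = simulate_agreement()
print(f"they agree on {frac:.1%} of cases; those cases are {acc:.2%} correct")
```

Under these assumptions they agree on roughly 88% of cases, and those agreed cases are right about 99.7% of the time — which is the statistical intuition behind preferring the third option, the doctor who can check the AI.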
It's that example — the lawyer example, the contract-reading example that we got earlier from Vivienne — but also the same in medical diagnosis. You have a way of knowing — you know this when you interact with a trusted colleague: you understand how they work, you understand how they think, they tell you something, it helps you build your own thinking and allows it to evolve, you say something that they react to, and that notion of collaborative reasoning is what we need from such systems. Something that's really exciting and powerful here is that, you know, these algorithms can sift through tons and tons of data to determine, for any given scenario, what has happened to other patients who were in the same scenario, what was done and how they responded, and we can summarize that in a very nice, concise way. So what this means is that we can find a way to develop complementary expertise, where we can use that knowledge to collaborate and come to a decision. I think that's where we want to be, but this is open science and something we are actively working on. Yes — so, I can give a very personal answer here. Seven years ago, just before the Thanksgiving holiday in the United States, my son got sick, and at first it wasn't clear what it was, but it was a Sunday, and by that Wednesday he had lost 25 percent of his body mass and couldn't stand up, so we rushed to the hospital, and it turns out — in retrospect it should have been obvious; you could actually smell what was wrong with him. He had type 1 diabetes, and his sweat was sweet. Now, I've got some fancy-schmancy degrees, but it turns out being a neuroscientist doesn't mean you know anything about diabetes per se, huh. So it was a very hard, long four days in a pediatric intensive care unit.
My wife and I are scientists, so the moment we got out we recorded everything: we were filling in a Google Doc, regularly recording everything he ate, whether he had a cold that morning, what his blood glucose readings were, what his heart rate was — everything. And then, before going to meet his new endocrinologist — I don't know how many people here get an endocrinologist, but that's a fun part of your life — and she's someone I really respect, and she's still his doctor — I emailed her all this information, thinking, you'll love this. And we got no response. So I'm wondering what's going on.
I mean, I love data — surely you all love data — what's wrong with this woman? Then I realized what it was: so I printed out a spreadsheet about an inch thick, brought it with me, and left it on the desk in front of her, and they were angry. This was not what they wanted: what am I supposed to do with this data? So instead they gave us a little photocopied sheet. Diabetes care has improved even in the last seven years, but at that time they gave us this little photocopied sheet; it had five days, three boxes for each day — morning, afternoon, night — write a blood glucose level in each box. Fifteen numbers. We had 15,000 numbers, but a human can't really process 15,000 numbers.
But I'm going to admit it — and forgive me if I'm reading the room wrong, but this is my genuine feeling — you've got to be kidding. I make models of the brain; are you telling me that diabetes is more complex than the brain? So that night I bought a book on endocrinology, and the next morning we hacked all of my son's medical equipment — it turns out we violated several US federal laws — and I redirected the data to my personal server, and then took a predictive coding model of the retina — I don't know if you realize it, but your retina literally predicts the future — and I repurposed it for diabetes. The details don't matter, except that it really helped — I mean, it helped profoundly. It allows the insulin pump to make its own decisions, and there are all kinds of implications; it was wonderful, and we want to give it away, and all kinds of things. I must admit that I don't care what my endocrinologist has to say about my son's treatment; I care what my model has to say. Her job is all the things that are not day to day, because the day to day — literally every five minutes you get a new number and you make a new judgment and you update the model — and there's a lot of very exciting new work in this space; we've been able to do the same thing with bipolar disorder, and work on Parkinson's. So what I want to say is that I really do think models should be explainable, both in medicine and in a variety of other domains. I need to be able to tell you why I didn't hire you, or maybe why I did, but that doesn't mean I have to understand the model; it doesn't mean we have to investigate the sometimes incredibly complex interrelationships of a very, very large system. But at some point, if you have been denied a loan or a job, or if a judge doesn't believe you for some reason that will not be revealed, you should have the right to understand why, and if they cannot provide you with a reason, you should be able to push back. And in the specific case of medicine — this is why — there is no single right answer for treatment, so you want to understand what it means to get this diagnosis: what type of cancer do I have, what does it imply, how do these things interact? I think there are interesting ideas about how AIs can explore vast spaces of possible treatment plans and then link a bunch of them together, but somewhere, I think, there should be a person, and there should be a why, and it should be something you really understand — because otherwise we have to think about what will happen over time when we get used to the idea that there is no intelligence, as we understand it, in a system that makes these decisions. It's interesting, isn't it?
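For flavor, the kind of prediction described in the diabetes story can be sketched very simply. This is emphatically not the speaker's retina-derived model — just a least-squares trend line over recent readings, roughly what a CGM "trend arrow" does — but it shows how 15,000 numbers become tractable once a model, rather than a person, reads them:

```python
def predict_glucose(readings, minutes_ahead=30, interval=5):
    """Extrapolate near-future blood glucose from the recent trend.

    A deliberately minimal stand-in for the predictive model described in
    the transcript: an ordinary least-squares line fitted to the most
    recent readings, extrapolated forward.

    readings : recent glucose values in mg/dL, one every `interval` minutes.
    """
    n = len(readings)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(readings) / n
    # Least-squares slope of glucose vs. reading index.
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, readings)) \
            / sum((x - mean_x) ** 2 for x in xs)
    steps_ahead = minutes_ahead / interval
    # Evaluate the fitted line `steps_ahead` readings past the last one.
    return mean_y + slope * (n - 1 - mean_x + steps_ahead)

# Glucose climbing ~2 mg/dL per 5-minute reading: expect ~+12 in 30 minutes.
recent = [110, 112, 114, 116, 118, 120]
print(predict_glucose(recent))   # -> 132.0
```

A real closed-loop system layers insulin dynamics, meal absorption and safety constraints on top of this kind of forecast, but the core idea — predict ahead, then act — is the same.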
A lot of the topics we're discussing are not specific to AIs. I mean, in this case it's the human need to understand when a mistake is made — which could mean an incorrect diagnosis — even though 99 percent of the time the expert system might get it right, as opposed to 90 percent for humans. We want to know why the mistake was made, right? And the brutal statistical approach would be that it doesn't matter, because you're right more often. I mean, you see this with self-driving cars as well: the point is that self-driving cars may be much safer than human-driven cars, but when a car hits and kills a pedestrian, we want to know what the decision-making process was — or maybe it's just because we've all read something nasty about Elon Musk. Yes — in a sense it's not particularly logical, and it's not about AI, right? We're really talking about the need to know why a mistake might have been made, and I don't know whether that desire to know is, in that case, logical. In other words, I'm conflicted about this choice: a system that is 99 percent accurate, versus a system that is only 90 percent accurate but can make up an explanation — when, as Peter said, we'd have no way to evaluate whether that explanation was correct — versus a different system that just says "I don't know". Although I disagree.
In fact, I think we can hold these algorithms to a much higher standard, and there's a lot of creative work going on around producing explanations that can allow you to collaborate with the machine. But even in that terrible scenario where the system just said, you know, "this is my decision" — unless the problem is that when it makes a mistake it does so very badly and very harmfully — I would say we should be willing to operate with the 99-percent-accurate machine. Yes, I agree, but — so I was thinking that the heart of this question is whether the thing has to explain how it got to the answer. Well, maybe that's how it is.
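The explanation research Suchi mentions includes simple, model-agnostic tools that produce explanations you can actually evaluate. One of the most basic is permutation importance: shuffle one input feature and measure how much the model's accuracy drops. A toy sketch (the two-feature "diagnosis model" here is invented purely for illustration):

```python
import random

def permutation_importance(model, X, y, feature, n_rounds=20, seed=0):
    """Model-agnostic explanation sketch: shuffle one feature's values
    (breaking its relationship to the outcome) and measure the accuracy
    drop. A larger drop means the model leans on that feature more."""
    rng = random.Random(seed)

    def accuracy(rows):
        return sum(model(r) == t for r, t in zip(rows, y)) / len(y)

    base = accuracy(X)
    drops = []
    for _ in range(n_rounds):
        col = [row[feature] for row in X]
        rng.shuffle(col)
        shuffled = [{**row, feature: v} for row, v in zip(X, col)]
        drops.append(base - accuracy(shuffled))
    return sum(drops) / n_rounds

# Invented toy "model" that only ever looks at one of its two inputs.
model = lambda r: r["glucose"] > 126
X = [{"glucose": g, "age": a} for g, a in [(100, 30), (150, 40), (90, 70), (200, 55)]]
y = [False, True, False, True]
print(permutation_importance(model, X, y, "glucose"))  # sizable drop
print(permutation_importance(model, X, y, "age"))      # -> 0.0 (model ignores age)
```

It won't tell you the mechanism inside the model, but it does give a checkable answer to "which inputs drove this decision?" — the kind of explanation you can evaluate rather than just take on faith.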
That's the point, isn't it? What we're saying is: fine — if it gets it right most of the time, and more often than the human, does it matter? Is that the consensus? I think for some people it will matter a lot, and for others it won't, and you know, people may want to make their own decisions there. I mean, as Suchi was just saying, there's a very, very active area of research within machine learning where we're trying to build systems that explain much better why they got to an answer, and it should be in everyone's interest that that research is supported and strongly developed. So it's fascinating — Peter and I were actually at a joint Royal Society and National Academy of Sciences event in Palo Alto, and there was a young man making the very provocative statement that the explainability of AI is, at first glance, nonsense, and he gave all these very good reasons for it — all of which apply to us: every one of the reasons he gave for not bothering to understand AI applies completely validly to understanding our own minds. And I hope we don't take that as a political position of why bother understanding why we make decisions, because it really seems very parallel to me. Okay, so let's take another question, from Jeremy — maybe we'll start with you, Suchi. The question is: how much does Google know about us, and should we be worried about them? Oh boy — who is sponsoring this event again?
Google — someone paid for my flights, I guess. Well, I mean, that was the question, but let's not make it about a specific company. What we're saying is that big companies have a lot of data about us in general. Apple — Apple also has a lot of data, and it has nothing to do with this event — that's a good point. But it's a good question, right? Health data is in many ways the most personal piece of information — I guess that's the point. Yes, I think it's a great question — a very interesting and tough question. I don't know if I can really answer it with five seconds of thought, but I'll try.
I am concerned — I think we should be concerned — if a small number of corporations or decision-makers have access to a lot of data and to a lot of the skills and people needed to develop these kinds of approaches. So I think it's important for us to decentralize, but I don't think that should come at the cost of not having access at all. In other words, if I had to choose, I'd choose a world where we could use machine learning — where we could tap into a pool of people who could develop really smart algorithms that could then allow us to improve healthcare.
I would absolutely support that and prefer it to a world where we didn't have it, but my preference would be that we decentralize the development of these systems, expand our education, make funding widely available, and build systems where we know the data is not retained or linked by a single organization — and, uh, the people involved. Yes — and this is quite specific, isn't it, because Google as a company provides services directly to the NHS and the like here in the UK. Yeah, and I think there are really interesting and special questions about healthcare data. I mean, it's true that Google and a lot of the other tech companies have a lot of information about us, and in a sense that's because we gave our consent.
Now, you know, we tick a box after pages and pages of consent forms we don't read, but I think everyone thinks differently about healthcare and about data in the healthcare system. Certainly the health systems themselves do, and I know that in our case in the UK it's something that the NHS and our politicians think about a lot. There is huge potential in NHS data: analyzing it in ways that lead to better efficiencies and better outcomes for people, so that people live longer, avoid complications, receive the right medicines and so on. The potential is huge in the data of large health systems, and we're in a particularly special place in the UK because we have a single-provider system. So the potential is there, but no one wants to just give that data away to companies, for two reasons. First, I think, because we naturally think of the NHS as a kind of shared resource of ours — something that people in the UK care a lot about, and rightly so — so any use of that data must have benefits both for the NHS and for those of us in the UK who are patients within the NHS system. But also because we all appreciate that healthcare data is special, and even more private, so there needs to be a serious debate about what the right levels of safeguards are, and what the right levels of anonymization are, that could allow that data to be available so that exactly these potential benefits can accrue. And I think the other thing to say, which is obvious, is that I don't think anyone would say it should only be the big tech companies that benefit from this — there will be a lot of small startups, I run one myself, that can also play a role. So when you think carefully about healthcare data, we have a fantastic opportunity in the UK, but we absolutely need to get it right: understand the right way to respect privacy and anonymity, the right way to have a dialogue with those of us who are, effectively, all patients in the NHS, to make sure we are happy with the benefits, do everything we can to minimize the possible downsides — and they are real, but we can do things to minimize them — and then make sure we are satisfied that the advantages are justified. And my view is that the potential is huge, but we have to do it right. Go ahead — just to add to that, I could imagine, and this is certainly true in the US, that large health systems prefer to interact with large corporations, because they inherently think those companies can probably store the data more securely and keep it private, and as a result, you know, they're implicitly creating a scenario where they're locking the data up with one, two or three large organizations.
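One concrete building block behind the safeguards Peter describes is pseudonymization: replacing direct identifiers with a keyed hash before data leaves the health system. Here is a sketch (field names invented purely for illustration), with an important caveat in the comments:

```python
import hashlib
import hmac

def pseudonymize(record, secret_key, id_field="nhs_number"):
    """Replace a direct identifier with a keyed hash (pseudonymization).

    The same identifier always maps to the same token (so records can be
    linked for research), but the token cannot be reversed without the key.
    NOTE: pseudonymization alone is NOT anonymization -- combinations of
    the remaining fields (age band, postcode, ...) can still re-identify
    people, which is exactly why the panel stresses getting the wider
    safeguards right.
    """
    token = hmac.new(secret_key, record[id_field].encode(), hashlib.sha256).hexdigest()
    out = {k: v for k, v in record.items() if k != id_field}
    out["patient_token"] = token[:16]
    return out

# Invented example record.
record = {"nhs_number": "943 476 5919", "age_band": "60-69", "hba1c": 48}
safe = pseudonymize(record, secret_key=b"held-by-the-NHS-only")
print(safe)  # clinical fields intact, raw identifier replaced by a stable token
```

Keeping the key inside the health system is the design choice that matters: researchers can link a patient's records over time without ever holding anything that identifies the patient directly.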
I went to school in California, in startup land, and you know, historically my experience has been that some of the most innovative and disruptive ideas come from small groups of people who are mission-driven and completely committed to making a difference. And as a result, I think in this area at least, what I see is this mismatch in the potential, where the ability to create change sits with small groups — small companies that may be well equipped — but the large companies, you know, are very skeptical or afraid to work with them. So I want to expand on this a little bit.
Healthcare is great, but in my opinion it goes beyond that, and it also expands beyond the data issue. Data has been the big topic here — the idea of GDPR is to protect people's data: health and education data, data about children. It's the data everyone becomes very protective about, all over the world — you know, even in China, for example, the one place where they are really protective with data is around children. But here's the thing: you actually have very little ability to exercise a right over your data — and yes, I realize that through GDPR there is a complaints mechanism, but right now, if someone misuses your data, what are you going to do about it? File a lawsuit, so that in six months you lose in court? That's not very satisfying. Right now, one of the biggest advantages that these big companies have — and I don't think any of them are bad;
I've done collaborative work with, and been recruited by, almost all of them — I won't say which ones I never said yes to — but they're not bad; they're good guys for the most part. It's that they don't just have masses of data that no one else has at their disposal; they have a lot of talent, because they have raided the university system — and I'm not here to tell anyone they can't get a big paycheck, but when Google gives someone a million-dollar bonus just to keep them from going to another company, that tells you something about how they value talent — and they also have an infrastructure that is hard to find anywhere else. Even for my own companies,
we cannot run high-performance GPU systems at scale on our own — we don't build those things; we use Google's. So they have a monopoly in multiple dimensions in this new AI space, and that really worries me. Now, we can include China and the United States as formal entities as well: a very small number of entities basically control all the AI-related computing power in the world. I could build a company, and my best hope is to get bought; the likelihood of me becoming a massive new competitor to Google or Facebook is practically zero.
Most of my work is philanthropic, but I'm still under the same limitations. I have thoughts on this, but I just want to lay out — rather than my own personal philosophy — the question of how we as individuals can exercise our own rights over how decisions are being made about us. Not just control of our data per se, but a right, in the same way that we have the right to judicial review, in the same way that we have the right to seek a second opinion from a doctor: I should have a right over how I am targeted with ads; I should have some say in how these systems interact with me, and right now none of us has any. And I'm about as empowered as anyone in this space, as you heard through my diabetes story, but I'm not going to get involved in that, because I don't have enough time in my life to build a separate AI to deal with everything. And I think we really need to think — right now our houses are full of these little embassies: Amazon embassies, Google embassies, Apple's, Baidu's, Alibaba's — our phones and our smart home systems — and they operate under their own laws even though they are in our homes. Wouldn't it be great if we could think about how they could operate in the public interest, in the public trust? And I'm not necessarily talking about governments, because I include them as part of that concentration of power. How do we actually exercise our right to make decisions about our own lives? Okay, well, um — we could talk about that; it raises a lot of issues, but we have about 20 minutes or so left, and a lot of questions and a whole section to work through. I just want to say very briefly that there are a couple of questions here related to the future, so perhaps we could briefly address them. One of them looks to the future of AI in a biological sense: the question is whether artificial intelligence will be created at such a level that it may be able to fight cancer and the various diseases that may develop.
I assume the question is about the potential in biology, so briefly: how far are we from AI being used in that sense? It's already being used. In science, in biology, cancer, the biomedical sciences, there are increasing amounts of data; our ability to read exquisite details of biological systems is unparalleled and exploding rapidly. We can study individual cells. In a cancerous tumour, for example, we can see what happens differently in each of the individual cells, and how some of them have properties that will ultimately lead to resistance to a particular drug regimen, and so on. That area of science is enormously rich in data in a way that simply hasn't been true for most of the history of human biology, and artificial intelligence systems will be incredibly useful in helping us, as scientists, make sense of the data and learn things from it. That is already happening, it will continue to happen, and it will be one of the great drivers of progress in the fight against cancer and in improving medicine.
More generally, and that question was from Jaffa, by the way. Yeah, I agree with Peter on that. I think of it as a spectrum from discovery to delivery: discovery being new insights into human biology, how our bodies work, how we characterise disease, all the way to delivery, where we will discover new and more efficient ways to get the right medications and therapies to the right people. The whole spectrum has been going through a transformation over the last five to ten years.
I think in the next 10 to 15 years we will see some of the most interesting discoveries. Even in our own work we have seen disease areas where doctors struggled to diagnose, but now, working with machines, they can not only diagnose earlier and get treatments to the right patients, they are actually seeing improvements in outcomes.

I just want to ask one last question in this section. It's a fascinating question, almost a Blade Runner question in a sense. It's from Will, who asks whether in the future there will be a section of society, an upper echelon, perhaps the very rich, who completely avoid AI in favour of a more expensive human experience.

Yeah, the ubiquity of AI. This is the version of wanting to use a travel agent when you could just book online. It may be even deeper than that, I suspect. In the Royal Society's work we try to engage people and get their opinions, and one of the things that really worried people about the growth of AI was a kind of depersonalisation of experience. It's a legitimate concern, something people in general worry about, and I think we should think about it.
Actually, I think the opposite: AI will make experiences more personalised, not less, because by knowing a lot about you it can relate to you. Imagine when you go on a trip: you want to find people who are a lot like you, see what they enjoyed, and based on that work out what would be fun to do, and right now it's not easy to figure out who those people are. Again, I've done a lot of work in education, and education has a term for this, personalised education; the idea is to use technology, and in particular artificial intelligence, to give children just what they need. But this is what I mentioned before, the difference between our aspirations and the way it's actually used. The term personalised education, frankly, in common technological terms, just means where you are on a fixed path, and in a sense it deeply depersonalises the experience of education by putting everyone on the same educational path; it's "personalised" only because it places you at some level on it. If you're different from that kind of template trajectory, it doesn't take you into account at all. It is possible for us to build richer models, and I do a lot of work on that myself, but the truth is it's much easier to build a much sillier model that doesn't actually personalise anything and just use "personalised" as a marketing term, and that gets really disturbing and disappointing. But I actually want to offer a contrary proposition.
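The travel example above is, in essence, user-based collaborative filtering: find people whose tastes overlap with yours and weight their favourite activities by that overlap. A minimal sketch follows; the users, activities, scores and similarity measure are all invented for illustration and do not reflect any real product.

```python
# ratings: user -> {activity: score from 1 (hated) to 5 (loved)}
ratings = {
    "ana":   {"museum": 5, "hike": 2, "food_tour": 4},
    "ben":   {"museum": 4, "hike": 1, "food_tour": 5, "opera": 4},
    "chloe": {"museum": 1, "hike": 5, "surfing": 5},
}

def similarity(a, b):
    # Toy similarity: average agreement on activities both users rated,
    # scaled so identical scores give 1.0 and maximally different give 0.0.
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    return sum(1.0 - abs(a[k] - b[k]) / 4.0 for k in shared) / len(shared)

def recommend(user, ratings):
    # Score each activity the user hasn't tried by summing other users'
    # ratings of it, weighted by how similar those users are to this one.
    me = ratings[user]
    scores = {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        sim = similarity(me, theirs)
        for activity, score in theirs.items():
            if activity not in me:
                scores[activity] = scores.get(activity, 0.0) + sim * score
    return max(scores, key=scores.get) if scores else None

print(recommend("ana", ratings))  # → opera
```

Here "ana" gets "opera" because "ben", her closest match on shared activities, rated it highly. Real systems run the same shape of computation over millions of users with better similarity measures.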
I understand this idea very well, and people have put it out there: imagine a day when only the rich can get a haircut from a real human being, when you can exercise your status by employing people. "I have a real assistant, not Alexa." I actually think it will be exactly the opposite. At the beginning I answered a question about this idea of artificial general intelligence versus the kinds of things we're experiencing now, and I said I didn't know when we would invent such a thing. However, and this is a bit provocative,
I think I can give you a very rough timeline of when there will be people who are artificially more intelligent than other people. One of my fields of research, in fact my main academic field, is what is called neuroprosthetics: the literal fusion of computer systems and people. We have three collaborations. One is working with people who are locked in, who appear to be in a coma; we are building systems that allow them to communicate with the outside world. Another is working on optimising athlete performance, which is less interesting to me, but I can learn things through it. The last collaboration is perhaps the most provocative: a company called Humm, and we're helping them develop a technology that already exists.
You can reserve it right now. It's a wearable headband, so it doesn't put anything in your brain yet, but we're getting there, so I'll take volunteers; survival rates are average. With the Humm band, you flip a switch and working memory increases by about 20 percent. I don't know if anyone has ever played the game Simon, where you press buttons and colours in a sequence. In this audience most people will be able to get to about five, six or seven items before you can't remember what the pattern was; we flip the switch and add one or two to that memory capacity. That may not sound so exciting, but people with a higher working-memory capacity literally live longer, go further in their education, and earn more money. That starts at the individual level, but think about the population level: if some of you are sevens and some of you are fives, the sevens will have much better lives, and now we can build a device that turns you into a seven. That's probably not what this device should be used for, at least not long term. Believe me, I'm not selling you this; it's steroids for your brain, and God knows what the long-term implications are. But we're helping to develop it because, for example, children with traumatic brain injury, who maybe fell off their bike, were at one point taken away from who they could have been, so we are developing an educational intervention to be combined with it.
Fifteen minutes a day, we flip the switch, do a deep literacy and maths intervention, and try to put the pen back in their hand to write their own life story. If we say no to technologies like that, then we are saying no to changing the lives of all these children; but by saying yes, we are saying that at some point, maybe in the not-too-distant future, we could be fundamentally changing what it means to be human, and the fundamental question is: for whom? If we talk about the rich and this type of technology, it is inevitable that there will be an effort to get first access, and it will be a sweet-sixteen gift for their children. It actually works much better if you do it when they're babies, and I love experimenting with helpless orphans; that's another one of those clips that's going to be cut. But is this a human right, like a vaccine, or is this something the rich can get before anyone else? I shouldn't be the one to make that decision, no matter what I think of myself.
This should be a decision we all make, and right now I feel that isn't happening.

Yeah, or even the market, which is the point of the question. Okay, let's move on to the final section, which is titled "How do we get there?", so, supposedly, the path forward. There's a question from Gary saying that AI could be a brilliant opportunity to expand our knowledge if there were restrictions to stop it becoming proprietary to large corporations, which actually intersects with what you just said, especially the regulatory framework, so maybe we'll expand to that.

Sure. I certainly think there's a mix of, you know, Hollywood movies painting weird abstractions that are not actual representations of AI but instead conjure up an image of the apocalypse, Terminator and so on, and I think that distorts our understanding of what we need to regulate and how. Let's take a step back: how is mathematics regulated?
I mean, that's a bit of a strange question. Should we regulate mathematics? How would we? If we go one level deeper, into the use of mathematics, and say it is used to decide whether or not to insure someone, we can now start to think about it in a much more concrete way: for example, that we should not deny insurance to someone with a pre-existing condition. In the same vein, I think we will have to go several levels deep to understand the very specific areas where AI will be used and how it is used, and then determine what the appropriate regulatory framework should be. And we need broader education, to engage people from other fields to think about the ethics and the consequences of its use in a variety of different scenarios.

Yes, though is this really a more complex question than for any other new technology? We regulate everything: we regulate airplanes, we regulate cars, obviously you have to.
So is there a specific problem here that makes AI more complicated to regulate? I totally agree that we need to think very differently in different contexts, and some of those contexts already have a good regulatory framework. If artificial intelligence technologies were used, and they aren't currently, but if they were used in flying airplanes, there is an incredibly strong regulatory framework that would govern testing them. If they are used in medicine as new medical devices, again there is a framework; it may or may not be perfect for the purpose, but it exists and it thinks about how to do regulation. We need to think differently because the costs of getting it wrong are very different for blowing up a plane than for recommending the wrong movie on Netflix, and we need to respect that and look at the different contexts. Some of those contexts have good regulatory frameworks; some, around education say, which Vivienne will have a better sense of, may or may not; and others don't have them at all, and we need to start thinking about them. But we should approach it differently in different contexts. You know, there are guidelines and regulatory frameworks; principles are being established.
In fact, I met with someone today from the UN; they have a great council saying we should have some principles about how data and artificial intelligence are used in the world. I generally appreciate this sort of thing, but frankly I don't know how it clearly guides people making decisions, particularly decisions that are very technically complicated and very difficult for anyone else to understand, about how they deploy these types of technologies in the world. One of the things I've heard is that we should have ethics classes in computer science schools, because that has worked so well in business schools. I actually think this is not a magic bullet for anything. More than that, and I'm going to work through a metaphor here that I used before: artificial intelligence is a tool,
an exquisitely powerful and sophisticated tool. It can't make decisions on its own; it can't solve your problems for you. If you don't understand the solution, it's very unlikely to find it, in my opinion, but it's immensely powerful and it completely changes the economics of those solutions. The problem as I see it, and I'm not here to criticise computer science schools, but I'm going to say this, is that we have deployed this army of machine learning experts, mostly very young and certainly very male, who have spent their entire short careers learning how to build a hammer but have never actually built a house. This is going to be a bit of a caricature, but you'll get the idea.
They are given these perfect data sets, like ImageNet, and asked to solve pre-specified problems, like naming all the breeds of dogs in the images. What they aren't given is a four-year-old with diabetes. What they aren't given is: here's Amazon's hiring history, build a deep neural network that recruits the right kind of people. Well, Amazon tried to recruit me to do that, and I told them it wouldn't work, but they went ahead and did it anyway, and if anyone read the news about this, it wouldn't hire you if you used the word "women's" on your résumé. Take that to say whatever you want about Amazon's hiring history, because that's where it learned it from. This is one of the most sophisticated companies in the world, with an army of machine learning experts brought in-house, and they ended up having to drag this thing out behind the barn and shoot it in the head, because it was irresolvably sexist; for a year they tried to fix the problem by manipulating the data sets and manipulating the algorithm, and it didn't work. So I want to be very careful: maybe ethics classes are part of the solution, but not the whole of it, and maybe another part is actually training people how to solve problems instead of how to build models.
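To illustrate how a model can inherit bias purely from its training labels, here is a minimal, entirely synthetic sketch. The data generator, the features and the 0.3 "bias penalty" are all invented for illustration; this is not Amazon's system or data. A plain logistic regression trained on historically biased hire/no-hire labels ends up assigning a negative weight to a feature that has nothing to do with merit.

```python
import math
import random

random.seed(0)

# Each synthetic "résumé": [bias term, experience score, mentions-women's-team flag].
# Historical hire labels were produced by biased decisions: equally qualified
# candidates with the flag were hired less often (the -0.3 term is pure bias).
def make_resume():
    exp = random.random()                      # qualification signal in [0, 1]
    womens = 1.0 if random.random() < 0.3 else 0.0
    p_hire = 0.2 + 0.6 * exp - 0.3 * womens
    label = 1 if random.random() < max(0.0, p_hire) else 0
    return [1.0, exp, womens], label

data = [make_resume() for _ in range(4000)]

# Logistic regression fitted by batch gradient descent.
w = [0.0, 0.0, 0.0]
for _ in range(200):
    grad = [0.0, 0.0, 0.0]
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
        for i in range(3):
            grad[i] += (p - y) * x[i]
    for i in range(3):
        w[i] -= 0.1 * grad[i] / len(data)

print(f"weight on experience:       {w[1]:+.3f}")  # positive: merit signal
print(f"weight on women's mention:  {w[2]:+.3f}")  # negative: inherited bias
```

Note that "fixing" this by deleting the flag doesn't help when other features correlate with it, which is roughly why manipulating the data sets and the algorithm failed.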
We're almost out of time. There was one more question from the audience, asking what impact the growth of artificial intelligence could have on society; I think we've pretty much answered that with every question tonight, but maybe, in a phrase, positive or negative, what impact do you think it will have? You don't have to respond; maybe I'm putting words in your mouth. I believe it will impact the world in a positive and dramatic way in almost all spheres of our daily lives, if we achieve it, if as a society we do it well, pay attention to it, and actively engage as stewards.
I think the potential is enormous and positive. It's on all of us to insist that AI is used to improve people's lives, and I think it can have a positive influence on the world, or I wouldn't be doing this work. Right now, though, I'm not so encouraged.

Okay. To end, I can merge the two final questions, one from Minhaj and one from Chris, because they are very similar. Does the panel think we will ever achieve general AI, or will it remain fiction? And a related question that follows from that: if the answer is yes, do you think we will always have control over AIs in our society, even if they start to demonstrate intelligence higher than that of humans?

This one is exciting. Since I study brains and machines: fundamentally, I think we are a phenomenally complicated computer system, nothing like any of the AIs that anyone has put together on an actual computer, but in that sense there is, theoretically, no reason why we can't do this. In practice it is a different matter; we will need a whole new set of models. Let me share something really fun. Once, at a conference, I had the opportunity to debate on stage with a man named Ray Kurzweil about artificial intelligence. He wrote a book called "The Singularity Is Near" about the emergence of superintelligences, and he thought it would be great, since they would make much better decisions; they would never vote for Brexit, sorry, or Trump or whatever. Okay, clearly not everyone agrees with that. But we had this kind of debate about superintelligences, and one of the things many people don't even think about is: if we invented something like that, would it even care about us?
Would it even have a conversation with us? Where might we actually need that superintelligence? Not to drive a car; we already have dumb intelligence that can do that, both we and the existing systems. Maybe to manage Britain's entire transport network, something that has to run this massive distributed system and optimise all the pieces. It doesn't have two eyes, it has millions and millions of them, and ears, and bodies scattered everywhere. What on earth would it have to say to us? It wouldn't be strange for it to understand that we are intelligent; it could infer that we are.
But it manages transport and we manage our lives, and there really is nothing to exchange. We want to think that it would desperately want to talk to us, that it would be just like us, but I think that idea is divorced from a lot of research on what is called embodied cognition, and on how much of who we are is the very body we inhabit. Well, that is a tremendously different body. I think, to echo Vivienne's point, we shouldn't assume these technologies will be like us in any sense.
In the early days when people wanted to fly, and I'm sure you've read about this or even seen paintings of it, people would stick feathers on their arms and flap them about, and they weren't very successful at flying. We eventually came up with a technical solution to flight that is really different, in many crucial respects, from the way it happens in nature, and the same thing is very likely true of artificial intelligence; the systems that exist now already do it differently from the way we do it. As for general intelligence, will that ever happen?
Sometimes I think it would be brave to rule it out, but fortunately there is still a long way to go. I think we should try to understand what AGI is. I bet if we interviewed ten experts in the field they would give you very different answers about what artificial general intelligence is; in general everyone will say it's much smarter than whatever we have now. So I think part of the challenge is this: as soon as we start to understand something, we call it AI, and whatever we don't understand, that's AGI, so it's a strange thing to define. We have algorithms, and we will continue to build algorithms to solve problems. I don't think there's any evidence of a superhuman or supernatural algorithm exhibiting the kind of behaviour in which, you know, we're often humanising it from the start.
It's very easy to understand what such a system is doing in its search space, but we go back and project human qualities onto it: ah, it's thinking, it's taking a step back, it's coming back, it's trying to trick you, when all it's really doing is evaluating board configurations to figure out the right move.

What I find interesting is whether you would have to choose to build one. As you suggested, if you have a system that runs the global transport network, say, then I guess what a lot of people fear is the science-fiction scenario where the thing is so smart that, while managing the transport network, it decides to run everything else on its own as well. But presumably, if we're talking about an AGI, if it's possible to build one, the question is: would we have to build it with the intention of building it, or could it somehow emerge from a lower-level complex system?

I'll say this, and it's literally a debate going on in our field right now: are the current technologies, specifically deep neural networks, enough to one day create AGI, or do we need to invent something completely new? I happen to be in the category that thinks we need something dramatically different, and part of my belief, which may hopefully allay some concerns, is this: you can add more processors to our existing systems, you can teach them more things, you can show them more newspapers, you can play them the BBC on a constant feed, everything in the world, and they will never wake up. They will never have an opinion on the issues of the day; they will never approve of Trump, however much smarter than us they're supposed to be. Those kinds of things aren't going to magically emerge; in some sense, your toaster will never wake up and threaten our existence. But could we invent something new that leads to this? Again, I don't see any theoretical reason why it wouldn't be possible; it's just that I don't know what form it takes or what infrastructure it requires. Let me give you an example. We now have computer systems that control the basic utilities, like water and electricity. What if we had a different algorithm or computer that came in, interfered, and took down a node, clogging it with traffic or hacking the system?
Would you call that AGI? That particular example is very possible today: in a system where the points of failure are very concentrated, you can attack those points of failure and the system can go down, and you could easily attribute that to AGI. But I think all it is, you know, is computers optimised against an objective function, and we program them to do that. One of my favourite examples of a system that wakes up, and I'm blanking on the author's name, so if anyone gets the reference, look it up, is by a Scottish writer who wrote a murder mystery. It starts purely as a murder mystery, and then it turns out that what all the victims have in common is that they run spam: they are all a group of spammers. An anti-spam bot, trained to be more and more sophisticated in the constant game of cat and mouse against the spammers, reaches sentience and realises that the way to stop spam is to kill the people who produce it in the first place, and that's what the police eventually discover. And again, I don't think something like that is in the works; it was a piece of fiction. Right, yes, pure fiction. But there are some things that are really exciting and don't require us to reach AGI, and some of them are a little scary. No artificial general intelligence is required to build autonomous weapons, to program a small drone with a bunch of C4 packed into it to literally recognise my face and just zoom in as fast as it can: a smart bullet, for lack of a better description.
It does not require artificial general intelligence to build technologies that keep autocracies in power. The first professional system I worked on twenty years ago, my introduction to machine learning, was building real-time lie-detection systems for the CIA. Now, that's very morally grey; needless to say it was incredibly cool. It just read people's facial expressions. And by the way, we were later able to use those same algorithms, in one case to build a system for Google Glass that could read people's facial expressions so that autistic children could learn to read them, and in another to reunite orphaned refugees with their extended families in refugee camps around the world. So it's not always so clear what is good and what is bad technology. But I will say this: some of those algorithms we developed twenty years ago are all on your iPhone X. I mean literally: Apple bought that lab as a startup, so we power all your face stuff.
So if you've ever made an Animoji, where you smile and talk into the phone and animate a cat, that cat is worth fifty million dollars, and the CIA funded it; in the end the innovations go into animating cats on phones. But at the same time those algorithms are used elsewhere. They also make toasters; I think one day we will surely have a smart toaster that might be too smart. The last thing, though, is that those same algorithms are now being used in western China in ways that I don't agree with. This was just academic work; we published our algorithms, and now they're out there, and it's not always possible to control these things. That's why establishing norms is so important.

And finally, to go back to the beginning: I mentioned the Turing test at the start. So what exactly is an AGI?
How do you define it? Is it something that passes the Turing test? Is that the definition, that we really perceive the thing as intelligent? And if not, what is the definition? Quickly, then. I think it's worth defining the Turing test a little more carefully. Here is what it is: there are, in effect, two black boxes, and I ask each of them questions and get answers back. One is a person, one is an AI, and both are trying to make me believe they are the person. I run this test over and over again, and the question is whether I can tell them apart better than chance.
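That repeated guessing game can be sketched in a few lines. This is a toy illustration with invented stand-in responders (arithmetic answers and a 10 percent error rate), not a serious test of intelligence:

```python
import random

random.seed(1)

# Toy stand-ins, not real conversational agents. Both "boxes" answer
# addition questions; the imitating machine slips up 10% of the time.
def human(q):
    a, b = q
    return a + b

def machine(q):
    a, b = q
    return a + b if random.random() < 0.9 else a + b + 1

def interrogate(box):
    """Ask five questions; guess 'human' only if every answer is correct."""
    for _ in range(5):
        q = (random.randint(0, 9), random.randint(0, 9))
        if box(q) != sum(q):
            return "machine"
    return "human"

# Play the game many times and measure how often the interrogator is right.
trials = 2000
correct = 0
for _ in range(trials):
    box, truth = random.choice([(human, "human"), (machine, "machine")])
    if interrogate(box) == truth:
        correct += 1
acc = correct / trials
print(f"interrogator accuracy: {acc:.2f}")
```

With these numbers the interrogator lands around 70 percent accuracy; a perfect imitator would drive that down to 50 percent, i.e. chance, which is what "passing" means in this framing.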
Lots of people pop up in the news saying, hey, Twitter or Facebook just passed the Turing test, because, you know, someone picked up a phone and heard a voice that sounded like a person. No: the system has to be actively trying to deceive you; that was Turing's actual setup. And I'm not saying it's a magic test that's right or wrong, just that it's a lot more nuanced than what people claim of it. If something could beat the Turing test while you're actively trying to figure it out, and over time it makes no difference, I'm not saying that's the magic ingredient for AI, but you have to admit you might as well have a conversation with one of them as with the other, and there's probably some point where, for specific tasks, the differences might not matter much. Do you agree? I just want to say one thing: you've sat me next to an extraordinary woman who, over a weekend, built a classifier to work out her child's diabetes and get at all the data.
She unites orphans with her families in remote parts of the world. She helps autistic children. I agree with that from time to time and it makes me sad to agree with it, but here. We're, I agree, maybe we should stop there, we should stop there, well, you can have the last word if you want, I think in that case of the Turing test that you described, if someone told me, here are the six things, or seven, or eight or nine or ten things that we are going to test the Turing test on. I could place 10 deep mines, each of which will work very, very hard on one of them and then I'll alternate between them so that the human on the other side is working against a very competent machine.
I could easily see a scenario where, you know, we pass the Turing test, and yet it isn't qualitatively very different from where we are now, so I'm not sure that definition of the Turing test is very relevant to defining AGI. I don't think we have a good one. That's where I started, with the premise that I have a hard time understanding what human intelligence is, so it's really hard to think about what AGI is. I think that's a very good place to end, so thank you all for submitting questions; we haven't been able to ask them all tonight, but we encourage you to continue the conversation.
Keep asking questions. The Royal Society website has a lot of information about this area if you're interested, but for now I'd just like to say: can we thank this wonderful panel? Thank you so much. Good night. I'm going to ask them what they thought.
