
Bill Gates on AI and the rapidly evolving future of computing

Apr 06, 2024
BILL GATES: It was impressive, amazing. After the biology questions, I asked it, "What do you say to a parent with a sick child?" It gave this excellent, very thoughtful answer, which was perhaps better than any of us in the room could have given. I thought, wow, what is the scope of this? Because this is so much better. KEVIN SCOTT: Hello, everyone. Welcome to Behind the Tech. I'm your host, Kevin Scott, Chief Technology Officer at Microsoft. In this podcast, we'll get behind the tech, talk to some of the people who have made our modern technological world possible, and understand what motivated them to create what they did.
Join me to learn a little about the history of computing and get some behind-the-scenes insights into what's happening today. Stay tuned.

Hello, welcome to Behind the Tech. We have a great episode for you today with a really special guest, Bill Gates, who needs no introduction given the incredible impact he has had on the world of technology over the last few decades. Bill has been working closely with the Microsoft and OpenAI teams over the past few months to help us think about what the amazing revolution we're experiencing right now in AI means for OpenAI, Microsoft, all of our stakeholders, and for the world at large.

I've learned so much from the conversations I've had with Bill over these past few months that I thought it would be great to share a little glimpse of those conversations with all of you listeners today. With that said, let's introduce Bill and start a great conversation. Thank you very much for doing today's show. I just wanted to start with maybe one of the most interesting things that's happened in the last few years in technology, which is GPT-4 and ChatGPT, and the work we've been doing together at Microsoft with OpenAI. By the time this podcast airs, OpenAI will have made its announcement to the world about GPT-4, but I want to set the stage: the first demonstration of GPT-4 outside of OpenAI was for you last August at a dinner you hosted with Reid, Sam Altman, Greg Brockman, Satya, and a bunch of other people.
The people at OpenAI were very eager to show you this because your bar for AI had been so high. I think the push you had given us all about what acceptable, high-ambition AI would look like was actually very helpful. I wanted to ask you, what was that dinner like for you? What were your impressions, what had you been thinking before, and what, if anything, changed in your mind after seeing GPT-4? BILL GATES: AI has always been the holy grail of computing. When I was young, the Stanford Research Institute had Shakey the robot trying to pick things up, and there were various logic systems that people were working on.
The dream was always some capacity for reasoning. The overall progress in AI until machine learning came along was quite modest. Even speech recognition was barely reasonable, and then we had this gigantic acceleration with machine learning, particularly in sensory things: recognizing speech, recognizing images. It was phenomenal, it just kept getting better, and scale was part of that. But we still lacked everything that had to do with the complex logic of being able to, say, read text and do what a human being does, which is to, quote, understand what is in that text. As Microsoft was doing more with OpenAI, I had a chance to look at it independently a few times, and they were generating a lot of text.
They had a little robotic arm. The first generation of text did not yet seem to have a broad understanding. It could generate a sentence that says Joe's in Chicago and then two sentences later say Joe's in Seattle, which in its local probabilistic sense was a good sentence, but a human being has a broad understanding of the world from the experience of reading and understands that that cannot be. They were excited about GPT-3 and even early versions of GPT-4. I told them, "Hey, if you can pass an AP Biology exam where you answer a question, or a set of questions, that's not part of the training set, and you give fully reasoned answers, knowing that a biology textbook is just one of many things in the training corpus, then it will really catch my attention, because that would be a big milestone.
Please work on it." I thought they would disappear for two or three years, because my intuition had always been that we needed to understand knowledge representation and symbolic reasoning in a more explicit way. That we were one or two inventions short of something that was very good at reading and writing and could therefore be an assistant. It was surprising when you, Greg, and Sam said over the summer, yeah, maybe it won't be too long before we're going to show you this, because it's actually learning science pretty well, and in August you said, yeah, let's go do this.
In early September, we had a pretty large group at my house for dinner, maybe 30 people total, including a lot of awesome people from OpenAI, but a good-sized group from Microsoft. Satya was there, and they gave it AP Biology questions, and they let me give it AP Biology questions. With one exception, it did a super good job, and the exception had to do with math, which we'll talk about later. But it was impressive, amazing. After the biology questions, I asked it, "What do you say to a parent with a sick child?" It gave this excellent, very thoughtful answer.
That was perhaps better than any of us in the room could have given. I thought, wow, what is the scope of this? Because this is so much better. Then the rest of the night we asked historical questions, like what the criticisms of Churchill are, or different things, and it was just fascinating. Then over the next few months, I got an account, and Sal Khan got one of those first accounts. The idea that you could ask it to write college applications, or to rewrite, say, the Declaration of Independence the way a famous person like Donald Trump might have written it.
It was so capable: writing poems, giving it a tune like "Hey Jude" and telling it to write about it, telling it to take an episode of Ted Lasso and include the following. Since that day in September, I have said, wow, this is a fundamental change, and not without some things that still need to be resolved. But it is a fundamental advance. It confuses people in terms of, well, it can't do this yet, it can't do that, it's not perfect at this or that. Natural language is now the main interface we will use to describe things, even to computers.
It's a breakthrough. KEVIN SCOTT: Yeah. There are so many different things to talk about here, but maybe the first one is to talk a little bit about what it's not good at. Because the last thing I think we want to do is give people the impression that it's an AGI, that it's perfect, that there's not a lot of additional work to do to make it better and better. You mentioned math being one of the things, so I thought maybe let's talk a little bit about what you think needs to be improved in these systems over time and where we should focus our energy.
BILL GATES: Yes. It seems to be a general problem, its knowledge of context. When it's asked something, okay, you tell it something and it generates something, but humans understand, "I'm making up fantasy things here," or, "I'm giving you advice that, if it's wrong, you will buy the wrong stock or take the wrong medicine." We humans have a very deep context of what is happening. Even the AI's ability to know that you've changed context, like if you ask it to tell jokes and then ask it a serious question: a human would see in your face, or in the nature of that change, that, okay, we're not in joke mode anymore, but it wants to keep telling jokes.
Sometimes you almost have to restart to get it out of, hey, whatever I mention, just make jokes about it. I think on that sense of context, there is work to do. Also in terms of how hard it should work on a problem. When you and I look at a math problem, we know that, well, you might have to apply a simplification five or six times to get it into the right form. We are looking at how we make these reductions, whereas today the reasoning is a linear chain of descent through the levels. And if the simplification needs to run 10 times, it probably won't do it.
Mathematics is a very abstract type of reasoning. Right now, I would say that's the biggest weakness. Interestingly, it can solve many mathematical problems. There are some math problems where, if you ask it to explain the problem abstractly, to essentially write an equation or a program that matches the math problem, it does it perfectly, and you could pass that on to a normal solver, whereas if you just let it do the numerical work, it often makes mistakes. It's very funny, because sometimes it's very sure of itself, or it says, "I made a typo." Well, in fact, there is no typewriter anywhere in this scene.
The notion of a typo there is actually very strange. These are current areas of weakness; it will be six months, a year, two years before they are largely fixed, so that we have a serious mode where it's not just making up URLs, and then we have the more fantasy mode. Some of that is already being done, largely through prompting and, eventually, through training. For math, some special training may be necessary. But I don't think these problems are fundamental. There are people who think that because these are statistical systems, they will never be able to do X.
That is nonsense. For every example they give of something specific it doesn't do, wait a few months. It's very good. In characterizing how good it is, the people who say it's horrible are really wrong, and the people who think this is AGI are really wrong. Those of us in the middle are simply trying to make sure it's applied the right way. There are many activities, such as helping someone with their college application: What is my next step? What haven't I done? I have the following symptoms. Those are actually well within the limits of things it can do quite effectively.
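(A minimal sketch of the symbolic-solver idea Gates describes above, assuming SymPy as the solver; the word problem and code are illustrative, not from the episode. The point is that the model's job can be the translation into symbols, while a solver does the exact arithmetic.)

```python
import sympy as sp

# Hypothetical word problem: "A number tripled and then reduced by 7 equals 29."
# The language model would produce this symbolic form; the solver does the math.
x = sp.symbols('x')
equation = sp.Eq(3 * x - 7, 29)

# Exact symbolic solution, with no floating-point slips or "typos".
solution = sp.solve(equation, x)
print(solution)  # [12]
```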
KEVIN SCOTT: Yes. Well, I want to talk a little bit about the notion that it is able to use tools to help it reason. I'll give you an example from this weekend with my daughter. My daughter had this assignment where she had this list of 115 vocabulary words. She had written a 1,000-word essay, and her goal was to use as many vocabulary words as she reasonably could in this 1,000-word essay, which is a ridiculous task on the surface. But she had written this essay, and she was going through this list trying to manually figure out what her count was on this vocabulary list, and it was boring, and she was like, "I want a shortcut here.
Dad, can you get me a ChatGPT account, and can I just put this in there and it'll do it for me?" We did, and ChatGPT, which doesn't run the GPT-4 model (though I don't think GPT-4 would have done it well either), didn't do it well at all. It wasn't precise. But what I asked her to do with me was: well, let's have ChatGPT write a little Python program that can do this very precisely. This is a very simple introductory CS problem. The fact that the Python code to solve that problem was perfect and we got the solution right away is just amazing.
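(For the curious, here is a hedged sketch of the kind of little Python program ChatGPT might have produced for that task; the file name "essay.txt" and the four sample words are hypothetical, standing in for the real 115-word list.)

```python
import re

# Hypothetical vocabulary list (the real assignment had 115 words).
vocab_words = ["ubiquitous", "ephemeral", "gregarious", "laconic"]

# Hypothetical file containing the 1,000-word essay.
with open("essay.txt") as f:
    essay = f.read().lower()

# Split the essay into words, ignoring punctuation.
tokens = set(re.findall(r"[a-z']+", essay))

# Count which vocabulary words appear (exact matches only; inflected
# forms like "ephemerally" would need stemming to be caught).
used = [word for word in vocab_words if word in tokens]

print(f"Used {len(used)} of {len(vocab_words)} vocabulary words:")
for word in used:
    print(" -", word)
```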
My 14-year-old daughter, who doesn't program, understood everything that was happening. I don't know if you've done much thinking about this these past few months, because essentially when we're trying to solve a complicated math problem, we have a head full of cognitive tools that we pick up, like these abstractions that you're talking about, to help us break down very complicated problems into smaller, less complicated problems that we can solve. I think it's a very interesting idea to think about how these systems will be able to do that with code. BILL GATES: Yes. It's very good at writing.
That's something amazing. But when you can use natural language, say, for a drawing program, where you want several objects and you want to change them in certain ways, sure, you still want the menus to tweak the colors, but the main interface for creating a drawing from scratch will be language. If you want a document summarized, that's something it can do extremely well. When you have large bodies of text, when you have text-creation tasks: there was one written with GPT-3 where a doctor has to write to insurance companies to explain why they think something should be covered; that's very complicated, and it was very useful. The doctor would read that letter to make sure it was okay. With GPT-4, we took complex medical bills and said, please explain this bill to me. What is this, and how does it relate to my insurance policy? It was incredibly helpful to be able to do that. Explaining concepts in a simpler way is very useful. There will be many tasks where there will be a huge increase in productivity, including many documents, accounts payable, and accounts receivable. Just take the healthcare system: there are many documents that the software will now be able to characterize very effectively.
KEVIN SCOTT: Yes. One of the other things I wanted to talk to you about is that you have this really unique perspective, having been involved in several of the big inflection points in technology. For two of those inflections, you were one of the main architects of the inflection itself, or one of the great leaders. We wouldn't have the PC and personal computing ecosystem without you. You played a really important role in making the Internet available to everyone and turning it into a ubiquitous technology that everyone can benefit from. To me, this seems like another one of those moments where a lot of things are going to change.
I'm wondering what your advice might be for people who are thinking, this is an amazing new technology that I can now use. How should they think about how to use it? How should they think about the urgency with which they pursue these new ideas? How does that relate to how you thought about things in the PC age and the Internet age? BILL GATES: Yes. The industry started very small, when the computers were not personal. Then through the microprocessor and a group of companies we got to the personal computer with IBM and Apple, and Microsoft was very involved in software.
Even the BASIC interpreter on the Apple II, a very obscure fact, was something I did for Apple. That idea that, wow, this is a tool where at least you could edit documents without retyping everything, was pretty surprising. Then connecting them over the Internet was amazing. Then moving computing to the mobile phone was absolutely amazing. Once we have the PC, the Internet, the software industry, and the mobile phone, the digital world is changing huge parts of our activities. I was recently in India, seeing how they make payments digitally, even for government programs. It's an amazing application of that world, helping people who would never have had bank accounts because the fees are too high, it's too complicated.
We continue to benefit from that foundation. I think this, the beginning of computers that read and write, is as profound as any of those steps, and a little surprising, because robotics has been a little slower than I would have expected. I'm not talking about autonomous driving; I think that's a special case, which is particularly difficult because of the open environment and the safety and security standards that people will bring to it. But that's true even in factories, where you really have a lot of control over the environment of what's happening and you can make sure that there are no kids running around near that factory.
A few people have been saying that we tech people predict too much, which is certainly correct. But here's a case where we underestimated natural language and the computer's ability to deal with it, and how that affects administrative jobs, including sales, service, helping a doctor think about what to put in the medical record, things I thought were many years away. All the books on AI, even when they talk about which things could become much more productive, will turn out to be wrong. Because we're just at the beginning of this; you could almost call it a mania, like the Internet mania.
But the Internet mania, although it had its craziness, things like sock puppets, things where you look back and say, what were we thinking? It was a very profound tool that we now take for granted. Even just for scientific discoveries during the pandemic, the usefulness of the immediate exchange that took place there was simply phenomenal. This is as big a breakthrough, a milestone, as any I've seen in the entire digital computing space, which really got started when I was very young. KEVIN SCOTT: I'm going to tell you this, and I'm interested in your reaction, because you always tell me when an idea is stupid.
But one of the things I've been thinking about for the last few years is that one of the big changes that's happening because of this technology is that for 180 years, from the time Ada Lovelace wrote the first program to harness the power of a digital machine until today, the way to get a digital machine to do work for you is that you have to be an expert programmer, which is a barrier to entry that's not easy, or you have to have an expert programmer who anticipates the user's needs and creates software that you can then use to make the machine do something for you.
This may be the point where we get that paradigm to shift a little. Because you have this natural language interface, and these AIs can write code and will be able to activate a lot of services and systems, does that give everyday people the ability to do very complicated things with machines without having to have all this expertise that you and I spent many years developing? BILL GATES: No, absolutely. Each advance will hopefully lower the bar in terms of who can easily take advantage of it. The spreadsheet was an example of that because, although you still have to understand the formulas, you don't really have to understand much of the logic or symbols.
It had the input and the output so tightly connected in that grid structure that you didn't think about the separation of those two. That is, in a way, limiting for the super abstract thinker, but it was very powerful in terms of directness: oh, that didn't come out right, let me change it. Here, there is a whole class of programs to take corporate data and present it or perform complex queries. Were there sales offices where we were missing 20 percent of the staff and our sales results were affected by that? Now you don't have to go to the IT department and stand in line and be told, oh, that's too hard or something.
Most of these corporate tasks, whether it's a query or a report or even a simple workflow where, if something happens, you want to trigger an activity, the English description will be the program. When you want it to do something additional, you just describe that in English, or whatever language you use, and type it in. There is a whole layer of query and programming support that any employee can access. And the same thing is true if I'm somewhere in the college application process and I want to know what my next step is and what the threshold is for these things; it's so opaque today. Empowering people to go straight in and interact, that's the theme this is trying to enable.
KEVIN SCOTT: I'm wondering what some of the things are that excite you most in terms of applying this technology to things that you care deeply about, from the Foundation or from your personal perspective. You care deeply about education, public health, climate, and sustainable energy. You have all these things you're working on; have you been thinking about how this technology affects any of them? BILL GATES: It's been great that, ever since the fall, OpenAI and Microsoft have been engaging with people at the Gates Foundation thinking about health and education issues. In fact, Peter Lee is going to publish some of his ideas, which in some ways focus on rich-world health, but it is quite obvious how that work is, in some sense, even more amazing in health systems where there are so few doctors and getting advice of any kind is incredibly difficult.
It's amazing to look at, say, can we have a personal tutor to help you? When you write something, if you go to an amazing school, yes, the teacher can go line by line and give you feedback, but many kids just don't get that much feedback on their writing, and it seems like, set up correctly, this is a great tool for giving feedback on writing. It's also ironic that people say, what does it mean that people can cheat and turn in writing done by the computer? It's kind of like when calculators came out and we said, oh my God, what are we going to do about addition and subtraction?
Of course, people created contexts where a calculator couldn't be used, and we got through it without it being a big problem. I think education is the most interesting application. I think health is the second most interesting. Obviously, there are business opportunities in sales and service type things, and that will happen; you don't need any kind of foundation engagement for that. We did a lot of brainstorming with Sal Khan, and it looks very promising, because in a class of 30 or 20 students you can't give individual attention to a student; you can't understand their motivation or what keeps them interested.
They may be ahead of the class, or they may be behind. It seems that in many subject areas, by having this dialogue and giving feedback, for the first time we will really be able to help education. Now, we have to admit that, except for the prosaic things of looking up articles on Wikipedia or helping you write things and print them nicely, the notion that computers were going to revolutionize education is, to a large extent, still more ahead of us than behind us. Yes, some games appeal to kids, but the average math score in the United States hasn't increased much in the last 30 years.
Computer people kept saying, "Hey, we want credit for that." Credit for what? It's not much better than it was. Obviously, computers did not work any miracles there. I think over the next 5 to 10 years we will be thinking about learning, and how you can be supported in your learning, in a very different way than just looking at the material. KEVIN SCOTT: I know you think of this as a global problem. My wife and I, with our family foundation, look at it as a local problem for disadvantaged children. One of the common things we see is that parental involvement makes a big difference in children's educational outcomes.
If you look at the children of immigrants in East San Jose or East Palo Alto here in Silicon Valley, often the parents work two or three jobs; they are so busy that it is difficult for them to engage with their children, and sometimes they do not speak English and so do not even have the language capacity. You can imagine what technology like this could do when it doesn't really matter what language you speak. You can bridge that gap between the parent and the teacher, and you can be there to help the parent understand where the obstacles are for the child, and even potentially personalize to the child's needs and help with the things that they need.
I think it's really very exciting. BILL GATES: We often forget about language barriers, and that comes up in developing countries. India has many languages, and I was at the research lab in Bangalore as part of that trip. They're taking these advanced technologies to try to deal with the long tail of languages, so that's not a huge barrier. KEVIN SCOTT: One of the things you said at the GPT-4 dinner at your house is that you had this experience early in Microsoft's history where you saw a demo that changed the way you thought about how the personal computing industry was going to develop, and that made you change the direction of the company.
I wonder if you would be willing to share that with everyone. BILL GATES: Xerox had made a lot of money with photocopiers. They were ahead, their patents were there, the Japanese competition had not arrived yet, so they created a research center in Palo Alto, which was always known by its acronym: the Palo Alto Research Center, PARC. At PARC, they brought together an incredible set of talent. Bob Taylor and others were very good judges of talent. So you end up with Alan Kay, Charles Simonyi, Butler Lampson, and, not to leave anyone out, a lot of other people, creating a GUI machine.
They weren't the only ones. There were people in Europe doing some of these things, but PARC combined them with many others. They networked the machines together, they had a laser printer. Charles Simonyi was there programming this, and he made a word processor that used that very graphical bitmap display and allowed you to do things like fonts, things that we now take for granted. I went to visit Charles at PARC in the evening, and he demonstrated what he had done with this Bravo word processor, and then printed out on the laser printer a document of all the things that would have to be done if computers were cheap and ubiquitous.
He and I brainstormed about it, and he updated the document and printed it again. It just blew me away. Microsoft's agenda emerged from that. This was 1979, when computers were still completely in character mode. This is also where the commitment to create software for the Mac came from, since Steve Jobs had had a similar experience with Bob Belville at Xerox PARC. Xerox built a very expensive machine called the Star that only sold a few thousand units, because people didn't think word processing was something they would pay that much for. You had to really come in at the low end, so character-mode PCs first, but then graphical word processing.
I hired Charles. Charles helped make Word and Excel and a lot of our great things. Finally, 15 years after Charles showed me his thinking and we brainstormed, we largely achieved that piece of paper through Windows and Office, on both Windows and Mac. I told the group that this was the other demo that had blown me away and made me think: what can be achieved in the next 5 to 10 years? We should be more ambitious in taking advantage of this advance, even with the imperfections that we are going to reduce over time. KEVIN SCOTT: That was a really powerful and motivating anecdote that you shared.
Maybe one last thing here before we go, or maybe two more things. What do you think are the big challenges we should be thinking about over the next five to ten years? In a sense, I have this piece of paper that Charles wrote; it's here next to my desk, framed, because I think it was one of the most incredible predictions of a technological cycle that anyone has ever written. I don't know why everyone doesn't know about the existence of this thing. It's just amazing. As we think about what lies ahead in the next 5 to 10 years, what is your challenge?
Not just to Microsoft, but to everyone who will be thinking about this? What do you think we should be pushing hard on? BILL GATES: Well, there will be a number of innovations in how to run these algorithms, and a lot of chip work, some of it moving from silicon to optics to reduce power and cost. Immense innovation, which Nvidia is a leader in today, but others will try to challenge them as they continue to improve, and will use even some radical approaches, because we want the cost of running these things, and even training them, to be dramatically less than it is today. Ideally, we'd like to move these models so that you can often run them on a standalone device, without having to go to the cloud for these things.
There is a lot to do on the platform side. Then we have an immense challenge on the software side: do you have many specialized versions of this, or do you have one that keeps improving? There will be immense competition between those two approaches. Even Microsoft will pursue both in parallel. Ideally, within a contained domain we will get something whose accuracy is demonstrably extremely high, by limiting the areas in which it works, by having the right training data, and perhaps even by having some pre- and post-check type logic applied to it. I definitely think that in areas like sales and service there's a lot that can be done, and that's very valuable.
The notion that there is this emerging capability means that the impetus to try to scale even further will be there. Now, what corpus exists once you've gone through every piece of text and video? Are you generating things synthetically? Do you still see that improvement as you scale up? Obviously this will be pursued, and the fact that doing so costs billions of dollars will not stop it from moving forward at very high speed. Then there are a lot of societal questions that this raises: where can we push it in education? Not that it immediately understands students' motivation or cognition.
There will have to be a lot of training, and integrating it into an environment where adults see the student's engagement and see the motivation. Although you free the teacher from many things, for that part of the personal relationship you will still want all the context that arises from those tutoring sessions to be visible and to help the dialogue that exists with the teacher or with the parent. Microsoft talks about making humans more productive. Some things will be automated, but many things will simply be made easier, where the ultimate engagement is still largely with a human being, but a human being who is capable of doing much more than ever before.
The number of challenges and opportunities created by this is quite incredible. I can see how committed the OpenAI team is to this, and I'm sure there are plenty of teams I can't see that are pushing on this. And consider the size of the industry: when the microprocessor was invented, the software industry was a small industry. We could put most of us on one panel, and they could complain that I worked too much and shouldn't be allowed to. We can all laugh at that. Now this is a global industry, so it's a little more difficult to navigate. I get a weekly summary of all the different AI articles that are being written.
Can we use it for moral questions? That seemed silly to me to even ask it, but hey, people can ask whatever they want. This has the ability to advance faster, because the number of people, resources, and companies involved goes far beyond those other advances that I mentioned and that I had the privilege of experiencing. KEVIN SCOTT: One of the things that for me has been really fascinating, and I'm going to say this just as a reminder for people who are thinking about pursuing careers in computer science and becoming programmers: I spent most of my computer science training and the early part of my career as a systems person.
I wrote compilers, I wrote tons of assembly language, and I designed programming languages, which I know you did too. A lot of the things I studied in grad school, in terms of parallel optimization and high-performance computing architectures: I left grad school and went to Google and thought I would never use any of it again. Then suddenly we're building supercomputers to train models, and these things become relevant again. I think it's interesting. I wonder what Bill Gates, the young programmer, would be working on if you were in the mix right now, writing code for these things, because there are a lot of interesting things to work on.
But what do you think would make you, a young twenty-something programmer, very excited about this technology stack? BILL GATES: Well, there's an element to this that's quite mathematical. I feel lucky to have done a lot of math. That was a gateway into programming for me, including all the crazy stuff with arrays of numbers and their properties. There are people who came to programming without that mathematical background, who will now need to learn a little math. I'm not saying it's very difficult, but they should get to where, when you see those fun equations, you think, "I'm comfortable with that," because a lot of the computation will be that kind of thing instead of classical programming.
Paradoxically, when I started, writing small programs was super necessary. The original Macintosh was a 128K machine, of which 22K was the bitmap display. Almost no one could write programs that fit there. Microsoft's approach, our tools, allowed us to write code for that machine, and really only we and Apple were successful; then when it became 512K, some people were successful, but even that was very difficult for them. I remember thinking, when memory got to 4 gigabytes, that all these programmers don't understand discipline and optimization and are simply allowed to waste resources. But now that you're operating with billions of parameters, the idea of, well, can I skip some of those parameters?
Can I simplify some of those parameters? Can I precompute various things? If I have many models, can I keep deltas between models instead of keeping each one in full? All optimizations that made sense on machines with such limited resources. Well, some of them come back in this world where, when billions of operations, literally hundreds of billions of operations, are going to be performed, we are pushing the absolute limit of the cost and performance of these computers. One thing that's very impressive is that the accelerations, even in the last six months, on some of these things have been better than expected.
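(A toy illustration of the delta idea mentioned above, a hedged sketch with made-up arrays rather than anything from the conversation: store a fine-tuned model as a sparse difference from a base model and rebuild it on demand.)

```python
import numpy as np

# Hypothetical parameter vectors for a base model and a fine-tuned variant.
base = np.random.rand(1_000_000).astype(np.float32)
finetuned = base.copy()
finetuned[::1000] += 0.01  # only a small fraction of weights actually changed

# Store only the delta, keeping just the entries that differ meaningfully.
delta = finetuned - base
changed = np.nonzero(np.abs(delta) > 1e-8)[0]
compact_delta = (changed, delta[changed])  # indices plus values

# Reconstruct the fine-tuned model from the base plus the compact delta.
rebuilt = base.copy()
rebuilt[compact_delta[0]] += compact_delta[1]

print("Stored values:", len(compact_delta[1]), "of", len(base))
print("Reconstruction close:", np.allclose(rebuilt, finetuned))
```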
That's great, because you get hardware acceleration and software speedups multiplied. That bears on how much resource scarcity we will have in the coming years. That's less clear to me now that these improvements are happening, although I'm still concerned about it, and about how we make sure that businesses in general, and Microsoft in particular, allocate it wisely. Understanding algorithms, understanding why certain things are fast and slow, is fun. Systems work, in my early days, was just one computer and then a network of computers; now that systems are data centers with millions of CPUs, it's amazing the optimization that can be done there.
How power supplies work, how network connections work. Anyway, in almost all areas of computing, including database-type techniques and programming techniques, this forces us to think in a really new way. KEVIN SCOTT: I couldn't agree more. Last, last question. I know you are incredibly busy and have the ability to choose to work on whatever you want to work on. But I want to ask you anyway: what do you do outside of work for fun? I ask this of everyone who comes on the show. BILL GATES: Oh, that's great. I get to read a lot. I get to play a lot of tennis.
During the pandemic I was in California in the fall and winter, and I still enjoy that. Even though the Foundation meetings are in person, and some of these Microsoft and OpenAI meetings, it's been great that we've been able to do them in person, but some we can do virtually. Anyway, I play pickleball, because I've been playing tennis for over 50 years, and I like to read a lot. I snuck away and went to the Australian Open for the first time, because the weather is nice and warm there, and it was spectacular. KEVIN SCOTT: I actually want to push on this idea that you read a lot.
You say you read a lot, which is not the same as what most people mean when they say they read a lot. You are famous for carrying a giant bag of books with you everywhere and reading an incredible amount of things, everything from really difficult science books to fiction. How much do you really read? What is Bill Gates' typical reading pace? BILL GATES: If I don't read a book in a week, then I really look at what I was doing that week. If I'm on vacation, I expect to read more like five, six, or seven, because the books vary in size.
Over the course of the year you should be able to read about 80 books. I have younger friends who read even more than I do. It's like, oh gosh, I have to keep up: which Sowell books am I going to read? I still read all the authors like Smil and Pinker, ones that are so profound and have shaped my thinking. But reading is relaxing. I should read more fiction. When I fall behind, my non-fiction list tends to dominate, and yet people have suggested such great fiction to me. That's why I share my reading list on Gates Notes. KEVIN SCOTT: What's the over-under?
You're famous for saying you want to read Infinite Jest by David Foster Wallace; what's the over-under on that happening in '23? BILL GATES: Well, if there hadn't been this damn AI breakthrough. I'm joking, but it's really true. I've enthusiastically spent a lot more time sitting down with Microsoft product groups and asking: What does this mean for security? What does it mean for Office? What does it mean for our databases? Because I love that type of brainstorming; it opens up new perspectives. So no, it's all your fault. There is no Infinite Jest this year. KEVIN SCOTT: Excellent.
Well, I appreciate you making that trade-off, because it's been really fantastic that over the last six months you've helped us think through all of this. Thank you for that, and thank you for doing today's podcast. I really appreciate it. BILL GATES: No, it was fun, Kevin. Thank you so much. KEVIN SCOTT: Thank you very much to Bill for being here with us today. I hope you all enjoyed the conversation as much as I did. There were so many wonderful things in this conversation that were reflections of the conversations we've had with Bill over the past few months as we think about this incredible revolution.
One of the things I've learned most from Bill in recent months, as we think about what AI means for the future, is how he thought about what personal computing and the revolution in microprocessors and PC operating systems meant for the world when he was building Microsoft from scratch. Even what it felt like to him as one of the leaders who helped bring the Internet revolution to the world. There were parts of the conversation today where he shared some of those experiences, like the first meeting he had with Charles Simonyi at Xerox PARC, where he saw one of the first graphical word processors in the world, and how seeing it and talking with Charles greatly influenced the history of not only Microsoft but the world in the years that followed.
I just loved listening to Bill talk about his passion for the things that the Gates Foundation does and what these AI technologies mean for those things, like how maybe we can use these technologies to accelerate some of the benefits for people around the world who are most in need of technologies like this to help them live better and more successful lives. Again, it was a great pleasure to be able to speak with Bill on today's podcast. If you have anything you would like to share with us, email us any time at [email protected]. You can follow Behind the Tech on your favorite podcast platform or watch full video episodes on YouTube.
Thanks for tuning in, and see you next time.
