
OpenAI CEO Sam Altman and CTO Mira Murati on the Future of AI and ChatGPT | WSJ Tech Live 2023

Apr 30, 2024
So here's my first question for you. Very, very simple question: what makes you human? Me? You both have to answer what makes you human. Oh, and you get one word. One word each. Humor. Emotion. OK. Um, to confirm that you are both human, I'll need you to confirm which of these boxes has a traffic light. I believe AI can do that now too. OK. Alright. Well, Sam, you were actually here nine years ago at our first tech conference, and I really want to show the clip of what you said. Certainly the fear with AI, or artificial intelligence in general, is that it will replace drivers or doctors or whatever.
Um, but the optimistic view on this, and certainly what bears out in what we're seeing, is that computers and humans are very good at very different things. So a computer doctor will crunch the numbers and do a better job than a human at looking at a massive amount of data and telling you what it means. But in cases that require judgment, creativity, or empathy, we are nowhere near any computer system that is good at this. OK. Here we are in 2023. Partially correct and partially incorrect. It could have been worse, it could have been worse. What is your perspective now? I think the prevailing wisdom back then was that AI was going to do the robotic types of jobs very well first.

Then it would be a great robotic surgeon, something like that. Um, and then maybe eventually it would do the higher-judgment types of tasks. Uh, and then, you know, then it would develop empathy, and then maybe never would it be a really great creative thinker. And creativity, and at this point the definition of the word creativity is up for debate, but creativity, in some sense, has been easier for AI than people thought. You know, watching DALL-E 3 generate these amazing images, or writing these creative stories with GPT-4 or whatever.
Um, so that part of the answer maybe wasn't perfect. Uh, and I certainly wouldn't have predicted nine years ago how GPT-4 would turn out. But many other parts of it held up. People still want a human doctor. Uh, that's definitely very true. And I want to quickly switch to AGI. What is AGI? Could you define it for all the audience members? I would say it's a system that can generalize across many domains that would be equivalent to human work, um, producing a lot of productivity and economic value. You know, we're talking about a system that can generalize across many digital domains of human work.
And Sam, why is AGI the goal? The two things that I think will matter most over the next decade or few decades for improving the human condition, the ones that will give us the most of what we want: one is intelligence that is abundant and cheap. Um, the more powerful, the more general, the more intelligent, the better. Uh, I think that's AGI. And the other is abundant, cheap energy. And if we can get these two things done in the world, then it's almost difficult to imagine how much more we could do. Uh, we firmly believe that you give people better tools and they do things that amaze you.
And I believe that AGI will be the best tool humanity has created so far. With it, we will be able to solve all kinds of problems. We will be able to express ourselves in new creative ways. We will do incredible things for each other, for ourselves, for the world, for this unfolding human story. And you know, it's new, and everything new comes with change, and change is not always easy. Um, but I think this is going to be absolutely tremendous. And, you know, nine years from now, if you're nice enough to invite me back, you'll play this back and people will say, how could we have thought we didn't want this?
I guess there are two parts to my next question: when will it be here, and how will we know it's here? Either of you, I mean, both of you can predict. However long you say, we'll call you in 10 years and tell you whether you were wrong. I mean, yeah, probably in the next decade. But I would say it's a little complicated, because, you know, when will it be here? Right, and I just gave you a definition, but we often talk about intelligence, and, you know, how intelligent it is, or whether it's conscious or sentient, and all these terms.
And, you know, they're not entirely right, because they sort of define our own intelligence, and we're building something slightly different. And you can see how the definition of intelligence evolves: from, you know, machines that were really good at chess a while ago, to the GPT series now, and then whatever's next. It keeps evolving and pushing what and how we define intelligence. We define AGI as something we don't have yet. So we've moved; I mean, there were a lot of people 10 years ago who would have said that if you could do something like GPT-4, maybe GPT-5, that would have been an AGI. And now people say, well, you know, it's like a cute little chatbot or whatever.
And I think that's wonderful. I think it's great that the goalposts keep moving. It makes us work harder. Um, but I think we're getting close enough to whatever the AGI threshold is that we can't hand-wave it away anymore, and the definition will matter. So, less than a decade, for some definition. OK. Alright. The goalposts keep moving. Um, Sam, earlier, when describing AGI, you used the term "median human." Can you explain what that is? Um, I think there are experts in areas who will do better than AI systems for a long period of time.
Um, and then, you know, you might get to some area where I'm really an expert in some task, and I'll say, OK, you know, GPT-4 is doing a horrible job there, GPT-5, GPT-6, whatever, doing a horrible job there. But then you can move on to other tasks that I'm decent at but certainly not an expert in, where I'm maybe an average of what different people in the world could do with something. And for those, you might look at it and say, oh, this is actually working pretty well. So what we mean by this is that in any given area, human experts, as experts in that area, can simply do extraordinary things.
And it may take us a while to be able to do that with these systems. But for more average-case performance, you know, me doing something I'm not very good at, maybe future versions can help me a lot with that. So am I an average human being at some tasks? I'm sure of that. And it's clear that you are a very skilled human being, and no GPT will be taking your job anytime soon. OK. That makes me feel a little better. Uh, look, how's GPT-5 coming along?
Um, we're not there yet, but it's kind of need-to-know. I'll let you know. That's a very diplomatic answer. I'm happy with all of this. I wouldn't have done that; I would have just said, oh yeah, here's what's happening. Brilliant. No, no, we won't invite him back here. Who paired you two up? Whose idea was it? Um, you're working on it, you're training it? We are always working on the next thing. Just have a staring contest. That's what makes us human. Um, all these steps with GPT, though, right?
GPT-3, GPT-3.5, GPT-4, are those our steps towards AGI? With each of them, are you looking for a reference point, something you're aiming for, is this what we want to achieve? Yes. So, you know, before we had the product, we were looking at academic benchmarks, at how well these models did on academic benchmarks. And, you know, OpenAI is known for going for scale: you know, throw a bunch of compute and data into these neural networks and watch them get better and better at predicting the next token. But it's not that we really care about predicting the next token itself; what we care about is the real-world tasks it correlates with.
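(An aside for readers: "predicting the next token" refers to the standard language-modeling training objective. Below is a minimal, illustrative sketch of how that objective is scored; the token IDs and random logits are hypothetical stand-ins, not anything from OpenAI's actual training stack.)

```python
import torch
import torch.nn.functional as F

# Illustrative sketch of the next-token prediction objective.
vocab_size = 50_000
tokens = torch.tensor([[464, 3290, 318, 257, 922]])  # hypothetical token IDs

# In a real model these logits come from a transformer; random values stand in here.
logits = torch.randn(1, tokens.shape[1], vocab_size)

# At each position the model is graded on predicting the *next* token, hence the shift.
loss = F.cross_entropy(
    logits[:, :-1, :].reshape(-1, vocab_size),  # predictions at positions 0..n-2
    tokens[:, 1:].reshape(-1),                  # targets are tokens 1..n-1
)
print(loss.item())  # lower loss means better next-token prediction

# Scaling up compute and data drives this loss down; the point Murati makes is that
# what actually matters is how that improvement correlates with real-world tasks.
```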
And that's actually what we started to see once we did real-world research and built products, through the API and eventually through ChatGPT as well. And now we have real-world examples. We can see how our customers are doing in specific domains, how it moves the needle for specific businesses. Um, and of course, with GPT-4, we saw that it did very well on tests like the SAT and the LSAT, and so on. So, in a way, it goes back to our earlier point: we are, you know, continually evolving our definition of what it means for these models to be more capable.
Um, but you know, as we push on the capability vector, what we're really looking for is reliability and safety. These are very intertwined, and it is very important to build systems that, yes, are increasingly capable, but that can really be trusted, that are robust, and whose output can be trusted. So we are pushing both vectors at the same time. And you know, as we build the next model, the next set of technologies, we're continuing to bet on scale. But we are also looking at this other element of multimodality, because we want these models to perceive the world the way we do, and, you know, we perceive the world not just in text but also in images and sounds and so on.
So we want these models to have solid representations of the world. Will GPT-5 solve the problem of hallucinations? Well, let's see. We've made a lot of progress on the hallucinations issue with GPT-4, but we're still not where we need to be. But, you know, we are on the right path, and it's unknown, it's research. It could be that by continuing on this path of reinforcement learning from human feedback, we can get all the way to really reliable outputs. And we are also adding other elements like retrieval and search, so the model has the ability to give more grounded answers and produce more factual outputs.
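(For context on "retrieval and search": a common pattern, often called retrieval-augmented generation, fetches relevant documents first and instructs the model to answer only from them. A minimal sketch follows; the corpus, `search`, and `llm` here are hypothetical stand-ins for a real document index and a chat-model API, not OpenAI's implementation.)

```python
# Toy corpus standing in for a real document index.
CORPUS = [
    "WSJ Tech Live 2023 was held in Laguna Beach in October 2023.",
    "OpenAI released GPT-4 in March 2023.",
    "Retrieval-augmented generation grounds model answers in fetched documents.",
]

def search(query: str, k: int = 2) -> list[str]:
    """Hypothetical retriever: rank passages by naive word overlap with the query."""
    overlap = lambda p: len(set(query.lower().split()) & set(p.lower().split()))
    return sorted(CORPUS, key=overlap, reverse=True)[:k]

def llm(prompt: str) -> str:
    """Hypothetical model call; a real system would invoke a chat API here."""
    return f"[model answer grounded in a prompt of {len(prompt)} chars]"

def answer_with_retrieval(question: str) -> str:
    """Grounding the answer in retrieved text is what pushes outputs toward factuality."""
    context = "\n\n".join(search(question))
    prompt = (
        "Answer the question using ONLY the context below. "
        "If the context is not enough, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return llm(prompt)

print(answer_with_retrieval("When was GPT-4 released?"))
```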
So there is a combination of technologies that we are putting together to reduce the problem of hallucinations. Sam, I'll ask you about the data, the training data. Obviously, there have been, you know, maybe some people in this audience who are not thrilled with some of the data that you guys have used to train some of your models. Not far from here, in Hollywood, people haven't been thrilled. Publishers, either. When you're considering, now, as you go to work on these next models, what are the conversations that you're having around the data?
So, a few thoughts in different directions here. First, we obviously only want to use data that people are excited about us using. Like, we want the model of this new world to work for everyone. And we want to find ways to make people say, you know, I see why this is cool, I understand why this is going to be a new way of thinking about some of these issues around data ownership and how economic flows work. We want to come up with something that everyone can be excited about. But one of the challenges has been that, you know, different types of data owners have very different pictures.
So we're just experimenting with a lot of things. We're doing partnerships in different ways. Um, and we think that, as with any new field, we'll find something that just becomes a new standard. Also, uh, I think that as these models get smarter and more capable, we will need less training data. So I think there's this view right now that the models are just going to, like, train on every word humanity has ever produced or whatever. And, technically speaking, I don't think that's the long-term path here. Like, we have an existence proof with humans that that's not the only way to become intelligent.
Um, and so I think the conversation gets a little derailed by this, because what's really going to matter in the future is particularly valuable data. You know, people trust the Wall Street Journal and want to see its content, and the Wall Street Journal wants that too. And we'll find new models to make that work. But I think the conversation about data, and the shape of all of this, is about to change because of the technological progress we're making. Well, publishers like mine might be out there somewhere. They want money, because that data is the future of this whole race over who can pay the most for the best data.
Um, no, that was the point I was trying to make, elegantly I guess, but you'll still need something. But the core of what people really like about a GPT model is not fundamentally that it has particular knowledge. There are better ways to find that. It's that it has this larval reasoning capability, and that's going to get better and better. That's really what this is going to be about. And then there will be ways in which you can make all kinds of financial arrangements, as a user or as a company that makes the model, where you say,
alright, I understand that you would like me to get this data from the Wall Street Journal, I can do that, but here's the current deal. So there will be things like that. But the fundamental thing about these models is not that they memorize a lot of data. It's kind of like the browsing mode that's built in right now: it goes out, looks for some of that data, and can retrieve some of it. And, you know, with the Internet, in the early days there were a lot of conversations about the different models that could exist, and here's the core framework, and there are different pieces in there.
Of course. And we're all going to have to sort that out. Well, speaking of Bing: you and Satya Nadella, your $10 billion friends, or frenemies? Yeah, I won't pretend it's like a perfect relationship, but it's nowhere near the top of my list of problems. It's very good. Like, we have our fights. It seems that more and more, as you each release more and more products, you compete in some places. Um, I mean, I think there's something core to this language interface that's very important, and so there are going to be a lot of people doing things there. And then there are other places where, you know, we offer an API version, they offer an API version, but it's very friendly, and we work on it so that we all benefit and are happy, and we just jointly want the greatest possible use of our models.
So we're super aligned there. Um, but yeah, it makes sense. Friends, friends who text a lot. We do. OK. Alright. Um, look, over the last few months you've started to roll out a lot more customization in the model, right? You can set custom instructions. You've got voice now, and we'll pull the screen down to play one of these clips. I wanted to replay one of the conversations I had with ChatGPT just yesterday. The screen is slower than ChatGPT responds.
That's really embarrassing. Yes. Although we are going to improve that; we'll make it faster. That I can't help you with. No, I'm saying ChatGPT is faster than this screen. And I mean, it's not often that people say ChatGPT is faster than X. I have a big interview with OpenAI executives and I'm nervous. Any advice? Absolutely. Given your experience as a technology columnist, you are well equipped for this. Just be yourself. Do your homework on the latest OpenAI developments and trust your experience.
Remember, they'll probably appreciate thoughtful, well-researched questions, and, well, everyone gets nervous. Take a deep breath and rock it. Am I rocking it? You're rocking it. Alright. Are we rocking here? OK. So what's surprising about that? First of all, I want everyone to know that's the voice of ChatGPT; it's one of five. Yes. Yes. That, I mean, that's the voice. It sounds so human. It sounds so natural. It knows about me, because I have already given it custom instructions. I am a technology journalist. It also knows that I am allergic to avocado. It's always putting that in there.
Don't eat avocado. I know, I'm not asking about the avocado. We have work to do. Is there a future, and is this maybe what you're trying to build here, where we have deep relationships with this type of bot? It will be a meaningful relationship, right? Because, you know, we're building systems that will be everywhere: in your home, in your educational environment, in your work environment, and maybe, you know, when you're having fun. And that's why it's so important to get it right, and why we have to be very careful about how we design this interaction, so that it's ultimately, you know, elevating and fun, and improves productivity and improves creativity.
Um, and, you know, ultimately that's where we're trying to get to. And as we increase the capabilities of the technology, we also want to make sure that, you know, on the product side, we feel in control of these systems, in the sense that we can guide them to do the things we want them to do and the result is reliable. That's very important. And of course, we want it to be personalized, right? As it gets more information about your preferences, the things you like, the things you do, and as the capabilities of the models increase, along with other features like memory and so on,
of course it's going to become more personalized, and that's a goal: it's going to be more useful, it's going to be more fun, it's going to be more creative. And it's not just one system, right? You can have many of these systems, customized for specific domains and tasks. But that is a great responsibility. You will have control over what may become people's friends, perhaps even people's lovers. Uh, how do you guys think about that control? First of all, I think we won't be the only players here; there will be a lot of people.
So we can give our push to the trajectory of this technological development, and we have some opinions. Uh, but we really think the decisions belong to humanity, to society as a whole, whatever you want to call it. And we will be one of many players building sophisticated systems here. So it will be a discussion across society, and there will be all the normal forces: there will be competing products offering different things, there will be society embracing some things and rejecting others, there will be regulatory involvement.
Uh, it's going to be the same complicated mess that any new technology's birthing process goes through, and then, very soon, we'll turn around and we'll all feel like we've had intelligent AI in our lives forever. And, you know, that's the way progress goes, and I think it's amazing. Um, I personally have deep misgivings about this vision of the future where everyone is super close to AI friends, more so than to human friends or whatever. I personally don't want that. Uh, I accept that other people will want that. Um, and you know, some people are going to build that, and if that's what the world wants and what we decide makes sense, we'll get it.
I personally think personalization is great. Personality is great. But it's important that it's not person-ness, and at least that, you know, you know when you're talking to an AI and when you're not. You know, we call it ChatGPT and not, well, there's a long history behind that, but we call it ChatGPT and not a person's name very intentionally. And we do a lot of subtle things in the way you use it to make it clear that you're not talking to a person. Um, and I think what's going to happen is that, in the same way people have a lot of relationships with other people, they're going to keep doing that.
And then there will also be these AIs in the world, but you know they're something different when you talk to them. Here's another question for you. What is the ideal device on which we will interact with these? And I wonder, from what I've heard, you and Jony Ive have been talking. Did you bring something to show us? Um, I think there's something cool to do, but I don't know what it is yet. You must have some ideas. Many ideas. I mean, I'm interested in this topic. I think it's possible. I think most of the thinking in the world today is pretty bad about what we can do with this new technology in terms of a new computing platform.
And I think every sufficiently big new technology enables some new computing platform. Um, so, a lot of ideas, but at a very nascent stage. So I guess the question for me is whether there's something a smartphone, a headset, a laptop, or a speaker isn't quite doing right now. Of course, smartphones are great. Like, I have no interest in trying to compete with a smartphone; it's phenomenal at what it does. But I think what AI enables is so fundamentally new that it's possible, and maybe not, you know, maybe for a lot of reasons it doesn't happen,
but I think it's worth the effort to talk and think about, you know, what we can do now that we have computers that can think, or computers that can understand, whatever you want to call it, which wasn't possible before. And if the answer is nothing, I would be a little disappointed. Well, it sounds like it won't look like a humanoid robot, which is good. Definitely not. I don't think that works at all. OK. Speaking of hardware: are you making your own chips? Do you want an answer right now? Um, where is this headed? Are we making our own chips? We're trying to figure out what it will take to reach the scale we think the world will demand.
And at the scale of model that we think the research can support, it may not require any custom hardware. Um, and we have wonderful partnerships right now with people who are doing amazing work. Um, so the default path would certainly be not to do it, but I would never rule it out. Are there good alternatives to NVIDIA? Uh, NVIDIA certainly has something amazing. Uh, but, you know, I think the magic of capitalism is doing its thing, and a lot of other people are trying, and we'll see where it all goes.
We had Rene Haas here from Arm. I heard you've been talking. Are you two friends? Did you say hello? Not as close as Satya. You're not, you're not that close. Not that close, OK. Got it, got it. Um, this is where we're getting to. Yes, we're getting to the hard part; actually, we're about to get to the hardest part. So, my colleagues recently reported that you guys are actually looking at a valuation of 80 to 90 billion dollars, and you're expected to hit a billion dollars in revenue. Are you raising money? No. Well, I mean, always, but not like right now.
Not now, no, not now. But here are the people with money. Alright, let's talk. Um, we will need enormous amounts of capital to complete our mission, and we have been very candid about that. There has to be something more interesting to talk about in our limited time here together than our future capital-raising plans, but we will need a lot more money. We don't know exactly how much, we don't know exactly how it's going to be structured or what we're going to do. But, you know, it shouldn't be a surprise, because we've been saying it all along: it's just a tremendously expensive endeavor. But which part of the business is growing the most right now? You can jump in too.
Definitely on the product side. Yes. With the research team, it is very important to have talent density, small teams that innovate quickly. On the product side, you know, we are doing a lot of things. We're trying to drive great uses of AI both on the platform side and on our own, and in working with customers. So that's certainly it. And does the revenue come primarily from the API or from the enterprise side? Oh, I would say both sides, both sides. Yes. So my ChatGPT Plus subscription? That too? Yes, yes. How many people here are actually subscribed to ChatGPT Plus?
Thank you very much, all of you. OK. Will you make a family plan? Seriously. I'm serious, because I'm paying for two, and we've already talked about it. OK. This is what we're really here for tonight. Um, moving a little bit towards the politics and some of the fears. It's not very cheap to run. If we had a way to say, hey, you know, you can have this, we would love to give you more for the 20 dollars, or whatever we'd like to do. And as we make the models more efficient, we'll be able to offer more. It's not that we don't want more people to use it, so we could do things like a family plan, you know, a family plan for $35 for two people. That's the kind of bargaining, you know.
Well, I did give you the sweatshirt. And then, you know, there's something we can do there. How do we go from the chatbot we just heard, the one that told me to rock it, to one that, I don't know, could shake the world, could end the world? Well, I don't think we're going to have a chatbot that ends the world. But how did we arrive at this idea? We have, uh, simple chatbots, and they're not simple, they're advanced in what they're doing. But how do we get from that to this fear that is now omnipresent everywhere?
If we are right about the trajectory, that things will continue to scale like this, and if we are right not only about scaling up the GPTs but also about new techniques that we're interested in, techniques that could help generate new knowledge, then someone with access to a system like this could say: help me hack this computer system, or help me design, you know, a new biological pathogen that is much worse than COVID, or anything else. It seems to us that it does not take much imagination to think of scenarios that warrant great caution. And again, we all come and do this because we are so excited about the tremendous upside and the incredibly positive impact.
And I think it would be a moral failure not to pursue that for humanity, but we have to address the downsides that come with it, and this happens with a lot of other technologies. And that doesn't mean you don't do it, and it doesn't mean you just say, this AI stuff, we're going to, you know, go full Dune, blow it all up, and have no computers or anything like that. Um, but it means you think about the risks, try to measure what the capabilities are, and try to build your own technology in a way that mitigates those risks.
And then, when you say, hey, here's a new safety technique, you make it available to others. And as you think about moving in this direction, what are some of the specific safety risks you're looking at? I mean, as Sam said, you have the capabilities, and when you have such an immense capability, there's always a downside. So we have a tough task ahead of us to figure out what these downsides are: discover them, understand them, and build the tools to mitigate them. And it's not, you know, a one-size-fits-all solution; you usually have to intervene everywhere, from the data to the model to the product tools.
And of course, policy. And then think about all the regulatory and societal infrastructure that can keep up with the technologies that we're building. Because ultimately what we want is to deploy these capabilities gradually, in a way that makes sense and allows society to adapt. Because, you know, progress is incredibly fast, and we want to enable adaptation, and we want all the infrastructure necessary for these technologies to be productively absorbed to exist and be there. So, you know, when you think about the concrete safety measures along the way, I would say number one is actually deploying the technology and slowly coming into contact with reality: understanding how it affects certain use cases and industries, and actually dealing with the implications, whether it's regulation or copyright, you know, whatever the impact is, really absorbing that, dealing with it, and then moving on to more and more capability.
I don't think that building the technology in a laboratory, in a vacuum, without contact with the real world and the friction you get from reality, is a good way to deploy it safely; that's not the path we're on. But it seems like you're also checking yourselves right now, right? You're setting this up as you go. And Sam, that's where I was going to ask you: I mean, it seems like you spend more time in Washington right now than Joe Biden's dogs. I'm sure, I've only been twice this year. Actually, I think the dog beats you by about three days or so.
Anyway. Um, but what specifically would you prefer the government and regulators to do, rather than what you have to do? First, I think what Mira was saying is really important: it's very difficult to make a technology safe in the lab. Um, society uses things in different ways and adapts in different ways. And I think the more we deploy AI, the more AI is used in the world, the safer AI becomes, and the more we, like, collectively decide, hey, here's a thing that's not an acceptable risk tolerance, and this other thing that people were worried about, that's totally fine.
Um, and, you know, as we've seen with a lot of other technologies, airplanes have become incredibly safe, even though they didn't start out that way. It was careful, thoughtful engineering, um, understanding why, when something went wrong, it went wrong and how to address it, and, you know, best practices shared along the way. I think we'll see, in all kinds of ways, that things we worry about with AI in theory don't play out in practice. Um, people like to talk a lot right now about deepfakes and, you know, the impact they're going to have on society in all these different ways.
I think that's an example of something we were thinking about too much in last-generation terms, that it would disrupt society in all these ways. But, you know, we all go, oh, that's so fake, or, oh, that could be a big hoax. Whether it's an image, video, or audio, we learn it quickly. But maybe the real problem, and it's hard to know in advance, this is speculation, is not the deepfake ability but the one-on-one personalized-persuasion kind. And that's where the influence happens. It's not the fake image. It's that these things have a subtle ability to influence people, and then we learn that that's the problem and we adapt.
So in terms of what we would like to see from governments: I think we have been very mischaracterized here. We believe that international regulation will be important for the most powerful models, nothing that exists today, nothing that will exist next year. Uh, but as we get closer to real superintelligence, as we get closer to a system that is more capable than any human, I think it's very reasonable to say that we should treat that with caution and with a coordinated approach. But we think what's happening with open source is great. We believe that startups should be able to train their own models and deploy them around the world, and a regulatory response against that would be a disastrous mistake for this country or any other.
Um, so the message we're trying to convey is: you have to embrace what's happening here. You need to make sure we get the economic and social benefits of it. But let's look at where we think this could go, and let's not be surprised if it gets there. You mentioned deepfakes, and I want to talk about AI-generated content that's on the internet now. Who do you think is responsible, or should be responsible, for policing any of this, or not policing but detecting any of it? Is it on the social media companies? Is it on OpenAI and all the other AI companies?
We are definitely responsible for the technologies that we develop and release, and, you know, misinformation is clearly a big problem as we create more and more capable models. And we've been developing technologies to deal with the provenance of an image or a piece of text and to detect AI output, but it's a little complicated, because, you know, you want to give the user some flexibility, and you don't want them to feel surveilled either. So you have to consider the user, and also the people who are affected by the system but are not users. These are quite nuanced topics that require a lot of interaction and input, not only from the users of the product but also from society at large, and, you know, from the partners who take this technology and integrate it, to figure out
what the best ways to address these problems are. Because right now there is no tool, at least not from OpenAI, where you can take an image or a piece of text and ask, is this AI-generated? Actually, for images we have technology that's really good, almost, you know, 99% reliable, but we're still testing it. It's early, and we want to be sure it's going to work. And even then, it's not just a technological problem: misinformation is a very broad and nuanced problem. So you still have to be careful with how it's deployed and integrated.
Um, but we're certainly working on it on the research side, and for images, at least, we have a very reliable tool in the early stages. Yeah, and let's say it pans out; when could you release this? You said you're working on this right now. Is this something you plan to release? Oh yes, yes. For both images and text, we're trying to figure out what really makes sense. Um, for images, it's a more direct and simple problem. Um, but anyway, we'll definitely try it, because we don't have all the answers, right? Like, we're building these technologies first; we don't have all the answers.
Very often we experiment: we release something, we get feedback, but we want to do it in a controlled way, right? Um, and sometimes we pull it back, improve it, and deploy it again. I'll add to that, too. I think this idea of watermarking content is not something where everyone has the same opinion about what is good and what is bad. There are many people who really don't want their generated content to have a watermark, and that is understandable in many cases. Uh, and it won't be super robust for everything. Maybe you can do it for images, maybe for longer text; maybe not for short text.
But over time there will be systems that don't put in watermarks. And there will also be people who will say, you know, this is a tool, and it's up to the human user how they use the tool. And I don't know; this is why we want to participate in the conversation. Like, we are willing to follow the collective wishes of society on this point. And I don't think it's a black-and-white issue. I think people's views are still evolving: as they understand the different ways we're going to use these tools, their thoughts on what they'll want here are still evolving. This also goes back to Sam's earlier point.
It's not just about truthfulness, right, about what is real and what is not real. I actually think that in the world we're moving towards, the biggest risk is really this individualized persuasion, and how to address that is going to be a very complicated problem. I realize I have five minutes left, and we were going to take some questions from the audience, so we can get to one or two. I'll squeeze in one last thought here. Um, I can't actually see anything out there. So I'll ask one last question, and then hopefully we'll have time for one or two from the audience.
So, 10 years ago you were here; we touched on that as we were starting. But what is your biggest fear about the future, and what is your greatest hope for this technology? I think the future will be incredibly great. Uh, we wouldn't work so hard on this if we didn't. I think this is going to be one of the most important inventions that humanity has made so far. Um, so I'm really excited to see how it all plays out. I think things can be a lot better for people than they are now.
And I have high hopes for that. We've covered many of the fears. Again, we're clearly dealing with something very powerful that will impact all of us in ways that we can't perfectly foresee. Um, but what a time to be alive and witness this. You're not so afraid. I wasn't going to ask this, but I'll ask it now: do you have a bunker? This is, this is the question. I'm going to let the clock run; I'm not going to pay attention to that. But as we think about the fears, I'm just wondering, what happens if you do have a bunker? And what would you say you have? I have, like, structures, but I wouldn't say bunker-like structures.
None of this will help if AGI goes wrong. To be honest, it's a ridiculous question. OK. Well, well, well, look. What is your hope and your fear? I mean, the hope is definitely to boost our civilization by increasing our collective intelligence. And our fears: we talk a lot about fears, but, you know, we have this opportunity right now. Um, and we have had summers and winters in AI and so on. But, you know, when we look back in 10 years, I hope we got it right. And I think there are many ways to mess it up.
Um, and we've seen that with a lot of technologies, so I hope we get it right. Alright. We have time for a question right here. Hello. Um, Pam Dillon, Preferabli, sensory consumer products. My question has to do with the turning point. We are where we are with respect to AI and AGI. What is the tipping point? How do you define the moment when we go from where we are now to, however you would choose to define it, AGI? I think it will be much more continuous than that. We're just on this beautiful exponential curve.
Whenever you're on a curve like that, if you look forward it looks vertical, and if you look back it looks horizontal. That's true at any point on the curve. So a year from now we will be in a dramatically more impressive place than we are today, and a year ago we were in a dramatically less impressive place, but it will be hard to put your finger on the moment. People will try to say, oh, it was AlphaGo that did it, it was GPT-3 that did it, it was GPT-4 that did it. But it's just brick by brick, one foot in front of the other, going up this exponential curve right in front of us.
Thank you. My name is Mariana Miguel. I'm the CIO of the Port of Long Beach, but I'm also a computer scientist, by training a few decades ago; I'm older than you. I remember working with some of the first AI people. I have a general question. I agree with you: this is one of the most important innovations that has occurred. One of the things I've struggled with over the last 20 years when thinking about this is that we are about to change the nature of work. It is that significant, and I think people are not talking about it. There will be a significant period of time, a transition, where a significant population in the world and in this country will not have had the kinds of discussions about work and the meaning it holds for them.
For society to be part of this, as you mentioned, it has to be in the discussion, and there is a large part of society that is not even in this discussion. And the nature of work will change. It used to be that only certain things were going to be automated. There will come a time when people who have defined themselves by work for thousands of years will not have that, and we are rushing towards it. What can we do to make sure we take this into account? Because when we talk about society, it's not as if everyone is together, ready to discuss this.
Some of the effects of some of the technologies that we have brought into the world have actually separated people from each other. How do we get some of those, not regulations, but how do we create some of those frameworks, and voluntarily accomplish things that actually result in a better world that doesn't leave everyone else behind? Thank you. OK. I'll give you my perspective. I completely agree with you that this is one of those technologies that could really increase inequality and make things worse for us as human beings and as a civilization. Or it could be, you know, really amazing: it could bring a lot of creativity and productivity and improve our lives. And, you know, maybe a lot of people don't want to work eight hours a day or 100 hours a week; maybe
they want to work four hours a day and do a lot of other things. And, you know, I think it's certainly going to cause a lot of disruption in the workforce, and we don't know exactly the magnitude of that, um, or the trajectory along the way, but that much is certain. And one of the things that, in retrospect, it's not that we specifically planned it, but in retrospect I'm glad about, is that with the release of ChatGPT we kind of brought AI into the collective consciousness, and people are paying attention, because they don't just read about it in the press.
Um, people aren't just told about it; they can actually play with it. They can interact with it and get a sense of its capabilities. That's why I think it's really important to bring these technologies to the world and make them as accessible as possible. Um, you know, Sam mentioned before that we're working very hard to make these models cheaper and faster, to make them very widely accessible. I think that's key: for people to really interact with the technology and experience it, um, and sort of visualize how it could change their way of life and their way of being, and to participate, as you said, by providing feedback on the product.
But also, you know, institutions need to prepare for these changes in the workforce and the economy. I'll give you the last word. Yes, I think it's a super important question. Um, every technological revolution affects the job market, and throughout human history, you know, roughly every 100 years, and you hear different numbers, but over about 150 years, half the kinds of jobs disappear or totally change. Um, I'm not afraid of that at all. In fact, I think that's good. I believe that this is the path of progress, and we will find new and better jobs. What I think we do need to confront as a society is the speed at which this is going to happen.
What used to play out over, you know, two, at most three, probably two generations, society can adapt to; given that time, society can absorb almost any amount of change in the labor market. But a lot of people like their jobs, or don't like change, and going to someone and saying, hey, the future is going to be better, I promise you, and society will win, but here you will lose: that doesn't work. That's not right. That's not a nice message, and it's not an easy one to deliver. And although I firmly believe that we are not going to run out of things to do, people who want to work less will probably be able to work less.
But, you know, probably many people here don't strictly need to keep working, and yet we all do. There is great satisfaction in expressing oneself, in being useful, in contributing something to society; that isn't going away. Uh, that's such an innate human desire that evolution doesn't work that fast. Uh, also, the ability to express yourself creatively and to leave something behind, to add something to the trajectory of the species, is a wonderful part of the human experience. So we're going to keep finding things to do, and people in the future will probably think that some of the things we do are very silly and not real work, in the same way that a hunter-gatherer
probably wouldn't think this was a real job either: you know, we're just trying to entertain ourselves with some silly status games. That's fine with me; that's how it works. Um, but we really are going to have to do something about this transition. It is not enough to just give people a universal basic income. People need to have agency, the ability to influence this. We need to be, together, architects of the future. And one of the reasons we feel so strongly about deploying this technology the way we do is that, as you said, not everyone is in these discussions, but more people are every year.
And by putting this in people's hands, making it widely available, and getting billions of people using ChatGPT, people not only have the opportunity to think about what's coming and participate in that conversation, um, but people use the tool to push the future forward. Um, and that's really important to us.
