
Jaron Lanier Looks into AI's Future | AI IRL

May 10, 2024
You feel relaxed? Yes. You have to be.

Jaron Lanier is among the OG Silicon Valley utopians who had grand visions of what the internet could be long before we had PCs, cell phones, or social media. He pioneered virtual reality in the '80s. He has been a fierce critic of the industry he helped build ("we are used to the idea that the internet should be mysterious, we don't know where things came from, but that's a very bad way to think about AI"), yet he chooses to remain a part of it. So in this episode of AI IRL, the insider gives us his unfiltered look at the dark side of technology and what that means for the future of AI.

Jaron Lanier, welcome to the show. You're now Microsoft's prime unifying scientist.

Although I'm speaking in a personal capacity here.
I am fascinated by the idea of being a prime unifying scientist; that is, how exactly does it work?

The title is funny. I report to the CTO, and the Office of the CTO combined with Prime Unifying Scientist spells OCTOPUS. I somewhat resemble one, or so my students tell me, and I'm also very interested in their neurology; octopuses have incredible nervous systems. So we thought it would be an appropriate title.

I totally agree, and we have a long list of things to talk to you about, because you have a fascinating career that spans many decades, and I think it might be helpful to just set the stage here.

I know 40 years ago you were pioneering technologies at companies like Atari, and I'd love for you to tell us how you felt about the trajectory of the industry back then and how it compares to what you expected.

It's an interesting question. I don't think anyone has asked me that before. Assuming I remember those days, pretend we're in a time machine. Well, in the late '70s and early '80s, I had a mentor named Marvin Minsky, who I worked for as a young researcher, a teenager, and he was the main author of the way we think about AI these days. Many of the little tropes, stories, concerns, and ways of thinking come from Marvin.
I hated it. I always thought: this is a terrible idea. AI is just a way for people to work together; it's just people and masks. We're just confusing things. Why are we trying to build this entity, you know, like an artificial God or something like that? Marvin loved to argue with me, because everyone else in his lab totally agreed with him, so the last time I saw him before he died he said, "Jaron, can we have the argument?" It was great to be able to have it one last time. So I always thought: no, no, no.
No, this whole idea is wrong. And that led me to this other way of thinking that was supposed to represent the alternative, and I called it virtual reality. Then I had the first startup in virtual reality (I named the field), and we ended up making the first head-mounted virtual reality displays.

Were you optimistic about where things were going?

I am even more optimistic now, but I think that to be an optimist you must have the courage to be a fierce critic. It is the critic who believes that things can be better; the critic is the true optimist, even if he does not like to admit it, even if he does not want to feel soft and squishy. The critic is the one who says this can be better, and the alternative is acquiescence: it's saying "no, this is it," and those people are a little useless. So I was critical.
So I used to fight with Marvin about the idea of AI. Since then, I've been very critical of the way we did social media. I've been very critical of many things, and I keep it up; it's the true face of optimism.

Let's talk about misinformation and its links to advertising, which is embedded in the nature of social media. Generative AI is disrupting a lot of things, but it still seems like advertising is inextricably linked to how companies will monetize this technology. What is your opinion on this?

Well, look, I don't speak for Microsoft, but I want to say to anyone who says, "oh, I wish I didn't have to pay a monthly fee to use the highest-quality AI, I'll use the free one": that fee is your freedom fee. It means that the mechanism that pays for it is not your manipulation. There are really only two options if we are in a market economy. There has to be a customer, so either the customer is you, or it is someone who wants to manipulate you. Those are the options. So we have to get to the point where we accept that to be participants in a market economy, we have to be full participants in a market economy, or we subjugate ourselves to those who are. We cannot expect things to be free. Yeah, I mean, I wish we could. I don't like spending money any more than anyone else, but the thing is, in a market economy, if you are not willing to pay for what you get, it means someone else is getting some benefit by manipulating you. I mean, it's just logic; it's inescapable.

We are sailing somewhere close to the realms of open source here, and there is some debate in AI about whether the push towards certain open source models could potentially exacerbate the risk of misinformation. I'd love to hear your thoughts on that.

Well, look, this is a bit of a complex topic, and it's one on which my opinions sometimes elicit hostile responses from my peers, but let me try to explain my thinking on the matter. I think the idea of open source comes from a really good place. I think the people who believe in it believe that it makes things more open, democratic, honest and secure. The whole problem is the math of network effects. Let's say you have a group of people who share things for free; it could be music, it could be computer code, it could be all kinds of things, and you say: oh, we're being very communal here, we're sharing equally, it's a giant barter system. The thing is that the exchange of all that free stuff will tend toward monopoly, because of the mathematics, and it will accrue to the greater glory of something like Google or whatever. And then you end up not with decentralization but with hyper-centralization, and then you are incentivized to keep certain things very secret and proprietary, like your algorithms, or derived data about how things correlate, which is actually very expensive to generate. So I think this idea that opening things up leads to decentralization is just mathematically false, and we have endless examples to prove it. And yet it remains a widespread belief, because it feels really good to believe it, but it's just something that hasn't worked.
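[A minimal sketch of the "math of network effects" described above, assuming a standard preferential-attachment ("rich get richer") model; the platform counts, user numbers, and seed are illustrative, not from the interview. The point is only that identical platforms under this dynamic end up with highly unequal shares.]

```python
# Preferential attachment: each new user joins a platform with
# probability proportional to its current size.
import random

def simulate(platforms: int = 10, users: int = 100_000, seed: int = 1) -> list[int]:
    random.seed(seed)
    sizes = [1] * platforms  # every platform starts with a single user
    for _ in range(users):
        # weighted choice: bigger platforms are proportionally more attractive
        winner = random.choices(range(platforms), weights=sizes)[0]
        sizes[winner] += 1
    return sorted(sizes, reverse=True)

sizes = simulate()
total = sum(sizes)
# identical starting conditions, yet the final shares are wildly unequal
print([f"{s / total:.0%}" for s in sizes])
```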
But I guess what I mean is that part of the argument seems to be this idea that artificial intelligence systems, and in particular large language models, are a black box and we can't see how they work, and therefore if we have an open source model, we can at least see how it works.

Well, there is a slightly different idea of openness that I think would be to everyone's benefit: the owners of the large models, the users, society in general, everyone. Instead of publishing the code, which simply tends to support these emerging monopolies (which is why some of them support doing that), what we should do is have provenance. Let me give you an example. Let's say you have an interaction with a chatbot, and the chatbot gets weird with you and says, "oh, I love you, you need a divorce, we should be together." How do we avoid that? One answer is that you want AI to govern the AI: you have a second AI looking at the first AI and saying, "don't be so weird, stop that." But that gets hard, and the reason it gets hard is the limitations of language itself. This is a problem humanity has explored for thousands of years. Think about the ancient stories of the lamp with the genie: you rub the lamp, the genie comes out and grants you wishes, and if you are unlucky it is a clever genie, and no matter what you say, your words get twisted the wrong way. This is exactly the problem with AI governing AI: words can always be twisted. Words are actually more ambiguous than we think. We go through life thinking that words are precise, but they're not, and that's why large models work in the first place: they treat language as a statistical thing.

But there is another way of doing it. Every time you have an output from an AI, of course, in some diffuse way it is based on millions or billions of examples from humanity in general: billions of photos, billions of examples of text. But in any particular output, only a small number will really matter, maybe a dozen; my hand is doing this because it's a little statistical mound, a distribution. I'm doing a little math here. So if you can track down who the most important people were who entered the things relevant to your result, then let's say everything gets weird, and you ask the bot why it said that, and the bot says, "oh well, I drew some stuff from some sexy fan fiction and some soap operas," and then you say, "you know what, don't use those things." And all of a sudden you know basically where the sources are that were combined to create the AI result. You see, the magic of AI as we know it is that you can take examples from people and combine them coherently. You can have a recognizer for cats and a recognizer for hot air balloons and say, "put a cat in a hot air balloon," and to satisfy both recognizers you get this new thing that does both, and suddenly you have a cat in a hot air balloon. It's great. But you can always go back to the sources; it's just that we don't, because we're used to the idea that the internet should be mysterious, we don't know where things come from. But that's a very bad way to think about AI.

And because you're a business-oriented channel: a wonderful thing happens once you can attribute who contributed to an AI result. You can then (a) incentivize people to input new data that improves the AI, and (b) start compensating people. So instead of saying, "oh, we're just going to displace all these workers and they're going to have to live on a universal basic income," which I think is a bad trajectory and a bad outcome, because whenever you have everything concentrated in a single payer, society becomes a temporary situation: you start with the Bolsheviks and you end with the Stalinists, because the worst people want to control that central thing.
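[Likewise a minimal, hypothetical sketch of the provenance idea Lanier outlines, assuming a toy embedding-similarity approach: rank the training items nearest a given output and credit their contributors. Real attribution research uses heavier machinery, such as influence functions; every name and number here is an illustrative assumption, not an actual system.]

```python
# Toy provenance: given an AI output, surface the handful of training
# examples (and their human contributors) most similar to it, so those
# people could be credited or compensated.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in embeddings: 1,000 training items in a 64-dimensional space,
# each tagged with the (hypothetical) person who contributed it.
train = rng.normal(size=(1000, 64))
contributors = [f"contributor_{i}" for i in range(1000)]

def provenance(output_vec: np.ndarray, k: int = 12) -> list[tuple[str, float]]:
    """Rank the k training items closest (by cosine similarity) to the output."""
    t = train / np.linalg.norm(train, axis=1, keepdims=True)
    o = output_vec / np.linalg.norm(output_vec)
    scores = t @ o                       # cosine similarity to every item
    top = np.argsort(scores)[::-1][:k]   # indices of the k best matches
    return [(contributors[i], float(scores[i])) for i in top]

output = rng.normal(size=64)  # embedding of some generated result
for who, score in provenance(output, k=3):
    print(f"{who}: similarity {score:+.2f}")
```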
Let's talk about this, because you call it data dignity: the notion that if I publish data, I should be compensated, especially if it's used to train algorithms. How does that work in practice?

Well, it doesn't right now. I mean, to do that we have to calculate and present the provenance of which human sources were most important for a given AI output, and we currently don't do that. We can, though, efficiently and effectively; it's just that we haven't done it yet. It has to be a social decision to go do that, and then, you know, there are a lot of little psychodramas where people say, "well, I don't know if I'd like to get paid, I like just being out there," or whatever. I think there is a lot of room to adjust how this would work in detail, but the point is that I don't want to create more people who depend only on a state payment to survive. I want to create more, if you like, creative classes: people who are really good at providing new data that makes models work better, so that everyone benefits.

In the '90s you talked about how chatbots could eventually affect democracy and elections, and that was about 30 years ago.

Ah, the good old days. When I wrote about this, much earlier, back in those days, instead of chatbots we called them agents, which sounds scarier; the Matrix movies used that language to make their AI characters look like secret agents, and I think they were actually quite scary.
But in the early days I called them agents: they were going to be these agents that help you in life. And then I thought: no, they're going to be corrupt, this is going to be stupid, it doesn't make any sense, it's the wrong way to do it. It's always been obvious.

How do you feel about the chatbots we have now in relation to the risk to democracy?

We will find out very soon. I'm scared. I'm scared because the last round, the entry of social media into politics, left many people with various forms of fear and concern, and that has actually resulted in certain levels of checks and corrections within the large technology companies being scaled back. So in one sense we could be worse off. But in another sense, I feel like the tech culture, or just the collective sensibility of the engineers who actually build everything, has matured a lot in general.

Do you trust the technology leaders behind the AI models that exist right now?

Well, I know most of them, and maybe that biases me, but I think we're in a pretty fortunate place.

Do you think they learn from their mistakes, essentially?

I hope so. I mean, we haven't really had the mistakes to learn from yet. What do I mean by that? Well, what about social media? Social media is giant, but it precedes the big models. I mean, it is AI; you can think of social media as halfway to the big models, in that AI (I don't like the term; call them algorithms) targets people through social media. So you can think of it as something algorithm-driven, and that didn't work well for society in general; it works very well for some people, and for some, you know, it's a very complicated story. But many of these companies are the same companies that collectively made those mistakes, and we as users either allowed that to happen or pushed back, and now many of those same companies, engineers and people are doing these new things. So I wonder what they have learned.

If I can say what I think they should learn, okay: look at the business models of the tech titans.
There are actually two in the US that are true titans depending on the advertising business model, and those would be Meta and Google, or Alphabet. I just want to point out the tech titans that don't depend on it (we do some of that, but don't depend on it): companies like Apple, Microsoft, Amazon. You can complain about us, you can complain about these other companies, and I hope you do; I think we should be held responsible, so do it, do it. But they don't cast the same kind of strange, creepy darkness into the world. And the ad-dependent companies' market caps are also smaller. I mean, what I'm saying is that Meta and Google are undervalued because they have the wrong business model; for all the things they do, they really should be more successful. Like, I think this is a stupid business model, and an authoritarian platform is going to overtake us, which is exactly what is happening with TikTok. So why are we doing it? It works a little, but not maximally, in my opinion. So why are we doing it? It's just a habit, it's inertia, it's the hassle of changing. But we do it.

One of the big issues coming up, in terms of everything from misinformation to democracy to social media to AI, is deepfakes, and it's very hard to argue that they don't have the potential to become a lot more serious as a result of AI and what we've seen with generative models over the last few years in particular. I'd love to hear your thoughts on whether there's a way we could prevent this from becoming the next big problem.
Can I mention a little mea culpa in that regard? Back in the '90s, some friends and I had a small machine vision startup, and I think we made the first real deepfakes.

So you're to blame.

I'm absolutely to blame, yeah. We actually used that first deepfake system to block out scenes for Minority Report, and that got incorporated into the script, but it was based on a real, early prototype of a deepfake. But okay, the answer to deepfakes is provenance, like I was saying before. If you know where the data is coming from, you don't worry about deepfakes anymore, because you can tell where a thing came from. The provenance system has to be robust and not fakeable, and it should say: oh yeah, well, this is a combination; this was someone from Chinese military intelligence who combined this thing and that thing and that thing. And okay, cool, get rid of it. Provenance is really the only way to combat fraud.

Actually, I think what should happen is that regulators should get involved. All of us, Microsoft, OpenAI, everyone, should go before regulators around the world and say: regulate us, this is important. So the question is how. If regulation takes the genie approach, where you say we're going to have an AI judge whether the first AI was good or not, it becomes an infinite regress. If you base it on data provenance, suddenly you have something solid; you're not using terms that nobody can define anymore. Everyone says that AI has to be aligned with human interests, but what does that mean?
"Aligned" is a very complicated word. I worked on privacy for a long time and helped start what became the privacy framework in Europe, the GDPR, and I have to say that I'm still not sure we know what privacy is in the context of the internet; it's a very complicated concept.

So that's where legislators come in and say: this is how we should regulate it. But my question to you is: do you think politicians are too afraid to really challenge, well, you know, technology?

This is a very good question. I feel like there was something we in tech culture did for too long, where we would go to Congress, or a parliament in some other country, and intimidate the people there: oh, the senators and congressmen don't understand technology, they're idiots, we are the smart ones, they can't tell us anything. This always happened; it was this constant "can you believe what an idiot that senator was." And this went on for years and years and years, and I think we intimidated them to the point where they became timid, in a way that really hurts us. And now everyone in AI of any scale goes and says: we actually want to be regulated; this is a place where regulation makes sense.

When does that relationship between big tech leaders and policymakers become too cozy?

It's already cozy. I don't know; I mean, that panel, that hearing with Sam Altman and the people in it, that was a cozy hearing, right? You know, I'm serious. It felt cozy at times, but not all the time, and there was a general rhetoric of "please regulate us." Yes, and I mean it: I think we want to be regulated, because everyone can see, especially if you think of this as the social media problems times a thousand, that we want to be regulated. We don't want to ruin society; we depend on society for our business. And, uh, it's a little ambiguous; there is also a kind of libertarian vein in tech culture holding that all regulation should be suspect. But that's clearly wrong: regulation is the layer on top of which we can do free enterprise. Without it, we don't have the order within which we can function; otherwise it's just a kind of survival of the fittest, and I'm not saying that leads nowhere. It leads to natural evolution, but very slowly, whereas markets are fast and creative, and you don't get that without a stable layer created by regulation.
Have you talked to Sam Altman recently about your concerns?

Oh yeah, all the time. I won't speak for Sam, certainly, but you know, I'm very comfortable working with people I don't completely agree with, and we agree more than you might think. Sam wants to make a universal, scan-based currency, a cryptocurrency, to reward people once the AI does all the jobs. Obviously that's not within my framework of recommendations; I don't think it's a good idea. I think some criminal organization is going to take it over, no matter how robust they try to make it. Look at crypto: cryptocurrencies are mathematically perfect, and then at the edges it's all criminals, fraud and incompetence.

There is some precedent for speaking your mind like this, isn't there?
Yes. You know, I'm of the opinion that big tech companies have become so important to society that it's really useful to show that you can have freedom of expression within them, and that people still buy the products and buy the shares and it's not some giant catastrophe. I've tried to create an existence proof of that, where I can say things that are not official Microsoft positions. I keep at it: I spend all day working to make things better at Microsoft, and I'm really proud that people want to buy our stuff and want to buy our stock. I like our customers, I like working with them, I like the idea of making something that someone loves enough to pay you money for it. That, to me, is the market economy I want; I like that economy and I enjoy participating in it. And what I would like to do is persuade my friends at some of the other tech companies that talking a little about their situations might actually be healthy; it might be good for them. I think it would really improve the business performance of companies like Google and Meta. You know, they're notoriously closed, they don't have people speaking out, and I think they suffer for it. You might not believe it, because they are big successful companies, but I really think they could do better.

What would make you walk away?

I don't know. I mean, of course you can always think of some scenario. I don't know what it would be.

But do you have something like a line in the sand? You know, a "this is it"?

No. It would have to be based a lot on the circumstances at the time and the trade-offs. I don't think you can really draw a line on something like this, and it's very personal. And by the way, it's not just me; there are four or five other people at Microsoft who have public careers where they speak their minds, and I think it's been a successful model; it's worked for us. Do I agree with absolutely everything that happens at Microsoft? Of course not. I mean, listen, it's a big thing; it's as big as a country, you know, and of course there's all kinds of stuff. So I don't think being a kind of pure perfectionist is very functional. I don't think it helps anyone, although here in the Bay Area there are a lot of people who try to be that way. But I really think you always have to try to find the balance, and it's never perfect.

Jaron, this has been a fascinating conversation. Thanks for taking the time.

It was brilliant. Thanks for inviting me. Thanks.
