Intelligent Thinking About Artificial Intelligence

Apr 09, 2024
Welcome everyone to this conversation about AI, AR, VR, maybe some other two-letter abbreviations, who knows where it will go, but it's an exploration of these ideas with Jaron Lanier. Let me give you a quick introduction; I'm sure most of the audience is quite familiar with who he is, but he is the prime unifying scientist at Microsoft, where he helps lead the development and implementation of artificial intelligence systems in the company. He is a leading computer scientist who has done pioneering work in the field of virtual reality, a best-selling author who has written extensively about technology and its impact on society, a musician with a deep interest in unique and historical musical instruments, and a composer of new classical music. In 2018 he was named one of the 25 most influential people in the previous 25 years of tech history by Wired magazine, which is pretty good; he has also been named one of the 100 most influential people in the world by Time magazine and one of the 100 most important public intellectuals in the world by Foreign Policy magazine. So that's quite a variety of credentials. So welcome, Jaron, it's very nice to see you. Hi Brian, good to see you, how's everything going?
Oh, it's great, it's great. Yeah, you know, as I was preparing for this conversation, I was thinking about the first time we met. I don't know if you remember; it was about 25 years ago. I arrived at your apartment in Manhattan, which was very similar to what I'm seeing in the background of your shot right now, filled with instruments. That's right, yeah. You know, I remember walking delicately because there was hardly any space on the floor that wasn't covered by an instrument. Yes, since then I've learned to put them more on the walls; having a kid taught me the value of having floor space. So yeah, no question, no question. So this won't be the focus of our conversation per se.

But where did that fascination with musical instruments come from? I think it came from my mother. What happened was that she was a Holocaust survivor who kept me very close to her, and before the Nazis took her away at age 13 she had been a prodigy performer in Vienna, and she really valued teaching me how to play the piano and some other instruments, and then she died when I was very young in a car accident, and somehow music became my continuing connection with her, and in particular this experience of learning a new instrument, so I keep learning new ones. All I can say is that I think it's still cheaper than heroin, and as far as ridiculous obsessions go I hope it's one of the less harmful ones, but it gets a little extreme. But anyway, that's what I am and I won't pretend to be otherwise; I'm an instrument fanatic, that's how it is. That's a remarkable story. And then do you have a particular instrument that you consider your instrument, or do you really just keep expanding the repertoire? Well, not really; I mean the piano a little bit, I guess, it's this black covered thing right there, but it always varies which ones I'm playing more than others, and I've been learning all types of instruments from different cultures for decades, so who can understand it?
Yes, you know, we're going to post this, of course, but I also remember that you and I were part of a music festival together in California that Philip Glass was putting on; I was collaborating with him, right, and you were also collaborating with him. Uh, yeah, Philip and I did a lot together; he produced some of my records back in the '90s, when we were kids, I guess, and we still played together at least once a year until recently, and, you know, he's getting on in years, and I don't know what the future holds, but I really love the guy. Yeah, no, he's pretty wonderful. I mean, our collaboration with him, you know, the World Science Festival collaboration with him, is really a highlight of the performing arts things that we've done. You know, it also turns out, and I suspect you're not aware of this, that our lives intersected even before that, at least in principle, because when I was at IBM, back in the 1980s, I worked for a guy named John.
I don't know if you know that name, but he introduced me to Marvin Minsky and Ed Fredkin at MIT, and I understand that those guys, especially Marvin, had a profound influence on you. Right, yeah, Marvin was the one I disagree with on absolutely everything. Marvin was a surprisingly generous mentor to me; he was my boss. I was hired as a young researcher at MIT when I was, I think, still a teenager. Really? Yeah, it's been a long time. And I love Marvin, and I really deeply disagree with him on almost everything, but there it is. And Ed Fredkin of course was a friend, but no, we never worked together. But Marvin, yeah, a very important person in my life, and pretty much Marvin, I guess along with John McCarthy and maybe others you could tell me about.
I don't know the history very well, but this is really where artificial intelligence began. Ah yes, well, the story; there is some history I have a high degree of confidence in being able to relate to you because I was there, but some of this is before my time, so I have to fall back on trusting what people tell me, and they tell me different things, so I have to do my best to infer. It's one of those things, like virtuality, where if you want to find an antecedent that goes back a long way in history you can find it with the ancient Greeks or whatever, but I think a good point of departure is Ada Lovelace, the first
programmer. She thought about it and decided that AI was not a good way to think, and that it's just the human programmer we should focus on as the entity, the human programmer; that's where she landed on that. And then probably the next great thinker, I think, would be E. M. Forster, the famous novelist, who wrote a vision of what the future of the Internet and computing could be like in, I think it was, 1907; that's a little bit off, but it was that first decade of the 20th century, which is kind of a surprising act of premonition.
I mean, it's almost supernatural, and it makes me wonder if time is what it seems to be, because it seems impossible that he could have written it. And it also had a form very similar to a technical paper, or, well, really it's a novella, that's what we normally call it, an intermediate point between a short story and a novel, and it's the prototype of the dystopian science fiction novel. 1984 and Brave New World and the Matrix movies and many others obviously follow the plan laid out by The Machine Stops; they all have a similarity in the characters and the general themes. And that was a deeply humanist work that was critical of the idea of putting too much value on the machine as a center or as an entity. Oh, and I should mention Vannevar Bush, who thought of computers and networks and algorithms as a way to help people, but that was still person-centered. Where we really see the tables turn is with Turing. Turing turns it around and proposes, well, maybe we should think of these things as entities, but what I always ask people to do is think about the context in which Turing did that. Here is a guy who helped win World War II, who was regarded as one of the most important fighters against the Nazis, a savior of all those who didn't want to be killed because of their arbitrary identity and couldn't do anything about it.
Yet here he was after the war, condemned precisely because of his identity, which he couldn't do anything about, because being gay was illegal, yes, and he was forced to undergo a strange quack medical treatment in which he was given female hormones. To understand why someone would try curing homosexuality with female hormones, which sounds very twisted, the metaphors at the time, before computers, were about the steam engine, and the idea was to equalize the pressures, so maybe the opposite hormone would equalize the pressure; I don't know, trying to make sense of something stupid can only go so far. But anyway, he was starting to develop female secondary sexual characteristics that he didn't want, and as far as we know he killed himself, in quite a poetic way, by eating a cyanide-laced apple in front of the first computer, a kind of anti-Eve or something. The Turing test, his famous idea, was written right in this last phase of his life before his death, and he proposes that, really, the only judge of whether something is a person is another person, and if that person can't give you a rational basis for saying this one is a person and that one is not, then you have to call them both people; you don't get to make that distinction on your own. And I think that idea came from a deeply absurd, dysphoric place, because that's where his life was, and I think we have to understand it almost as a criticism of what was happening to him more than as an idea to take at face value. Right, and you could say that I don't have any right to interpret Turing to that degree, but then who really does, you know. Yeah, sure, okay. Then, after Turing, the next
major figure turns it around again, and this is where we get to Norbert Wiener. So Norbert Wiener says, you know, there is a standard model of what a computer is, which Turing developed together with the amazing mathematical physicist John von Neumann, who was almost a person of such surreal talents that it is very difficult to even treat him as real, but there it is. Anyway, the Turing/von Neumann machine is an abstract device that receives an input, a tape of values, processes it, and then either runs forever or comes to an end and gives you an answer. Now Norbert Wiener looked at this and said, mathematically this is very general, this is great, but it doesn't help us understand the real world, where we see organisms and we see complex systems that interact with the climate and the oceans and, yes, all these things, so let's reconceive this in a different way, in what I'm going to call the cybernetic way, which was his term; cyber comes from the ancient Greek for to steer or navigate. And he said, instead of thinking of a system as just logic gates,
let's think of it as a tangled network of thermometers, let's say: we have measurement and feedback networks, and the thing is always on, it's always connected, so when the environment around it changes, what it gives back to the environment changes. Now, strictly from a computability point of view they are equivalent, but in terms of the way they can be placed in their environment they are different. That is part of his technical legacy, and at that time he was quite a celebrity scientist; he is less well known now, but he pushed back on Turing's framing.
He wrote, for example, a book called The Human Use of Human Beings, and he ends it with this crazy thought experiment, which is: what if you could give every person in the world a small radio-connected device that would give them some kind of experience based on what's happening on some central computer somewhere, and that central computer was watching how they behaved and changing the signal they received in a feedback loop, to put them in a behaviorist kind of experiment like the ones B. F. Skinner or Pavlov did with their animals; the most extreme imaginable way of turning the entire human world into one big cybernetic system. And he says, this is just a thought experiment; as a scientist,
I want to tell you that this is not possible. Well, of course, we went ahead and did it anyway. His point is that if you start thinking that people and computers are equivalent, you'll start treating people like computers, and you can destroy society and destroy everything. And then, well, here's something I've heard as some people's opinion, and I'm not in a position to judge whether it's true, which is that in person Wiener was a bit arrogant or difficult; I don't know, but either way he apparently upset a lot of people, and part of what happened with the initial artificial intelligence movement was that there was a group of people who were just uncomfortable or angry with him or something. So McCarthy and Minsky and a group of others had a famous meeting, in '56 I think, at Dartmouth, where they coined the term artificial intelligence, which was initially conceived as an alternative term to cybernetics. Yeah, yeah, sure, I've heard that. And then they try to throw it back to the other side, and there we are, so that's the shortest version of the story as I understand it. No, it's a great story, and of course for most of us it's been an abstract idea; obviously I'm not in the field, but, you know, for my whole professional life, until recently, everyone who talked about artificial intelligence talked about it as something that could happen someday. And then
November 2022 happens, and suddenly it feels like it's not as abstract as it once was, with ChatGPT and such. Do you consider, in the arc of the field's development, that ChatGPT is a pivotal moment, or is it a passing moment? Oh yeah, you know, look, it's kind of a strange irony of fate that I would end up in the middle of this thing that I'd always been a little cynical about; it just happened, nobody planned it, yeah. See, the way I interpret it is that there is no AI, there is no entity.
I'm still with Ada Lovelace and Norbert Wiener and so on in that, and I could continue the story beyond that; it's always been going back and forth, Engelbart after Minsky pushed it back in the other direction, you know, and many other important figures; there's been a long game of tennis between the camps here. But from my perspective, the right way to think about big models like ChatGPT is as a collaboration between people: you take what a group of people have done well and you combine it in a new way that is very good at finding correlations, and then what emerges is a collaboration of those people that is different, and in many ways more useful, than previous collaborations, but there's still no entity, there's no AI, there are just the people, you know, collaborating in this new way. And when I think about it that way I find it much easier to find useful applications that really help.
I find it much easier to think about its role in society, and it just, in itself, makes more sense. It's like a giant Internet mashup, basically: you know, all the text that exists, and you just combine it and recombine it in new ways. Some of it is from the Internet, some from other sources; you know, without going into details, we've done our best to bring in the highest quality sources that might not actually be online, to try to bring in contributors that are good at particular technical or informational areas that might not be obvious to people. But it's still a collaboration of human individuals at the end of the day, and I think it's good.
I find it useful. I find it useful. I don't think there is an entity there, but of course, perceiving an entity is a matter of faith; if you want to believe that your plant is talking to you, you can, you know, I'm not going to judge you, but I'm sure this is similar to that. I totally agree with you on that, Jaron. I mean, you know, theory of mind or the intentional stance, whatever word from evolutionary psychology or elsewhere you want to use, it benefited the species; it was better to imagine that there were entities out there, even if maybe they weren't, because if they were there they could eat you, and it was better to anticipate their presence than to overlook them by not assigning them agency. So I totally understand, and I don't see an entity there either. But I wonder, and I don't mean to put you on the spot at all, but the notion of what is there is so opaque.
Do you have a functional way to describe what happens inside a large language model? Well, I think I can help with this. I don't have it yet, but give me a little more time; I have a simple explanation of how large models work that will be published soon in a major magazine, which I think people might find useful, and I have been trying out a way to talk about it, which I can tell you about if you want. Yeah, I'd love it if you were willing. Um, but the other thing is, well, okay, I'll do it.
Why don't I give the shortest version of how this works? But then I also want to describe an extra layer I think we can add to it, to make its role in society, and the role of people in it, much clearer. Yeah, okay. So, how it works: let's start very simply with what we were all obsessed with 10 or 12 years ago, which is, can you get a program to tell a cat from a dog? You can do a statistical measurement of a frame and see if it's bluer or redder; that's easy, but it doesn't tell you if it's a cat or a dog. Yes, you can find fiducial points, which are maybe points where lines cross and things change, and that might give you the outline of a face, but that still doesn't help you, because cats and dogs are similar; they have snouts, fur and everything. And I was doing that kind of thing; in fact, gosh, I sold a company to Google with some friends a quarter century ago that did this fiducial point tracking, and you can get pretty far with it; it's how the computer knows your face to log you in, it still has its uses, but it doesn't know a cat from a dog. So what works is what we call deep learning, and what that means is that you have a kind of grid of really simplistic statistical measures along the lines we just talked about, and then there's another grid that looks at that one, and another that looks at that one, and it becomes a kind of skyscraper. And then even that doesn't help on its own; you have to train it. You give it known cats and dogs, and the weights, meaning how much you value the output of a particular cell in each of these many stories of grids, whenever a configuration seems to work you make it more valuable, and when it's less valuable you get rid of it, and you gradually train it this way; this is called gradient descent, and I'm oversimplifying, but it's really interesting, because the thing about gradient descent is that it's anti-viral. Meaning: online, the way we choose things, we say if this is popular it will become even more popular, and then more popular, more popular, and the problem with that is that you only get these viral things, which are often the worst, stupidest things. But to train a network you have to look at the overall mix, so you're constantly fighting virality; that's what makes gradient descent work. It's a wonderful thing; we don't have the wisdom to do that with our society, but we're forced to do it in training, and then the top layer gives you cat versus dog as a result. Okay, so that's just cats and dogs; how do you do it for everything? You look at the whole damn internet and assume that adjacency is important: you say that if a word tends to follow another word, or tends to be close to it, maybe that means something; if some words are close to some image, maybe they describe the image, not with absolute certainty, but they will tend to. So instead of knowing in advance that you've labeled dogs and cats, you use adjacency to label, roughly, the entire internet, to train everything, and it takes amazing computational resources and something like a year to do one of these loops, so every time you see GPT go from, like, three to four, it's one of these giant runs. And now what you have is this huge thing; you can think of it as a virtual forest of these towers, like the one I just described, except they aren't laid out explicitly; they're virtually there, and they're all mashed up. But here's the magic: you can call up more than one of them at a time and mix them. Now, there have been a number of critiques, like Timnit Gebru's and many others, that say, well, they're stochastic parrots, you know, they're just regurgitating, and that's true, I mean, that's accurate, but the magic is the combination.
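A minimal sketch, in Python, of the training process described above: a small stack of layers over crude image statistics, adjusted by gradient descent on labeled cat and dog examples. The data, sizes, and features here are invented for illustration, not drawn from any actual system.

import numpy as np

rng = np.random.default_rng(0)

# Pretend each image is summarized by 8 crude statistics (how blue, how red, edge counts...).
X = rng.normal(size=(200, 8))
true_w = rng.normal(size=8)
y = (X @ true_w > 0).astype(float)           # 1 = "dog", 0 = "cat" (synthetic labels)

# Two stacked "grids": a hidden layer of measurements, then a layer that looks at that one.
W1 = rng.normal(scale=0.1, size=(8, 16))
W2 = rng.normal(scale=0.1, size=(16, 1))

def forward(X):
    h = np.tanh(X @ W1)                      # first grid looks at the raw statistics
    p = 1 / (1 + np.exp(-(h @ W2)))          # next grid looks at the first one
    return h, p

lr = 0.5
for step in range(500):
    h, p = forward(X)
    err = p - y[:, None]                     # how wrong the top layer is
    # Gradient descent: weights that helped get valued more, weights that hurt get valued less.
    gW2 = h.T @ err / len(X)
    gW1 = X.T @ ((err @ W2.T) * (1 - h**2)) / len(X)
    W2 -= lr * gW2
    W1 -= lr * gW1

_, p = forward(X)
print("training accuracy:", ((p[:, 0] > 0.5) == (y > 0.5)).mean())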
So if you say, I want a picture of a gerbil, uh, driving a stock car on the moon, or some crazy thing, and I want it in Monet's style: you'll start with random pixels, pass them through that combination of towers, and if the result gets a little bit closer to satisfying all of them, you keep it, and if not you throw it away, and you just keep going, getting closer and closer to addressing them all at once, and then an image comes out that looks like that, which is kind of crazy and cool. And then you can say, write me a title for this image that sounds like a pirate said it, and it will do the same thing using adjacency in the text. I think it's cool, I think it's very useful; yes, I'm happy it's there. But this is what I would suggest as an image for getting a sense of its limits. If you imagine one of these prompts that combines access to different towers, if you can accept this kind of confusing set of metaphors, maybe you have a gerbil tower, a stock car tower, was it Mars, Monet, whatever. Okay, but for the set of things in between them there was never a tower; it has to recognize the combination, and it sort of builds a virtual tower for the combination in order to provide the feedback to generate this new image. The key thing to understand about generative AI is that it can fill in the gaps, but usually it will only go a little higher, somewhat randomly; it doesn't go much higher than the original towers. So it's a collaboration, but not an arbitrarily constructive intelligence.
Now, when I say that, people will say, well, how do you know people do more? Wrong question; I'm not talking about people. I'm trying to say, let's understand what these models can do, and this is a good rough intuitive way to understand what they're good at and what they're not good at. But even if it's the wrong question, I can't help but ask your intuition about the relationship between what you just described really wonderfully, in plain language, what's happening, which in essence is that these systems end up having the ability to give results based on recognizing patterns of patterns of patterns of patterns as you go up the tower; how does that relate, if at all, to what we humans do?
Because our intelligence is certainly in part about finding patterns of patterns of patterns and being able to extrapolate to something that maybe wasn't initially there as part of the data set. Yeah, I mean, I think the best answer is that we don't know right now, because we have this thing that works, big language models, and it's very natural for people to look for something similar to it in the way that natural biological neurons tend to organize themselves in layers, which might remind you a little bit of it. There are some important differences. Natural biological brains can learn things with far fewer examples.
Yes, of course. And every day, and I'm not exaggerating, every day there is a new paper published in some AI venue somewhere that says, we've cracked it, we have a program that can now learn with as few examples as a person; it never quite turns out to be true. Let's say that every day, every day, there is someone who has a new unified theory; that's a little dig at your profession. Well, never mind that; ours doesn't come out every day in our field. Really? Have you looked at the arXiv? I mean, okay, not every day, not every day. A wise concession. But anyway, the thing is, there is that difference, and we also haven't gotten away, in our computer programs, from this requirement of the deep neural network, meaning that having a lot of these layers is really important, and it doesn't seem like biological systems have that requirement. And then the other thing is that the training methodology seems to be different; rather than some kind of global gradient descent thing, it's a different type of mechanism. So there are differences; how deep those differences are is really not well understood.
I'm all for doing the research; I love the research. You know, we had a conversation with Yann LeCun, and I'm wondering what you think about this; obviously I'm sure you know much better than I do what his perspective is, but the point that was of interest to me was his emphasis that what these big language models are missing, compared to this thing inside our heads, is the ability to reason: you know, the ability to have a built-in model of the world that you put the data into, so instead of just throwing all the data in there and trying to find patterns, you try to fit the data into a template, a rubric, that reflects how the world really works.
Do you think that is, for example, the natural next step in where the technology will go? Well, this reflects a bit of history in the community, from the time Marvin and his henchmen started triumphing over the cybernetics people and all that. They promoted a style of AI that is often called symbolic AI, where everything is a model, and in the early days, when I was a kid, it was described using formal logic; you know, the idea is that we're going to pretend we're Russell and Whitehead and we'll just describe the world from these fundamental things. The type of AI that started to work more in this century, let's say in the last 10 years, is different, and it's this big-model, very statistical thing, so it's very natural that many people say, can we combine these? A lot of effort is put into that, and you will see, once again without exaggeration, every day there will be a report of some kind of combination of statistics and logic, both so-called AI; they are very different, so I don't know why they are both called AI, but they are, and some of them have had good results.
For example, you can train a model on a bunch of examples that come from a logic-based system, and this has been used, for example, to create an AI to solve geometry problems, recently announced by Google. So I think, while there are all kinds of cases where the combination works, so far we don't really have a way to generalize that approach; it may be nichey, but it seems worth pursuing, and we certainly are.
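A minimal sketch of that kind of combination: a hand-written symbolic rule generates as many labeled examples as you like, and a small statistical model is trained on them. The rule and features are toy stand-ins, not a description of Google's actual geometry system.

import numpy as np

rng = np.random.default_rng(2)

def symbolic_rule(x, y):
    # Exact, hand-written logic: is the point inside the unit circle?
    return 1.0 if x * x + y * y < 1.0 else 0.0

# 1. Use the symbolic system to generate labeled training data.
pts = rng.uniform(-1.5, 1.5, size=(2000, 2))
labels = np.array([symbolic_rule(x, y) for x, y in pts])

# 2. Train a small statistical model on those generated examples.
feats = np.c_[pts, pts**2, np.ones(len(pts))]    # simple hand-picked features
w = np.zeros(feats.shape[1])
for step in range(2000):
    p = 1 / (1 + np.exp(-(feats @ w)))
    w -= 0.1 * feats.T @ (p - labels) / len(pts)

p = 1 / (1 + np.exp(-(feats @ w)))
print("agreement with the symbolic rule:", ((p > 0.5) == (labels > 0.5)).mean())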
I did it for VR stuff and I still do it, but for AI, you know, the generative message where you combine, combine things and have constraints, transform random pixels into something that, like, yeah, I was looking at my own stuff. that we had proposed. and he talked all about how it could work already in the '80s, in the mid-'80s, we were talking about that process and in some ways it hasn't been surprising and I wish it were because everyone likes that feeling, you know, but, whether it is or not. Surprising to me, I don't think it's particularly important to anyone, eh, but no, I think I was denied that sense of surprise about it and what about the fear that certainly some of your colleagues and certainly you know many commentators? around the world describing how you know this could be the beginning of the end, does any of that control you or do you think it's just fear mongering and I mean, I really have to emphasize that it's about people, it's about people. humans, then the correct question is to ask: could humans use these things in such a way to bring about a Calamity-threatening species?
Yes, and I think the clear answer is yes. Now I have to say that I think that's true for other technologies as well, and it's been true for a while. I mean, the truth is that the better we become with technologies, the more responsible we have to be, the less indebted we are to destiny and the more we take charge, and the natural correlate of that is the more power we have to destroy ourselves, and there is no way out of that. I mean power, the power to sustain a large population, means the power to transform the Earth, which means the power to transform the climate, which means the responsibility to take charge of the climate when we didn't before, and there's no way out of that chain that leads to more responsibility. So I think the particular way the thing is framed, based on the movies people grew up with, like The Matrix or the Terminator movies with Skynet, is not that useful in general, because it tends to frame it as another entity arising, and that thing is going to be the threat, and it takes on a life of its own; ultimately the way to fix this is to frame it, over and over again, as human responsibility. The more we hypothesize that we're creating aliens that will come and invade, the less we take responsibility for our own stuff. Yeah, no, that makes a lot of sense to me, but even framed that way, you know, it feels like, well, it's pretty difficult, but obviously we know it's not entirely impossible, for individual non-governmental actors to obtain a nuclear weapon.
It's not impossible by any means, but it is a little difficult. If someone challenged me to build a nuclear weapon, I know how they work on the inside, but I'm not good with my hands; I'd have a hard time putting one together, you know? But when it comes to these systems, it feels like they're more within our reach, and that's the thing, I think, you know. But okay, look, let me describe what I think we should do to help with all of this; we still haven't gotten there. This is called data dignity, and the idea is, when you're training one of these big models, if you think of it as a kind of forest of towers, using the metaphor that I just gave you, yes, let's imagine that you can leave breadcrumbs in that forest, and you can say that this particular source document from this individual person was disproportionately important for this particular result that we got from the system. So when I say I want a picture of a gerbil, well, which gerbil? I can actually say, hey, it's mainly this particular gerbil that comes from a gerbil breed in New South Wales, or I don't know,
whatever; you know, you can pierce the anonymity and link it to specific sources. Now why is that so important? I'll tell you a few reasons. Let's say you want to put guardrails on these things so they don't get used terribly, and believe me, we do that, and we work hard at it, but I must assure you that I do not speak for Microsoft; I'm sure there are many people at Microsoft who wouldn't accept everything I say, so it's just me, that's an agreement I have with them. But in any case, what I can say is that Microsoft, OpenAI, and the broader community of companies that do this at a high level work seriously on the guardrails to prevent it from being terrible, and that's why nothing terrible has happened
so far, in the first year, or year and a half, of this, whatever it has been. And that has involved, well, you can think of this as a kind of triptych: you have the inputs, you have the outputs, and then there is this middle, which is a strange thing that is not well understood. Right now, because the parts that are intelligible are the input and the output, we have guardrails on those two parts. On the input side that means trying to avoid destructive training, so we try not to train it on, you know, things about how to kill people or whatever, but of course that's very difficult to do, because the system will infer things by combining things. And then on the output side we try to catch things. So here's a very hypothetical example: let's say we're having one of those exercises where we have people pretending to be bad actors and trying to misuse the system to show us where we're not doing such a good job, and there's been a lot of that, and someone says, I want a cake recipe; sorry, did you say a cake recipe, like a wedding cake? But they want the cake, and they find some language, I don't even know what it would be, such that the cake
will have the plans for making a bomb in it, maybe that atomic bomb you said you could still figure out how to make, so this cake comes out with the bomb plans inside, and in doing so it evades the guardrails on the way in and on the way out. But the thing is, if you track down what source content was needed to make that result, you'll inevitably track down something about how to make a bomb, and suddenly you can illuminate where the dangers are and catch them, in a way that's very difficult to do if you're just trying to characterize the inputs and outputs and how they might combine in unforeseen ways. But do you think that's realistic?
I guess one of the two things that come to mind is: in that technique, are the levels of influence on an outcome distinct enough that you can actually point to one and say, yes, this is where it really came from? This is still basic research in progress, so it's possible that everything I'm saying won't work out; however, there are indications that it can. My main research partners are not in the business world, they are at universities; I'm trying to do it in such a way that whatever we do doesn't belong to any particular camp, like an us-versus-Google camp or whatever. But anyway, the question is, okay, first of all, how do you characterize what deserves a breadcrumb? And it's not a one-dimensional thing; it's not just influence, it's influence plus fungibility, so it's at least two dimensions, and you can think of it as a field that develops through the training process, and the geometry of that field is not something that we fully know yet. And then the next question is how we can characterize that field so that we get a rough but really solid handle on the result, so we can say that for this particular result there will tend to be fewer than a dozen main inputs that matter, even though in a broader sense everything contributed. So the question is what the Pareto distribution looks like, and so far, based on some examples, it looks like there will be a nice healthy Pareto peak almost all the time.
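A minimal sketch of the shape of that result: suppose each training source gets a rough influence score for one particular output (how those scores would actually be computed is exactly the open research question above). With a heavy-tailed distribution, a small peak of sources carries most of the credit. Everything here is simulated for illustration.

import numpy as np

rng = np.random.default_rng(3)

sources = [f"source_{i}" for i in range(10_000)]     # hypothetical corpus of contributors
influence = rng.pareto(a=1.1, size=len(sources))     # heavy-tailed, Pareto-like scores
influence /= influence.sum()                         # normalize into shares of credit

order = np.argsort(influence)[::-1]
top = order[:12]                                     # "fewer than a dozen main inputs"
print("share of credit held by the top 12 sources:", round(float(influence[top].sum()), 3))
for i in top[:5]:
    print(sources[i], round(float(influence[i]), 5))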
There is still a lot of math to do, and some of it is really challenging, sorry to report. So yeah, I mean, that sounds like a promising direction, but I guess the other thing that comes to mind is: what if it's in the mixing phase where things get the most difficult, or dangerous? Well, it still has to draw on certain kinds of source content, and this has to do with the limited capacity I was describing: the system can fill in the blanks between the towers but not go much above them. Yeah. Now, um, the counterargument
to that is that if you combine it with the logical type of system, the formal system that we were talking about, maybe that thing can go higher. But you know what, so far I'm not too worried about that, because the question isn't just how well the system can work, but whether it can work that well while losing any provenance about what allowed it to work that way, and that's a different question, and I feel pretty confident that there is a solution space there at this point. You also advocate, I suppose, that this breadcrumb approach could be a way to share the reward at some point with those sources, so tell us a little bit about your thinking on that. I mean, a popular idea in Silicon Valley, or, you know, the AI world or whatever, is that we will leave a lot of people out of work but then we will have this universal basic income, and I don't trust that solution, and there are a lot of reasons why I don't. One is that any time there is some kind of single payer for a society, it becomes very tempting for the worst actors to take power, yes, and the way I have sometimes said this is that you can start with the Bolsheviks, but then you get the Stalinists, because there is something really tempting there, and the monsters are going to want it.
You know, there's an element of human nature that's inevitable, and, well, I think if there were cephalopods doing it, the game theory would be the same. I think it's true. I think it's just a perverse incentive and it's really problematic. And also, the systemic challenges of maintaining something like this: even if only the best people ran it, the politics and stability of it would probably be really unpleasant. And most of the science fiction that has attempted to envision a world like this, where people are not needed, starting with The Machine Stops, but you can see a very good example of it in the Matrix movies, has the machines subduing and containing humans in some way, because the idea is that people would become horrible without being totally controlled and contained; they have to be in little capsules or cells or something, and I think that intuition, which has been repeated for over a century in science fiction, is correct. So I think there is a better way, which is: if you can trace who added the value for the AI system with their examples, pay them royalties, pay them dividends, give them entry into a professional society where they share some kind of license fee or something. Why do we have to end the idea of an economy just because we have this new tool that is better? There's actually no reason why we can't continue with the idea of economics in this new era, and I think it would be a healthier, more distributed, more dignified way to do it. And are you sure, if that approach could be done, that you wouldn't run into problems like, well, let's just talk about your gerbil example: there are so many places you can imagine the main gerbil image coming from, and the result would still be really good regardless of which one you used, and yet if one is deemed the main one, that individual who, I don't know, took that photograph or drew that image gets the reward, and everyone else doesn't. As I was saying, in the gradient descent process that we use to train a neural network, we have to actively fight against virality so the system doesn't get out of control, so one way of saying this is that we have to avoid attractors; I don't know how technical your audience can go. A little further, yes, sure. But in any case, the idea is that you are interested in this subtle combination rather than letting any one component of the training dynamics dominate.
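A minimal sketch of the payment side of that idea: once the main contributors to a given output are identified, split a royalty among them in proportion to influence, with a cap so no single source becomes a runaway attractor. The figures and the cap rule are illustrative assumptions, not a worked-out economic design.

def split_royalty(total, influences, cap=0.25):
    # influences: {contributor: raw influence score for this particular output}
    total_influence = sum(influences.values())
    shares = {k: v / total_influence for k, v in influences.items()}
    # Damp winner-take-all dynamics: cap any single share, then renormalize.
    capped = {k: min(v, cap) for k, v in shares.items()}
    scale = sum(capped.values())
    return {k: round(total * v / scale, 2) for k, v in capped.items()}

print(split_royalty(100.0, {"photographer_a": 5.0, "illustrator_b": 2.0,
                            "breeder_site_c": 1.0, "encyclopedia_d": 0.5}))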
Well, so what do we have to do? It's recognizing that what works inside also works outside, and doing the same thing in all our systems at the level of our society: we have to avoid virality in the way things are calculated. I wish we did that online; I think we would have a much healthier and happier world right now. Yeah, I mean, again, you've said a lot in that direction as well, and in the things that I've read I largely agree with you. You know, you talk about virality, you talk about the pernicious nature of social media, and I don't want to put words in your mouth, I'll let you say it yourself, but this idea that a system aims to get you addicted, to draw you in, and trains itself by virtue of what works on you, that's, you know, exactly the antithesis of a system that would allow you free rein. So how does AI influence this?
Because that combination, again, you know, they're human actors, but it seems scary. Well, you know, the funding for the AI world from the late '90s until recently was all from algorithms to addict people more, yeah, and they worked. What I hope, and a hope is different from a reality, as I've discovered much to my chagrin, but my hope is that the advent of AI begins to force some of the companies that rely on addiction to change their business model. For example, our good friends and colleagues at Google might find that simply having a conversational AI answer someone's question is more efficient for that person than a bunch of links to follow. Right, I find this all the time: if I'm trying to find a really obscure, detailed answer in some equipment owner's manual about how to wire things up, and there are different versions and different, you know, whatever, I can use the correlations in the big AI model to just get the answer instead of searching and searching and sifting through things. In fact, you know, it's better, it's just better. So if you can do that, then if your model is so-called advertising, which I don't
think it is, really; if anything, calling it that denigrates advertising as a business; it's something different, it's paid influence, channeling and amplification, which is something darker than advertising has ever been, and advertising was never completely on the side of the angels, but this is worse. Anyway, if you have something like that: here is my really useful result from the big AI language model, and now here are 20 links that you can follow if you want; nobody is going to follow those links, you know, so they'll be forced into some kind of different model. Either people will pay to use the AI or something, or the AI will be corrupt and start telling people, before actually fixing your water heater you need to do something about your breath, and here's our new whatever, okay, something like that. And I think at the end of that chain of events Google is forced to adopt a different business model,
one that does less harm to society, I hope, I hope. Yes, yes, totally. So let's go back for a second. Again, I don't speak for Microsoft. I know, I totally hear you, and you also sold a company to Google, your friends; I say that with affection and camaraderie, I don't mean to just criticize you. Right, no, that's how I'm taking it, and I trust our AI will too. But I just want to circle back: you know, I started by talking about all the instruments behind you and your deep connection to music. Have you used any of these AIs,
and I'm sure you have, to explore composition? And, I mean, do you find there's utility there, or do you find there's creativity there? Not for me, not for me, and the reason is that I'm actually going completely in the opposite direction. To me, musical instruments are the best user interfaces ever invented; in some ways, if you believe that what a technology is for is to help a person affect the world with increasing acuity, then musical instruments are the most advanced technologies that have ever existed, and so what I am trying to do is make computers more like them, not the other direction, which seems absurd to me. I just want to make sure
I understand. I think I do. I mean, you know, obviously we've all had those transcendent moments where you're experiencing a spectacular performance, but I think you mean more the kind of influence on the world that you're making on the spot, from playing an instrument. Do you play anything? I don't remember. You know, not very well, a little bit; actually, I started playing the piano again, so in my old age I'm trying to see if I can do it. Yeah, well, my best wishes for your neurons. Thank you, I appreciate that. When you play an instrument you start to have this connection where you pass a lot of intention and data through it, and it's not just a matter of data volume, it's the focus and the acuity; you know, it can be really remarkable. The control that a violinist has over the string with the bow is, by some measures, close to the quantum limit sometimes; you know, it's something very intense, and that's why what I want is for computers to become more like that. I want computers to be more expressive machines that people can connect with with their entire body, their entire nervous system, their entire cognition, with more and more subtlety and more and more acuity. So for me, instruments have a lot more to teach computers than the other way around, more than the idea of using the current software that we have. It's very important for us not to think too much of ourselves; I always think about how, at the end of the 19th century, people said, oh, physics is finished, there is nothing left to explore, and they were wrong. You know, we shouldn't think, oh, we've got it, computers are finished, AI is finished; you know, that's ridiculous. We should be humble, and in particular we shouldn't think that the non-computer technologies of centuries past are somehow simply inferior and ready to be removed and replaced; they may be superior, yes, they may have a lot to teach us. So the other direction seems to me much more important; it has much more potential and interest. The other way seems crazy to me; you know, I'm not judging anyone else, and if someone else finds it meaningful, do it. Yes, totally. But in that sense, in terms of the interface, I mean, again, as we said at the beginning, and I'm sure the audience knows, you are the creator of virtual reality, right?
I mean, it starts with you and your company. Well, just to clarify: the very idea of head-tracked 3D goes back to Ivan Sutherland, who also invented 3D graphics in the first place. What I did was make the first social VR with avatars, I did the first commercial systems, I made the first headsets that rested on the head, where there was a kind of eyewear like what we think of today, and I coined the term virtual reality, and blah, blah, blah; you know, I did a lot of things, so we should clarify that, but sure, in a sense, maybe yes. There's a deep connection to it in any case. And you know, recently, I don't know, a week or two ago, I mean, Apple comes out with, you know, their version. Yes. You have, I'm sure you have,
tried it, probably long before I did. Yeah, so, I mean, there's a long history of that. Oh my God, 40 years ago, when the Mac was released, some of the key people on the Mac, including Andy Hertzfeld, who wrote the first Mac OS, came to help with the first virtual reality operating system and to ship the first virtual reality system, yes, and the first commercial headset was called the EyePhone, spelled with an Eye, mind you. And, I don't know the exact story, but it was told to me by a number of people that that led directly to the iPhone with an i, because the original idea of Jobs and others at Apple, when it was a very small company and the Macintosh hadn't even shipped yet, was that one day Apple would sell a headset, and we all estimated it at 2010. Oh, that's what you thought back then? Back in 1982 or '83 or something, we estimated 2010. Well, that's actually pretty good. Terrible, yeah. So, um, but yeah, there's always been a certain contingent within Apple that wanted Apple to eventually get into headsets, and so we knew they'd been working on this specific thing for a long time, and, I don't want to get into the details, but I had access to earlier versions and all that, and I'm glad they're doing it.
You know, I think there's still a lot to do, but I'm happy it's there. But you don't seem particularly excited; am I reading that wrong? Or am I cursed? I've seen everything, you know; it's like, I wish sometimes, I really wish, I had a different past so I could be more surprised by some of these things, because, you know, I see people using virtual reality devices nowadays, or I see them using ChatGPT, and being surprised and usually delighted, occasionally horrified, and I'm a little jealous of those who can enjoy that surprise. But I wonder about that, because every time, for example, I teach quantum mechanics or special relativity, it's like I've never seen the topic before.
I find it so amazing that we were able to figure out these things and how the world works when it reveals itself to us; don't you have that? I do have that, I have that for the underlying science, I totally have that; the particular product experiences are a little bit different. It's like, oh, I don't know, let's say one of these fusion startups starts to work and there's a fusion energy source; there's a small chance you would be a little less surprised, because you would have read the paper, you would understand it. Right, yes, that's it: oh, yes, okay, there it is, they described the process, it makes sense, here it is.
Yeah, I like that analogy, you know what I mean, and that doesn't mean you're any less interested, that you don't find the underlying physics charming. And I'm very, I mean, I'm still totally interested in the physiology, the eyes, the cognition; the optical challenge of making a proper optical path is still not really solved, and it's really interesting, and it's right at the edge of what we understand about light and how to manipulate it with materials. Great, I'm still very happy about all of that. Now, I heard you once talk about a virtual reality general relativity, yeah, something I've never seen, never experienced; what was that?
Oh, God, there are a million of them. I use this as an example of something about VR, which is that it's actually harder to maintain apps in VR over the years than in other formats. I first came across someone who was doing an experiential teaching of relativity in virtual reality around 1992; I don't remember it very well, and I think they were from somewhere in upstate New York, probably Cornell, but I'm not sure, and I could have that wrong. But either way, you would walk in and be in this thing where things are warped, and if you tried to move your body, it would distort, and there have been some that do special relativity, some that do general; yes, they have existed, but the thing is, each of them only lasted a year or so, right, because of the technical, yes.
Yes. Let's say, and I say this with love, the first-generation Apple headset has a fairly narrow field of view; HoloLens was worse. So I'm not picking on anyone, but they are trying to do something a little harder than other people have done, so they have the same problem: as soon as you scale it up, you can't use the same apps anymore, everything has to be redone, because it's very fundamental. For sure. And when someone says, okay, we're going to do a great job as a teacher of relativity for this Apple headset, a year later they'll graduate from school, they'll be on to different things; it's not going to be maintained. I hear you, I hear you, because we did it; I mean, at the World Science Festival, you know, we worked with Verizon and we made these virtual reality experiences, one of which is what it feels like to move close to the speed of light.
Okay, great, so you made one; does it still work? No, it can't be used by anyone except the people we worked with to develop the underlying programming; we would constantly need funding to keep the experience able to run on the next generation of this or that. It's like, yeah, I know exactly what you mean. So, in my opinion, the solution to that is generative AI, because one of the biggest successes of generative AI has been in coding, so we see that a lot. I have seen different measurements, but the lowest overall improvement in productivity I have seen claimed by someone who did a serious study is 40 percent better, and there are many who say, no, no, no, it's much higher than that.
I don't know, but the point is that it's not trivial; programmers are more efficient. And one of the things I've been working on with my research interns for the last few years, which has had great success, is rapid virtual world creation that's spontaneous while you're in it. So you say, I would like a relativity simulator, and I also have ADHD and I'm color blind, or whatever, yes, I'm sure you know, and I wish, I wish it were this and that, and it should just appear; you shouldn't have to go through this whole developer process. Now, I don't know,
Apple may not like that, because they like to be more controlling, so it may not be in their ecosystem; I don't know, that will depend on them. But the point is, that's the way to solve this problem, and I'm of the view that you build it again every time you need it, because, yeah, you don't even bother saving it; it doesn't become an app, it's just this thing: you ask for it, there it is, and then you ask for it again and it may appear, or maybe you can save it if you like a particular one.
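A minimal sketch of that spontaneous-world idea: ask a generative model for a scene description and build the world from whatever comes back, instead of shipping and maintaining a hand-built app. generate_text is a hypothetical stand-in for whatever model API is actually available, and the spec format is an assumption made up for illustration.

import json

def generate_text(prompt: str) -> str:
    # Placeholder: in a real system this would call a large language model.
    return json.dumps({
        "scene": "special relativity demo",
        "accessibility": {"colorblind_safe": True, "reduced_clutter": True},
        "objects": [{"type": "light_clock", "speed_fraction_of_c": 0.9}],
    })

def build_world(request: str):
    spec = json.loads(generate_text("Return a JSON scene spec for: " + request))
    # A real renderer would instantiate objects here; this just reports the plan.
    for obj in spec["objects"]:
        print("spawning", obj["type"], obj)
    return spec

build_world("a relativity simulator, color-blind friendly, low visual clutter")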
This really feels like the Holodeck, right? I mean, you know, you go in there and you just create a world with some prompts or something. Good, sure, and I mean, it's not what you asked about, but in the past my group and the people who made Star Trek were talking, and that's not entirely a coincidence at all; there was always some kind of shared vision. So yeah, wow, that's amazing. But, see, let's take virtual reality as an example: do you see VR as going beyond engaging experiences and actually shaping our understanding of key ideas and philosophical conundrums that we have struggled with over the centuries, so that now, by virtue of creating these worlds and experiencing things differently, we could shed new light on these old problems? Well, my experience with virtual reality is that it shines at two extremes. One is extreme utility, which is, it has been a great industrial technology for decades.
It has been used almost universally to improve the designs of airplanes, automobiles, ships, and spacecraft. It has been used for the design and training of surgical procedures and for real-time assistance, for urban planning, all of that, and that's a completely separate ecosystem that people don't even see; it's different and it's omnipresent. I mean, I'm less aware that urban planners have used it. Yes; vehicle prototyping, I would say, is universal, and city planning is widespread but not universal, yeah. And so there's an ecosystem that supports those people; it has its own hardware, and it is rapidly merging with consumer devices, as these things always do.
However, it has its own world of software and distribution, its own culture, its own people; it's a whole other thing, yeah, and it's a set of different niches: people who design chemicals, both for drugs and for all kinds of other things, for organic and inorganic industrial processes, and so on. Anyway, so there's that world, and that world is great. I love that world. But then there's the other extreme, which is the crazy art stuff, and, you know, it's very subject to interpretation, because what happens is you can create a virtual world experience that, at least to me, seems to teach a very specific lesson about philosophy and thought, but to someone else it might teach a different lesson, you know?
Like, there's no way to perceive without your pre-existing philosophy coloring what you perceive; it's just not possible. So for me, when I'm in a virtual world and I do all the really weird stuff I do, you can change your body into the bodies of animals, which is an amazing thing about cognition, and you can change the perception of the flow of time and a lot of other crazy things, but the crazier it gets, the more you realize that there's something in the middle that I sometimes call the little bump; it's like your consciousness is there. So it's a machine for noticing consciousness, and I'm an unapologetic dualist.
I think that consciousness is something separate. I think that experience is something apart. Actually, that's a whole other conversation that we should have at some point, but yeah, and I love fighting with people about it. I used to make a living fighting with people like Daniel Dennett; oh, actually, his books are right here. In the debates I did, I would argue with him and other people about this for money, and it's great, I love it. But anyway, I'm a staunch dualist; I believe in experience and consciousness as something separate, and I think we need that in high-tech times to identify who the beneficiary of the technology is.
If we mix people into the background, we can no longer identify who we are doing this for, so there is a new pragmatic reason to care about consciousness. But whatever it is, that's what I get from this, and yes, I totally recognize that someone else can come to it with a different philosophy and receive a different message. Right, and even as a non-dualist myself, you know, one of the things that I've always found interesting is this framing of Thomas Nagel's, in terms of, you know, how to understand whether something has an inner world, whether there is something it is like to be that entity, and of course using the bat as his primary example.
What is it like to be a bat? Now, if you can go to a virtual world and really be a bat and really experience what it's like to be a bat, maybe your consciousness can shift in some way and get closer to the consciousness of a bat, you know, just as a concrete example, and I think that philosophically those kinds of experiences could have an unusual impact and really change the discussion. This is one of my very favorite things about virtual reality, really; it's for me the best way to use it, the coolest thing. I love this, yeah. So one hopes that's where all of this will go at some point. Well, we're out of time. This has been a fascinating conversation, and if you're up for it, I'd love to have a part two at some point, because I feel like there's so much more to talk about. But thank you so much, thank you so much for joining us, and the best of luck pushing the boundaries of all these fields in the future; it's really exciting. Thank you. Great, great; really, really nice talking to you.
It's a pleasure to see you. All right, everyone, thanks for joining us; that's our conversation for today. Sign up for the World Science Festival newsletter or subscribe to our YouTube channel to get alerts when these various conversations and shows are released, and you should also know that we're having a nice live event in New York City at the end of May here in 2024, so if you're in New York, you should join our live programming. Thank you very much for joining us. I'm Brian Greene, from the World Science Festival in New York. See you soon.
