
Melanie Mitchell: Concepts, Analogies, Common Sense & Future of AI | Artificial Intelligence Podcast

Jun 09, 2020
The following is a conversation with Melanie Mitchell, professor of computer science at Portland State University and external faculty at the Santa Fe Institute. She has worked on and written about artificial intelligence from fascinating perspectives, including genetic algorithms, complex adaptive systems, and the Copycat cognitive architecture, which places the process of making analogies at the center of human cognition. From her doctoral work with her advisors Douglas Hofstadter and John Holland to today, she has contributed many important ideas to the field of AI, including her recent book called, simply, Artificial Intelligence: A Guide for Thinking Humans. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give the podcast five stars on Apple Podcasts, support it on Patreon, or just connect with me on Twitter at Lex Fridman, spelled F-R-I-D-M-A-N. I recently started doing ads at the end of the intro. I'll do one or two minutes after introducing the episode, and there will never be any ads in the middle that could interrupt the flow of the conversation.
I hope it works for you. It doesn't hurt the listening experience. I provide timestamps for the start of the conversation, but it helps if you listen to the ad and support this podcast by trying out the product or service being advertised. This show is brought to you by Cash App, the number one financial app on the App Store. I personally use Cash App to send money to my friends, but you can also use it to buy, sell, and deposit Bitcoin in just seconds. Cash App also has a new investing feature. You can buy fractions of a share, say worth $1, regardless of the share price.

Brokerage services are provided by Cash App Investing, a subsidiary of Square and member SIPC. I'm excited to be working with Cash App to support one of my favorite organizations called FIRST, best known for their FIRST Robotics and Lego competitions. They educate and inspire hundreds of thousands of students in over 110 countries and have a perfect rating on Charity Navigator, meaning the donated money is used to maximum effectiveness. When you get Cash App from the App Store or Google Play and use the code LEXPODCAST, you'll get ten dollars, and Cash App will also donate ten dollars to FIRST, which, again,
is an organization that I have personally seen inspire girls and boys to dream of designing a better world. And now, here is my conversation with Melanie Mitchell. The name of your new book is Artificial Intelligence; the subtitle is A Guide for Thinking Humans. The name of this podcast is Artificial Intelligence, so let me take a step back and ask the old Shakespeare question about roses: what do you think about the term artificial intelligence for our big, complicated, interesting field? I'm not crazy about the term. I think it has some problems, because it means a lot of different things to different people, and intelligence is one of those words that isn't very clearly defined.
There are so many different types of intelligence, degrees of intelligence, approaches to intelligence. John McCarthy was the one who coined the term artificial intelligence, and from what I read he called it that to differentiate it from cybernetics, which was another related movement at the time, and he later regretted calling it artificial intelligence. Herbert Simon was pushing to call it complex information processing, which was rejected, but is probably equally vague. Is it the intelligence or the artificial, in terms of the words, that is most problematic, would you say? Yes, I think it's a little bit of both, but, you know, the term has its good side, because I personally was attracted to the field because I was interested in the phenomenon of intelligence, and if it had been called complex information processing maybe I would be doing something completely different.
What do you think about, I've heard the term cognitive systems, for example, so using cognitive? Yeah, I mean, cognitive has certain associations with it, and people like to separate things like cognition and perception, which I don't actually think are separate, but people often talk about cognition as being different from other aspects of intelligence, as being a higher level. So for you, cognition is that broad, beautiful mess of things that encompasses the whole thing, memory, perception? Yes, I think it's hard to draw lines like that. When I was coming out of graduate school in 1990, which was when I graduated, that was during one of the AI winters, and I was advised not to put AI on my CV but to call it intelligent systems, so that was something of a euphemism, I guess.
What about the terms and phrases that people use to refer to the idea of artificial general intelligence, or, as Yann LeCun prefers, human-level intelligence? It's like starting to talk about ideas that achieve higher and higher levels of intelligence, and artificial intelligence somehow seems to be a term used more for the narrow, very specific applications of AI. Is there some set of terms that you find appealing to describe the thing that maybe we would strive to create? People have been wrestling with this for the whole history of the field, with defining exactly what we're talking about. You know, John Searle had this distinction between strong AI and weak AI, and weak AI could be general AI, but his idea was that strong
AI was the view that the machine is actually thinking, as opposed to simulating thinking or carrying out processes that we would call intelligent. If you look at the founding of the field, McCarthy and others, are we any closer to having a better idea of that line between weak or narrow AI and strong AI? Yeah, I think we're closer to having a better idea of what that line is. Early on, for example, a lot of people thought that playing chess required general human-level intelligence, that you couldn't play chess if you didn't have it, and of course, once computers were able to play chess better than humans, that revised that view, and people said, well, maybe now we have to revise what we think of as intelligence, and that has been a theme throughout the history of the field: once a machine can perform some task, we then have to look back and say, well, that changes my understanding of what intelligence is, because I don't think that machine is intelligent, or at least that's not what I want to call intelligence.
Do you think that line will keep moving forever, or will we as a civilization eventually feel like we've crossed that line, if it's possible? It's hard to predict, but I don't see any reason why, in principle, we can't create something that we would consider intelligent. I don't know how we will know for sure. Maybe our own view of what intelligence is will become more and more refined until we finally figure out what we mean when we talk about it, but I think we will eventually create machines, in the sense that they have intelligence. They may not be the kinds of machines we have now, and one of the things that will do is make us understand our own machine-like qualities, that in a certain sense we are mechanical, in the sense that cells, you know, cells are kind of mechanical, they have algorithms by which they process information, and somehow, from this mass of cells we get this emergent property that we call intelligence, but underlying it is really just cellular processing, and lots and lots and lots of it. Do you think we'll be able to do that?
Do you think it is possible to create intelligence without understanding our own minds? You said that in that process we will understand more and more, but do you think it is possible to create it without fully understanding, from a mechanistic or functional perspective, how our mysterious minds work? If I had to bet on it, I'd say no, we do have to understand our own minds at least to some significant extent, but I think that's a big open question. I have been very surprised at how far brute-force approaches based on, say, big data and huge networks can take us.
I wouldn't have expected that, and it has nothing to do with the way our minds work, so that has surprised me, so I might be wrong. To explore the psychological and the philosophical, do you think we're okay as a species with something that's smarter than us? Do you think maybe the reason we keep pushing that line further and further is that we're afraid to acknowledge that there's something stronger, better, and smarter than us humans? Well, I'm not sure we can define intelligence that way, because, you know, smarter than us at what? You know, computers are already smarter than us in some areas.
They can multiply much better than we can. They can figure out driving routes much faster and better than we can. They have much more information to draw on. They know about traffic conditions and all of that. So for any particular task, sometimes computers are much better than us, and we are totally happy with that. Right, I'm totally happy with that, it doesn't bother me at all. I guess the question is, you know, which things about our intelligence would we feel very sad or upset about if machines were able to recreate them? So in the book I talk about my former PhD advisor Douglas Hofstadter, who encountered a music generation program, and that was really the line for him. He said that if a machine could create beautiful music, that would be terrifying to him, because that's something he feels is really at the core of what it is to be human, creating beautiful music, art, literature.
He told him that if a machine could create beautiful music, it would be terrifying to him because that's something he feels is really at the core of what it is to be human creating beautiful musical art literature. You know, I don't think he wouldn't. He likes the fact that machines can recognize spoken language very well as he doesn't, he personally doesn't like using voice recognition. I don't think it bothers him to the core because okay, that's not at the core of humanity, but it might be. It will be different for each person what they really feel would usurp their humanity and I think maybe it's a generational thing too maybe our children or our children's children will adapt, they will adapt to these new devices that can do all these tasks and they say Yes , this thing is smarter than me in all of these areas, but that's cool because it helps me analyze the broad history of our species.
why do you think so many humans have dreamed of creating artificial life and artificial intelligence throughout the history of our civilization? So not just this century or the 20th century, but really many of the centuries that preceded it. That's a really good question, and I have wondered about it, because I myself, you know, was driven by curiosity about my own thought processes and thought it would be fantastic to be able to get a computer to mimic some of my thought processes. I'm not sure why we're so driven. I think we want to understand ourselves better, and we also want machines to do things for us, but I don't know, there's something more to it, because it's so deep in the kind of mythology or ethos of our species, and I don't think other species have this drive. So if you were to psychoanalyze yourself in your own interest in AI, what are you excited about, creating intelligence, or, as you said, understanding ourselves? Yeah, I think that's what particularly motivates me.
I'm really interested in human intelligence, but I'm also interested in the phenomenon of intelligence more generally, and I don't think humans are the only things that have intelligence, you know, or even animals. I think intelligence is a concept that encompasses a lot of complex systems, and if you think of things like insect colonies or cellular processes or the immune system or all kinds of different biological or even social processes, they have, as an emergent property, some aspects of what we would call intelligence. You know, they have memory, they process information, they have goals, they accomplish their goals, and so on. And to me, that
question of what this thing is that we're talking about was really fascinating to me, and exploring it using computers seemed like a good way to approach the question. Do you think of our universe as a kind of hierarchy of complex systems, where intelligence is just a property of any level, you can look at any level and every level has some aspect of intelligence, so we're like a small speck in that giant hierarchy of complex systems? I don't know if I would say that any system like that has intelligence, but I guess what I mean is that I don't have a good enough definition of intelligence to say that. So let me do a kind of multiple choice, I guess. You said ant colonies, so are ant colonies intelligent, are the bacteria in our body intelligent, and then if we look at the world of physics, molecules and the quantum-level behavior of electrons and so on, are those kinds of systems
intelligent, does the word intelligence apply? Where is the line that feels compelling to you? I don't know, I mean, I think intelligence is a continuum, and I think the ability to, in some sense, have intention, have a goal, have some kind of self-awareness is part of it, so I'm not sure. It's hard to know where to draw that line. I think it's kind of a mystery, but I wouldn't say that, you know, the planets orbiting the sun are an intelligent system. I mean, maybe that's not the right term to describe that, and there's this whole debate in the field about what the right way to define intelligence is, what the right way to model intelligence is, should we think about computation, should we think about dynamics, should we think about, you know, free energy and all of that, and I think it's a fantastic time to be in the field because there are so many questions and so much that we don't understand, there's so much work to do. So are we the most special kind of intelligence?
In this kind of, you said there are a lot of different elements and characteristics of intelligent systems, ant colonies and so on. Is human intelligence, what happens in our brains, the most interesting kind of intelligence on this continuum? Well, it's interesting to us because we are us, I mean, it's interesting to me, yes. And because you're part of the human species. But to understand the fundamentals of intelligence, is studying the human the way to go? It's kind of everything we've talked about, that you talk about in your book, it's what the field of AI is. This notion, yeah, it's hard to define, but it's usually talked about as something that's very similar to human intelligence. To me, that's the most interesting, because it is the most complex, I think, it's the most self-aware, it's the only system, at least that I know of, that reflects on its own intelligence. You talk in the book about the history of AI, and we, in terms of creating artificial intelligence, are terrible at predicting the future, with AI or technology in general. So why do you think we're so bad at predicting the future?
Are we hopelessly bad, so that it doesn't matter whether it's this decade or the next few decades, every time we make a prediction there's just no way to get it right? Or, as the field matures, will we get better and better at it? I think as the field matures we will get better, and I think the reason we've had so much trouble is that we have very little understanding of our own intelligence. So there's the famous story about Marvin Minsky assigning computer vision as a summer project to his undergraduate students, and I believe that's actually a true story. You know, there's a document about it, everyone should read it; it's, I think, a proposal that describes everything to be done in that project. It's funny, because, I mean, you can look at it, but from what I remember it describes basically all the fundamental problems of computer vision, many of which still haven't been solved. Yes, and I don't know how far they really expected to get, but I think, and you know, Marvin Minsky was a super smart guy and a very sophisticated thinker, but I think no one really understood, or understands yet, how complicated the things we do are, because they're so invisible to us. You know, vision, being able to look at the world and describe what we see, it's just immediate.
It feels like no work at all, so it didn't seem like it would be that hard, but there's so much going on unconsciously, sort of invisible to us, that I think we overestimate how easy it will be to get computers to do it. So, in some ways, for me to ask an unfair question: you've done your research, you've thought about many different branches of AI, and through this book you've taken a broad look at where AI has been and where it is today. If you were to make a prediction, how many years from now
would we, as a society, create something that you would say achieves human-level intelligence or superhuman-level intelligence? That's an unfair question, a prediction that will most likely be wrong, but it's just your notion. Okay, I'll say more than a hundred years. More than a hundred years. And I quote somebody in my book who said that human-level intelligence is a hundred Nobel Prizes away, which I like, because it's a nice unit for prediction; it's saying there are that many fantastic discoveries still to be made, and of course there is no Nobel Prize.
In those hundred years, is your sense that we really have to understand something much deeper about how our own cognitive systems work in order to create them in artificial systems, rather than taking today's machine learning approaches and just scaling them up exponentially, with both compute hardware and data? That would be my guess, you know. I think, in the kind of narrow AI sense, these current approaches will get better, but I think there are some fundamental limits to how far they're going to go. I might be wrong, but that's what I think, and they have some fundamental weaknesses that I talk about in the book that just come from this supervised learning approach, the requirement of feedback, labeled networks and so on. I just don't think it's a sustainable approach to understanding the world. Yes, I'm personally torn, in a way. I've read everything mentioned in the book and what we're talking about now,
and I agree with you, but it kind of depends on the day. First of all, I am deeply surprised by the success of machine learning and deep learning in general. From the very beginning it really wasn't clear how many of these approaches would work, and I'm surprised by how far they've come. I also think we're very early in these narrow-AI efforts, so I think there will be a lot of surprises about how far they go. I think we're going to be extremely impressed. My sense, from everything I've seen so far, and we'll talk about autonomous driving and so on, is this:
I think we can go very far, but I also have a feeling that we will discover, just as you said, that even though we'll get very far, creating something like our own intelligence is actually much farther away than we realize. Well, I think these methods are a lot more powerful than people give them credit for. Of course there's the media hype, but I think there are a lot of researchers in the community, especially not undergraduate students but people who have been in AI for a while, who are skeptical about how far deep learning can go, and I increasingly think that it may actually go further than they realize. It's certainly possible. One thing that surprised me when I was writing the book is how far apart different people in
It is certainly possible. One thing that surprised me when I was writing the book is how far apart different people are. the field is artisanal your opinion on how far the field has come and what has been achieved and what will happen next what is your sense of the different who are the different groups of people mindsets thoughts in the community about where AI is today yes, They're all over the place, so there's a kind of singularity transhumanism group. I don't know exactly how to characterize that approach, which is also there, yeah, the kind of exponential exponential progress where we're almost in the huge accelerating part of the exponential and in the next 30 years we'll see super intelligent AI and all that and we'll be able of uploading our brains and so on, so there is that kind of extreme view that I think most people who work in AI, don't disagree with that, but there are people who maybe don't, they don't know singular people, but they do think that the current deep learning approach is going to scale and it's Basically, we're going to go all the way and take it to AI or human-level AI or whatever you want to call it, and there's quite a few of them and a lot of them a lot of people like. that I have known that work. in the big tech companies in the AI ​​groups they have this opinion that we are actually not that far away, you know, I just have to stop at that point, if I can take Yannick kun as an example, I don't know if you know about his work and some points, unless he does, he believes that there are a lot of fundamental breakthroughs like the Nobel Prizes.
still required, but I think he believes that those breakthroughs will be built on top of deep learning, right? And then there are some people who think we need to set deep learning aside a little bit, as just a module that's useful within a broader cognitive framework. So I think, from what I understand, Yann LeCun rightly says that supervised learning is not sustainable, that we have to figure out how to do unsupervised learning, that that's going to be the key, and, you know, I think that's probably true. I think unsupervised learning is going to be a lot harder than people think, I mean, doing it the way we humans do it. Then there's the opposite view, you know, the Gary Marcus kind of hybrid view, where deep learning is one part, but we need to bring back these symbolic approaches and combine them. Of course, no one knows exactly how to do that very well, which is the most important part and how they fit together.
What is the foundation and what is on top? Which is the cake and which is the icing? Yeah. So there are people pushing different things. There are the causality people, who say, you know, deep learning as it's formulated completely lacks any notion of causality, and that dooms it to failure, and so somehow we have to give it some notion of cause and effect. There's a lot of push from the more cognitive science crowd that says we have to look at developmental learning, we have to look at how babies learn, we have to look at intuitive physics, all these things we know about physics, and somebody joked, we also have to teach machines intuitive metaphysics, which means that objects exist, causality exists, you know, these things that maybe we were born with, I don't know, and that machines don't have any of. You know, they look at a bunch of pixels, and maybe they get ten million examples, but they can't necessarily learn that there are objects in the world. So there are just a bunch of pieces of the puzzle that people are pushing, with different opinions about how important they are and how close we are to being able to put them all together to create general intelligence. Looking at this broad field,
what do you take away from it? Who is the most impressive, the cognitive folks, the Gary Marcus camp, the Yann LeCun camp with self-supervised learning, and then there are the engineers who are actually building systems, you have the real Andrej Karpathy at Tesla building systems. You know, it's not philosophy, it's real systems being written that operate in the real world. What do you take from all that beauty? Yeah, I don't know if, you know, these different viewpoints are necessarily mutually exclusive, and I think people like Yann LeCun agree with the developmental psychology, the causality, the intuitive physics and so on, but still think that learning, like end-to-end learning, is the way to go and maybe will take us all the way, yes, and that we don't need any kind of innate things built into the machines. Because, you know, it's a hard problem.
I, personally, you know, am very sympathetic to the cognitive science side, because that's how I came into the field. I've become more and more of an embodiment proponent, saying that, you know, without having a body it's going to be very difficult to learn what we need to learn about the world. That's definitely something I'd love to talk about in a little bit. To get into the cognitive world, then, if you don't mind, because you've done so many interesting things: if we look at Copycat, a couple of decades ago you, Douglas Hofstadter, and others created and developed Copycat, more than thirty years ago. Ah, that's painful to hear.
What is it? What is Copycat? It's a program that makes analogies in an idealized domain, an idealized world of letter strings. As you said, thirty years ago. Wow. So I started working on it when I started graduate school in 1984. Wow. And it's based on Doug Hofstadter's ideas that analogy is really a central aspect of thinking. I remember he has a really nice quote in the book he wrote with Emmanuel Sander called Surfaces and Essences. I don't know if you've seen that book, but it's about analogy, and it says that without concepts there can be no thought and without analogies there can be no concepts.
So the view is that analogy is not just this kind of reasoning technique where we say, you know, shoe is to foot as glove is to what, this kind of thing that we have on IQ tests or whatever, but it's much deeper, much more pervasive in everything we do, in our language, our thinking, our perception. So he had this very active idea of perception. The idea was that instead of having some kind of passive network where you have inputs that are processed through these feedforward layers and then there's an output at the end, perception is really a dynamic process. You know, our eyes are moving around and getting information, and that information feeds back into what we look at next, it influences what we look at next and how we look at it. So Copycat was trying to do a simulation of that kind of idea, where you have these agents. It's kind of an agent-based system, and you have these agents that pick things to look at and decide whether they're interesting or not, whether they should be looked at more, and that would influence other agents. How do they interact? They interact through this global thing that we call the workspace. So it's actually inspired by the old blackboard systems, where you had agents that posted information on a blackboard, a common blackboard. This is old, very old-fashioned AI. A blackboard in what sense? We're not talking about a blackboard in physical space, it's a computer program? A computer program, agents that post concepts on a blackboard? Yes, we call it the workspace, and the workspace is a data structure. The agents are little pieces of code; you can think of them as little detectors or little filters that say: I'm going to pick this place to look, and I'm going to look for a certain thing, and is this thing that I think is important there?
So it's almost like, you know, a convolution, except a little more general. Yes. And then highlighting it in the workspace. Once it's in the workspace, how do the things that are highlighted relate to each other? So there are different kinds of agents that can build connections between different things. Just to give you a concrete example, what Copycat did was make analogies between strings of letters. So here's an example: ABC changes to ABD; what does IJK change to? And the program had some prior knowledge about the alphabet, it knew the alphabet as a sequence, it had a concept of letter, of the successor of a letter, it had concepts of sameness, so it has some innate things programmed in. But then it could do things like, say, discover that ABC is a group of letters in succession, and an agent can notice and mark that.
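To make the letter-string domain concrete, here is a minimal sketch of the example above: ABC changes to ABD, so what does IJK change to? This is not Copycat's agent-and-workspace architecture, just a small, hypothetical rule-based illustration, written in Python, of the kind of innate successor concept being described.

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def successor(ch):
    # An "innate" concept: the next letter in the alphabet.
    return ALPHABET[(ALPHABET.index(ch) + 1) % len(ALPHABET)]

def describe_change(source, target):
    # Describe how source was changed into target in concept-like terms.
    for i, (s, t) in enumerate(zip(source, target)):
        if s != t:
            if t == successor(s):
                # e.g. abc -> abd: the letter at position 2 is replaced by its successor
                return ("successor", i)
            return ("replace", i, t)
    return ("no change",)

def apply_change(rule, string):
    # Apply the abstracted rule to a new string: the analogy step.
    if rule[0] == "successor":
        i = rule[1]
        return string[:i] + successor(string[i]) + string[i + 1:]
    return string

rule = describe_change("abc", "abd")
print(rule)                       # ('successor', 2)
print(apply_change(rule, "ijk"))  # ijl

A hard-coded rule like this is exactly what Copycat avoids; its agents discover such descriptions dynamically, which is what the conversation turns to next.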
Is the idea that there could be a sequence of letters a new concept that gets formed, or is that an innate concept? Can it form new concepts at all? In this program, all the concepts of the program were innate, because, I mean, obviously that limits it quite a bit, but what we were trying to do was say: suppose you have some innate concepts, how are they flexibly applied to new situations, and how are analogies made? Let's step back for a second. I really like that quote: without concepts there can be no thought, and without analogies there can be no concepts. In a Santa Fe presentation you said it should be one of the mantras of AI. Yes. And you also yourself said that how to form and fluidly use concepts is the most important open problem in AI. Yes, how to form and fluidly use concepts is the most important open problem in AI. So let's talk about what a concept is and what an analogy is.
A concept is, in a sense, a fundamental unit of thought. So, say we have a concept of a dog, okay, and a concept is embedded in a whole space of concepts, so that there are certain concepts that are closer to it or farther away from it. Are these concepts, as we mentioned, innate, seen as almost axiomatic, very basic, and then there are other things built on top of them, or do they include everything, can they be complicated? Certainly new concepts can be formed. I guess that's the question I'm asking. Yes. Can you form new concepts that are complex combinations of other concepts?
Yes, absolutely, and that's what a lot of learning is. And then what is the role of analogies in that? Analogy is when you recognize that one situation is essentially the same as another situation, and essentially is kind of the key word there, because it's not exactly the same. If I say, last week I did a podcast interview, actually it was about three days ago, in Washington DC, and that situation was very similar to this situation, although it wasn't exactly the same. You know, it was a different person sitting across from me, we had different kinds of microphones, the questions were different, the building
was different, there are all kinds of different things, but really it was analogous. Or I can say, doing a podcast interview, that's kind of a concept, it's a new concept. You know, I never had that concept before, and I can make an analogy with it, like being interviewed for a news article in a newspaper, and I can say, well, in some sense you play the same role that the journalist played for the newspaper. It's not exactly the same, because maybe they actually emailed me some written questions instead of asking them out loud, but the written questions play, you know, an analogous role to your spoken questions. There's all kinds of this. In some way, it probably connects to conversations you have at Thanksgiving dinner, just general conversations. There's like a thread you can probably follow that extends across all of
Tough question and I'm having trouble answering it. My friend could say the same thing happened to me, but it was like it wasn't a podcast interview. It was not. It was a completely different situation and yet my friend is viewed essentially the same way. you know, we say that very fluently, the same thing happened to me, essentially the same thing, we don't even say the right things, they imply it, yeah, yeah, and the sight, that kind of what happened, says the cat in the cafe, everything that's that. that this act of saying the same thing that happened to me is making an analogy and in a certain sense that is what underlies all our concepts.
Why do you think analogy, as you're describing it, is so fundamental to cognition? It seems like it's the main element of what we think of as cognition. Yes, so it can be argued that all of this generalization we do with concepts, and the recognition of concepts in different situations, is done by analogy. Every time I recognize that, say, you're a person, that's by analogy, because I have this concept of what a person is and I'm applying it to you. And every time I recognize a new situation, like, one of the things I talked about in the book was the concept of walking a dog, that's actually making an analogy, because all the details are different. So reasoning could be reduced down to essentially analogy making? So all the things we think of as reasoning, and, like you said, perception. So what is perception? It's taking in what's around you, integrating it into your understanding of the world, updating that understanding, and all of that has this giant mess of analogies being made.
I think so. If we linger on that a little bit, what do you think it takes to engineer a process like that in our artificial systems? We need to understand better, I think, how we do it, how humans do it, and it comes down to internal models. I think, you know, people talk a lot about mental models, that concepts are mental models, that I can, in my head, do a simulation of a situation like walking a dog, and there is some work in psychology that promotes this idea that all concepts are really mental simulations, that whenever you encounter a concept or a situation in the world, or you read about it or whatever, you do some kind of mental simulation that allows you to predict what's going to happen, to develop expectations of what's going to happen. So that's the kind of structure, I think, that we need, that kind of mental model, and in our brains, somehow, these mental models are very much interconnected. Again, a lot of the things we're talking about are essentially open problems, so when I ask a question, I don't mean that you know the answer, it's just hypothesizing. But how big do you think is the data structure, the network, the graph of concepts that's in our heads? Like, if we were trying to build it ourselves, it's one of those things we take for granted.
I mean, we take common sense for granted because common sense is trivial to us, but the vastness of concepts underlies what we consider common sense, for example. Yeah, I don't know, and I don't even know what units to measure it in. Beautifully said. But, you know, it's very hard to know. We have a hundred billion neurons or something like that, I don't know, and they're connected via trillions of synapses, and there's all this chemical processing going on. There's a lot of capacity for stuff, and the information is encoded in different ways in the brain. It's encoded in chemical interactions, it's encoded electrically, in firing and firing rates, and nobody really knows how it's encoded, but it just seems like there's a huge amount of capacity. So I think it's huge, it's just enormous, and it's amazing how much stuff we know. Yeah, but we know, and not just know like facts, but it's all built into this structure that we can make analogies with. Yeah.
People have made some progress in that regard. I mean, people have been working on this for a long time, but the problem is, and I think it was, the common sense problem, as people have been trying to create these common sense networks here at MIT there is this concept of net project, but the problem is that, as I said, most of the knowledge we have is invisible to us, it is not on Wikipedia, it is very basic things about intuitive physics, intuitive psychology and active metaphysics, all of that if you were to create a website describing intuitive physics, intuitive psychology, would it be bigger or smaller than Wikipedia?
What do you think? I guess, described for whom? Huh, that's a really good point. Yes, that's a hard question, because the question is how that knowledge is represented. I can certainly write down F equals ma and Newton's laws, and a lot of physics can be deduced from that, but that's probably not the best representation of that knowledge for doing the kinds of reasoning we want a machine to do. So I don't know, it's impossible to say. And, you know, there are projects, like the famous Cyc project, right, that Doug Lenat did, which is still going on, I think, and the idea was to try to encode all of common-sense knowledge, including all of this invisible knowledge, in some kind of logical representation, and I think it has never been able to do any of the things that he hoped it could do, because that's just the wrong approach. Of course, that's what they always say, you know, and then the history books will say, well, the Cyc project finally had its breakthrough in 2058 or something. And you know, we've made so much progress in just a few decades; who knows what the next breakthroughs will be?
It's certainly a compelling notion, what the Cyc project stands for. I think Lenat was one of the first people to say that what we need is common sense, that's what we need, that all these expert systems are not going to get you to AI, you need common sense, and he basically devoted his whole academic career to that. I admire that, but I think the approach itself won't get there. What do you think is wrong with the approach? What kind of approach might be successful? Well, if I knew the right answer, you know... At one of my talks, a public lecture, one of the people in the audience asked, what AI companies are you investing in, looking for advice. I'm a university professor, for one thing, so I don't have a lot of extra funds to invest, but also, nobody knows what's going to work in AI, right? That's the problem.
Let me ask you another impossible question, in case you have a notion about it: the data structures that will store this kind of information. Do you think they've already been invented, in terms of both hardware and software, or does something else need to be invented? You know, possibly. I think we have to invent something else. I guess, would the most promising advances be in hardware or in software? Do you think we can go far with today's computers, or do we have to do something else, you're saying? I don't know whether Turing computation is going to be enough. Probably, I would guess yes.
I don't see any reason why we would need anything else, so in that sense we've invented the hardware we need, we just need to make it faster and bigger, and we need to figure out the right algorithms and the right kind of architecture. That's a very mathematical notion; when we actually try to build intelligence, there's also the engineering notion of how you put all that stuff together. I guess, people have raised this question, you know, when you ask about our current hardware, whether our current hardware will do. The claim is that our current hardware is, in principle, a universal Turing machine, so all we have to do is make it faster and bigger. But there have been people, like Roger Penrose, if you remember, who said that Turing machines cannot produce intelligence, because intelligence requires continuous-valued numbers. That was kind of his reading of the argument, quantum mechanics and whatever, you know, but I don't see any evidence that we need new paradigms of computation. But I don't know, I don't think
we're going to be able to scale up our current approaches to programming these computers. What is your hope for approaches like Copycat or other cognitive architectures? I've talked with the creator of Soar, for example, and I've used ACT-R myself, I don't know if you're familiar with those. Yeah. What's your hope that approaches like that will help us develop increasingly intelligent systems in the coming decades? Well, that's what I'm working on now, trying to take some of those ideas and extend them. So I think there are some really promising approaches being pursued right now that have to do with more active generative models. So this is the idea of this simulation in your head of a concept. When you are
perceiving a new situation, you have some simulations in your head; those are generative models, they're generating your expectations, they're generating predictions. So that's part of perception: you have a model that generates a prediction, you compare it with, like, what you see right now, and you note the difference, and that generative model also tells you where to look and what to look at and what to pay attention to, and I think it affects your perception. It's not that you only compare it with your perception; it becomes your perception, in a way. It's kind of a mixture of the bottom-up information coming from the world and your top-down model that you're imposing on the world, and that is what becomes your perception. So your hope is that something like that can improve perception systems, that they can understand things better. Yes, understand things, yeah. And what's the analogy-making step there? Well, the idea is that you have this fairly complicated conceptual space, you know, you can talk about a semantic network or something like that, with these different kinds of concept models in your brain that are connected. So let's take the example of the dog walking that we were talking about, okay. Let's say I see someone on the street walking a cat. Some people do walk their cats, I guess, it seems like a bad idea, but yeah. So, you know, there are connections between my dog-walking model and my cat model, and I can immediately see the analogy, that those are analogous situations, but I can also see the differences, and that tells me what to expect. And, you know, here's a new situation: another example with the dog-walking thing is that sometimes I see people riding bikes while holding a leash, with the dog running alongside, and I recognize that as a kind of dog-walking situation, even though the person isn't walking and the dog isn't just walking, because I have these models that say, okay, riding a bike is similar to walking, it's connected, it's a means of transportation, but because they have their dog there, I guess they're not going to work, they're going out to exercise. And, you know, these analogies help me figure out what's going on and what's likely to happen. But these analogies are very human-interpretable. Mm-hmm.
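As a rough illustration of the blend of top-down prediction and bottom-up evidence described above, here is a minimal precision-weighted update in Python. The single-number features and the weights are made-up assumptions for the sketch; it is a generic predictive-coding-style toy, not a model from Copycat or from this conversation.

def perceive(prediction, pred_precision, observation, obs_precision):
    # Blend a top-down prediction with bottom-up evidence,
    # weighted by how much each source is trusted (its precision).
    total = pred_precision + obs_precision
    return (pred_precision * prediction + obs_precision * observation) / total

# Made-up one-dimensional "features" for illustration only.
prediction, observation = 1.0, 0.4
percept = perceive(prediction, 3.0, observation, 1.0)
error = observation - prediction   # the prediction error can drive where to look next
print(round(percept, 3), round(error, 3))   # 0.85 -0.6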
You can even call them concepts, they are just not interpretive or human concepts, what is your link here? Do you expect this to be some kind of question about the hybrid system? How do you think two can begin to find each other? What is the value of learning in these analogy-forming systems? The less visual insight was that it would make the system learn to extract features that at these different levels of complexity can be edge detection and that would lead to learning that it knows simple combinations of edges and then more complex shapes and then whole objects or faces and this.
was based on the ideas of the neuroscientists Hubel and Wiesel, who had seen this kind of structure in the brain, and I think it's true to a certain extent. Of course, people have found that the whole story is a little more complex than that, the brain of course always is, and there's a lot of feedback. So I see that as an absolutely good, brain-inspired approach to some aspects of perception, but one thing it's lacking, for example, is all that feedback, which is extremely important. The interactive element, you mentioned the expectation, the conceptual level going back and forth with the expectation and the perception. Yes, going back and forth. Right, that's extremely important. And, you know, one thing about deep neural networks is that, in a given situation, you know, they're trained, they get these weights and everything, but now I give them a new image, say, and they treat every part of the image the same. You know, they apply the same filters at each layer to all parts of the image. There's no feedback to say, like, oh, this part of the image is irrelevant, I shouldn't care about it, or this part of the image is the most important part, and that's what we humans
can do, because we have these conceptual expectations. There is a little bit of that, in the sense that there's certainly a lot more attention now, what's called attention in natural language processing, and that kind of knowledge of what to attend to is exceptionally powerful. And it is, just like you say, a really powerful idea, but again, in kind of today's machine learning, it all works in an automated way that's not human-like. It's also not dynamic, I mean, in the sense that as the perception of a new example is processed, those attention weights don't change. Right, so, I mean, there's this kind of notion that there's no memory, you're not aggregating the idea of this mental model. Yeah, yeah. Which seems like a fundamental idea.
It is a really powerful idea. I mean, there are some things with memory, but there's no powerful way to represent the world in some way that's deeper. It's very difficult, because, you know, neural networks do represent the world, they do have a mental model, right, but it just seems to be shallow. It's hard to criticize them at the fundamental level, for me at least. It's easy to criticize them and say exactly what you're saying, that mental models, sort of, from a psychological point of view, look, these networks are clearly not able to achieve what we humans do when we form mental models and make analogies, but that doesn't mean that fundamentally they cannot do that. It's very hard to say that. I mean, do you have a notion that today's learning approaches
will not only be limited today, but will always be limited in their ability to construct such mental models? I think the idea of dynamic perception is key here, the idea of moving your eyes around and getting feedback, and that's something, you know, there have been some models like that. There are certainly recurrent neural networks that operate over multiple time steps, but the problem is that the way the recurrence works is, you know, basically that the feedback at the next time step is the entire hidden state of the network, which, it turns out, doesn't work very well.
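For reference, this is the kind of recurrence being described: a generic vanilla RNN step in which the whole hidden state is the only feedback passed to the next time step. It is a textbook sketch with made-up sizes, not any of the particular models discussed here.

import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b):
    # One step of a vanilla recurrent network: the only thing carried forward
    # to the next time step is the entire hidden state h_prev.
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b)

rng = np.random.default_rng(0)
x_t = rng.normal(size=4)              # input at the current time step
h_prev = np.zeros(8)                  # previous hidden state (the "feedback")
W_xh = rng.normal(size=(4, 8)) * 0.1
W_hh = rng.normal(size=(8, 8)) * 0.1
b = np.zeros(8)
h_t = rnn_step(x_t, h_prev, W_xh, W_hh, b)
print(h_t.shape)                      # (8,)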
Did I get that right? What I'm saying is, mathematically speaking, you have the information in that recurrence to capture everything; it just doesn't seem to work in practice. Yeah. So, you know, it's the same question as with the Turing machine, right? Maybe, in theory, computers, anything a universal Turing machine can compute, could be intelligent, but in practice the architecture might have to be a very specific kind of architecture to be able to create that. So, I guess, to ask almost the same question again: what role do you think deep learning needs to, will, or should play in this perception task? I think deep learning as it currently exists, you know, that kind of thing will play some role, but I think there's a lot more going on in perception. But who knows, you know, the definition of deep learning, I mean, it's quite broad, it's kind of an umbrella. What I mean is purely the kind of neural networks. Yeah, feedforward neural networks essentially, or there could be recurrence. But, yeah, sometimes it feels like, I'll be talking with Gary Marcus, and it seems that the criticism of deep learning is like birds criticizing airplanes for not flying well, or for not really flying.
Do you think deep learning could take us all the way, though? Do you think the brute-force learning approach can go the distance? I don't think so, no. I mean, I think it's an open question, but I tend to be on the innateness side, that there are some things that we have evolved to be able to learn, and that learning just can't happen without them. So one example, an example I had in the book that I think is helpful, at least for me, in thinking about this, has to do with DeepMind's Atari game-playing program, okay, and it learned to play these Atari video games just by getting input from the pixels on the screen, and it learned to play the game a thousand percent better than humans.
Well, that was one of the results, and it was great, and it learned this thing where it would tunnel through the side of the bricks in the Breakout game, and the ball could bounce off the ceiling and then just wipe out the bricks. So there was a group that did an experiment where they took the paddle, you know, the paddle that you move with the joystick, and moved it up a few pixels or something, and then they looked at a deep Q-learning system that had been trained on Breakout and asked, could it now transfer its learning to this new version of the game? Of course a human could, but it couldn't. Maybe that's not surprising, but I guess the point is that it hadn't learned the concept of a paddle, it hadn't learned the concept of a ball or the concept of tunneling. It was learning something, you know, and we look at it, we anthropomorphize it and say, oh, this is what it's doing, in the way we would describe it, but it hadn't actually learned those concepts, and because it hadn't learned those concepts,
it couldn't make this transfer. Yes, so that's a beautiful example, but at the same time, by moving the paddle, we also anthropomorphize the failures we inject into the system, which then shape how impressed we are by it. What I mean by that is, to me, the Atari games were deeply impressive, that this was possible at all. So can we just pause on that for a second? People should look at it, just like the game of Go, which to me is fundamentally different from what Deep Blue did, because even though there is still a lot of what could be called distilled search, everything DeepMind does is done in terms of learning, however limited it may be, and that still deeply surprises me.
Yeah, I'm not trying to say that what they did wasn't impressive; I think it was incredibly impressive. To me, what's interesting is, is the position of the paddle on the board just another thing that needs to be learned? So maybe we have been able, through current neural networks, to learn very basic concepts that are not enough to do this general reasoning, and maybe with more data... I mean, the interesting thing about the examples you talk about, and they are beautiful, is that they are often failures of data. Well, that is the question. I mean, I think that is the key question, whether it's a failure of the data or not. The reason I mentioned this example was because you were asking whether I think learning from data could go all the way, and that's why I mentioned it, because I think, and this is not at all to take away from the impressive work they did, but it is to say that when we look at what these systems learn, do they learn the things that we humans consider to be the relevant concepts? And in that example, I'm not sure it did.
If you trained it with the paddle in different places, maybe it could handle it, maybe it would learn that concept. I'm not totally sure. But the question is, you know, how do you scale that up to more complicated worlds? To what extent could a machine that only gets this raw data learn to divide the world up into relevant concepts? I don't know the answer, but I would bet that without some innate notions it can't do it. Yeah, ten years ago I would have one hundred percent agreed with you, like most experts in the field, but now I have a glimmer of hope. Okay, right.
That's interesting, and I think that's what deep learning did to the community. No, I still think, if I were to bet all my money, that one hundred percent, deep learning alone will not be enough, but still, I was personally a little surprised by the Go results, by the power of self-play, of the system simply playing against itself, and, like many other times, I felt humbled by how little I know about what's possible. Yeah, I think self-play is amazingly powerful, and, you know, that goes
back to Arthur Samuel, right, with his checkers-playing program, which was brilliant and surprising in that it worked so well. So just for fun, let me ask you about autonomous vehicles. It's the area I work most closely on, at least these days, and it's also an area that I think is a good example, sort of an example of things that we as humans don't always realize how hard they are to do. It's like the constant trend in AI: the different problems we think are easy when we first approach them, and then we realize how hard they are. Okay, so why, you've talked about autonomous driving being a hard problem,
harder than we realize. Do you still hold that view? Why is it so difficult? What are the hardest parts, in your view? I think it's hard because the world is so open-ended as to what kinds of things can happen. So, what usually happens is you're just driving along and nothing surprising happens, and autonomous vehicles, the ones we have now, can obviously do the job very well in most normal situations, as long as, you know, the weather is reasonably good and everything. But we have this notion of edge cases, or, you know, things in the tail of the distribution, what people call the long tail problem, which says that so many possible things can happen that weren't in the machine's training data that it won't be able to handle them, because it doesn't have common sense. Right, it's the paddle being moved. Yes, it's the paddle-moved problem. So, as I understand it, and you're probably more of an expert on this than I am, the current vision systems for self-driving cars have problems with obstacles, meaning they don't know which, quote
unquote, obstacles they should stop for and which ones they shouldn't stop for, and so I've often read that they tend to slam on the brakes, and that the most common accidents with self-driving cars are people rear-ending them, because they were surprised, they weren't expecting the machine, the car, to stop. Yes, so there are many interesting questions there, because you mentioned two things. One is the perception problem, understanding and correctly interpreting the objects that are detected, and the other is more like the policy, the action you take, how you respond to it. So, a lot of the cars braking, let me clarify that notion: there are many different kinds of things that people call autonomous vehicles, but a lot of the fully autonomous vehicle efforts, like Waymo and Cruise, those companies tend to be very conservative and cautious. They tend to be very afraid of hurting anything or anyone and getting into any kind of accident, so their policy is very cautious, resulting in excessive braking in response to anything that might be an obstacle, right? Right, which, for the human drivers around them, is unpredictable. Yeah, that kind of caution is not very human-like behavior; that's not what we're especially good at when driving. We're in a hurry, we're often angry, and so on, especially in Boston. And then there's another class, where a lot of the time machine learning isn't actually a big part of it. It gets more and more confusing to me,
how much you can really know, speaking from public information, because many companies say they're doing deep learning and machine learning just to attract good candidates, but the reality is that in many cases it's still not a big part of perception. There's lidar, there are other sensors that are much more reliable for obstacle detection. And then there's Tesla's approach, which is vision-only — Elon Musk is famously pushing that. And that's because lidar is too expensive? Well, yes, but I would say that even if you gave every vehicle lidar for free — I mean, Elon Musk fundamentally believes that lidar is a crutch. He has said that if you want to solve the problem with machine learning, lidar shouldn't be the main
sensor. The belief is that the camera contains a lot more information, so if you want to learn, you want that information; but if you just want to not hit obstacles, it's kind of a strange trade-off, because what Tesla vehicles rely on as the primary backup — the main backup sensor — is radar, which is a very crude version of lidar. It's a good obstacle detector, except for stationary objects like stopped vehicles: that's why they had problems hitting stopped fire trucks. So the hope is that the vision system will somehow pick that up, and there are a lot of perception problems to solve along the way.
They're actually doing amazing things, almost like an active-learning setup where you're constantly mining edge cases and feeding them back into this data pipeline. Another aspect that's really important, that people are studying now, is called multitask learning, which is taking this problem — whatever the problem is — and breaking it into dozens or hundreds of little learning problems, so you have this giant pipeline. It's kind of interesting: I was skeptical at the beginning, and over time I've become less and less skeptical about how much of driving you can learn.
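To make the multitask idea concrete — this is purely an illustrative sketch, not any particular company's pipeline, and the task heads and sizes here are made-up assumptions — one shared encoder can feed many small task-specific heads, so dozens of little perception problems share most of their computation:

```python
import torch
import torch.nn as nn

class MultiTaskPerception(nn.Module):
    """Toy multitask perception model: one shared backbone, many heads."""

    def __init__(self, head_sizes=(10, 5, 2)):
        super().__init__()
        # Shared convolutional encoder over camera images.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One small head per sub-problem (e.g. object class, traffic-light
        # state, "stopped vehicle?") -- all hypothetical task names.
        self.heads = nn.ModuleList(nn.Linear(32, n) for n in head_sizes)

    def forward(self, images):
        features = self.backbone(images)          # shared computation
        return [head(features) for head in self.heads]

model = MultiTaskPerception()
images = torch.randn(4, 3, 64, 64)               # dummy batch of camera frames
outputs = model(images)                           # one prediction per task head
# Placeholder loss; a real pipeline would use per-task labels and losses.
loss = sum(out.pow(2).mean() for out in outputs)
loss.backward()
```

In a real system each head would have its own labeled data and loss term, and the weighted sum of those losses is what gets backpropagated through the shared backbone.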
I still think it's a lot further away than the CEO of that particular company thinks it will be, but it's impressive and amazing that through good engineering, data collection, and active data curation you can attack that long tail. And it's an interesting open question — you're absolutely right, there's a long tail of all these edge cases that we don't think about — and it's a fascinating question that applies to natural language and all these spaces: how long is that tail, right? I don't want to dwell on the point, but at what point can these practical problems of human experience
be solved — not necessarily in a year, but in a reasonable timeframe — or do fundamentally different methods need to be invented? So I don't think, ultimately — it's a trade-off in some ways. Being able to drive and deal with any situation that comes up requires something like full human-level intelligence, and even humans — most human accidents aren't because the human wasn't smart enough; they're because the human wasn't paying attention, or was drunk, or whatever — whereas accidents with autonomous vehicles are because they weren't smart enough, since they're always paying attention. So it's a trade-off, and I think it's very fair to say that autonomous vehicles will ultimately be safer than humans, because humans are very unsafe — it's a low bar. But as you said, humans get a bad rap even though we're very good at common sense. Yes, we're great at common sense; we're bad at paying attention, especially when the task is monotonous — driving is a little boring, and we have these phones to play with and everything. But I think what's going to happen is that, for many reasons, not just AI reasons but also legal and other reasons, the definition of self-driving, of autonomous, is going to change. It's not going to be "I'll go to sleep in the back and it takes me anywhere"; it will be more that certain areas are equipped with the sensors and the mapping and everything needed, so that self-driving cars don't have to have full common sense, and they'll do fine in those areas as long as pedestrians don't mess with them too much — that's another question.
So I don't think we'll have fully autonomous self-driving, in the way the average person imagines it, for a long time. And just to reiterate, this is an interesting open question that I think I agree with you on: to completely solve autonomous driving, you have to be able to build in common sense. Yes, I think so. It's important to hear that and think about it — I hope it's wrong, but I currently agree with you that, unfortunately, you do need these deeper understandings of physics and of the way this world works, and also of human dynamics. As you mentioned, with pedestrians and cyclists there's what some people call non-verbal communication, and that dynamic is also part of this common sense, right? And humans are pretty good at predicting what other humans are going to do and how our actions affect their behavior. Yes, there's this strange game-theoretic dance that we're somehow good at, and it works well.
The funny thing is, I've watched countless hours of pedestrian video and talked to people, and we humans are also very bad at articulating the knowledge we have — that's been a big challenge. Yeah. So you mentioned embodied intelligence: what do you think it takes to build a system with human-level intelligence? Does it need to have a body? I'm not sure, but I lean that way more and more, and toward asking what it even means to have a body. I don't mean to keep bringing him up, but Yann LeCun — he looms very large — yes, well, he certainly has a big personality — yes, he believes that a system needs to be grounded, meaning it needs to be able to interact with reality in some way, but he doesn't think it necessarily has to have a body. So when you think about what the difference is — I guess I want to ask, when you say body, do you mean you have to be able to act on the world?
Do you also mean that there's a body you have to preserve? Oh, that's a good question. I haven't really thought about that, but I would guess both, because I think intelligence is very difficult to separate from the self — our desire for self-preservation, our emotions, all those non-rational things that supposedly get in the way of logical thinking. And whether we're talking about human intelligence or human-level intelligence, whatever that means, a lot of it is social. We evolved to be social, to deal with other people, and that's so ingrained in us that it's hard to separate intelligence from it.
I think AI, for the last seventy years or however long it's been around, has largely ignored that. It's very Cartesian: there's this "thinking thing" we're trying to create, without caring about all those other aspects, and I think the other aspects are very fundamental. So there's this idea that things like emotions get in the way of intelligence rather than being an integral part of it. I mean, I'm Russian, so I romanticize the notions of emotion and suffering and the fear of mortality, all those kinds of things. By the way, did you see the thing that was circulating on the Internet recently about this? Some author — I think Russian or Slavic, anyway, maybe Polish — had written a kind of case against the idea of superintelligence, and one of the arguments was the argument from Slavic pessimism. Do you remember what the argument is?
It's basically that nothing ever works. So what do you think the role is — it's such a fascinating idea that what we perceive as the limits of the human mind, emotion and fear and all those kinds of things, are actually integral to intelligence. Could you explain why you think that's so important? I think that, for human-level intelligence at least, the way humans work — emotion — is a big part of it: it affects how we perceive the world, it affects how we make decisions about the world, it affects how we interact with other people, it affects our understanding of other people. For me to understand what you're likely to do, I need to have a kind of theory of mind, and that is very much a theory of emotions, motivations, and goals — and to understand that, you know, we have this whole system of mirror neurons.
I understand your motivations by simulating them in myself. So it's not something I can prove is necessary, but it seems very likely. Well, you wrote an op-ed in the New York Times titled "We Shouldn't Be Scared by Superintelligent A.I.," and you took a bit of flak for it, for making some noise. Can you summarize the key ideas of that piece? So it was prompted by an earlier New York Times op-ed by Stuart Russell, which summarized his book called Human Compatible. The article said, you know, if we have a superintelligent AI, we need its values to be aligned with our values, and it has to learn about what we really want. And he gave this example: what if we have a superintelligent AI and we give it the task of solving climate change, and it decides that the best way to reduce carbon in the atmosphere is to kill all humans?
Well, that didn't make any sense to me, because, first of all, in trying to figure out what superintelligence even means, it seems that something superintelligent can't just be smart along this one dimension — able to figure out all the steps, the optimal path to solving climate change — and yet not smart enough to realize that humans don't want to be killed. The idea that you could get the one without the other. And, you know, Bostrom, in his book, talks about the orthogonality thesis, where he says —
I don't remember exactly what it is, but it's something like: the goals of a system and its values don't have to be aligned, there's something orthogonal there — and that didn't make any sense to me. So you're saying that in any system, not even a superintelligent one but anything approaching general intelligence, there's a holistic nature that prevents one dimension from just running away on its own? Yes, exactly. And, you know, Bostrom had this example of a superintelligent AI that turns the world into paperclips because its job is to make paperclips or something, and that, even just as a thought experiment, didn't make any sense to me — neither as a thought experiment nor as something that could actually happen.
So what my op-ed was trying to do was say that intelligence is more complex than these people present it: rationality, values, emotions, all of that — they're not so separable. The idea that you could separate out all these dimensions and build a machine that has just one of them, that is superintelligent in one dimension but doesn't have any of the others — that's what I was trying to criticize. Let me also read a few sentences from Yoshua Bengio, who is always super eloquent, on what he wrote in response.
"I have the same impression as Melanie, that our cognitive biases are linked to our ability to learn to solve many problems. They may also be a limiting factor for AI; however, things could also turn out differently, and there is a lot of uncertainty about the capabilities of future machines. But, most importantly for me, the value-alignment problem is a problem well before we reach any hypothetical superintelligence: it is already posing a problem in the form of super-powerful companies whose objective function may not be sufficiently aligned with humanity's general well-being, creating all kinds of harmful side effects." So he goes on to argue that, beyond orthogonality and that kind of thing, the concern of aligning values with the capabilities of the system is something that could arise long before we reach anything like superintelligence. So your criticism says that this view of superintelligent systems seems to rule out fundamental parts of what would be needed for intelligence, and he then says, yes, but if we look at systems that are much less intelligent, these same kinds of problems could come up.
They come up for sure, but I guess the example he gives of corporations — those are people, right, those are people's values. I mean, we're talking about people: a corporation's values are the values of the people who run that corporation. But the idea is that it's the algorithm — right, so the fundamental element doing the bad thing is a human being, yes, but the algorithm controls the behavior of this mass of human beings, helping the company in whatever way it can. For example, if it's a company that runs advertising, the algorithm recommends certain things to encourage engagement; you make money by encouraging engagement, and so, more and more, in this cycle, the company builds an algorithm that drives more engagement and maybe creates more division in the culture, and so on and so on.
Again, I guess the question here is who has the agency. You could say, for example, that we don't want our algorithms to be racist, right, and facial recognition — some people have criticized certain facial-recognition systems as racist because they're not as good on darker skin as on lighter skin. Okay, but the agency there — the actual face-recognition algorithm isn't the thing that has agency, it's not the racist thing; it's, I don't know, the combination of the training data, the cameras being used, whatever. But — and I'll say that I agree with Bengio there, I do think there are these value issues with our use of algorithms — my understanding of what Russell's argument was is more that the algorithm itself has the agency now, it's the thing making the decisions, and it's the thing that has what we would call values. Yeah, but is that just a matter of degree? It's hard to say, right? But I would say that's something qualitatively different from a facial-recognition neural network. And to linger on that point more broadly: if you look at Elon Musk or Bostrom or others who are worried about the existential risks of AI — however far into the future it is, it eventually happens, we just don't know how far — do you share any of those concerns, and in general how concerned are you about AI becoming something like an existential threat to humanity?
So I would say yes, it's possible, but I think there are existential threats that are a lot closer. As you said, it's a hundred years out — so your timeline is more than a hundred years? More than a hundred years, and maybe even more than five hundred years, I don't know. I mean, the existential threat is so far into the future that before then there will be a million different technologies we can't even predict now, which will fundamentally change the nature of our behavior, of society, of reality. I think so. And, you know, we have a lot of other pressing existential threats going on — nuclear weapons, climate problems, poverty, potential pandemics — you can go on and on. So I think worrying about the existential threat from AI is not the best priority for what we should be worrying about. That's my opinion, because we're so far away. But I'm not necessarily criticizing Russell or Bostrom or whoever for worrying about that, and I think it's certainly fine for some people to worry about it. I was more objecting to their view of what superintelligence is — I focused more on their notion of superintelligence than just on the fact that they're worried. And the title of the article was written by the editors of the New York Times —
I wouldn't have called it "We Shouldn't Be Scared by Superintelligent A.I." myself. No — if you had written it, it would have been "We should redefine what we mean by superintelligence"? What I actually said was something like "superintelligence is not a coherent idea," but that's not the kind of title the New York Times would run. And the follow-up argument that Yoshua presents is almost not an argument but a statement — I've heard him say it before, and I think he has a very diplomatic way of putting it: it's good for a lot of people to believe different things. Yes — though he also speaks practically about these nearer-term problems. And I'd say Stuart Russell does an incredible job, Bostrom does an incredible job, you do an incredible job, and even when you don't agree on the definition of superintelligence, or on the usefulness of the term at all, it's still useful to have people who use that term and then argue about it.
I absolutely agree, and I think it's great that the New York Times publishes all this stuff. That's right, it's an exciting time. What do you think is a good test of intelligence? Is natural language ultimately the test you find most convincing — the original Turing test, or, you know, higher-level versions of the Turing test? Yes, I still think the original idea of the Turing test is a good test of intelligence. I mean, I can't think of anything better.
You know, the Turing test, the way it's been carried out so far, has been very impoverished, so to speak. But I think a real Turing test that really goes into depth — like the one I talk about in the book, the bet between Ray Kurzweil and Mitch Kapor: they have this bet that by 2029, I think that's the date, a machine will pass the Turing test, and they have a very specific set of conditions — how many hours it lasts, how many expert judges there are, all of that. Kurzweil says yes, Kapor says no, and we only have nine more years to see.
You know, if a machine could pass that, I'd be willing to call it intelligent. Of course nobody will — they'll say it's just a language model, if it does. But you would feel comfortable? It's language, a long conversation — well, yeah, I mean, you're right, because I think that to carry out that long conversation you would literally need to have a deep, common-sense understanding of the world — I think so, and the conversation is enough to reveal it. So, another super fun topic you've worked on is complexity. Let me ask the basic question: what is complexity? So complexity is another
one of those terms, like intelligence, that is maybe overused. But my book on complexity is about this broad area of complex systems: studying different systems in nature, in technology, in society, in which you have emergence of the kind I was talking about with intelligence. You know, the brain has billions of neurons, and you could say that each neuron individually is not very complex compared to the system as a whole, but the interactions of those neurons and their dynamics create these phenomena that we call intelligence or consciousness, which we consider very complex. So the field of complexity is trying to find general principles that underlie all these systems that have these kinds of emergent properties. And the emergence happens from underlying interactions between elements of the complex system that are themselves usually simple — yeah — and the emergence happens when there are a lot of these things interacting? Yes, more or less. And most of science to date — can you talk about what reductionism is?
Well, reductionism is when you try to take a system and break it down into its elements, whether those are cells or atoms or subatomic particles, whatever your field is, and then try to understand those elements and build up an understanding of the whole system by looking at a kind of sum of all the elements. So what is your sense — whether we're talking about intelligence or these kinds of interesting complex systems — is it possible to understand them in a reductionist way, which is probably the approach of most of science today? I don't think it's always possible to understand the things we most want to understand. So I don't think it's possible to look at individual neurons and understand what we call intelligence, to just look at a kind of sum of the parts — that sum-of-the-parts view is the problem here. An example is the human genome, right?
So there was a lot of work on sequencing the human genome, because the idea was that we would be able to find the genes underlying diseases. But it turned out to be a very reductionist idea: we'd figure out what all the parts are, and then we could figure out which parts cause which things. It turns out, though, that the parts don't cause the things we're interested in — it's the interactions, the networks of these parts — and so the reductionist approach didn't give the explanation we wanted. What, to you, is the most beautiful complex system you've ever encountered?
The most beautiful that has ever captivated you? Maybe cellular automata — oh yes, I was very captivated by cellular automata and worked on cellular automata for several years. Do you find it surprising that such simple systems, such simple rules, in cellular automata can create a kind of seemingly limitless complexity? Yes, that was very surprising to me — I didn't understand it. How does that make you feel, ultimately? Is it humbling, or is there hope to leverage this in some way, to gain a deeper understanding and even to be able to engineer things like intelligence? It's definitely humbling.
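As a concrete illustration of how much complexity such simple rules can generate, here is a minimal one-dimensional elementary cellular automaton (Rule 110 is a classic example); the rule number, width, and step count are arbitrary choices for the sketch, not anything from the conversation:

```python
import numpy as np

def elementary_ca(rule=110, width=79, steps=40):
    """One-dimensional elementary cellular automaton.

    Each cell's next state depends only on itself and its two neighbors,
    via an 8-entry lookup table encoded by `rule` (0-255).
    """
    table = np.array([(rule >> i) & 1 for i in range(8)], dtype=int)
    cells = np.zeros(width, dtype=int)
    cells[width // 2] = 1                       # start from a single live cell
    history = [cells.copy()]
    for _ in range(steps):
        left, right = np.roll(cells, 1), np.roll(cells, -1)   # wrap-around edges
        cells = table[4 * left + 2 * cells + right]           # 3-bit neighborhood index
        history.append(cells.copy())
    return np.array(history)

for row in elementary_ca():
    print("".join("#" if c else " " for c in row))
```

Running it prints the pattern growing out of a single cell; changing `rule` shows how drastically the behavior varies across the 256 possible rules, from boring repetition to seemingly endless structure.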
It's also kind of awe-inspiring — it's such a beautiful piece of mathematics that these simple rules can produce this complex, hard-to-understand behavior, and that's mysterious and surprising, but also exciting, because it gives you a kind of hope that you might be able to engineer complexity just from simple rules. Can you briefly say what the Santa Fe Institute is — its history, its culture, its ideas, its future? I've never been; from the outside it seems like this mystical place where brilliant people study the edge of chaos. Exactly. The Santa Fe Institute was started in 1984 by a group of scientists, many of them from Los Alamos National Laboratory, which is about a forty-minute drive from the Institute. They were mostly physicists and chemists, but they were frustrated in their fields because they felt their fields did not address big interdisciplinary questions, like the ones we've been talking about, and they wanted a place where people from different disciplines could work on these big questions without being siloed into physics, chemistry, or biology. So they started this Institute, and it was people like George Cowan, who was a chemist on the Manhattan Project, Nicholas Metropolis, the mathematical physicist, and Murray Gell-Mann, the physicist.
So some really big names — you know, Kenneth Arrow, the Nobel Prize–winning economist — and they started having these workshops, and the whole enterprise grew into this research institute that has itself been on the edge of chaos its entire life, because it doesn't have a significant endowment; it's just been living off whatever funding it can raise through donations, grants, corporate partnerships, and so on. But it's a great place — it's a really fun place to go and think about ideas you wouldn't normally encounter. I saw Sean Carroll there — the physicist — yes, he's on the external faculty. And you mentioned there are some external faculty — there's a very small group of resident faculty, maybe about ten, who are there for five-year terms that can sometimes be renewed; then they have some postdocs; and then they have this much larger group, on the order of a hundred external faculty, people like me who come and visit for various periods of time. So what do you think is the future of the Santa Fe Institute? And if people are interested, what is there in terms of public interaction, or students, and so on — what are the possible ways to interact with the Institute or its ideas? Yes, so there are a few different things they do. They have a complex systems summer school for graduate students and postdocs — sometimes faculty attend too — and that's a very intensive four-week residential program where you go and listen to lectures and do projects, and people really love it.
I mean, it's a lot of fun. They also have some specialized summer schools — there's one on computational social science, another on climate and sustainability, I think it's called, there are a few — and then they have short courses of just a few days on different topics. They have an online education platform that offers many different courses and tutorials from SFI faculty, including an introductory complexity course, and there are a lot of online talks from guest speakers and so on. They host a lot of events — technical seminars and colloquia, and a series of community lectures, public lectures — and they put it all on their YouTube channel, so you can watch it all. Douglas Hofstadter, author of Gödel, Escher, Bach, was your PhD advisor — you've mentioned him a couple of times — and your collaborator. Do you have any favorite lessons or memories from your time working with him that carry through to this day?
Yes — looking back over the whole time I worked with him, one of the things he taught me was that when you're looking at a complex problem, you should idealize it as much as possible, to try to figure out what the essence of the problem really is. That's where the Copycat program came from: taking analogy-making and asking how we could make it as idealized as possible while still retaining the important things we wanted to study. That has really remained a central theme of my research, and I'm still trying to do it. It's actually very much inspired by physics.
Hofstadter's PhD was in physics — that was his training. It's like first-principles thinking: you boil it down to the most fundamental aspect of the problem. Yeah, so that you can focus on solving that fundamental essence. Yeah. And in AI, you know, people used to work in these microworlds — the blocks world was an important area early on in AI — and then that was criticized, because people said, oh, you can't scale that to the real world, so researchers started working on more real-world problems. But now there's been kind of a return, even to the blocks world itself.
We've seen a lot of people trying to work on more of these very idealized problems for things like natural language and common sense, so it's an interesting evolution of those ideas. So maybe the blocks world represents the fundamental challenges of the problem of intelligence more than people realized. When you look back on your work and your life — you've worked in so many different fields — is there anything you're really proud of, in terms of ideas you've had the chance to explore? I'm very proud of my work on the Copycat project. I think it's really different from what almost everybody else does in AI.
I think there are a lot of ideas there still to explore, and I think one of the happiest days of my life, apart from the births of my children, was the birth of Copycat, when it actually started being able to make really interesting analogies. I remember it very clearly — it was a very exciting time. Well, you brought something to life — yeah, an artificial something. So, in terms of what people can interact with: I saw there are a few things out there — I think one is called Metacat, and there's a Python 3 implementation of Copycat — if people really want to play with it and get into it and study it, maybe even integrate it with deep learning or whatever other kind of work they're doing, what would you suggest they do to learn more about it and take it forward in different directions?
Yeah, so there's a book by Douglas Hofstadter called Fluid Concepts and Creative Analogies that talks in great detail about Copycat. I have a book called Analogy-Making as Perception, which is basically a version of my PhD thesis on it. There's also code available that you can run, and I have some links on my website where people can get it. I think that would really be the best way to get into it — yes, play with it. Melanie, it's been an honor to talk with you. I really enjoyed it — thank you so much for your time today. It's been great.
Thanks for listening to this conversation with Melanie Mitchell, and thanks to our presenting sponsor, Cash App. Download it and use code LexPodcast: you'll get ten dollars, and ten dollars will go to FIRST, an education nonprofit that inspires hundreds of thousands of young minds to learn and to dream of engineering our future. If you enjoyed this podcast, subscribe on YouTube, give it five stars on Apple Podcasts, support it on Patreon, or connect with me on Twitter. And now, let me leave you with some words of wisdom from Douglas Hofstadter and Melanie Mitchell: without concepts there can be no thought, and without analogies there can be no concepts. And Melanie adds: how to form and fluidly use concepts is the most important open problem in AI.
Thanks for listening and I hope to see you next time.
