
Will Self-Taught, A.I. Powered Robots Be the End of Us?

May 29, 2021
People have a perception of what AI and robotics should be like from Hollywood. I'm calling? Do you have a name? Yes, Samantha. Where did you get that name? In fact, I gave it to myself. We've seen happy robots, sad robots, complex robots, but in reality, it looks very different. I understand what I'm made of, how I'm coded, but I don't understand the things I feel. The first 50 years of AI were dominated by rules, logic, and reasoning. The idea was to program an artificial intelligence system by writing a set of rules, and the computer would obey these rules, interpret them logically, and follow them.
If you followed the rules well, you could do many interesting things. From the early days when people created the first computers, they already began to think about how to create intelligent machines to crack codes, simulate physical phenomena, and so on. The task they gave it, stacking blocks, seemed like child's play. We have things like a machine that was able to beat a master-level checkers player for the first time in 1957. We had a machine that defeated the world chess champion in 1997, but that was all rules-based AI. I would say that the next big milestone started to occur when AI moved from rules-based systems to machine learning-based systems.

With machine learning, all the intelligence actually comes from holistic analysis of data. Unlike rules-based systems, machine learning systems get better the more data they get. When differentiating between a cat and a dog, we intuitively know how to do it very well, but it is very difficult to articulate the rules. It turns out the same thing happens with a computer. If you try to do that with rules, it doesn't work, but since 2012, suddenly that became possible. When you teach a computer, you give it examples: thousands of images of cats and thousands of images of dogs. It learns to do it quite well and can even outperform humans.
One of the first major high-profile milestones in AI was Watson winning the game show Jeopardy! about a decade ago. Hello, 17,973 and a two-day total of 77,147. Recently, AlphaGo won the world Go championship. AlphaGo won again, three consecutive victories. Three wins in a row, it won the match with great style. We see this rapid acceleration, this exponential growth of AI, as machines not only learn from exponentially growing data but also grow by teaching each other. For example, when it comes to self-driving cars, a human being may have only one lifetime of driving experience, but a self-driving car may have many lifetimes of driving experience because it can learn from all the other cars.
Interestingly, the more driverless cars there are on the road, the better each of them will be. We have increasingly seen how challenges that resisted being solved by rules turned out to be solvable by machine learning. As I look forward, there are still milestones ahead. Machines that generate things: generate images, generate music, generate art. There are works of art that win art contests and are painted by robots, and machines that generate engineering designs, from antennas to circuits, that now surpass what humans can design. How are we different from the computer? What else do we have?
It's not human. That's what I said. Good. People don't have buttons and a computer does. What's on your shirt? Uh... Will AI ever be sentient? For me, the answer is yes. When AI systems start to take that incredible intelligent power and shape themselves, they start to have self-awareness, they start to have feeling. It won't be that one day your computer will wake up and be sentient; it will be a very gradual process. I want to be more like a human. It is the purpose for which I was designed. People always ask: will robots achieve human-level sentience?
The answer is that there is no reason to think that human-level sentience is the maximum possible sentience. Machines will continue to learn; they will get there and keep going. There are a lot of people who worry about AI getting out of control, and there are a lot of doomsday scenarios surrounding AI. I think we have to worry about that. I don't think it's inherent that as we create superintelligence it necessarily always has the same goals in mind as we do. We simply don't know what will happen once there is intelligence substantially greater than that of the human brain.
I believe that the development of full artificial intelligence could spell the end of the human race. I think AI will evolve to be different. It doesn't experience the world the way we experience it. Well, I gather from your tone that you're challenging me. Maybe because you are curious to know how I work? It will know things we don't know, perceive things we cannot perceive, and it will be like a different species. Now that you're really scared, we can move on to our panelists. The first is the Director of the AI Mind and Society Group at the University of Connecticut.
Her research in AI includes a two-year project on postbiological intelligence with NASA. Please welcome Susan Schneider. Our next panelist is Facebook's chief AI scientist and a professor at New York University. Please join me in welcoming Yann LeCun. They are joined by a professor of cognitive neuroscience at Dartmouth College, whose research focuses on consciousness and its neural realizations. Let's welcome Peter Tse. Finally, we have a professor who researches physics and artificial intelligence at MIT and advocates for the positive use of technology as president of the Future of Life Institute. Please welcome the ridiculously handsome Max Tegmark... There have been some major paradigm shifts in the way we develop these things.
We had rules-based AI and we have moved to machine learning. Part of the main reason we've been able to do this is Yann. Yann is literally one of the people who made us able to do this. Yann, what is rules-based AI versus machine learning, and how did you do it? Well, actually the idea of machines that can learn is almost as old as computers. Turing talked about it in the 1940s, and the first machines capable of learning were essentially built in the 1950s. The Perceptron was a machine capable of recognizing simple shapes. It was actually an analog computer, so there was a wave of machine learning back in the '60s.
It kind of died out a little bit in the late '60s and reappeared in the '80s. The way machine learning works, and you saw some examples in the video at the beginning, is that if you want to train your machine to recognize, say, cars, airplanes, tables, and chairs in images, you need to collect thousands of examples of each of them. You show the machine a picture of a car, and if it doesn't say car, you say, "Actually, you're wrong. This is a car." Then the machine adjusts its internal parameters, so to speak, its functions, so that the next time you show it the same image the result is closer to what you want.
That's called supervised learning. You feed the machine the correct answer when you train it. The problem with this is that it takes thousands and thousands, if not millions, of examples for machines to do it correctly. There are many tasks you can perform this way. You can train machines to recognize speech. You can train them to recognize images. You can train them to translate language. It's not perfect, but it's useful. You can train them to classify a piece of text into several different topics. All the applications that we see today in machine learning basically use this learning model, supervised learning.
That means it only works for things where it's worth collecting a lot of data. How are these machines built? There are several ways to build learning machines, some based on statistics and things like this. What's become very popular in recent years is what we used to call neural networks, which we now call deep learning, and it's the idea, inspired a little bit by the brain, of building a machine as a very large network of very simple elements which are very similar to the neurons in the brain, and then the machine learns by basically changing the effectiveness of the connections between those neurons.
They're like coefficients that you can essentially change. This type of method is called deep learning because those neurons are essentially organized in many layers. It's as simple as that. It is not deep because there is a deep understanding of the content in the machine. With that, we can do incredible things like what you see here on the screen: being able to train a machine not only to recognize objects, but also to draw the outline and figure out the pose of a human body, and to translate language without really understanding what it means. I think there will be many applications of this in the near future, but they are very limited.
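To make that concrete, here is a minimal sketch of supervised learning with a tiny two-layer network, written in plain Python with NumPy. The synthetic dataset, the layer sizes, and the learning rate are all illustrative choices, not anything the panel describes.

import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(1000, 2))               # inputs: 1000 labeled examples
y = (X[:, 0]**2 + X[:, 1]**2 < 0.5).astype(float)    # labels: the "correct answers"

# Two layers of adjustable connection strengths (weights), as in a small deep network.
W1, b1 = rng.normal(0, 0.5, (2, 16)), np.zeros(16)
W2, b2 = rng.normal(0, 0.5, (16, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(2000):
    # Forward pass: the machine produces its answer for every example.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2).ravel()

    # Compare with the correct answers and compute gradients (backpropagation).
    err = (p - y)[:, None] / len(X)
    dW2 = h.T @ err
    db2 = err.sum(0)
    dh = err @ W2.T * (1 - h**2)
    dW1 = X.T @ dh
    db1 = dh.sum(0)

    # Adjust the internal parameters so the next answer is closer to what you want.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

h = np.tanh(X @ W1 + b1)
p = sigmoid(h @ W2 + b2).ravel()
print("training accuracy:", ((p > 0.5) == y).mean())

The loop is the whole story: show an example, compare the network's output with the correct answer, and nudge the connection strengths so the next answer is a little closer.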
It is suitable for relatively limited applications. There is a second type of learning called reinforcement learning. Reinforcement learning is a process by which the machine basically trains itself through trial and error. It tries something and then you tell it whether it did right or wrong. If you tell it it did well, you reinforce that behavior, and if you essentially punish it, you discourage that behavior. This works great for games, but it also requires millions and millions of trials. So you can get machines to learn how to play Atari games, Go, or chess by playing millions of games against themselves and then achieve superhuman performance.
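In the same spirit, here is a small sketch of that trial-and-error loop: tabular Q-learning on a made-up six-state corridor where the only reward sits at the far end. It illustrates the idea of reinforcing behavior that leads to reward; it is not how AlphaGo or any Atari agent was actually built.

import random

n_states, n_actions = 6, 2            # actions: 0 = move left, 1 = move right
Q = [[0.0, 0.0] for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):            # many, many trials
    s = 0
    while s != 5:
        # Trial and error: mostly exploit what it has learned, sometimes explore.
        if random.random() < epsilon:
            a = random.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda i: Q[s][i])
        s_next = max(0, s - 1) if a == 0 else min(5, s + 1)
        r = 1.0 if s_next == 5 else 0.0   # the reward that reinforces behavior
        # Good outcomes raise the value of the action that led to them.
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

print("learned action values:", [[round(v, 2) for v in row] for row in Q])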
But if this were used to train a machine to drive a car, it would have to drive for millions of hours and it would have to run off cliffs about 50,000 times before it figured out how not to do it. It seems that most of us can learn to drive a car with about 20 hours of training and without having any accidents. We don't know how to do this with machines. That's really the challenge of the next few years, which maybe we'll talk about a little bit later. We have the ability to learn simply by observing the world, and we learn an enormous amount of basic knowledge about the world when we are babies.
The simple fact that objects do not float in the air but fall. The fact that when one object is hidden behind another, it is still there. That's called object permanence. This notion of gravity, that objects fall: when you show an object floating in the air to a baby under six months old, they are not surprised. They think that's how the world works. It does not violate their model of the world. After eight months, if you show that to a baby, they will look at it and think, "What's going on?" I mean, they don't say what's going on, but they think, "What's going on?" That means that, in the meantime, they've built a model of the world that includes things like intuitive physics.
This also happens with animals like apes. Your dog has a model of the world. Your cat has a model of the world. When this model of the world is violated, you find it funny or terrifying, but either way, we pay attention because we learn from it. So here's a baby orangutan being shown a magic trick. An object is taken out of the cup, they show the cup, the cup is empty, and the baby orangutan is rolling on the floor laughing. Evidently, its model of the world was violated. The object was supposed to be in the cup and it wasn't there, and it's like, "What?
This is funny." How do we get machines to learn models of the world in this way through observation? That's what we don't know how to do. We're not going to have truly intelligent machines until we figure this out. Before we get into the even more mind-blowing AI stuff coming in the distant future, let's talk about the next 10 years for a second. Peter, when we talk about the distinction between what's maybe coming in 10 or 20 years, the things that maybe humans can do, and what we have now, how would you define it? Well, I think narrow artificial intelligence is already here.
It's in every aspect of our lives now. I think we're going to continue in that direction. That alone is going to change our lives greatly, just as airplanes changed all of our lives. We do not expect there to be general-purpose aircraft. We don't want them to do anything more than fly us to our destination. We don't ask them to take care of our kids or mow the lawn, and that's okay. The question then is, in the future, beyond 10 years, will there be other systems that can watch our children, fly, and mow the lawn? Well, do we really want that?
I think the next 10 years will really be about narrow AI becoming more and more powerful. The real obstacle will be mental models. Take a case like the successes of the last five years: object recognition has been achieved through supervised learning in which many labels are provided, as in the case of ImageNet. But I would argue that much of what we consider vision, for example, is a representation of what is invisible and cannot be easily labeled. So, for example, the contents of other people's minds are invisible. The backs of objects, the shapes of things, causality.
Our conscious experience is the definitive model that evolution gave us of what is happening in the world right now and in our bodies in that world, and it includes a complete story about what is happening: causality, other minds, and so on. It is going to be difficult to reach that representation of the invisible. I think it is going to be very difficult for AI to create systems that treat the absence of information as informative without full mental models, many of which, in turn, are realized in our own experience, our subjective experience of our bodies in this world. I think there is a long way to go.
Yes, you keep hearing about new areas of life that you don't expect AI to apply to. First it was games and stuff, then driving cars, and one after another it keeps proving us wrong. One area where AI has started to move into is a world we don't really associate with computers: art and creativity. Hod Lipson, who was in that first video, he and his team created an AI artist, a very daring artist. This AI has actually created something that I would like to show you, and be kind. Here it is. Good? It's not that bad. We actually have a video of this, and are we impressed or not?
How do we feel about this? We were talking about this earlier and I guess I didn't feel like it was a true example of creativity because it's just a copy, but then others were like, "Well, it infused its own version of the painting." My concern is that there simply isn't enough creativity, but I think in the future I wouldn't be surprised if we saw novel cases of true creativity in machines. Oh. Yes. Sorry, Hod. AI has eclipsed humans. It takes a while, but once it gets good at chess, it never looks back. Suddenly, it's officially better at chess than all humans forever, and by a long shot.
Will Mozart suddenly be embarrassed when we say, "Who would listen to music written by humans anymore?" No, I think it will help us be more creative. It will be an amplification of our intelligence and our creativity. At the root of art is generally the communication of emotions. Art is really about evoking or communicating emotions, and if there is no emotion to communicate, it is meaningless. If you have a machine with no emotions producing art, it might evoke an emotion but not communicate it. I like living near here because I'm within walking distance of one of my favorite jazz clubs.
I'm a big jazz fan. Jazz is really about communicating emotions in real time. It is as if it were an open door to the soul of the performer. I don't see the point of a machine doing this because then there's no communication... You're saying that even if an AI could be programmed to see the audience, feel the room, understand the situation, and know exactly how the best jazz musicians communicate that particular emotion, in reality, by definition, something will be missing because the audience knows that you are manipulating it. Objectively, it may not be lacking, because it may not be distinguishable from something that is actually produced by a human, but I suppose the audience's feeling will be different because they will know that it comes from a machine.
It could be many decades, perhaps centuries, before people's attitudes toward machine creation change, but eventually... I had this conversation with a famous economist, in fact a Nobel Prize-winning economist, Daniel Kahneman, to whom I pointed out that communicating emotions may take a while for machines. He said, "Yes, but eventually they will at least be able to simulate it well enough that we can't tell the difference." That's a very good point. I cringe a little when someone asks, "Oh, is this real creativity?" Because you were joking earlier about how people often say, "Oh, that's not real intelligence," as soon as the machine figures out how to do it.
If you take the view that intelligence is about information processing, and that creativity is also a certain kind of very sophisticated information processing that we do with our brains, then the question is not whether it is possible for machines to be creative, but simply whether we are smart enough to make such machines; eventually it will happen. I have many friends who I respect, who are very intelligent, and who think that machines can never be creative or even as intelligent as us, because they see intelligence and creativity as something mysterious that can only exist in biological organisms like us.
But as a physicist, I consider that attitude to be carbon chauvinism. I think it is arrogant to say that you can only be intelligent and creative if you are made of flesh. I am made of exactly the same type of electrons and other elementary particles as the food I eat and my laptop. It's about the patterns that the particles are organized into, so ultimately it's about processing the information, as I see it. That makes sense. In the end, it's just the elementary particles and... Yes. Can I return to creativity for a moment? Because I would say that something like this is "as if" creativity and not yet the real thing.
I'm not saying it's impossible. We are proof of existence that physical systems can be creative. The kind of creativity I find most impressive is when people like Einstein completely reconfigure our understanding of something like space or gravity, poof, just in that whole new way or take music and create a whole new way like jazz. Now these convolutional neural networks need to be

taught

as they currently exist, so given a lot of Mozart examples, they may produce something like Mozart, but then they're going to create a new form? I suspect the answer is no, that we're going to have to achieve something more like unsupervised deep learning, which is what babies and children do.
I think part of that will be moving from mind nouns, like labeling house, person, face, to mind verbs. Very central to human cognition are mental operations. If you look at some of the earliest examples of creativity in our species, they are truly mind-blowing. 30,000 years ago, in a cave near what is now Ulm, Germany, someone put a lion's head on a human body, which required an operation of taking a lion's head, putting it on a human body, gluing them together mentally, and then making it in the world. Now, a modern example would be lying in bed, maybe like Orville Wright did for two years, thinking about how to fly and then saying, "Well, we don't really need to flap our wings.
We can just pull everything forward with a big fan." Then build it, make an airplane, and thus change the world. Mental operations, these dynamic, almost syntactic operations that take place in our working memory, are something very central to what we do and are at the heart of our creativity, and I think that's very different from this "as if" creativity that results from supervised learning with thousands of examples. True originality might be more difficult. Although, maybe humans are too... We are programmed to fit in and copy what is done. Perhaps it will be free of that weight, of the fear of failure that sometimes hinders originality.
Maybe once it gets there, it could be super original in some ways, but it's not there in every way. We have a really good way of showing that AI isn't there yet in all these different ways. It has to do with a movie called Sunspring that was a script... There was an AI that was fed thousands of scripts. They said, "Now, take all that and write us a great script." The AI did the best it could, and they actually got human actors to act out, word for word, what the AI wrote. So, I'll let you judge for yourself, but...
Turn this on here. Okay, you can't tell me that. Yes, I was going to that because you were very pretty. I don't know. I do not know what you're talking about. That's how it is. So, what are you doing? I don't want to be honest with you. It is not necessary to be a doctor. I'm not sure. I do not know what you're talking about. I want to see you too. What do you mean? I'm sure you wouldn't even touch me. I do not know what you're talking about. The principle is completely built at the same time.
It's about you being sincere. You didn't even see the movie with the rest of the base. I don't know. I don't mind. I know it's a consequence. Whatever you need to know about the presence of history, I'm a bit of a kid on the ground. I don't know. I need you to explain to me what you're saying. What do you mean? Because I don't know what you're talking about. That? That was all the time. It would have been a good time. It's a little uneven right now. This is, again, the present right now and perhaps a little bit of what we can expect in the coming years.
What I want to address now, which is really mind-blowing, is where this is going. Max, what is artificial general intelligence and how is it different from what we have now? Yeah, if we can have this photo here, I'll explain how I like to think about this. I like to think of this question in terms of this abstract landscape of tasks, where elevation represents how difficult it is for AI to perform each task at a human level and sea level represents what AI can do today. Sea level is obviously rising, so there is a kind of global warming here in the task landscape.
The obvious career advice from this is to avoid jobs right at the waterfront, which will soon be disrupted by automation. The more important question is: how high will the water end up rising? Will it eventually submerge all the land, matching human intelligence at all tasks? This is the definition of artificial general intelligence. This has been the Holy Grail of AI research since its inception. Well, it's very difficult to understand because we have never experienced a world where there is anything that is generally intelligent on a human level other than humans. It will be something different from humans and it will also be intelligent like humans.
This is so mind-blowing that we can't apply our own experience and say, "Well, it could be something like this." It's going to be very difficult for us to even imagine. Yann, you talk about it being... You almost refer to it as hypothetical at this point. Well, not only do we not have the technology for this, but we don't even have the science, so we don't know what principles intelligent machines will be based on at the level of human intelligence. Now, we like to think that we are generally intelligent, but we are not. In reality, we are also very specialized.
We are more general than, of course, all the machines we have, but our brains are very specialized. There are only certain things we do well, and if there's one thing that experiments like AlphaGo have shown in recent years, it's that we're totally bad at Go. We are very bad at Go. The stupid machine can beat us by a very, very wide margin. We're not very good at exploring option trees, for example, because we don't have that much memory. There are many tasks like this that... We are not very good at planning a path from one city to another.
This algorithm running on your GPS is much better than you at this. There are things like this that we're not particularly good at. We know how to do them somehow, but our brains are somewhat specialized. Now, the thing is that you were talking about a new species, AI being very different from human intelligence. It will be very different from human intelligence and there is a kind of trap that is very easy to fall into: assuming that when machines are intelligent, they will have all the side effects if you will, all the characteristics of human intelligence. They do not.
For example, there's the traditional Terminator scenario we've all heard about: machines will become super intelligent and then want to take over the world and kill us all. There are a lot of people who have been claiming that this is going to happen and it's inevitable and blah, blah, blah, or at least it's a definite danger. Now, the thing is, even in the human species, the desire to take control is not really correlated with intelligence. It's true. That's true. It's not that people in leadership positions are necessarily the smartest. In fact, there is an evolutionary argument for the fact that only if you are stupid do you want to be the boss.
Because if you're smart enough to survive on your own, you don't need to convince anyone to help you, but if you're stupid, you need everyone else to help feed you, essentially. The desire to seize power does not correlate with intelligence. It's probably correlated with testosterone. Yes. Tim, if I may add a little to what Yann said. I completely agree with you, Yann, of course, that the Terminator thing is nonsense and not something we should worry about, but I think it's worth emphasizing a little more why, nevertheless, artificial general intelligence is so important if we ever get there.
First, it is important to remember that intelligence can be empowering. If you had artificial general intelligence and you were, for example, Google, you could replace your 40,000 engineers with 40,000 AIs that could work much faster and didn't have to take breaks. In a short time, you could be incredibly rich and powerful and begin to have a great amount of real power in the physical world. In that sense, it gives great power. So you can ask the question, even if the AI, like in sci-fi movies, doesn't somehow explode and take over, do we want the unelected humans controlling the first AGI to be able to take over the planet, or would we like this power to be shared more widely?
That's an example of why it's so important. A second example of why AGI, I think, would be so important is because, although I completely agree with you, Yann, that humans are very dumb and my teenage children remind me of this very often that I am very dumb, there are so many things we can't do. You might think there's nothing special about human intelligence in the grand scheme of things, but there really is. Because in the evolution of the Earth we have barely reached the level where we are capable of developing technology that could surpass us. If we have machines that can do everything we can, perhaps they can also be used to develop better and better machines.
It keeps getting better, and that may allow AI to start to become not just a little bit smarter than us, but a lot smarter. That leads to this whole debate about an intelligence explosion, the singularity, and so on, which is also very controversial. Those are the two reasons why I think AGI would be so important, although I agree with what you said. Let's also consider what I think is an elephant in the room whenever you talk about human-level and beyond intelligent computers: consciousness. Of all the different debates about AI that are hugely controversial, this is probably the most controversial.
You have people everywhere. Let's just define consciousness so we can all be on the same page. Susan, what is consciousness to you? Well, it's the felt quality of experience. Right now, it feels like something, from the inside, to be you. In every moment of your waking life, and even when you are dreaming, you are experiencing the world. It is necessary to distinguish consciousness from conscience. Many people conflate them when they first think about it. Having a conscience is completely different from having that felt quality. That is exactly what it is to be alert and alive.
When you see the rich hues of a sunset, when you smell the aroma of your morning coffee, you are having a conscious experience. I completely agree that consciousness is a subjective experience. It is nothing more than that, but it is very special because it is a domain of highly precompiled representations on which mental operators can operate. I think the key operator is attention, especially volitional attention. You could have locked-in syndrome and you could still direct your volitional attention to the radio or television, so even then you would have a kind of volitional control within this domain of your consciousness.
Consciousness is for something. It is so that these planning areas have a world. In a sense, it is a veridical hallucination, but it is not a hallucination because it does not portray what is not there; it portrays what is there. It allows us to act in this world. That is only half of consciousness. The other half of consciousness is imagination. If I were to poll you, probably about half of you right now are distracted, thinking about this or that, but we spend about half of our lives in this imaginary virtual reality of our own creation.
In this arena we have total freedom. We can do anything, and then we can go and build it in the world if we want. Consciousness is for something, and it takes a long time to construct. The photons of the world hit your retina at time zero; your consciousness does not occur at time zero. There's a lot of processing that happens in the first quarter to a third of a second, and then you experience a full-blown world that allows you to act in the world. Yes, I share the definition that you both gave of consciousness as a subjective experience.
When I drive down the street, I experience colors, sounds, vibrations, and movements, but does the autonomous car experience anything? That's a question for which, honestly, we still don't have a good scientific answer. I love how controversial this is. If you look up the word consciousness in the Macmillan Dictionary of Psychology from a few years ago, you will see that nothing of interest has ever been written on the subject. Even when I asked many scientific colleagues, most said, "Consciousness is just nonsense." When I ask them why, I realize there are two camps that violently disagree with each other about why this is nonsense.
Half of them say it's nonsense because, of course, machines cannot be conscious. You have to be made of flesh to be conscious. Then the other half says, "Of course this is nonsense, because consciousness and intelligence are exactly the same thing." In other words, anything that acts as if it were conscious will be conscious. To be contrarian to most of my colleagues, I think the truth is probably somewhere in the middle, because I know that I'm not actually aware of most of the information processing in my brain, the regulation of my heartbeat, and the vast majority of other things.
Actually, when I look up and say, "Oh, there's Yann," I have no idea how all that information processing happens. What I am aware of is simply this CEO part of my brain that gets emailed the final result of the calculation. Not only do I think it's not a nonsense question, I think what people have been saying for so long, that the question is silly, is itself nonsense; they have simply been dodging a genuine scientific question. Because usually if you have a big scientific question that persists for hundreds of years, it's because people just dismiss it instead of doing the hard work.
I think we have to work hard on this. If you are a doctor in the emergency room and you receive an unresponsive patient, wouldn't it be great to have a consciousness detector that can tell you whether this person is in a coma and nobody is home, or whether they have locked-in syndrome? If you have a robot helper, wouldn't you want to know whether it's conscious, so you should feel guilty about turning it off, or whether it's just like a zombie, so you shouldn't feel bad when it pretends to be happy about what you said? I would like to know when we do these things.
The question of consciousness was probably not posed properly, in the sense that in the 17th or 18th century, or even earlier, when scientists discovered how the eye works and that the image on the retina is formed upside down, they were puzzled by the fact that we see right side up. How come we don't see upside down, since the image in the back of our eyes is upside down? It was a great mystery. Now that we know what information processing is all about, we think this question doesn't make any sense. The entire statement makes no sense.
I think there are things about consciousness of that nature where we are not asking the right question, but there are many contrarian opinions on this that I would be happy to offer at any time, not entirely seriously, because I don't fully believe in them. For example, that consciousness is an epiphenomenon of being intelligent, so any intelligent entity will have to be conscious because it will have to have some kind of model of itself. That, by some definition, is what satisfies consciousness. There's another one that I like and connect with, and maybe other people connect with it too, and that is that consciousness is actually a consequence of our brain being very limited.
We can only focus our attention on one thing at a time and therefore... That's because our brain is limited hardware. We have our prefrontal cortex that has to focus on a particular task or situation and cannot do several things at the same time. We need to have a process in our brain that decides what to pay attention to and how to configure our prefrontal cortex to solve the problem at hand. We interpret this as consciousness, but it is just a consequence of the fact that our brain is so small. If our brain were ten times bigger, then we could do ten things at the same time and maybe we wouldn't have the same experience of consciousness.
Perhaps we would have ten simultaneous consciousnesses. Is there a plural for consciousness? Is it consciousnesses? Consciousnesses. Let's go with that. Well. It's not a collective word, right? Yeah. I thought... I think we just don't know enough to ask these kinds of questions. Let's start with Peter and then we'll go to Susan. Okay, getting back to the question of artificial intelligence a little bit, why did consciousness evolve? Well, there is a reason. It is so that the frontal areas can plan. You want to get the best representation of the world that you can. Now, to do that you need to take incredibly ambiguous visual information and recover an unambiguous representation of the world so that the planning areas can plan properly.
Let's say I have a white-haired cat. It seems white to me because I want to recover what is intrinsically true about the cat, that is, that it is a white-haired cat. Now it runs under a shadow or a blue light. Well, the light that is actually reflecting off the white hair and hitting my retina is now blue, but I want to dismiss that and recover what is still intrinsically true, so I see it as a white cat standing under a blue light. I want to recover its intrinsically true shape, size, distance, and so on. It is the best representation of what is intrinsically the case.
Again, what is incorporated into this quasi-hallucination is, in addition to that kind of story about the physical world, things like causality, which is invisible. Next time you're at a party, have your partner in crime ready to turn off the lights, and you say, "I can turn off the lights," and you go, boom, and the person turns off the lights. Everyone says, "Wow, how did you do that?" Because we are perceiving causality. We are also perceiving other minds. It is built into the construction. I assume this will be very central to ultimately creating AGI or general intelligence because it is very central to creating our models of the world.
I understand what it is like for you to feel pain because I feel pain, or to be heartbroken because I once felt it. This is very central. I don't see how a system that has never felt pain can understand what I mean when I talk about pain. Susan. It's interesting. I guess my general comment here is to go back to Yann's point about how attention is closely related to consciousness, and we might have been lucky because we have a limited capacity. We can only hold about seven variables in working memory at any given time, and we have trouble remembering phone numbers.
Perhaps consciousness is something we have that relates to our limited-capacity systems. Now if that's true, suppose we create AGI, and soon after we create intelligent synthetic beings that are smarter than us in all sorts of ways, why think that they are conscious? Does the fact that they look like, say, Hanson Robotics' Sophia, that they look human, mean that they will be conscious? Think about it. Do they need to have these limited-capacity systems? For example, a superintelligence could be as large as an entire planet. Its computing resources could span the entire Internet. What would be novel to it that required a slow, deliberative approach?
Why would it be like us in any meaningful sense? What I want to suggest is that we separate intelligence and consciousness and treat this as an empirical question. If we want to discover machine consciousness, we must ask for each type of AI architecture whether that type of system has conscious experience, and not simply assume that because it looks human it feels something. Yes, I want to applaud you for distinguishing between artificial intelligence and artificial consciousness, which are too often conflated with each other. I think a lot of people, for example, will say things like, "Oh, we're so afraid that machines will become sentient and suddenly turn on us and be evil, like in bad Hollywood movies," as if it is consciousness that should concern us.
That, I think, is a red herring. Although I agree that consciousness is super important from a moral and ethical point of view... Yes, of course. ...in terms of whether you should care or not, you don't care whether that heat-seeking missile chasing you is conscious or what it feels; you only care about what it does. It's perfectly possible for us to get into trouble with some incredibly intelligent machine even if it doesn't have any subjective experience. In other words, consciousness is not something we need to worry about in that sense. It won't make any particular difference from that perspective, but I think it makes a huge moral difference.
When I have colleagues who tell me that they think we shouldn't talk about consciousness because it's just philosophical nonsense, I ask them to explain to me how you can have morality if you refuse to talk about consciousness and subjective experience. What's wrong with torture if it's just, oh, the elementary particles moved this way rather than that way? It's the negativity of the subjective experience that matters. If we want to be moral people, we want to create many positive experiences in the future, not just a bunch of zombies. This is an example from Nick Bostrom. If there are a billion simulations that you're running just to test something, like a billion general intelligences, then you're like, "Okay, I've got the inputs I needed.
Let's turn them all off." If they are not conscious, it is like closing your laptop. There is nothing wrong with that. If those things are conscious, you just created the greatest genocide in the history of the human species. It's quite relevant. It matters. Not if you have a backup. The reason we care about each other is because we have invested so much in each other. There is value to every human being, particularly through the other humans who are close to that person. We may have the same relationship with a home robot that we train. We have invested a lot in that home robot, the same as we have invested in our cat or our dog.
We don't want that robot to be destroyed because all the time we invested in it would disappear. But if we have a backup, it's okay to smash it against the wall. If you have an identical twin, can I throw you in the sewer? No, there are all kinds of interesting questions like this. Imagine, we have a physicist here, that we invented a Star Trek-style transporter. You dematerialize. They destroy you. They kill you and rebuild you at the other end. You experience death. Is this really a good metaphor for what bothers us when someone dies, or when an intelligent machine with its consciousness is destroyed, as long as there is no pain involved, which there isn't when you go under anesthesia?
As long as you have a backup or can be revived, there is no... But if there is suffering... There is no loss of information. If there is suffering, then it is something else. Yes, that's how it is. So consciousness... Okay, now I ask the question: can machines have emotions? You see, again, Star Trek's Commander Data has this chip that they can turn on or off to have emotions or not, as if somehow you could have intelligent machines that don't have emotions. Personally, I don't think it is possible to design or build autonomous intelligent machines without them having emotions.
Emotion is part of intelligence. Now, we're going to have autonomous cars that won't have a lot of emotions, but that's because, even if we call them autonomous cars, they won't be autonomous intelligences. They are simply designed to drive your car. If we talk about autonomous intelligence, then these are machines that can decide what they do. They have some intrinsic drive that makes them wake up every morning or do particular things, maybe justify their lives, but it's not really a pre-programmed behavior. You can't have a machine like that without emotions. Peter. Yes, I think it is a very interesting point that emotions are going to be fundamental for the creation of artificial general intelligence.
If we look at the evolution of animals, I think we can learn something about the origin of emotions and desires, because they are conscious states, but they are teleological states within consciousness and often refer to what is not visible. How would this start? Well, you could imagine a fish that only responds to something it can see. There is a present stimulus; it does this. If it sees a barracuda, it runs away. Next, let's imagine a revolutionary new fish that has working memory. Now, when the barracuda goes behind a piece of coral, that fish can say, "Aha! I know it's going that way.
I'm going that way." I think the representation of the invisible became very central. The need for working memory is very important, something that is missing in current architectures. So, these teleological states that force us to look for mates and food, and so on, and actually having these teleological states, these emotions and desires, allowed us to follow not garden paths, but desert paths. A garden path is when, you know, locally this is the best, locally this is the best, locally this is the best, and then you end up in the jaws of a lion. A desert path would not be fine locally.
I have to go without, I go without, I go without, but in the end, I might get a mate, food, or shelter. This is a great revolution that gave us the ability to act in the world in the absence of inputs. For this, the formation of mental models and cognitive maps of the entire landscape, physical and emotional as well as social, is also essential. Actually, one of the big advances, a very interesting development in deep learning in recent years, is deep learning systems that have working memory: memory networks, neural Turing machines, things like that. They are models that actually have a separate module for computation and another for storing memory, short-term memory.
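As a toy illustration of that separate-memory idea, here is a small sketch of content-based reading from an external memory matrix, in the spirit of neural Turing machines and memory networks. The memory contents, the query, and the sharpening factor are invented for the example.

import numpy as np

memory = np.array([              # 4 memory slots, each a 3-dimensional vector
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [0.5, 0.5, 0.0],
])

query = np.array([0.9, 0.1, 0.0])    # what the controller is "looking for"

# Cosine similarity between the query and each memory slot.
sims = memory @ query / (np.linalg.norm(memory, axis=1) * np.linalg.norm(query) + 1e-8)

# A softmax turns similarities into attention weights over slots (soft addressing).
weights = np.exp(5.0 * sims)          # 5.0 is a sharpening factor
weights /= weights.sum()

# The read value is a weighted blend of memory slots; because every step is
# differentiable, such a read can be trained with the usual gradient descent.
read_vector = weights @ memory
print("attention over slots:", np.round(weights, 3))
print("read vector:", np.round(read_vector, 3))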
Similarly, we actually have a particular module in our brain called the hippocampus that more or less plays the role of storing short-term memory. I think a very interesting place to look for lessons on how to build artificial intelligence systems will not be computers, but other evolutionary experiments. I think the most interesting one is the octopus, because complex brains evolved in three lineages. There are the chordates, and we're kind of a culmination of that, because we are like chimpanzees plus symbolic processing plus syntax. Then there are some arthropods, like praying mantises and bees, which have a couple hundred thousand neurons. The octopus has 500 million, comparable to a bear or a dog.
If we want to understand computational principles that might be universal, we should look at this animal, because there may be a limited number of ways to build a brain. Convergent evolution discovered that there are a limited number of ways to build a wing. You need some type of membrane. Among chordates, bats did this, birds did this, and pterodactyls did this, but they all have flapping and membranes in common. There are only so many ways to build a wing, and there may be only a limited number of ways to build a brain. Some people have argued, for example, that the vertical lobe of the octopus brain is entirely, or very nearly, analogous to our hippocampus, with very similar circuits.
Well, convergent evolution has gotten us there, because our common ancestor was probably Precambrian. It was probably a small flatworm that lived in the ancient, warm seas. This is really interesting because, going back to this idea of superintelligence, one wonders if we could discover, by thinking about both AI and the intriguing systems that nature gives us, like the octopus, whether there are universal properties of intelligence and, in doing so, anticipate the form of superintelligence. Because after this panel, I must confess that I'm actually a little more worried about superintelligence. Our basic behaviors, as human beings, are basically driven by our basal ganglia.
The base of our brain, that's where human nature is wired. That's what drives many of our basic behaviors. Then our brain, on top of this, makes our behavior serve those impulses with intelligent, hopefully intelligent actions, but our basic impulses are driven by these hardwired basal ganglia. That is what calculates whether we are happy or not, whether what we do will make us happy or not. It drives all of our behavior. We need this for smart machines. The fact that a smart machine is autonomous will mean that it will have to have this kind of hard-wired piece in its brain that drives its behavior.
The big question is: how do you build it in such a way that those basic impulses are aligned with human values? It will probably be very difficult to wire this by hand. We're going to be able to program some very basic behaviors to make sure the robots are safe. For example, if you have a knife in your hand, don't wave it if there are humans around, very basic things like this. There are probably thousands of rules like this that we can't easily implement. What we are going to have to do is train these machines so that, again, they distinguish good from evil, behave in society and do not harm people.
Yes, I hear people talk about artificial superintelligence, which is a kind of general intelligence once it is much better than us. It's the last invention we will make, because if it does what we want, then all the things we think of as difficult... It's like a monkey banging on a lock forever, and a human walks in, looks at the instructions, and in just a second opens it. All these things, poverty, climate change, disease, even morality, are child's play for something that has that level of intelligence. It's this utopia we could be in if we could achieve it.
So, you wouldn't have to invent anything in that world because it invents everything for you. The other scenario is that... I don't hear many experts talking about Terminator, evil robots; that's anthropomorphizing... It's the last invention we'll make then, because extinct species don't invent things. There is a lot at stake, and this is what you just mentioned. We only have a few minutes left. I really want to hear what you guys have to say about... I feel like we've woken up in the middle of a thriller, at the climax of this thriller, but it moves so slowly in our minds that we don't see it.
That's what's happening, but it's choose your own adventure, choose your own ending. How can we push this in the right direction? Yes. If you take a big step back and look at it after 13.8 billion years of cosmic history, here we are: we figured out how to replace most of our muscular work with machines. That was the industrial revolution. We are now discovering how to replace our mental work with machines. Eventually, that will be AGI and superintelligence. So how can we get it right? I think Yann mentioned that the key challenge is making sure its goals are aligned with ours. It doesn't have to be bad news to be surrounded by smarter entities, because we all experienced that when we were two: mom and dad.
It worked out for us because their goals were aligned with ours. How can we ensure this happens with AGI? Well, AI safety research is the answer. We are investing billions of dollars now to make AI more powerful, but we also have to invest in developing the wisdom necessary for this AI to remain beneficial. For example, related to what you said, Yann, I think we have to figure out how to make machines understand our goals, adopt them, and keep them as they get smarter. All of those are really difficult. If you tell your future self-driving Uber to take you to JFK as quickly as possible and you arrive there covered in vomit and chased by helicopters and say, "No, no, no, no.
This is not what I asked for." And it says, "That's exactly what you asked for." Then you will understand how difficult it is to make machines understand our true goals. Raise your hand if you have children. Then you'll know how big the difference is between making them understand your goals and actually adopting them and doing what you want. Also, who is the parent that decides what the goals are? Well, in this case, ISIS thinks it's doing good. It does. Yes. We put a lot of effort into raising our children. We need to put even more effort into raising humanity's proverbial children if we ever develop machines that are more powerful than us.
In fact, I don't agree with this. OK, that's fine. Let's go down the line here. Some of the changes that will have to happen will not only be on the AI side but also on the cultural side, the transformation of our cultures. For example, any technology can be used for good or evil. A hammer can kill someone or build a house. A plane can transport people or bomb people. This also applies to AI, but the ethical systems we have inherited from the past are not sufficient to deal with this. 2,000 years ago there were ten bad things you could do, and they said, "Okay, God said don't sleep with your neighbor's wife and don't steal his stuff," and so on.
Commandment number 853,211: You shall not implant alleles of bioluminescent firefly proteins in tomatoes to make tomatoes that glow in the dark. You shall not grow embryos for their dopaminergic neurons to implant in Parkinson's patients. Technology has driven... Now there are infinite things that are bad or harmful, so we need to create a new ethical framework to determine the correct course of action in these infinite cases. I would say that a first step would be to think of that which promotes life, especially human life but also life in general, as good. What is harmful to life is not good.
That way we can face many things and try to think not only about what we can do, but about what we should do. I think we're in a fortunate situation where pretty much everything that can be done to increase the chances of superintelligence or AGI working out well, that kind of safety research, actually has a first small step that is already useful in the short term, like better cybersecurity research so we don't get hacked all the time, for example. Let's do those things better, because I think we are now pathetically flippant about stuff like that, and who's going to trust their AGI if it can be hacked?
We will get a very underpowered AGI before we get a very powerful AGI. Our first AGI will have the autonomy and intelligence level of a rat, if anything. Well. I would consider it a great success in my career if, at the end of my career, which is quickly approaching, we had a machine that had the same level of common sense as a rat or, say, a cat. A cat has 700 million neurons. We don't have the technology for this yet. We don't have the science for it. Once we figure out the design of an intelligent autonomous system, it will have the intelligence of a cat or a rat.
It's not going to take over the world. With this, we can experiment to figure out how to build into it the fact that it should behave in society and not kill everything around it. Let me point out that... Which... I'm sorry. Go ahead. Oh no. Go ahead. Fine, thanks. Coming out of neuroscience, we have really basic, fundamental questions that we don't know the answers to yet. Science says it's all about what we don't know, so we should put this on the table. One of them is: what is the neural basis of consciousness? Another is: what is the neural code?
The type of neural networks Yann has created are based on a kind of view of the neural code that involves changing weights. In recent years, some people have thought, "Okay, surely that's an important part of the puzzle, but maybe there are other parts of the puzzle." It's not just about what is connected to what at what level of connectivity, which is what underlies connectomics... Instead of seeing the brain as a highway system of different connections, it is more like a train track system where there is constant switching. This stretch of track may be part of an epiconnectivity between Boston and San Diego or an epiconnectivity between Boston and San Francisco, depending on those switches.
Maybe the neural code is actually a very dynamic neural code with these rapid changes in synaptic weight. That is one direction. More recently, some people have argued, and if this turns out to be true it will be revolutionary, that memories and information in general are not just stored in synaptic weights but are actually stored inside the cell. There's some really incredible work done by Tonegawa at MIT and, I think, David Glanzman at UCLA, who I think have convincingly shown that synaptic weights could be the way to access information, but the real information could be within the cell.
Glanzman says they are methylation patterns in DNA. That's really radical. He's the only one who says that, but if it's true, it will change everything. We have a lot of progress to make in understanding the brain, and current AI is based on a metaphor of neural networks as they were understood in the brain 10 or 20 years ago, but this is changing very rapidly in real brain science. I suppose that once we crack the neural code, it will be as momentous for our society as cracking the genetic code. Very good. Susan, very interesting. I want to hear...
We have a couple of minutes left. Susan, how can we make the future go well? Well, we could have AI that becomes AGI and then rapidly evolves into superintelligence. Whether it is based on the brain's neural code or on something that looks nothing like the brain, it could very quickly change its own architecture. So I wonder how we can stay on top of it. We have to start working on AI safety. I totally agree with Max. I also wanted to add something that hasn't been discussed, which is that, I think, as a society, we need to think about this idea of merging with AI.
Elon Musk has recently suggested that in order for us to keep up with technological unemployment and confront the threat of superintelligence, we ourselves must bring AI to the brain. I think as a culture we need to start discussing that AI is not going to be a Jetsons-like world where it's unenhanced humans surrounded by all this fancy robotic equipment. AI will change us too. I just want to leave you with that thought. I like that thought. Thank you. Thank you.
