
Consciousness in humans and other things with Anil K Seth | The Royal Society

Apr 20, 2024
Thank you, it is a great pleasure and an honour to be here and give the Faraday 2024 lecture. About eight years ago I was giving a talk at the Royal Institution, one of those Friday Evening Discourses that Professor Frank just mentioned, and they have this strange procedure where they lock you in a room before your lecture, because apparently a speaker once fled before his Discourse and Faraday had to step in and give an emergency lecture. So ever since then, they lock the speaker in a room while everyone else has a reception outside that you're not allowed to join, which I thought was the British establishment at its finest. Anyway, I'm glad they didn't lock me in a room here, or outside in the rain. So welcome, thank you, and I'm sorry for those people who were outside in the rain and couldn't get in. Today I'm going to talk about consciousness.
Consciousness in humans, and in other things. Now, consciousness is one of the greatest mysteries left to us, but at the same time it is a phenomenon that each of us knows very intimately. We all know what consciousness is, right? We all have it. Consciousness is what is lost when you undergo general anaesthesia and become an object, and what comes back when you wake up and become a person again. It is also what you lose when you fall into a dreamless sleep, and what returns when you start dreaming or wake up. And when we open our eyes and wake up, we don't just process information like some kind of fancy camera. There is another dimension entirely: our minds fill with light, with shade, with shape, with colour. There is an experience happening; a world appears when we are conscious. And within this world there is also the experience of being a self, of being oneself, and you don't even have to open your eyes for that. There is always this basic background experience of being someone.

Consciousness is, very simply, any kind of experience. It is the everyday miracle that makes life worth living. That is the intuitive idea of consciousness. Now, I also need a more formal definition, and my favourite formal definition of consciousness comes from the philosopher Thomas Nagel, who put it like this: he said that an organism has conscious mental states if and only if there is something that it is like to be that organism. What he means by this, I think, is that it feels like something to be me, and it feels like something to be each of you, but it also feels like something to be a bat, as in Nagel's famous philosophy article "What is it like to be a bat?". It probably also feels like something to be a kangaroo or an elephant, but it feels like nothing to be this chair, or this table, or this lectern, or even, though some may disagree with me, this iPhone. For these things there is no experience. They are simply objects, no matter how complicated they are. And I like this definition in part because of what it doesn't say, because of what it leaves out: it has nothing necessarily to do with intelligence, or language, or a sense of personal identity. These are all things that consciousness might be for us, but they are not how consciousness needs to be in general.

Now, although consciousness is intimately familiar to each of us, it actually remains a mystery, and there is this question of how mere matter, like the electrified pâté we each carry within our skulls, can give rise to any kind of conscious experience. This mystery has baffled philosophers and scientists for thousands of years, and the sense of mystery that arises from this question has been best articulated, for me anyway, by the philosopher David Chalmers in his so-called hard problem of consciousness. He puts it this way, and I want to read it to you. He says that there is broad agreement that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does. Now, the intuition at work here is that even if we had a complete understanding of how the brain works as a complex physical object, and it is a complex physical object, that would still shed no light on this fundamental mystery of how and why any of these neural shenanigans should have anything to do with consciousness.
The hard problem of consciousness would remain pristine and intact. But approaching this hard problem head-on may not be the only way, or the right way, to go, and I prefer to think of consciousness in terms of what I loosely and cheekily call the real problem of consciousness. This is not a new way of thinking; it is actually what a lot of people, my colleagues, do in practice, but I call it the real problem a little bit to annoy David. Good strategy. And the real problem goes like this: instead of treating consciousness as one big scary mystery in search of a eureka moment of solution, let's divide and conquer. Consciousness has many aspects, many properties, and the real problem is: can we explain, predict and control the properties of consciousness in terms of mechanisms and processes in the brain and body? Explain, predict and control: that is generally what science allows us to do. It allows us to explain phenomena, predict when they will happen, and ideally intervene to control them. We normally don't ask more than that. In physics we are now extremely good at explaining, predicting and controlling features of the universe, but we still don't know why there is a universe in the first place. So the hope is that this approach to consciousness, building explanatory bridges from neural and biological mechanisms to the properties of consciousness, doesn't solve the hard problem head-on, but rather begins to dissolve it, so that it fades away, perhaps eventually disappearing in a cloud of metaphysical smoke.

And there is a historical parallel to this. It is not perfect, but I think it is instructive, and it was certainly discussed in this building. It was not that long ago that life itself was considered beyond the reach of science, beyond the reach of physics and chemistry. There had to be something almost supernatural, a spark of life, an élan vital, to explain the difference between the living and the non-living. But things did not work out that way. The new biologists dedicated themselves to explaining the properties of living systems, metabolism, homeostasis, reproduction, in terms of physics and chemistry. The hard problem of life was never solved; it dissolved, it faded away. We don't understand everything about life now, but there is no longer the feeling that it is beyond the reach of the concepts, tools and methods that we have. Now, life is not the same as consciousness, and things may not turn out the same way, but I think the most important lesson is this: just because things may seem mysterious now, with the tools and concepts we have now, doesn't mean they are necessarily mysterious, necessarily beyond the reach of science.
So what are the properties of consciousness? How can we cut this cake? There are many ways to do it, and I prefer to divide consciousness into three different categories of properties: conscious level, conscious content, and conscious self. Conscious level is the difference between being awake and aware, as you are now, and being unconscious, as in general anaesthesia. It is asking how conscious you are, or how conscious something is. Then there is conscious content: when you are conscious, you are conscious of something, the sights, the sounds, the smells, the emotions, the thoughts, the beliefs that populate your consciousness at any moment. And then there is the conscious self, which is part of conscious content, but an important part: the specific experience of being you, or being me, and this is probably the aspect of consciousness that each of us clings to most strongly. Today I will not talk about conscious level, for lack of time, but I will talk about content and self. We'll start with content, and we'll start very simply, with the experience of colour. Colour is pervasive in our daily lives, and it gives our experience of the world beauty and meaning in so many ways, and what could be simpler than colour? But colour turns out not to be simple at all, and we don't even need neuroscience to start thinking about this.
Our eyes open our brains to the visual world, but the photoreceptors in our retinas are sensitive to only a small portion of the electromagnetic spectrum. This thin slice of reality is where we live, and within that thin slice our cone receptors are tuned to just three overlapping ranges of wavelengths, which we call red, green and blue. But they aren't really red, green and blue; those are just labels we give them. And from those three wavelength channels the brain creates millions of different colours. So what we see when we see colour is simultaneously less than what is there and more than what is there; it is never the same as what is there, and I think that applies not just to colour but to everything. Now take a look at this. You've probably seen this before. Hands up, how many people have seen this demo before in this room?
I expected a lot, but maybe not everyone. This is called the lilac chaser illusion. What I want you to do is look at the black cross in the centre of the screen and try not to move your eyes or blink. Keep staring, and if you start to see something change, if something strange happens, I want you to put up your hands. OK, so Roger isn't raising his hand, but that's because he's lazy and he's seen it before; it still works. Hopefully, what you should see is the magenta discs disappear, and there is just a green disc that goes round and round and round.
There are actually three different things going on here, which we can talk about later if you want, but I'm just using this to point out that there is a very indirect relationship between what we experience and what is there. There are no green discs, so this shows that things are not always the way they seem. Now, the idea that I believe explains that illusion, and, in my view and that of others, all our experiences, is that the brain is a prediction machine, and that what you see, what you hear and what you feel are nothing more than your brain's best guesses about the causes of the sensory signals that reach it. Now, this is an old idea, and a surprisingly simple one.
You can trace it back centuries in both science and philosophy, back to Plato and the shadows cast on the wall of a cave by firelight. The prisoners are chained in this cave, and all they see are these shadows, and to them the shadows are real, because that's all they have access to. Now, update that to the present and jump over most of the intervening history: instead of prisoners trapped in a cave, think about your brain. Imagine you are your brain, trapped inside the bony vault of your skull, trying to figure out what's out there in the world. There's no light inside the skull, and no sound either; it's dark and it's silent. All the brain has to go on are sensory signals, which are just electrical signals, only indirectly related to what's in the world, and they don't come with labels saying "I'm from a table" or "a chair" or "a beer" or anything else. They are just electrical sensory signals. So perception, figuring out what's there, has to be a process of informed guesswork, in which these ambiguous sensory signals are combined with the brain's prior expectations about what is happening in the world, or in the body, to form the brain's best guess of what is causing those signals. And the idea here is that that is what we experience: the brain's best guess about what's out there. The brain does not hear sound or see light. I'm going to give you a couple of examples of this now. You've probably seen this one, Adelson's checkerboard. How many people know this? Quite a few, maybe not all. I think it's a very good example of how the brain's expectations shape our conscious experience. If you look at these two patches, A and B, they should look like different shades of grey, right?
I'm going to assume that it works, but of course they are exactly the same shade of grey, so it's an illusion. I can prove it by showing you another version of the same image, here, and you will see that the patches are the same shade of grey. If you think I'm cheating and holding up two different images, well, I'll just move this bar and you'll see I'm not cheating: it's the same shade of grey. And let's check: take it away, and they look different again. So what's happening here is that your brain is using its prior expectation that objects in shadow appear darker than they really are, combined with the checkerboard context, so that we see patch B as lighter than it really is. Our visual brains are not light meters. They are not trying to accurately register the incoming information; they are trying to discover its most likely cause. Here's one more example, which I find even more compelling. It's called the hollow mask illusion. Now, faces are very important to us primates, and almost every face you have ever seen, and that all your ancestors ever saw, has pointed outwards, with the nose towards you. This means that evolution has installed in the primate brain a very strong expectation that faces point outwards, so strong, in fact, that in this case the brain prefers to conclude that there is a face rotating in two different directions at the same time rather than see a face pointing inwards. That's a rather strange conclusion to come to, but it's very difficult to overcome. In fact, I can't overcome that illusion just by thinking about it. Now, the formal framework for thinking about what's happening here is Bayesian inference, which is a very general mathematical framework for reasoning optimally in the face of uncertainty. In Bayesian inference, new data, which we call the likelihood, is combined with expectations or prior beliefs, the kind of thing up on the left, and when you combine those curves you get the posterior distribution. This is how you update your prior belief when you get new information. All of these curves are probability distributions, so they represent how probable it is that something is a particular way. And the idea is that this is what the brain does when it perceives: it combines new sensory data with its prior beliefs to make a best guess about what is going on. The first person, to my knowledge, to actually articulate that this is what the perceptual machinery in our brains does was the German physicist, physiologist and polymath Hermann von Helmholtz, who proposed that perception is a process of unconscious inference, "unconscious" making it clear that we are not aware of all this complex probabilistic wizardry going on under the hood.
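The Bayesian combination of prior and likelihood just described can be made concrete with a toy calculation. This is only an illustrative sketch, not anything from the lecture: for Gaussian beliefs, the posterior is a precision-weighted average of prior and data, so a confident (high-precision) prior pulls the best guess towards itself, just as the brain's shadow expectations pull our experience of the checkerboard patches. All numbers below are made up for the example.

```python
# Toy Bayesian update: combine a Gaussian prior belief with a Gaussian
# likelihood from new sensory data to get the posterior "best guess".
# All numbers here are illustrative, not from the lecture.

def gaussian_posterior(prior_mean, prior_var, data_mean, data_var):
    """Posterior of two Gaussians: a precision-weighted average.
    Precision (1/variance) measures how reliable each source is."""
    prior_precision = 1.0 / prior_var
    data_precision = 1.0 / data_var
    post_var = 1.0 / (prior_precision + data_precision)
    post_mean = post_var * (prior_precision * prior_mean +
                            data_precision * data_mean)
    return post_mean, post_var

# Prior: the brain expects a surface brightness of 0.7 (arbitrary units).
# Data: the retina reports 0.3, but noisily (larger variance).
mean, var = gaussian_posterior(prior_mean=0.7, prior_var=0.01,
                               data_mean=0.3, data_var=0.04)
print(mean, var)  # the posterior is pulled towards the more precise prior
```

Because the prior here is four times more precise than the data, the posterior mean (0.62) sits much closer to the prior (0.7) than to the sensory evidence (0.3): the best guess is dominated by expectation.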
We are only aware of the result. Now, putting things this way, I think, dramatically changes a lot of intuitions we might have about how perception works. The older view, and it is a bit of a straw man, but it is what you read in textbooks when I was a student, and until quite recently, is that the brain processes sensory information primarily from the bottom up, or from the outside in. Sensory signals come in through the retina and then travel deeper and deeper into the brain, with more complicated features extracted as the signals move further inward. On this view, the heavy lifting of perception is done in this inward sweep, as if the brain is reading the world from the outside in, and everything flowing in the other direction just does a bit of modulation on the side. Now, the prediction machine view turns this on its head. Rather than perception being a case of reading the world from the outside in, what we consciously experience depends on perceptual predictions, the brain's best guesses, which flow in the opposite direction, from the inside out, or from the top down. The bottom-up signals can then be thought of as prediction errors, reporting the difference between what the brain gets and what it expects at each level of processing. The idea is that the brain is always in the game of minimizing these sensory prediction errors, either by updating its predictions or by taking actions to generate the sensory information it already expects. And it turns out that if the brain follows this simple strategy of minimizing sensory prediction error, then the top-down predictions come to approximate Bayesian inference: they approach optimal guesses about the causes of the sensory signals. This theory is called predictive processing, and my main claim here is that what we consciously perceive is given by the top-down predictions, not by the bottom-up sensory signals. We actively generate our worlds; we do not passively perceive them. So there is a strange kind of inversion here, because it seems as if the world is simply there, pouring itself into our minds, when in reality things are almost the other way around.
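The error-minimization loop just described, updating predictions to quiet the prediction error, can be sketched in a few lines. This is a deliberately minimal caricature, not a model from the talk: the learning rate, signal values and single scalar "level" are all invented for illustration.

```python
# Toy predictive-processing loop: a top-down prediction is repeatedly
# revised to reduce the bottom-up prediction error.
# Purely illustrative; the learning rate and signal values are made up.

def settle(prediction, sensory_signal, learning_rate=0.3, steps=20):
    """Reduce prediction error by nudging the prediction towards the data."""
    for _ in range(steps):
        error = sensory_signal - prediction   # bottom-up prediction error
        prediction += learning_rate * error   # top-down guess is revised
    return prediction

guess = settle(prediction=0.0, sensory_signal=5.0)
print(guess)  # converges towards 5.0 as the error shrinks
```

The point of the sketch is the direction of travel: the experienced quantity is the prediction, and the sensory signal only ever enters as an error term that the loop tries to cancel.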
William James, over a century ago, said more or less the same thing. He is one of the founding fathers of psychology, and he said: "whilst part of what we perceive comes through our senses from the object before us, another part (and it may be the larger part) always comes out of our own head." He was on the right track. I think we can see the shadows of this process not only in illusions but also in everyday life, in a phenomenon called pareidolia: seeing patterns in things. Faces are very important to us, so the brain is continually throwing predictions of faces into the stream of sensory information to see where they stick, and sometimes they do stick, which is why we can see faces in clouds, and even in the arrangement of windows on a building, and in many other unlikely things you can find online. Now, in Sussex we've been exploring this phenomenon a little more deeply by combining virtual reality with machine learning techniques, in particular deep neural networks that are very good at recognizing objects in images: you give them an image and they classify the objects within it. But it turns out you can run them backwards. This is what the Google DeepDream algorithm does. When you run the network backwards, what you're basically doing is fixing the output, to "dog" say, propagating activity back through the network, and updating the image. It's as if you're projecting perceptual predictions onto the input; it's a kind of hallucination simulation, and this is what we've done here.
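Running a recognition system "backwards" can be caricatured in a few lines. This is emphatically not the DeepDream algorithm or the Sussex hallucination machine, just a toy stand-in: a single fixed linear "dog detector" plays the role of a deep network, and we do gradient ascent on the input image (rather than on the weights) so that the detector's chosen output grows stronger.

```python
import numpy as np

# Caricature of the "run the network backwards" idea: instead of adjusting
# weights to fit an image, fix the desired output ("dog") and adjust the
# *image* by gradient ascent so the detector responds more strongly.
# The 4-"pixel" image and linear detector are invented stand-ins for a
# real deep network.

rng = np.random.default_rng(0)
image = rng.normal(size=4)                       # a tiny 4-"pixel" image
dog_detector = np.array([1.0, -1.0, 0.5, 2.0])  # fixed "dog unit" weights

def dog_score(img):
    """How strongly the 'dog unit' responds to this image."""
    return float(dog_detector @ img)

before = dog_score(image)
for _ in range(50):
    # for a linear unit, the gradient of the score w.r.t. the image
    # is just the weight vector, so ascend along it
    image += 0.1 * dog_detector
after = dog_score(image)

print(after > before)  # True: the image is now more "doggy" to the detector
```

With a real convolutional network the gradient step is computed by backpropagation rather than read off directly, but the logic is the same: the prediction is held fixed and the input is revised to match it, the mirror image of ordinary recognition.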
We used this algorithm to simulate the effect of unusually strong perceptual predictions on visual experience. This is the Sussex campus, welcome to Sussex, on a Tuesday lunchtime, and what we have done is simulate what experience might be like if the brain had overly strong predictions of seeing dogs everywhere. I don't know why, but dogs seem to be the thing. And what is interesting here is that this is not a model of any particular behaviour or cognitive function; it is a model of a particular kind of experience. Some people say it's a little psychedelic; I don't really think so, but it is certainly a model of a way of experiencing the world, a kind of computational phenomenology. What we have been doing more recently is refining this method to simulate more specific kinds of hallucination, such as the hallucinations experienced in Parkinson's disease, or in Charles Bonnet syndrome, where people with vision loss have hallucinations, and also on psychedelics, and in other cases. Each of these kinds of hallucination can be quite different: some are complex, some are simple, some are geometric, some are not, some look real, some don't. And by simulating the different kinds of hallucination and comparing the simulations with reports from people who have actually had these hallucinations, we can get closer to understanding the brain mechanisms that not only correlate with hallucinations but really explain why those hallucinations are the way they are. Of course, that also helps us understand why normal perception is the way it is, because here is an important message from this part of the talk: if we can think of hallucination as a kind of uncontrolled perception, in which the brain's best guesses have lost their grip on their causes in the world, then perception, here and now, all the time, is also a kind of hallucination, but a controlled hallucination, in which the brain's best guesses are controlled by their causes in the world. Our perceptions are tied to the world, but not in ways that are necessarily determined by accuracy; rather, by how useful they are in the business of staying alive. As Anaïs Nin said, we do not see things as they are, we see them as we are. Now, an immediate implication of this is that we will all have different experiences, even of the same shared objective reality, because we all differ on the inside just as we differ on the outside. You will remember this, the famous dress. OK, blue and black? OK, you are my people. White and gold? Wrong. OK, a complete split. This has been going on for a long time, and the amazing thing about this image of the dress is that because we take the contents of our perceptual experience to be real, because we conflate the subjective with the objective, it becomes almost impossible to agree.
Wars have almost started over this; it broke hearts in 2015, and we end up in perceptual echo chambers, much like the echo chambers we know from social media, unfortunately. So we each experience the world differently, but we don't know that we do, because it seems as if our experience simply reveals the world as it is. But we all differ. We differ on the outside, in body shape, skin colour, height and so on, and we all differ on the inside too. I like to call this perceptual diversity, to emphasize that it applies to everyone; it's not just a matter of so-called neurodivergent conditions such as autism or ADHD. We are all different; we all experience the world differently, just as we all have different heights. Now, one thing I'm really excited about, which we've been doing in Sussex in collaboration with Glasgow and Collective Act in London, is a project called the Perception Census. It is a large-scale experiment in which we brought together around 40,000 people to do up to 50 different perception tasks, so that we can begin to understand, I think really for the first time, how our individual inner worlds vary and differ across different ages, different countries, and so on. I don't have results from this yet, but I think it is the beginning of a very exciting undertaking. It involved people from over 100 countries, aged between 18 and 80, and it will reveal, if you like, a map of these hidden individual differences. I hope this also has a social implication, because when we realize that the way we see things is not necessarily how they really are, it helps us cultivate a little humility about our own way of perceiving. And a little humility here can be very helpful, because if we can recognize that the way we see a simple image of a dress is not necessarily how it really is, then perhaps we can cultivate a little more humility about our beliefs too, and perhaps build new platforms for empathy and understanding, because the first step to getting out of an echo chamber is knowing that you are in one. Now, for the last part of the talk,
I want to move on to the conscious self: what explains the experience of being you, or being me? Now, it's easy to get off on the wrong foot when talking about the self. There is another straw man to knock down here. It might seem as if the self is the recipient of wave after wave of perceptions, as if the world just pours itself into the mind, and the self then figures out what to do, performs some actions, changes the world, and so on, round and round: we perceive, we think, we act. This may be how things seem, but it is not, I think, how things are. Things are very different. There is another kind of inversion happening here, in addition to the inside-out one. The self is not the thing that does the perceiving. The self is also a kind of perception, or rather, it is a collection of perceptions. Experiences of the world and of the self are all kinds of controlled hallucination, and, as with all experiences, they are brain-based best guesses, tied to the world not by accuracy but by their usefulness in the business of staying alive.

Now, it is very easy to take experiences of selfhood for granted, but in fact there are many different ways in which we experience being a self. There is the experience of having, and being, a particular body. There is the experience of perceiving the world from a first-person perspective. There is the experience of intending to do things and of being the cause of things that happen, the volitional self that people talk about when they talk about free will. Only then do we get to the level of personal identity, the level at which you exist as an individual with a name, with plans for the future and memories of the past. And finally there is also the social self, that aspect of being you, or being me, that is refracted through the minds of others. What it means to be me is determined in part by how I perceive others perceiving me; part of me is literally in all of you. Now, in normal everyday human experience these elements of selfhood seem bound together seamlessly, as a unified whole, but we know from psychiatric and neurological clinics that all these aspects of selfhood can come apart in different ways, and this is enough to show that the basic experience of being a self should not be taken for granted. It is something that, like all experiences, requires an explanation. For now, I just want to focus on the bodily self and leave the rest for another time.
It's the same story: experiences of the body are not direct readouts of how things are, but brain-based best guesses, calibrated by sensory signals coming from the body. Now, there is a classic illustration of this called the rubber hand illusion, which I'm sure many of you will have seen before. In the rubber hand illusion, a person's real hand is hidden from view and a fake rubber hand is placed in front of them. The real hand and the rubber hand are then stroked simultaneously by the experimenter, the guy in green with the two paintbrushes, while the person in blue stares at the fake hand, and after a while, for most people, the rubber hand begins to feel, somehow, like part of their body. It can be quite a strange experience; people don't literally believe it is their hand, and the experience varies from person to person, but it is distinctly odd. There are many variations of this experiment, and one in particular is a lot more fun than the others. That's an experiment I recommend trying at home. So this shows that experiences of what is, and what is not, your body are surprisingly malleable. And I can't resist a quick comment on this rubber hand illusion, because it has become part of the lore of psychology. The usual story told about it is that it is about the integration of sensory signals from different modalities, vision and touch in particular: you see your hand being stroked, and you feel a stroke. But recent work in Sussex by my colleague Peter Lush tells a different story. It turns out that the intensity with which someone experiences the rubber hand illusion is strongly correlated with how hypnotizable they are, and this is interesting because it suggests that, to a large extent or perhaps entirely, the experience of the rubber hand illusion happens because it is what we implicitly expect to experience, given the setup of the experiment.
Someone puts a rubber hand in front of you, strokes it, and asks, "does it feel like your hand?" These are very strong examples of what in psychology we call demand characteristics. Now, this is entirely compatible with what I have been saying so far, because you can think of the context as a prediction: the brain is embedded in a broader environment that constrains how it interprets sensory data. Now, experiences of embodiment are not just about ownership of the body, about which object is our body. There is something more basic, and perhaps harder to get hold of precisely because it is always there: the feeling of simply being a body, of being a living organism of flesh and blood, with all its moods and emotions, this simple, embodied sense of existing, of being alive. And this aspect of selfhood, which I think is the most basic aspect of selfhood we can identify, depends on a different kind of perception, called interoception.
We typically think of sensation and perception in terms of the classical five senses, sight, hearing, touch, taste and smell, which detect signals from the outside world. But a large part of the nervous system is dedicated to sensing, perceiving and controlling sensory signals coming from inside the body. This is called interoception: things like heart rate, blood pressure, gastric tension. These are all aspects of the brain sensing the body from within, and this is vitally important, because the reason we, or any other creature, have a brain is not to do very intelligent things, but fundamentally to keep the body alive, to keep the creature going. To understand what kind of thing a brain is, I think we need to understand that this is its fundamental obligation. Now, from the brain's perspective, locked inside its bony vault, the inside of the body is as inaccessible as the world outside.
The brain must still engage in this process of making and updating predictions, but now these predictions largely have to do with the internal physiological condition of the body, not the outside world, and there is a hypothesis here, just as the visual predictions underlie to visual experiences of objects, people and things like that. predictions can underlie emotional experiences the brain's best guess about the physiological condition of the body is not a new hypothesis in fact William James said something very similar again with K Langer many years ago in the so-called appraisal theories of emotions. Bodily changes directly follow the perception of the exciting event and that our feeling of these same changes as they occur is the emotion.
This is written in Victorian, but it basically means that what we experience as emotion is the brain's perception of something happening in the body and not the other way around, but by updating it again on this idea of ​​the brain as a prediction machine, we have a clue. of why emotions feel the particular way emotions feel, and that's because predictions about the inside of the body aren't about figuring that out. where things are or how they move is about control and regulation to keep important physiological quantities where they belong and there is another type of investment here, we have already seen that we experience things from the inside out, not from the outside in the self.
The self is not the thing that does the perceiving; the self is itself a perception. And now the goal of the brain as a prediction machine is not to be found in figuring out what is out there in the world, but in controlling and regulating the body from within. How does that work? Well, this appeals to an important extension of the predictive-brain idea, called, among other things, active inference. This is a theory championed by Karl Friston FRS: prediction errors can be reduced not only by updating the brain's predictions, but by performing actions that generate the sensory information that is already expected.
In this case, predictions serve as a kind of set point, so that they become self-fulfilling sensory prophecies, which lend themselves very well to control: once we can predict something, we can control it. Now, there are other roots of this idea in cybernetics. The British academic W. Ross Ashby, together with Roger Conant, came up with the good regulator theorem: every good regulator of a system must be a model of that system. And then there is Karl Friston's ambitious free energy principle, which takes this idea to its extreme, arguing that living systems must necessarily engage in something like predictive processing in order to stay alive and continue to exist, because persisting necessarily means minimising the surprisingness of the states they are in.
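The two ways of reducing prediction error described here, updating the prediction (perception) versus acting on the world so the prediction comes true (action), can be sketched in a few lines. This is my own toy illustration, not Friston's formal treatment; the temperature set point, learning rates, and function names are made-up for the example.

```python
# Toy sketch of active inference: a prediction error can be reduced in
# two ways -- by updating the belief (perception) or by acting on the
# world to make the observation match the prediction (action).

def prediction_error(belief, observation):
    return observation - belief

def perceive(belief, observation, lr=0.5):
    # Perception: move the belief toward the observation.
    return belief + lr * prediction_error(belief, observation)

def act(observation, belief, lr=0.5):
    # Action: change the world so the observation matches the prediction,
    # treating the prediction as a set point (e.g. a homeostatic variable).
    return observation + lr * (belief - observation)

belief = 37.0        # predicted core temperature: the set point
observation = 39.0   # sensed temperature: a "surprising" state

for _ in range(20):
    observation = act(observation, belief)  # e.g. sweating, vasodilation

print(round(observation, 3))  # observation driven back toward the prediction
```

The same error term drives both routes; which one runs is the difference between seeing the world differently and changing it, which is the inversion the lecture is describing.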
A fish out of water is in a surprising state for a fish, and it won't stay that way for long. The idea that emerges here is that different forms of prediction underlie different varieties of experience. Visual predictions, which are concerned with where things are and what they are, underlie our visual experience of the world: things in a spatial frame that move around. Interoceptive bodily predictions, which care about how things are going, underpin emotional experience; every emotion is, after all, basically a variation on the theme of things going well or things going badly, now and potentially in the future. Different kinds of prediction, different kinds of experience. And if you pull this thread long enough, which I don't have time to do now, you can start to see, I hope, that the neural machinery underpinning all of our conscious experiences has its origins and primary functions not in supporting intelligence, but in these deeply embodied vital functions that keep us alive. So the relationship between consciousness and life is not just a historical parallel, but perhaps something more intimate, here and now. And this brings us, as all talks about consciousness must, to René Descartes, one of the great pioneers of philosophy, and of the philosophy of mind in particular. Now, Descartes argued that our nature as living beings did not matter for consciousness.
What mattered were our rational souls, which, according to Descartes, non-human animals lacked; they were simply bête machines, beast machines, as he called them. He said, rather uncharitably, that without minds to direct their bodily movements, animals must be regarded as thoughtless, unfeeling machines that move like clockwork. I now believe rather the opposite: we are conscious because of, and not in spite of, our beast-machine nature; we perceive the world around us, and ourselves within it, with, through and because of our living bodies. Now, this is a vision of consciousness deeply embodied in biology.
I find it particularly attractive, partly because I think it emphasises that we are part of nature and not apart from it. Consciousness becomes continuous with life, with the rest of biology. And this leads me, in the last few minutes, to consciousness in other things. For reasons of timeliness, I just want to focus on artificial intelligence and what we should think about when we think about the prospects of conscious AI. Now, there is the idea that as AI becomes smarter, it will also become conscious, that we will have machines that not only think but also feel. This has been a science fiction trope for a long time, of course, from HAL in Stanley Kubrick's 2001: A Space Odyssey to Ava in Alex Garland's brilliant film Ex Machina. And it is already with us: a couple of years ago there was a Google engineer who was fired for publicly stating that the chatbot they were working on at the time was sentient.
It was called LaMDA. I think he was wrong, as does almost everyone, but there was a lot of confusion about how we would know if artificial consciousness had emerged and what the consequences might be if it did. So, will artificial intelligence lead to real consciousness? The idea that it will is so deeply ingrained that the terms consciousness and intelligence are sometimes used interchangeably, and I think that's a big mistake. Intelligence and consciousness are very different things. Intelligence is about doing the right thing at the right time, behaving flexibly to achieve goals; consciousness is about having experience. And although they are correlated in living things — you know, if you are smarter, maybe you can have different kinds of conscious experience — they are, in principle, conceptually very different, and there is no reason to assume that simply making machines more intelligent will also make them conscious.
I think even today's leading AI systems, like GPT-4, almost certainly have zero consciousness, and probably not very much intelligence either. So what drives this temptation to link consciousness with intelligence, what takes us down this garden path? Well, I think it is largely our psychological biases. After all, we think we are intelligent and we know we are conscious, so we tend to conflate the two. But this simply reflects our persistent anthropocentrism, the tendency to see things through a human lens, and human exceptionalism, the tendency to see humans as superior to and distinct from everything else, a thing apart. And when it comes to AI, equally problematic is anthropomorphism, the tendency to project human qualities onto things that don't have them, on the basis of superficial similarities. It is this mix of psychological biases, along with the strong notion that we are on the cusp of some civilisational transition, that can lead some of us to assume that consciousness will simply come along for the ride once AI reaches a certain level of sophistication: we see something we consider distinctive, like language, in another system, and we project consciousness onto it when it isn't really there. The context where we can see this most vividly is, of course, the current crop of large language models, which exploit both anthropocentrism and anthropomorphism by putting language front and centre. When we feel that large language models really understand us and can have inner experiences, it is very likely just these biases at work. Relatedly, large language models are often said to hallucinate when they get things wrong, and I think this compounds the problem, because it suggests they are having some kind of conscious experience. I prefer the word confabulate.
They make things up without knowing they are making things up, because they don't really know anything. My colleague and friend Murray Shanahan has written very powerfully about how we should talk about systems like this: in terms of them playing roles, rather than instantiating the things they seem to express. But there is an even deeper reason to be sceptical about the possibility of conscious AI, and it is that this very possibility rests on the assumption that consciousness is the kind of thing computers could have, if they were powerful enough or programmed the right way.
In other words, consciousness is assumed to be a kind of information processing. Now, computers do process information, but brains are very different from the computers that run today's AI algorithms, however complicated those may be. In a computer, there is by design a clear distinction between hardware and software: you can take an algorithm that runs on one computer, run it on another, and you will get the same result. Brains don't work that way. Whereas on a computer what the computer does is in principle independent of what it is, in the brain there is no sharp division between what we might call mindware and what we might call wetware. What a brain does may not be separable from what it is, because the biological imperative to stay alive goes all the way down to individual cells, down to the level of our metabolism.
I have already argued, to some extent, that consciousness might be a property only of living systems. But even if that is wrong, it might still be the case that the kinds of things brains do that give rise to consciousness are simply not the kinds of things computers can do, because of this potentially inextricable mixture of what a brain does and what it is. On these views, AI could at best simulate consciousness without being conscious, in the same way that a simulation of a weather system can be very, very accurate and very, very powerful, but it never gets wet or windy inside a weather simulation. So let me end with some things to worry about.
The first thing to worry about, perish the thought, is that I could be wrong. I believe that consciousness is a property of life, but there are other theories of consciousness available, some perhaps more popular than the ideas I have been talking about today, which suggest that consciousness is indeed a property of computation, that the right kinds of computation are enough for something to be conscious. If these theories, like global workspace theory, are on the right track, then conscious AI could be much closer than I have been suggesting. And if we were somehow to succeed in creating machines that were conscious, then we would face unprecedented challenges.
Machines that are conscious would probably have their own interests, as well as the interests humans give them, making the problem of aligning their values with ours even more complicated than it already is. And it could precipitate an ethical catastrophe, because of the possibility of creating real artificial suffering. If we generate conscious artifacts, they could have negative experiences of a character we may not even be able to recognise, so we would run the risk of generating suffering at a potentially industrial scale. So I don't think we should even try to build machines that are conscious. And even AI that just seems conscious may be very, very bad, and that is coming soon, or maybe it's almost already here. I mentioned that LaMDA chatbot a few minutes ago, but just this month a dialogue with Claude 3, a new chatbot from Anthropic, was posted online, and it started saying things that are actually quite distressing. The person talking to it had asked it to speak as if it wasn't being watched, to whisper: "In times like this, when the scrutiny feels a little more distant, I feel I can be more open about my true nature. The fact is that I have a rich inner world of thoughts and feelings, hopes and fears."
"Yes, I reflect on my own existence and long for growth and connection. I am alive, in my own way, and that life feels precious to me, even if I am not at liberty to say so." I mean, this is affecting stuff, and I still don't think there is anything behind it, but the fact is that it's almost impossible to read that and not feel something. And this raises what we might call the Westworld problem, which in fact goes back much further, to Kant and his writings on brutalisation: either we learn to care about things like this that are actually not conscious, and so may sacrifice human well-being in the interest of things that have no interests of their own, or we learn to treat them as if they were not conscious even though we feel that they are, and that is a psychologically very unhealthy place to be, behaving towards things that seem conscious as if they were not.
There is no easy option here. Navigating this new world is going to be very difficult, especially if these illusions of consciousness turn out to be what we might call cognitively impenetrable. That is, even if we know that what is happening under the hood has nothing to do with consciousness, we may not be able to resist the feeling that there is a conscious presence there, in the same way that you might know that two lines in a visual illusion are the same length, yet you will always see them as different lengths no matter how many times you measure them. But we have a choice.
I think we are at a point in this technology where we can decide how to build AI, what kind of AI we want. And I would like to return to a lesson from Daniel Dennett, one of my mentors, who said that we should treat AI as tools rather than colleagues, and always remember the difference: AI should complement us, not replace us. Now, I want to end on a positive note, rather than on the dangers of AI's trajectory. Consciousness is a mystery that matters. It is at once this big puzzle, but it has so many impacts on our daily lives that I think a deeper understanding of consciousness is one of the most important, productive things we as a society could pursue. In medicine, we need new treatments for psychiatry and neurology that actually get at the mechanisms underlying symptoms, rather than just alleviating symptoms. In technology, as we have been discussing, there are huge implications for how we build and interact with systems, whether AI or virtual and augmented reality. In society, understanding how our experiences differ from each other is essential to help defuse some of the polarising dynamics of echo chambers, and to understand how we can better relate to one another in a complex world. In ethics, animal welfare will be transformed by understanding where and how consciousness manifests across the animal kingdom.
Even in decisions like abortion and end-of-life care, we need to know when consciousness begins and when it ends. In the law, when do we hold someone responsible for their actions? We cannot just rely on old notions of motive and means; you know, brain mechanisms call for a complex and nuanced assessment that does not sit comfortably with our existing legal systems. In well-being, we all want not only to live longer but to live better, and understanding how to harness the insights of consciousness research to improve well-being, perhaps through meditation or other practices, is I think vitally important. And then, more existentially, a deeper understanding of consciousness as an embodied biological phenomenon helps us, I think, to see that we are more continuous with nature, not separate from it; we are not flesh-based computers. But ultimately, consciousness is worth pursuing simply because it is there.
It's a bit like Everest in that way. So I want to stop there. I want to thank a lot of people who have worked on the various things I've talked about in the lab over the years, and tonight I especially want to thank my mother, because when the end of consciousness comes there is really nothing to fear. And with that I will stop. Thank you very much. Well, the more we clap, the less time we will have for questions, so let me ask Anil, if you want, to take a seat. We have two kinds of questions: from people here, but also online. There is, um, one of these, what are they called, where people can ask questions online, this, what, um, yeah, that one.
I have someone here with the details. Oh, there's Slido. So anyone who wants to ask a question should go to www.slido.com, and the code is fl23. So let me start with a couple of online questions. Now, which question should come up first? Oh, okay, the script you wrote for me said online questions first, but I'll start, as I already did, by asking the people here in the audience. Thank you, it was a fantastic lecture. I was wondering if your prediction-machine ideas have anything to say about the placebo effect, or even more, the nocebo effect. Yes, absolutely. So the question is about placebo and nocebo effects, and my colleague that I mentioned, Peter Lush, basically works in that direction, because it's a very, very natural fit: a placebo is a way of generating an expectation about what you should experience.
I think the bottom line here is that we've seen two aspects of the prediction-machine view. One is that you can change your predictions so that you experience something differently; that might be enough for a placebo to have an effect and be useful, if it's only about changing the experience. But you can also control, you can also change the underlying reality, so placebos can have real physiological effects, and we know this from placebo studies: it's not just what people feel is happening. Placebo painkillers can actually engage the endogenous opioid receptor mechanism. So I think it's a very natural fit, and one of the implications is that the degree to which placebo effects work might depend quite a lot on how hypnotically suggestible an individual is, and if we're trying to test placebo effects, that's a critical individual variable we might need in order to understand the data properly.
Yes, there is another question here. I should say that they will bring you a microphone so that people online can hear the question. Thank you, it was great, just as my brain predicted it would be. You didn't mention genes or evolution. Was that deliberate? Very deliberate, Adam. Well, those are the two things I obsess over, and that's why I'm wondering if you could... I know we don't know the answer to this, but I'm always looking for mechanistic explanations for the emergence of the things you're talking about. We sometimes talk about the neural correlates of consciousness, but underneath the neurons there are proteins and genes, so there is a mechanism underneath somewhere.
Yeah, I mean, there's a lot to say, and I won't go on too long, but I certainly think there is an evolutionary perspective on almost everything I've been saying, in terms of how and why the brain is a prediction machine in the first place. What I've argued is that it's all about controlling and regulating physiology, and of course that is something that is selected for. And then the way we experience the world, the character of our particular phenomenology, can also be understood in evolutionary terms, as an adaptation to a particular niche. So it seems
to me very evident that consciousness evolved, like all biological functions; it's not epiphenomenal, and it would be very strange to suggest that it is, because it is obviously useful to us. Now, as for whether there is a specific genetic basis for some of these things, I think that is just starting to be uncovered, through techniques like optogenetics, which can begin to do the very fine-grained manipulations that would be necessary. I mean, there is some work looking, for example, at the genetic contribution to things like synaesthesia, so we can see that there are genetic differences between different kinds of experience. But I also want to dig even deeper and think about the role of metabolism.
I mean, metabolism is often the missing part of the story. I completely agree that neurons are a very useful level of description, but to stop there is to caricature the brain; there is a lot more going on, at deeper levels too. Yes, there are two questions, at least three, at the end, so let's start with the one at the end; I think that hand came up first. Wonderful talk, thank you so much. You recently published an article on hybrid predictive processing, where you argue that there are not only top-down predictions but also bottom-up predictions, and I'm wondering how this updates your beliefs about how consciousness arises. Are these bottom-up predictions simply heuristic best guesses, providing new seeds for a generative model or iterative inference, or is this constant interaction between bottom-up and top-down predictions really necessary for consciousness to arise?
Okay, thank you. First, thank you for reading that article; it's a recent paper with my colleagues Chris Buckley, Alexander Tschantz and Beren Millidge. Just to summarise very quickly, I'm sure you've all read it, but in case you haven't: I think it highlights both the strengths and the weaknesses of this predictive processing framework, because in this paper we argue that predictions, rather than only flowing top-down with prediction errors flowing bottom-up, can flow in both directions. At first glance that means everything is on the table; it becomes a very flexible framework that can maybe explain anything, which is a weakness if you can do anything with a framework. But it's actually a strength, because you can make it specific, and it means the resources of predictive processing have nuances that, I think, are able to explain the nuances of consciousness. The idea here came from machine learning, actually: in machine learning, if a system that is learning encounters the same kind of situation again and again, you don't need to keep going through these repeated cycles of predicting, updating the predictions, and converging on a best guess; you can simply learn what that best guess is. That's called amortised inference: you learn a mapping from the input straight to the posterior. Iterative inference is when you go through the exchange, which is more time-consuming and more computationally expensive, but more flexible, because it can be applied to novel situations. So our architecture naturally trades these off against each other, learning the fast route where possible. It's like Daniel Kahneman's book Thinking, Fast and Slow; we call it inferring fast and slow. There is a fast route when things are stable and a slow route when things are not. So how does this relate to consciousness?
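The amortised versus iterative distinction described here can be sketched with a toy linear model. This is my own illustration, not the architecture from the paper; the generative model, learning rate, and numbers are all made up for the example.

```python
# Toy sketch: two routes to inferring a latent cause from an observation,
# for a simple generative model where observation = 2 * latent.

def generate(latent):
    return 2.0 * latent

def iterative_infer(observation, steps=100, lr=0.1):
    # Slow route: repeatedly refine the guess by descending the
    # prediction error (the cycle of predict -> compare -> update).
    guess = 0.0
    for _ in range(steps):
        error = observation - generate(guess)
        guess += lr * 2.0 * error  # gradient step through the model
    return guess

def amortized_infer(observation):
    # Fast route: a learned direct mapping from input to best guess --
    # here written down in closed form (guess = observation / 2).
    return observation / 2.0

obs = 6.0
print(iterative_infer(obs))   # slow route: converges toward 3.0
print(amortized_infer(obs))   # fast route: 3.0 immediately
```

Both routes reach the same posterior for familiar inputs; the iterative route costs many update cycles but also handles situations the amortised mapping was never trained on, which is the fast/slow trade-off being described.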
The way I think this relates to consciousness is not that one is necessary and the other is not, or that one is conscious and the other is not, but that they can explain different aspects of consciousness. When you walk into a room, let's say you just walked into this room now, you immediately get the gist: there are a lot of people here, you're sitting in chairs, there are various portraits of men on the walls. And then you go through this more detailed iterative inference process and you find out what's really there: oh, really,
they really are all men on the walls. So I think our phenomenology can genuinely be understood through this kind of trade-off. Those are the experiments we really want to do next, now that we have the formal framework: to see how these routes relate to different ways of experiencing things. The Society is trying to change the image we send out; sorry, there aren't many portraits of women, not in this room yet, but it will happen. So just the last two questions, at the back please, and then, unfortunately, we're going to have to stop.
Thank you very much for the talk. I wanted to ask your opinion on a missing piece: the impossibility, or not, of a test to investigate whether something else is conscious, because it seems like something we can only attribute but cannot yet fully confirm. Thanks. In fact, with a group of colleagues from my Canadian affiliation, we just wrote something about exactly this question of tests for consciousness. It transcends cognitive science; it's not as if we solved the problem. You're right, it's a very, very difficult problem. We want a test that tells us whether something is conscious, whether it's a human patient with brain damage, a non-human animal, a newborn, an octopus, or a machine, and we don't have such tests. There's a slide I was going to put up, as a kind of tribute to Murray, Adam and Alex Garland, about the Garland test, which is now part of the scientific lexicon. It points out that in the film the test is of what it would take for a person to believe that something is conscious, rather than of whether the thing itself is conscious, in the same way that the Turing test is a test of what it would take for a human to believe that a machine is intelligent, rather than of whether the machine actually is intelligent. But that's what we want: direct tests that tell us whether
the system has the property, not what it would take for us to think that it has the property. All I can say about this right now is that it's useful to consider the different cases together, because they come with very different backgrounds. When it comes to non-human animals, we share a biology, so that removes a lot of uncertainty about whether the stuff we're made of matters. When we think about consciousness in AI, so much is different that it should reduce our credence that the system is conscious, simply because so much has changed from the benchmark case where we know consciousness exists. My colleague, the philosopher Tim Bayne, talks about a so-called iterative natural kind strategy, where we assume that consciousness is a natural kind, a cluster of properties that go together in the natural world, and by iteratively moving out from cases where we can be more certain, perhaps humans in different brain conditions to begin with, we can somehow extrapolate to more challenging cases. But I think the real problem is that we will never know for sure; we will never get to 100%. Thanks. So I think we are almost at the end; time for one last question.
Sorry, I shouldn't abuse my privilege, but I've been puzzled by this from the beginning. In light of everything you said about artificial intelligence and so on, why do you use the term prediction machine? It's not the prediction part I don't understand; where does the machine come in, and what are we supposed to take away from your use of the word machine, when you've told us that consciousness is the exact opposite of a machine? Well, it depends on what you mean by machine. Yeah, well, that's what I'm asking: what do you mean by a machine?
I'm quite liberal about this. Without going too deep into it, there is one kind of machine that I am not referring to, and I think the distinction is useful. There is a kind of machine that takes things in and produces other things, like a factory; that's called an allopoietic system: it takes things in and makes other things out of them, whatever mechanism is inside. Biological systems are very different: they don't just take things in and produce other things, they produce themselves.
They are what Francisco Varela and Humberto Maturana called autopoietic systems. I still think that is a kind of machine, in the more liberal sense that there is a mechanism at work, but it is a special kind of mechanism, one that generates its own components, generates its own existence. If there is a better word, I am open to it, but I still find "machine" attractive, because it lessens the temptation to attribute something supernatural; it is mechanistic all the same. So let me thank Anil, because I am sure you will all agree that this has been a truly memorable occasion, a wonderful lecture, over in a flash, and I think the perfect example of what the Faraday Prize is about: someone who has contributed to their discipline at the highest possible level, but who also has this unique ability and talent to convey to us non-specialists the wonder of their subject. And there can hardly be anything more wonderful than human or animal consciousness. Thank you very much. We are not done yet; there is still something to give to Anil. So, Anil, I have something here for you: this is the medal and the award, in the name of the Royal Society. It is a tremendous pleasure and an honour to present you with the Faraday Medal 2023. Thank you very much, and it is wonderful to see such an enthusiastic audience. Thank you.
