
Optogenetics: Illuminating the Path toward Causal Neuroscience

Mar 28, 2024
Good afternoon. Good afternoon and welcome to the 2019 Warren Alpert Foundation Prize Symposium. Today we celebrate the achievements of four pioneers whose collective work has propelled us closer to understanding the ultimate enigma of human biology: the brain. The discoveries we honor today span the fields of genetics, neuroscience, physiology, bioengineering, and more. And together, these discoveries helped give rise to the field of optogenetics, a revolutionary approach that allows us to visualize and modulate neurons with previously unimaginable power and precision simply by exposing them to light. The work of Edward Boyden, Karl Deisseroth, Peter Hegemann, and Gero Miesenbock has not only transformed our ability to see the inner workings of the brain, but has brought us closer than ever to unlocking some of the deepest secrets of the mind, secrets such as the neural circuits involved in decision-making and behavior, as well as those involved in the development of neurological and psychiatric disorders.
To be sure, the advent of optogenetics as a discipline emerged from the collective work of many scientists over a few decades, but the four recipients we honor today made fundamental discoveries and developed critical tools that have made optogenetics an indispensable technique in neuroscience. Now, the idea of tinkering with the nervous system to illuminate its functions has intrigued scientists for centuries. Consider the now-classic frog experiments conducted by 18th-century Italian physician Luigi Galvani, who demonstrated that information in nerve cells takes the form of electricity, an electrical impulse. The notion that light could be used to manipulate the nervous system is, in a sense, an extension of Galvani's centuries-old ideas.
In the 1980s, Peter Hegemann set out to understand how green algae and other simple organisms perceive light. And in the 1990s, Peter and his team identified and characterized the light-activated molecules in algae that allow them to respond to light. Peter, together with Karl Deisseroth, then deciphered the key principles, structure and function of light-sensitive proteins. In 2002, the notion of optogenetic neural manipulation became a tangible reality thanks to the work done by Gero Miesenbock. Gero demonstrated that it was indeed possible to use light to modify neuronal activity. Gero used light-sensitive proteins from the eyes of fruit flies and genetically incorporated them into nerve cells.
This achievement not only made the neurons sensitive to light, but also offered a way to control their activity with light. Gero's work demonstrated, in effect, that it was possible to use optogenetics as a tool to study and manipulate the brain. Karl Deisseroth and his team subsequently carried out a series of key experiments showing that light-sensitive rhodopsin proteins (the proteins first studied by Hegemann in single-celled organisms) could be used to activate neurons in the mammalian brain, which, as Karl and his team discovered, contains the chemicals needed to make these proteins functional. Karl continued to work in optogenetics, elucidating the structures of several light-sensitive ion channels and discovering multiple new optogenetic activators.
Meanwhile, he has continued to use these tools to make fundamental discoveries about the inner workings of the mammalian brain. Edward Boyden worked on the first critical experiments in optogenetics. He was part of a team, along with Karl Deisseroth, Feng Zhang and others, that in 2005 published a key discovery showing that light-activated algal ion channels previously studied by Peter Hegemann could be used to control neuronal firing. Ed then, in his own independent laboratory, went on to refine optogenetics, developing optogenetic activators to enable independent control of multiple cell types in the brain, and using optogenetics to achieve neuronal silencing.
In collaboration with pioneers in holographic microscopy, Ed developed tools that use optogenetics to impose exact patterns of electrical activity on small groups of neurons, mimicking natural patterns of activation. Now, taken together, these discoveries have fundamentally reshaped the landscape of modern neuroscience. They have laid the foundation for optogenetics-based therapies that could one day be used to restore vision loss, preserve movement after spinal cord injury, or modulate the circuits that fuel anxiety and depression, among many other applications. Recognizing the people behind this type of transformative science—science that promises to change the way we understand, diagnose, and treat disease—is the Warren Alpert Foundation's raison d'être.
We would not be here today celebrating these momentous achievements without the vision of the Warren Alpert Foundation and its founder. Many of you will have heard the story behind the birth of the Foundation, but this rather mythical story is worth repeating. In 1987, Warren Alpert came across a newspaper article describing the work of Sir Kenneth Murray, a British scientist who had developed a vaccine against hepatitis B. Somewhat impulsively, Warren put down the newspaper, picked up the phone, and cold-called Murray. Fortunately, Murray answered the phone. And Warren Alpert announced to him that he had won the Warren Alpert Foundation award.
The small detail that was missing is that the Warren Alpert Foundation did not yet exist. So Warren got to work. He contacted the then-dean of Harvard Medical School, Daniel Tosteson, and asked him to help convene a panel of experts who could choose future award winners. And Dan agreed. And here we are 31 years later. During those three decades, the Foundation has awarded nearly $5 million to 69 scientists, 10 of whom have received Nobel Prizes. I am delighted to have members of the board of directors of the Warren Alpert Foundation with us today. I saw Fred Schiffman and Gus Schiesser in the back.
And I apologize if other directors have come and I haven't been able to greet them. But on behalf of the scientific community at Harvard Medical School and around the world, I certainly want to thank the Foundation for its support of science and discovery and for its tireless efforts to alleviate human suffering. I want to congratulate this year's Warren Alpert Award winners and express my deepest admiration for their transformative achievements. I now turn the podium over to our symposium moderator, who is one of the most prominent neuroscientists of our time, Bernardo Sabatini. Bernardo received his undergraduate degree in biomedical engineering from Harvard.
He later earned a doctorate in neurobiology, as well as a medical degree from Harvard Medical School. He completed postdoctoral training at Cold Spring Harbor Laboratory. Bernardo is a Howard Hughes Medical Institute investigator, a member of the National Academy of Sciences, and currently the Alice and Rodman Moorhead III Professor of Neurobiology at the Blavatnik Institute at Harvard Medical School. Bernardo and his team seek to discover the basic mechanisms underlying brain plasticity, a critical characteristic that allows mammalian brains to acquire new behaviors, learn and adapt to new cognitive challenges. And the ultimate goal of Bernardo and his team's work is to define the disturbances in these processes that can lead to neurological and neuropsychiatric disorders.
Please join me in welcoming Bernardo. Thank you, George, and thank you all for attending this wonderful occasion where we will celebrate the arc of discovery and invention that led to the field of optogenetics. And as Dean Daley just mentioned, the excitement of optogenetics for the field of neurobiology is that it brought systems neuroscience into a modern era in which causal experiments suddenly became possible. And so, from the time when scientists first placed electrodes in the brains of animals, they discovered that there were neurons that reflected very specific characteristics of the animal's environment, the state of the animal, or the motor action of the animal.
And these beautiful studies over many decades led to precise theories about how the brain could perform computations, store information, or generate motor actions. But the problem was that, as beautiful as these theories were, in most cases we lacked the tools to really test them with high precision. And optogenetics has given us the types of gain- and loss-of-function experiments to control neuronal activity that allow us to test whether these theories are correct, and whether the activity of particular neurons is necessary and sufficient to explain parts of the behavior and signals within the brain. Now, as Dean Daley mentioned, people have wanted to use light to control neural activity for decades, and the literature is littered with many failed attempts to do so.
And there's a very nice essay by Francis Crick, the co-discoverer of the structure of DNA, that he gave to the Royal Society in the early '90s, in which he talks about how one would like to be able to manipulate the activity of cells in the brain remotely. And he said the ideal signal would be light. This seems pretty far-fetched, but it is conceivable that molecular biologists could engineer a particular type of cell to be sensitive to light in this way. And so the four people we honor today are the ones who took that idea that was considered far-fetched and made it a reality.
And what I like about the history of optogenetics is that it brings together science done by many different people with many different styles. And so, we're going to look at examples of a biophysicist who wanted to solve a problem he encountered in nature, science driven by pure curiosity to understand how single-celled organisms detect and respond to light. We will see other examples of someone who wanted to solve problems in his own laboratory. He wanted to advance his own research, encountered obstacles and invented technologies to overcome them. And we'll also see examples of people who took an idea and almost on an industrial scale, created dozens and dozens of permutations of that idea to advance a field and gave us the tools we need for optogenetics.
There are very different types of science that came together to create this. Now, I've told you a little bit about why I'm excited about optogenetics, but I want to spend just a couple more minutes on why you should be excited about optogenetics and why the Alpert family should care. And George touched on these things. And there are actually two reasons. One is that optogenetics has enabled basic discoveries in the brain that now give us, we hope, new ways to treat neuropsychiatric diseases. And so we have been able to identify cells in the brains of animals that make them continue eating, for example, even when there is no caloric need and, therefore, could be relevant to obesity and diabetes.
We find cells in the brain that mediate anxiety, that mediate feelings of anhedonia, other cells that exacerbate or can correct the symptoms of Parkinson's disease or that drive processes related to Alzheimer's. And all of this work has led to a new understanding within pharma that treating neuropsychiatric diseases may require targeting circuits rather than going after specific molecules. And I think that will be the wave of therapy for neuropsychiatric diseases in the future. Second, as George mentioned, in the future, it is very likely that humans with optogenetic manipulations of their brain will walk around and use these manipulations to correct disturbed patterns of activity within the brain.
Trials are already underway in which optogenetic actuators are placed in the eye to restore light sensitivity to the retina of individuals who have lost their own endogenous light-sensitive cells. So today we will hear from four leaders we have chosen within this field, who for decades in their own laboratories have moved this field forward. It turns out that some have done it somewhat accidentally, as I said, by studying natural processes and then, realizing the importance of what they had found, continuing to push the field forward. And others have made it their mission to simply solve this problem. We have a packed day, so we're going to try to stay on time.
We won't be doing Q&A, so you'll have to find the speakers during breaks if you want to talk to them. I'm going to make the introductions brief. I'm not going to list the hundreds of awards these four have won collectively because the only thing that matters today is that they won the Warren Alpert Award. Our first speaker is Dr. Peter Hegemann. He is the Hertie Professor of Neuroscience at the Humboldt University of Berlin, a very historic institution that has made amazing contributions over the last century. And I think he's really worked on optogenetics throughout his entire scientific career.
His thesis work was titled "Purification and characterization of functional chloride pumps: halorhodopsin", one of the key molecules we still use today to manipulate cell activity. His first independent group, at the Max Planck Institute in Martinsried, was called Microalgae Photoreceptors. Basically, his whole life he's been working on this problem of how single-celled organisms perceive light, detect it, and react to it. And through his curiosity-driven research, he found, characterized, and ultimately identified with his collaborators the core proteins that became the first wave of optogenetic activators that made all of this possible. Peter, I look forward to hearing the story from your side.
Thank you. Dear Dr. Daley, dear Dr. Sabatini, thank you very much for your kind introductory words. And being here is something really special and a great honor for me. And I would like to express my deepest gratitude to the Warren Alpert Foundation, especially the selection committee and all the members who participate in it. So today, in my short talk, the first 15 minutes, I would like to walk you through the story. And I don't want to bother you with all the biophysics we do. In the second part, I would like to show you some examples of what we are doing now.
The first slide you see here is the Brandenburg Gate. And this is a symbol of the separation of the country. And on the other hand, it is also a symbol of reunification. And it was a pleasure to be here on October 3, because that is exactly the day that Germany reunified 29 years ago. So my first conclusion is that walls never help solve any problem. My second statement is that when you start a new research project, it should start with a bang. You should ask yourself about something you see that you can't explain. It may be a natural phenomenon or it may be a completely unknown or unexplored disease.
So here you see these orange waves that occasionally occur in the ocean, sometimes here near Boston as well. Or if you go to the northern territories, you see red snow. And this is called watermelon snow. And the question is how it is produced. And the reason is that green algae or ocean algae are responsible for that, like these algae. And this is Alexandrium. This is one of the most toxic organisms you can imagine. And if you keep them in your laboratory, you need good aeration, otherwise you die. And this is Chlamydomonas nivalis, which is responsible for watermelon snow.
And it is also the reason why it is called watermelon snow: it has a slightly sweet taste. So my lab is working on a relative of this Chlamydomonas nivalis. And I forgot to say the last sentence: the true joy of science is working on something whose outcome is completely open. That's why we work with Chlamydomonas reinhardtii, which is the green model organism. And when we started studying the behavior, after a few years, we realized that it was nothing new, because there was a publication in 1866 by Andrei Famintsyn, a Russian scientist. And he described in this article the behavior of Chlamydomonas.
And this was written in German, published in a French journal, and edited by the University of Saint Petersburg in Russia. And he used an assay, seen here on the bottom right, with a population of Chlamydomonas. You shine light on one side, and they move away from the light. And he already asked: what are the conditions under which they approach or move away from the light? This person became famous because he later founded a botanical institute at Moscow University. And unfortunately he no longer had time to work with algae. The other two people I would like to mention are the physicist Ken Foster (he was a postdoc in the Mike lab) and the chemist Koji Nakanishi, because they worked with green algae many, many years later.
And they worked on a variety of this species that was incapable of producing carotenoids and chlorophyll. And they discovered that these algae are not phototactic. Then they added vitamin A, and they realized that they could increase the sensitivity of these algae by three orders of magnitude within a minute of adding this vitamin A. And that was the starting point when I got interested in the work. And I spent some time in Ken Foster's lab. His laboratory was totally chaotic. Nothing could be done, but on the other hand he was also inspiring. So we studied this species for a while and then realized that the behavior was highly dependent on ionic conditions, as Famintsyn had already said 100 years earlier.
And we tried to establish the electrophysiology. And we were very discouraged by the Chlamydomonas community, because they said it will never work, except for one person, Ursula Goodenough. Where is she? She should be in the audience. Ah, there she is, Ursula, wonderful. It's great to have you here. And she was the only person in the community who encouraged me and provided me with a cell wall-deficient alga, which allowed Hartmann Harz in my group to establish the electrophysiology. He sucked the cell onto a pipette and applied a brief flash. And what he noticed is that there is a fast current, which means that ions enter at the eye, and a slower flagellar current, carried by ions entering at the flagella.
And he measured the wavelength dependence, showing that this is a rhodopsin spectrum with a maximum at 500 nanometers. And we concluded that these photocurrents are mediated by a rhodopsin. Over the years we learned to record directly from the eye using a slanted pipette, and this greatly improved the sensitivity and temporal resolution. And it allowed us to record with better temporal resolution, as you see here, and conclude that there is no delay. And from these and other measurements, we concluded that the rhodopsin is directly coupled to the ion channels. And a few years later, we came to another conclusion: that together they form a light-gated ion channel.
And the channel conducts protons and calcium. And it's interesting that we had never talked about sodium, because these algae live in the absence of sodium, something very different from the neuroscience situation we face now. And this is about a million charges in one of these photocurrents. And we concluded that the conductance is about 100 femtosiemens, which is very small and almost exactly the value we know today. We also noted that there is a low intensity range and a high intensity range, so they have a dynamic range of sensitivity of four log units. And so we established this model.
There are light-activated ion channels responsible for the upper light range, from 1 to 100%, and a lower range, where you need some amplification. And this has not been completely resolved. We also measured the beating of the flagella together with the photocurrents and showed that the appearance of this action potential, which is so important for subsequent measurements, causes the switch from forward to backward swimming, triggering a completely different behavior. Then, in parallel, we worked on the purification of the photoreceptor using the Ken Foster mutant, reconstituting it with radioactive retinal. We purified the most abundant retinal proteins, demonstrated that they are found in the eyespot, and proposed that this is a light-activated ion channel.
There was no community response at all. Four or five years later, we received an email from this gentleman. And he asked us: "I have been interested for some time in potential methods by which mammalian neurons could be transfected with genes whose product would allow light activation and depolarization of action potentials." That started a very interesting exchange. And probably most of you know that this is Roger Tsien, who became famous for his GFP studies, and he failed. And the reason he failed is that we sent him the wrong gene. So at least we proved to ourselves that it was the wrong gene.
The abundant retinal protein of Chlamydomonas is not the photoreceptor we were looking for. But the credit for clearly showing the way forward is still his. This is my tribute to Roger Tsien, who died too soon. He was always wonderful. He had no use for small talk: when you met him, he continued the discussion at the sentence where we had stopped three or four years before. Then Suneel Kateriya joined my lab and discovered, in a Kazusa library, the first cDNA library for Chlamydomonas, two new genes, and they were related to rhodopsins to some extent. And they showed a seven-transmembrane helix domain and a long cytosolic terminus, which represents approximately 40% of the protein.
And we decided to express it in oocytes, but we didn't have the oocyte method established in the laboratory. So we teamed up with Georg Nagel in Frankfurt and expressed this protein in frog oocytes, using two-electrode voltage clamp experiments. And within about three weeks we had shown that these photocurrents resembled the photocurrents we had recorded in Chlamydomonas. It was immediately clear that this was the protein we were looking for. These are the original experiments with longer illumination times, with channelrhodopsin, which shows a very strong pH dependence and a clear cation dependence. And we called this protein channelrhodopsin, because it unifies an ion channel and a sensory unit.
And there are two of them, channelrhodopsin-1 and channelrhodopsin-2. So why had Tsien worked unsuccessfully on channelrhodopsin-1? Because the currents were too small. And then he didn't find a person in his lab to work on channelrhodopsin-2. So after this experiment, we expressed channelrhodopsin-2 in human embryonic kidney cells, because channelrhodopsin-1 showed only small photocurrents. And the next thing we found is that the seven-transmembrane helical fragment is enough to generate this photocurrent, and 60% of the protein is unnecessary. So we had a small and very compact system that is a sensor and an ion channel together, that is, the channelrhodopsins.
And the conclusion was that channelrhodopsin can be functionally expressed in animal cells. This conclusion stimulated several laboratories, five basically, to work on this. And the first publication came out in which Karl Deisseroth and Ed Boyden showed that this works in hippocampal neurons, and you can deliver trains of light pulses, and the response you get is a train of action potentials. The next person was Hiromu Yawo in Sendai, and he demonstrated that it works in brain slices. And this is often forgotten. And the third person was Stefan Herlitze. He showed that this then worked in an animal.
And the first animal was not the mouse. It was a chicken embryo. And the fourth person was Alexander Gottschalk. He produced a C. elegans line in which channelrhodopsin could be used to manipulate neurons. And the last is Zhuo-Hua Pan, who demonstrated in blind mice that one can restore vision. And he also came quite late, so he hasn't been in the spotlight until now. But he belongs among the key people. And then, as you know, Karl and Ed took it from there and did all these modifications and other things. And certainly, in the meantime, it is also strongly expressed in zebrafish and Drosophila and in the mouse, and you will hear more about this later.
So the technology is relatively simple. You take the DNA from a microorganism, Chlamydomonas, for example, connect it to a promoter for the cell type of interest, package it into a virus, inject the virus into the brain, wait a couple of weeks, and then you can replace the injection needle with a light guide, and then you can study the behavior, among other things. So what is needed is a photoreceptor that is small and genetically encodable, a promoter element, and a chromophore that is present in sufficient quantity. And this was my biggest surprise: the brain contains retinal in sufficient concentration to efficiently reconstitute the opsin.
And certainly, you need a response that can be interpreted, and probably many responses that are essential for a living organism in nature are not so easily identified in a mouse in a cage. So specificity is an important issue: you can target a single neuron type. And in parallel, you can target another neuron type with another actuator or inhibitor, and then you can study learning and memory, sleep, locomotor activity, feeding, and certainly, as of recently, vision, hearing, sexuality, autism, addiction, anxiety, Parkinson's, and so on. And Karl and Ed will talk about this. So what are we left with? We went back to our original starting point: we wanted to understand the photoreceptor.
And this is the current knowledge we have about channelrhodopsin. This is based mainly on mutagenesis and biophysics studies, and also on the X-ray structure provided by Osamo Diwaki in Japan. And he used a hybrid that was originally designed by Hiromu Yawo in Sendai. And certainly MD calculations that tell us where the water is most likely to be in the channel. And you see here the retinal chromophore that provides light sensitivity, the green amino acids that are responsible for color tuning, and the brown amino acids that are responsible for conductance and ion selectivity. And we have mutated them all, and we know more or less what they are doing.
But the key elements of the protein are the gates, the central gate and the inner gate, which are closed in the dark. And you should keep in mind that the structure we have is the dark state of channelrhodopsin, which means that it is closed. And what we still need is information about the open state, which is not available at the moment. I would also like to draw your attention to the fact that, unlike other proteins, sensory photoreceptors are very dynamic. They undergo thousands of conformational changes after light absorption, and only a few of them are detectable as absorption changes, because they are related to the chromophore.
And this can be followed through the changes in absorption wavelength: 470 nanometers for the dark state, then 500, then 390, and then 520. And this is the main conducting state. The conducting state decays in 10 milliseconds and returns to the dark state only on a time scale of seconds. But the most interesting part is not highlighted here, which is the initial state. And through pump-probe experiments, we studied it in detail together with John Kennis. If the cells are excited from the electronic ground state to an excited state, conformational changes occur along a minimal-energy landscape. And then there is a crossing, a conical intersection, between the excited state and the ground state.
And here the decision is made to return to the dark state or to move on to the photocycle product. This is a very central point for the efficiency of these rhodopsins. The decision is made on a picosecond time scale, in the range of 10^-12 seconds after the flash. Everything else, everything that comes after, is dark activity. So we certainly look at the chromophore to understand the system and also to manipulate it in the sense that we can use it. And that was done with Karl many years ago, and with Ofer Yizhar, his postdoc, who is now at Weizmann and will come tonight.
And I'll give you some examples. If you mutate this residue, you get larger currents. If you remove this residue, you get a shorter open-state lifetime, but more importantly, you eliminate the voltage sensitivity of the protein. And that allows you to fire action potentials with greater speed, at higher frequency, to study, for example, interneurons. And the third example is this one: if you mutate it, you greatly slow down the photocycle. It goes from 10 milliseconds to approximately 100 seconds. And this can be used for continuous depolarization. So here is an example. You apply blue light.
You excite the cells. That triggers action potentials. And then you apply green light, and it switches back to the dark state. So these step-function rhodopsins were very useful for later experiments. But the current itself is also very different in other channelrhodopsins. For example, this one is inactivating, and it is an inward rectifier. And this one inactivates during a light flash; it again shows inward rectification. And here is another species that is not an inward rectifier. And here is another species from somewhere near Hawaii, recently discovered. It is completely inactivated by continuous light, for some biological reason we don't know.
These properties are grouped into different evolutionary branches. And it also turned out that color tuning is very, very important. So we can collect channelrhodopsins from different organisms to address different cells. Here is an important experiment, or an important question: the question of inactivation. And this is a typical biophysical question, because it requires a deeper insight. So if you look at the photocycle again, the basic photocycle I have shown you is shown here again. And recently we discovered that there are two conductances, one early and one late. And the early one is selective for protons, and the second one is selective for sodium.
And depending on the balance, in your neuroscience experiment you get a photocurrent that is more selective for protons or more selective for sodium. Alternatively, this isomerization more or less competes with a third one, a syn-anti or anti-syn isomerization, and this produces a second dark state, which initiates its own photocycle. And the open state in this photocycle has only a weak conductance and is more proton-selective. And this is why in steady-state light you get this reduced steady-state level, this one here. So, for the person applying these tools, the system is more complicated than you can probably imagine. And the question is how can we manipulate this selectivity?
And a few years ago we identified two key residues, one at the inner gate and one at the central gate. And if you replace this glutamate, which is conserved in most channelrhodopsins, you can manipulate the relationship between proton conductance, shown in red, and sodium conductance, shown here in green. And then you can certainly combine these different mutations to shift the two even further in one direction. But you can also look at other natural channelrhodopsins. This was done by Johannes, and he compared the proton and sodium selectivity. And you end up with PsChR, which is almost exclusively sodium-selective, or you can look at Chrimson and CsChR, which are almost exclusively proton-selective.
So only at neutral pH and low sodium do you get a current here. And at alkaline pH, there's no current. So what we designed in the laboratory, nature made billions of years before. You just have to find it. This is a problem: if you don't know what to look for, then you won't find anything. And Ed's lab, for example, has identified this Chrimson, which is very interesting for many reasons. It is almost purely proton-selective, so at neutral pH and low sodium you get a large current and no current at alkaline pH. But if you mutate this glutamate, at this position, you convert it into a sodium-selective channelrhodopsin.
And what we conclude from this is that the selectivity filter in Chrimson is in a completely different position, very close to the surface, while the central gate is not important at all and does not exist in this variant. So nature has developed many different ways to control conductance and selectivity. And if we look at the crystal structure, we see the reason: the water pore in Chrimson is blocked at this position, while there is freely flowing water in the Chlamydomonas channelrhodopsin. Unfortunately, we have so far failed to produce a potassium-selective channelrhodopsin. And this has been on the list for a long time.
And Karl and I met again to work on it. But until this is over, we reached a compromise and established two-component optogenetics. And in this case, we combined a soluble photoactivated enzyme, which is a cyclase that produces cyclic AMP, and that allowed us to activate a cyclic AMP-activated potassium channel, which can induce hyperpolarization. Alternatively, we recently worked on a rhodopsin cyclase, which is also a rhodopsin with an unusual tail, and this tail is an enzyme directly coupled to rhodopsin, never found before. And this produces cyclic GMP, and we can use cyclic GMP-activated potassium channels to hyperpolarize the cell.
And here is an example from a postdoc in my group. He used this blue-light-activated photoreceptor enzyme and combined it with a small potassium channel from a bacterium. And he got a very good, very efficient hyperpolarization. And thanks to this amplification, it can be used at very low light intensities, because it drives 10,000 charges after absorbing one photon, which is certainly much more than a pump, for example, which transports only one charge, or an ion channel like the Chlamydomonas channelrhodopsin, which carries maybe 10 to 20. So I'd like to show you some examples. Franziska Schneider, a former student of my laboratory, now has a group working on cardiac optogenetics.
And she tried it and she was able to inhibit the action potentials of the heart cells and also inhibit the heartbeat in her model systems. So it works very well. And here is an experiment with parametric neurons. And you see here the wonderful expression. And here's a little flash of blue light that causes prolonged hyperpolarization in these cells. And it would certainly be better, and probably more convenient, to use cyclic GMP instead of AMP. And that is why she recently established in my laboratory the functionality of rhodopsin cyclases, curiously identified by a theoretical physicist. She became unemployed and switched to biology, and she discovered these rhodopsin cyclases with her friends.
And this is good for controlling cyclic AMP and also for hyperpolarizing cells. These are just some examples. And I would like to summarize: the main actor remains the light-activated cation channel. What we are still missing is a selective potassium channel. It has been complemented by anion channels and pumps, and also, for some time now, by light-activated enzymes that complement the ion transporters. So this is still the main actor. I could continue, but in the interest of time I would like to finish. My conclusion is that algae have taken over brain research. And if we continue to destroy the climate, they will probably take over the planet and control it.
What they have done for 3 billion years. And I still have hope that that doesn't happen, that the human species also survives for some time. And this is my group. And I would like to express my gratitude, thank you very much to all my coworkers that I had the privilege of working with, to my friends and colleagues in photoreceptors and neuroscience, and to the Chlamydomonas community where the whole business began. And certainly, I had collaborators over the last three decades, and I would like to mention a few. Firstly, Karl, who has worked with us for the last 12 or so years.
And he transferred all the knowledge to the neuroscience community, and he had deep enough molecular knowledge that it was always a pleasure to work with him. Thanks, Karl, for that. And Ofer Yizhar, a former student, now a group leader at the Weizmann Institute, where he established optogenetics; Georg Nagel, who worked with us in the past; and certainly many spectroscopists and crystallographers. And I would like to thank you for your attention. Thank you. OK. Thank you very much for that wonderful talk. So we have two halves of today's presentation, and in each we have two of our award winners speaking.
Interspersed between those talks are two postdoctoral fellows from laboratories in the Department of Neurobiology at Harvard. And these are talks from young scientists who are using optogenetics in their own research to probe the inner workings of, in both cases, the mouse brain. So the first of these talks will be from Kimberly Reinhold. She did her undergraduate work here at MIT and then went to UC San Diego to do a PhD with Massimo Scanziani, and then I was lucky enough to have her join my lab here. And she'll tell us about her work using optogenetics to activate and suppress neurons in the brain to discover how a mouse learns a new skill.
Kim? Thank you. It is a real pleasure to be able to share a vignette of how we apply optogenetics. When I was in college, they required me to take a physical education class. So I signed up for squash, because I had never played a racket sport before. And the first day, I showed up and tried to hit the ball with the racket, and it went very, very badly. The instructor sent me to a court alone to practice. And I did, I practiced. And I attended every class that semester. And at the end of the semester I still couldn't hit the ball with the racket, but the other students seemed to learn.
Squash is an example of how we learn through trial and error. We learn to associate sensory inputs, like the ball flying toward my head, with appropriate motor outputs. Maybe swing the racket or, in my case, run away. And we learn these associations through practice and feedback. Learning by trial and error is a fundamental component of many different cognitive processes. Therefore, it is vital that we understand where in the brain and how it occurs. What do we know? Well, we know that people with Parkinson's disease have damage to the basal ganglia, a set of nuclei deep in the brain outlined in green.
And we know that these people have trial and error learning problems. These people can learn things like episodic events, names of people, the time of day something happened. But they have deficits, both in motor learning (as in the squash example) and in purely cognitive trial-and-error learning tasks. Interestingly, people with damage to a different part of the brain, the temporal lobe, are amnesiac. So these people can't learn things like the color of the experimenter's clothes, but they can learn practice-based tasks, trial and error tasks. And so we see a dissociation that suggests that the basal ganglia specifically supports trial-and-error learning.
And this has been confirmed in several model species. To figure out what goes wrong in disease, it is important that we understand how trial-and-error learning works in a healthy brain. So today I will tell you about our work to try to do this: to pinpoint more precisely where in the brain trial-and-error learning is computed, and how. I will first explain our approach. We have developed a task in which mice learn through trial and error. There are two stages: learning and then executing the learned behavior. I will ask whether the basal ganglia are needed after the mice have learned.
Whether the basal ganglia are needed during learning. And if they are, how? To ask whether the basal ganglia are needed, we would like a way to shut down these structures and look for effects on behavior. Diseases and strokes do this: they shut down areas of the brain. And lesion studies do the same. But those manipulations are rarely specific to a single brain circuit and, unfortunately, are irreversible. There are other types of manipulations we can impose on the brain. And these have greater spatial and temporal precision, but what we're really looking for is a technique with high spatial precision that also has excellent temporal precision.
For example, capable of probing really fast cognitive processes, such as the updating of learning that occurs between racket swings, within seconds. And here, optogenetics fills the gap. First let me tell you about the task we used. Mice, like humans, have basal ganglia that receive sensory inputs and project to motor outputs. To study this area we designed a task in which mice learn through trial and error to associate a sensory component with a motor component. For the sensory component, we could have chosen an external stimulus, such as a flash of light, but that would activate the eye and a number of visually responsive areas throughout the brain, many of which project to the basal ganglia. We would then have many active brain areas in different regions, and it would be difficult to follow the flow of neural activity through the brain from stimulus onset to motor output.
So we decided to play a trick. We restricted the cue to the activation of a single area of the brain, the visual cortex. It is well studied, accessible, and has a specific projection to the basal ganglia. We can use viral and genetic tools to express an optogenetic protein specifically in these neurons, with cell bodies in the cortex sending their axons to the basal ganglia. And then we can implant a fiber through the mouse's skull, shine blue light through that fiber into the brain, and that will activate the optogenetic protein channelrhodopsin, which you've already heard a little about.
These are cation channels, and their activation causes the cell to depolarize and fire action potentials. In this way we activate a specific population of neurons. The motor component of the task is reaching for a food pellet. We have the mice do this in the dark, so they can't see the food. And we've made sure they can't smell or hear the food either, so they have to use their forearm and front paw to check whether the food pellet is there. So the task proceeds like this. Mice reach a lot. Often there is no food there; they just wait. The optogenetic cue comes on, and then they reach and there is food.
Therefore, they have to learn through trial and error that the optogenetic cue predicts food. Mice learn to do this. Here I will show you a movie where the pellet is moved into position. Let's see if I can find a pointer. Thank you. You will see the pellet move into position. And then the cue is activated, the optogenetic cue. So keep your eyes on this blue circle: when the light flashes, that means we are stimulating neurons in its brain. The cue is about to turn on. There, it turned on. And the mouse has learned that this means the food is available, so it reaches for it, grabs it, and eats it.
Remember, this happens in the dark, so the mouse can't see the pellet. We're spying on it with an infrared camera. The light here is just a flashing blue light that has nothing to do with the cue; it's just to make sure the mouse isn't simply reaching to flashing lights. And we have a variable interval between pellet presentations to make sure it isn't just timing the interval. So it learned this task. And we can plot its behavior across many cue presentations, or trials, on the y-axis versus time in seconds on the x-axis. And we can mark each reach and see that sometimes it successfully grabs the pellet, other times it drops it, sometimes it misses, and often it reaches when there is no pellet there.
But the important thing is that when we add up all the reaches across all of these trials and plot them as a histogram, with reaches on the y-axis and time on the x-axis, we see this huge increase in the frequency of reaching right after the cue. And this tells us that the mouse has learned the association between cue and food. We have taught several mice to do this. Here is the mean and standard error. And we think these animals are paying attention to the optogenetic cue, because when the cue is activated, they reach. This is the same thing I showed you on the last slide.
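The peri-cue reach histogram described here is a standard peri-event analysis. For readers who want to see the computation spelled out, here is a minimal sketch in Python; the data, function name, window, and bin width are hypothetical illustrations, not the lab's actual analysis code.

```python
import numpy as np

def peri_cue_histogram(reach_times, cue_times, window=(-5.0, 5.0), bin_width=0.25):
    """Count reaches relative to each cue onset and pool across trials.

    reach_times : 1-D array of reach onset times (s), session clock.
    cue_times   : 1-D array of cue onset times (s), session clock.
    Returns bin edges (s, relative to cue) and reach counts per bin.
    """
    edges = np.arange(window[0], window[1] + bin_width, bin_width)
    counts = np.zeros(len(edges) - 1)
    for cue in cue_times:
        rel = reach_times - cue                      # align reaches to this cue
        in_window = rel[(rel >= window[0]) & (rel < window[1])]
        counts += np.histogram(in_window, bins=edges)[0]
    return edges, counts

# Toy example: reaches cluster shortly after each cue, as in the trained mice.
rng = np.random.default_rng(0)
cues = np.arange(10.0, 310.0, 20.0)                   # a cue roughly every 20 s
spontaneous = rng.uniform(0.0, 310.0, size=60)        # uncued reaches
cued = cues + rng.uniform(0.2, 1.5, size=cues.size)   # reaches just after the cue
reaches = np.sort(np.concatenate([spontaneous, cued]))
edges, counts = peri_cue_histogram(reaches, cues)
print(counts)  # a learned association shows a peak in the bins just after time 0
```

In this toy data, the bins just after time zero carry most of the counts, which is the signature of the cue-food association described above.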
But when we omit the cue on a random subset of trials, the mice do not reach, even though the pellet is there. And when we omit the pellet but the cue still comes on, the mice still reach. So it seems that animals can learn this cue-response association. Are the basal ganglia needed? The basal ganglia are good candidates for linking the cue to the motor output, because these structures receive diverse sensory inputs and project to motor outputs. And because there is a direct pathway from the cue-activated cortex to the input nucleus of the basal ganglia, called the striatum.
And now I want to focus on this input nucleus, the striatum, and in particular the subregion that receives that input from the cortex, that receives that cue signal. This is the dorsomedial tail of the striatum. And for the rest of the talk, when I say striatum, I'm referring specifically to this part. Does the striatum link the cue to the motor output? Does it trigger the motor output? Well, if this simple linear model is correct, then we should be able to see a change in neural activity here around the time of the cue. We then tested this by recording neuronal activity in the striatum.
We can implant recording electrodes in the animal's brain and record the activity of neurons. Here we see action potentials, or spikes. We can draw a line every time we see a spike and represent the activity of a neuron this way, where different rows are different cue presentations and the x-axis is time. So you see that this neuron has some activity at the time of the cue. And we have found several neurons that appear to show changes in activity at the time of the cue. So this could be triggering the reach. Second, if this linear model is correct, then when we shut down the striatum, animals should no longer be able to perform the cued reach.
So we have an optogenetic approach to do this. There are output neurons in the striatum. But there is a second general class of neurons in this area, and they are the locally projecting inhibitory neurons. So we can put a red activatable optogenetic protein into these cells, shine red light through two bilaterally implanted fibers, basically activate these inhibitory neurons, and they act to turn off the neurons that project the output neurons. Basically, what we're doing here is performing a spatially and temporally precise loss of function, where we specifically turn off the part of the striatum that receives the input signal and then sends the output to the rest of the brain.
And we want to know whether this striatal output is necessary for the cued reach. Importantly, previous work has shown that inactivating this part of the brain using drugs does not paralyze the animal's arm, so it can still move. What effect does the red light have on neuronal activity? Let's take a look at this cell we saw earlier. What we found is that turning on the red light prevents spikes in this neuron. And this is an inactivation that lasts one second. But at the end of the red light, activity returns. So unlike a lesion, this is reversible.
We can suppress the activity of cells that respond to the cue. You can see it right here. And across all the striatal projection neurons that we recorded, we see about an 86% reduction in activity. So red light suppresses the striatum. We can turn on the red light on a random set of trials at the time of the cue and ask whether the mouse can still perform the cued reach. So here is the cued reaching you've seen before. And now we want to know what happens when we suppress the striatum. Can the mouse still perform that cued reach, or is it gone?
What we see is that the mice are perfectly capable of making cued reaches. And, in fact, there is no change in the animal's reaction time or ability to successfully grab the pellet and eat it. So we don't see any motor deficit. Thus, it appears that the striatum does not trigger the cued reach after learning, and there must be some other area of the brain that fulfills this function. We don't know which yet. We have some ideas. It turns out that these neurons that project to the striatum also have collaterals to the thalamus, pons, and superior colliculus. And now we're investigating whether one of those areas might be the link.
But I started the talk by telling you that the basal ganglia are really important for trial-and-error tasks. So perhaps the basal ganglia are necessary during learning. To test this, we must have a way to measure learning. Mice make cued reaches, but they also make spontaneous reaches even before the cue is activated, just hoping that a pellet is there. And we would expect learning to involve an increase in cued reaching, plotted here on the y-axis, and a decrease in uncued reaching, plotted on the x-axis. And this is what we see. Below is an example of a single mouse's learning path.
You can see that from the first day to the last day, the animal increases its cued reaching. And there is a small decrease in uncued reaching. We can then plot the direction of this learning change as a vector from the first day to the last day. And across mice, we see that all animals learn by shifting their behavior in this way. Interestingly, when we shut down the striatum for one second on each cue presentation, the animals did not show the normal pattern of learning. Their behavior is altered. And here are the averages for those two groups.
And so it seems that the striatum is necessary during learning. We can plot these data in a different way and combine the x and y axes by asking how much more cued reaching the mouse does relative to uncued reaching. And this is a metric that we call d prime. The details don't matter; it's just a way to quantify learning. And I'm going to plot it on the y-axis, with the training day on the x-axis. To give you an idea of what this means, low values mean that there is no cued reaching.
At intermediate values, the mouse is beginning to make cued reaches. And at high values, the animal is making really stereotyped cued reaches. Thus, animals with an intact striatum learn this task. But when we inhibit the striatum, the mice don't learn. Maybe my striatum was asleep when I was trying to learn squash. Importantly, we have not permanently damaged the mice in this red cohort, because we can perform a recovery experiment. We stop the manipulation, and now these animals have an intact striatum, and the same animals can learn. Maybe I still have hopes of playing squash.
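The learning metric described a moment ago is just a contrast between cued and uncued reach rates. As a hedged sketch, one simple standardized-difference index can be computed as below; the exact d-prime definition used in the study may differ, and the function name and example rates are assumptions for illustration.

```python
import numpy as np

def learning_index(cued_rate, uncued_rate, eps=1e-9):
    """A d-prime-like contrast: how much more the mouse reaches after the cue
    than at baseline. Near 0 means no cue-specific reaching; larger values
    mean more stereotyped, cue-locked reaching. Illustrative definition only."""
    return (cued_rate - uncued_rate) / np.sqrt(0.5 * (cued_rate + uncued_rate) + eps)

# Example: reach rates (reaches per second) inside vs. outside the post-cue
# window, one value per training day for a hypothetical mouse.
cued = np.array([0.05, 0.10, 0.25, 0.40, 0.55])
uncued = np.array([0.05, 0.06, 0.05, 0.04, 0.03])
print(np.round(learning_index(cued, uncued), 2))  # rises across days as the mouse learns
```

Plotting such an index against training day gives the learning curves she refers to: flat near zero when the striatum is inhibited, rising when it is intact.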
So it appears that the striatum is needed to learn the cued reach. Can we get any idea how? One hypothesis is that the striatum could obtain information about the outcome of the animal's behavior in the past and use that feedback to update the animal's behavior in the future. And that could happen on different time scales. This update could be quick, as in the case of the squash racket swing. When you swing, you're not very good; when you swing again, you're a little better, but you're still not very good. And that update is very fast, on a time scale of seconds.
Or we can imagine a student preparing for an exam. You learn a ton of information, but it doesn't really sink in until you sleep on it. And maybe here the update happens on a time scale of minutes to hours. So if we had a way to measure learning on a faster time scale, we could ask about basal ganglia involvement on a faster time scale. I have shown you that we can measure learning across days. If we similarly had a way to measure learning between trials within a day, then we could probe faster cognitive mechanisms. So now, instead of looking at the probability of reaching, we will look at the animal's reaction time, which is the delay until the first reach.
And we're going to compare the reaction time at the beginning of the day with the reaction time 100 trials later. If the animal improves, its reaction time speeds up. Let's plot that improvement, that speeding up, on the y-axis. On the x-axis I'm going to show the change we would expect if the animal simply changed its rate of reaching before the cue. That is reaching before the cue, so it has nothing to do with cued reaching. So we have an uncued component and a cued component. We found that mice learn within a day, as shown here. They increase cued reaching and decrease uncued reaching.
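To make the within-day measure concrete, here is a minimal sketch of comparing reaction times early in a session with those about 100 trials later. The function name, trial counts, and toy data are assumptions for illustration, not the lab's actual analysis.

```python
import numpy as np

def within_day_improvement(reaction_times_s, n_early=20, n_late=20, offset=100):
    """Median reaction time (cue onset to first reach) early in the session
    minus the median roughly 100 trials later. Positive values mean the mouse
    got faster within the day. Purely illustrative."""
    rt = np.asarray(reaction_times_s, dtype=float)
    early = np.nanmedian(rt[:n_early])
    late = np.nanmedian(rt[offset:offset + n_late])
    return early - late

# Toy session: reaction times drift from ~2 s down toward ~1 s over 140 trials.
rng = np.random.default_rng(1)
rts = np.linspace(2.0, 1.0, 140) + rng.normal(0.0, 0.2, 140)
print(round(within_day_improvement(rts), 2))  # > 0 indicates within-day learning
```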
They increase the range with cues and decrease the range without cues. Does the striatum store this accumulated learning in a day? If so, then when we shut down the striatum at the end of the day (which we can do with optogenetics very precisely, wait and then shut down the striatum) we expect the animal's behavior to return to where it was. the beginning of the day. But if the striatum does not store this learned cumulative performance change, then we expect there to be little change in the indicated range. And this is what we see. So it seems that the striatum does not store the improvement in one day.
But if we turn off the striatum early in the day, and turn it off on all trials, we see that the animals never seem to develop or accumulate that improved performance. And we can see the incremental updates. Each time the mouse contacts the pellet, it improves. But this incremental improvement is reduced when we shut down the striatum. Therefore, we favor the hypothesis that the striatum acts on a very rapid time scale to update behavior. And in conclusion, what I have shown you today is that activating a specific set of neurons in the brain is enough to teach mice a cued response.
The striatum is not necessary after learning, but it is necessary during learning. And we think it's needed specifically to provide these quick incremental updates. And thus an image emerges. This is just a cartoon, but we can imagine that the striatum is the arrow that pushes the animal through behavioral space. And that in healthy learning, these incremental updates add up to get the animal to an optimal place. Maybe addiction is overloaded learning. Maybe there is a Parkinson's deficit in that update. Maybe Tourette syndrome or OCD, or wandering into the wrong part of behavioral space, or the brain getting stuck in a local minimum.
We think that perhaps the deficits that we see when we shut down the striatum, although small on a rapid time scale, could really accumulate over a long time to produce the really serious problems that we see in basal ganglia dysfunction and disease. So I would like to thank the people who contributed. Of course, thanks to all the people who developed optogenetics. Bernardo Sabatini, my advisor, and Marci Iadarola, an excellent technician who has been working with me on this project. Thank you. Thank you very much Kim. Our next speaker is Dr. Edward Boyden from MIT. Ed is the Eva Tan Professor of Neurotechnology at MIT, and also a Professor at Media Labs, as well as an investigator at the Howard Hughes Medical Institute.
Ed did his undergraduate studies at MIT, where he apparently studied quantum computing. That's what I learned from his CV. I was surprised by that. And then he went to Stanford to do his PhD in neurobiology, working with Jennifer Raymond and Richard Tsien, who is the brother of Roger Tsien, whom we heard about before, somewhat ironically. While at Stanford, he worked with Karl Deisseroth and Feng Zhang to first introduce channelrhodopsin into neurons and demonstrate, as we learned earlier, that the activity of mammalian neurons could be controlled with this tool. Then he came to MIT. And I think his laboratory is truly one of the broadest and yet most consistently creative, in many areas, that I have ever seen.
His group has, of course, worked on finding and engineering new optogenetic actuators, and we've heard a little about that. His group has also developed robotic systems to perform electrophysiological recordings in the brain. He has worked on new amplifiers for electrophysiological analysis. And more recently, he developed what almost seems like a comical approach, but is actually incredible: to look at tissue at higher resolution, he decided not to improve the microscope, but to enlarge the tissue. And so he invented the field of expansion microscopy, which is really providing remarkable insight into how complex tissues are organized. Ed, welcome.
Excellent. Well, first I would like to express my gratitude for being here receiving this award along with my good friends, collaborators and colleagues. It is a tremendous honor. And I'm excited to be able to talk today about optogenetics, but also about how different technologies could fit into this great quest to understand the brain. The brain is so complicated that I think we need to think about an integrated set of tools that allows us to map the brain, that allows us to control it. And optogenetics, of course, is one of those key techniques. And then what we could call the opposite of optogenetics: observing the brain in action.
And, interestingly, tools of that nature are emerging from the optogenetic toolkit itself. Why is this so difficult? Well, the brain, among biological systems, exhibits extraordinary spatial and temporal complexity. If you think about it, brain cells are huge, right? They span centimeters; we have neurons a meter long running through our spinal cord. And yet, if we care about the building blocks of neural computation, we also care about axons and dendrites, synaptic connections, and biomolecules inside brain cells. So how do you see across all those scales, and control all those scales, of spatial extent?
And there is also time. Of course, if you care about learning, memory, Alzheimer's, development, or aging, these are long-term processes; they take hours to days to years. The elementary units of neural computation, however, are electrical pulses on a millisecond time scale. So if we're thinking about ways to create tools to address brain questions, we really need to think fundamentally about space and time. Today I'm going to tell you how thinking about these properties produced principles for discovering and engineering tools, focusing on optogenetics. But toward the end, I hope to also talk a little bit about how we're trying to develop strategies for imaging neural activity.
And of course, if you can see and control neural activity, great. But it would be nice to have a brain map to know where in the brain to look at, or perturb, neural activity. My hope is that if we can bring these toolsets together, comprehensive pictures of how brain circuits work could become increasingly feasible. So I'll start with optogenetics. You have already heard the idea introduced in the first two talks. I first met Karl when I came to Stanford, and we started thinking about how to control neural dynamics by going through the laws of physics.
Could you use magnetic fields? Could you use light? And light, of course, would be great, as Francis Crick also independently highlighted, because it's as fast as anything can be and you can point it at things. You have to bring light to the brain. And while people have been inserting electrodes into the brain for more than a century, you can also insert optical fibers or other types of optical devices. The next question is: do you make a molecule that converts light into electricity or do you find one? And as Peter already presented, there is a family of microbial opsins whose study goes back many decades and that in single-celled organisms convert light into electrical signals.
So the first of these to be characterized was actually a light-driven proton pump, shown here in structural form. It is a seven-transmembrane protein with an all-trans retinal chromophore that absorbs light, and light absorption drives rapid conformational changes in what its discoverers called bacteriorhodopsin, a light-driven proton pump. It is found in halophilic archaea: microbes that live in very salty water. A decade later, several groups found in the same species a light-driven chloride pump that they called halorhodopsin. It shares some similarities, but differs in certain key residues that make it a chloride pump rather than a pump for positive charge.
And then you've already heard about Peter and his colleagues, who discovered channelrhodopsins, these light-gated ion channels. Originally the ones they found were cation-conducting, and we now know there are also anion-conducting, inhibitory ones that let negative charge through. For me, one of the key papers that got me interested in these opsins was a 1999 study. At the time, these molecules had been characterized in halophilic archaea, so people worked with very high salt concentrations. Here is the electrical current versus chloride concentration, and you can see that the peak is at a very high level of chloride.
And if your brain is like mine, chloride is low, down around here, so that molecule wouldn't work very well; that is the light-driven chloride pump halorhodopsin from one particular species. But one of these molecules had, for some strange evolutionary reason, its maximum function in the low-chloride regime. It is actually one of the first molecules that Karl and I started collecting from our colleagues. The first one we tested was, of course, the one you've already heard about from Peter, discovered by him and his colleagues: channelrhodopsin-2. We put the gene into neurons and delivered short pulses of blue light from a standard light source used, at the time, to view GFP.
And suddenly, on the first try, we discovered that action potentials could be generated in cultured hippocampal neurons. Furthermore, it did not require the addition of all-trans retinal, the chemical cofactor; for some fortunate reason, mammalian neurons already produce it. So a lot of what we've been doing since then has been trying to figure out what the principles are for finding these molecules and pushing their physical properties to the limits of speed, spectral sensitivity, and all the other parameters we would like to achieve. Since this is a summary slide, I'll go over some of the example molecules on the next few slides.
But what we found is that members of these three classes (light-driven proton pumps, light-driven chloride pumps, and light-gated ion channels) can be found that are sufficiently safe, effective, fast, and powerful to function in neurons, which of course live in a somewhat delicate environment with a lot of complex physiology. So Brian, when he was working with me, looked at light-driven proton pumps and discovered that a member of the archaerhodopsin class, if genetically expressed in neurons and illuminated with green or yellow light, will powerfully pump protons out of the neurons and silence their activity quite strongly.
Halorhodopsins pump in the opposite direction, moving negative charge, so they have a similar physiological effect although a different biophysical mechanism: they pump chloride into the cell in response to green or orange-yellow light and hyperpolarize the neuron, shutting it down. By eliminating neuronal activity, you can ask whether a set of neurons is needed for a behavior or a pathological state. And then there is channelrhodopsin-2, which we put into neurons in 2004: you can shine blue light on the neurons and let positive charge in, activating them, which allows you to ask what is sufficient to drive those neurons. And I should mention that Amy Chuong, when she was a graduate student in our group, pushed halorhodopsins to some of their physical limits.
And Nathan, when he was in our group, tried for and achieved the same thing with channelrhodopsins. Now, light-driven proton pumps: we were a little surprised that this worked. We did not think protons were very abundant inside neurons or outside of them; at neutral pH, they are much less concentrated than sodium, potassium, and the other ions we usually think about in neurophysiology. So, to our surprise, we found that this molecule, archaerhodopsin-3, when we introduced it into neurons, allowed us to generate large photocurrents and silence neuronal activity even in awake, behaving mice. This was really the first experience of nearly 100% digital silencing of neural activity in awake, behaving animals.
And we find these molecules by searching genomic databases, or sometimes by doing our own genomic surveys. There are different strategies you can take. If you find a molecule you like, you can search locally in genomic space: look at species related to the one your molecule came from and see if you can find improvements. So, for example, there is a species of archaea that archaerhodopsin-3 came from, and Brian went on to look at closely related species. The original molecule, Arch, was powerful at silencing, but its relative ArchT was even more powerful.
You can also perform a broad search across genomic space. Brown had discovered that a species of fungus, Leptosphaeria maculans, had a light-driven proton pump. We got the gene, which we nicknamed Mac, and we found that it too was capable of silencing neuronal activity. And because Mac had a color shift (this is the action spectrum, current on the y-axis and color on the x-axis), you could express Mac and a more red-shifted opsin in two different neurons and then use two different colors of light to affect them differentially. A neuron expressing Mac would be more strongly silenced by blue light, and a neuron expressing the more red-sensitive molecule would be more strongly silenced by red light.
That molecule that was most strongly silenced by red light was actually the same halorhodopsin I mentioned earlier, which in that 1999 paper had been found to be light-sensitive at chloride concentrations much lower than would be expected. We published the first proof of concept of neuronal silencing with this molecule in 2007, but it was a pretty weak molecule; the currents were not as impressive as we hoped. So we started thinking along the same genomic-search lines: could we find light-driven chloride pumps that are much more powerful?
And could you also find molecules with a red-shifted activation spectrum? Now, why would you want that? Well, when Amy was a PhD student working with me, we started thinking about light propagation in the brain. Of course, many researchers knew this well long before us. If you put blue, green, or yellow light into the brain, there is a lot of absorption and scattering, but if you use red or even infrared light, there is less absorption. That's one of the reasons blood looks red, right? It doesn't absorb as much red light. The top panel shows models and the bottom panel shows actual measurements we made, which suggested that red-shifted molecules could be quite powerful.
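To make that intuition concrete, here is a minimal sketch of exponential (Beer-Lambert-style) attenuation. The attenuation coefficients are assumed, round numbers chosen only to illustrate the trend, not the values measured in the work described here.

```python
import numpy as np

# Beer-Lambert-style attenuation: I(z) = I0 * exp(-mu_eff * z).
# The effective attenuation coefficients below are rough, assumed values
# chosen only to illustrate the trend, not measurements from the talk.
mu_eff_per_mm = {"blue (473 nm)": 1.2, "green (532 nm)": 0.9, "red (635 nm)": 0.4}

depths_mm = np.array([0.5, 1.0, 2.0, 4.0])
for color, mu in mu_eff_per_mm.items():
    remaining = np.exp(-mu * depths_mm)  # fraction of light left at each depth
    print(color, [f"{f:.3f}" for f in remaining])
```

With numbers like these, red light retains a far larger fraction of its intensity at depth, which is the intuition behind seeking red-shifted opsins.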
So, as these things usually start, we began by looking for candidate genes in different genomes and stumbled upon a class of molecules (cruxhalorhodopsins, to use the technical term) that seemed to have a spectrum red-shifted from the original. Then, building on decades of structural work, both point mutagenesis and crystallography, we made a couple of point mutations that increased the current. What we found was that if we expressed the gene for this molecule, which we call Jaws because it comes from the shark strain of..., we could take light from a red laser, shine it even through the intact skull of an awake, behaving mouse, and turn off neurons many millimeters deep in the brain.
Nathan, when he was a PhD student in the group, tried to do something similar but with activators. He ran a very large-scale screen: he computationally searched more than 1,000 plant genomes from a project headed by Wong, called the 1,000 Plants project, identified more than 60 new channelrhodopsins, and expressed them all to test for function. As you can see from the red x's, many of them didn't work at all, but some did; these are his screening photocurrents under red, green, and blue light. And out of this huge search he found exactly one that responded well to red light, which he called Chrimson.
Chrimson allows you to drive neural activity in response to red light. We made a point mutant, ChrimsonR, that has better kinetics. And you can even use light reaching toward the infrared; here, 735-nanometer light causes a neuron to fire in a portion of the mouse visual cortex. So, of course, you can use Chrimson and red light to activate large volumes and reach deep into tissue, but it has also been used in ways that at the time I hadn't even thought about. Colleagues at the Janelia Research Campus really wanted a better optogenetic activator for Drosophila, the fruit fly.
The problem with fruit flies is that if you use blue, green, or yellow light, they have a startle response; I guess they flail around and get scared. But if you use Chrimson and red light, that effect is minimized. They were then able to evoke behavior in Drosophila by activating Chrimson with red light, and its use is now widespread in the fly community, for example. Nathan also found molecules in this screen that were very fast. One, which he called Chronos, is a channelrhodopsin with very fast kinetics, and it has therefore found uses in parts of neuroscience where kinetics matter, such as the auditory system or the stimulation of axons with high firing rates.
Interestingly, these two molecules, Chrimson and Chronos, have also been very valuable because they can be used together. If you look at photocurrent on the y-axis and color on the x-axis, you can see that Chrimson has a peak here in the orange and can be driven well into the red. Here is Chronos in the blue circles, and the original channelrhodopsin-2 in the black triangles. But all of them can be recruited, at least to some extent, by blue light. So we made the observation that if we used a dim blue light to drive Chronos, so dim that it would not actually recruit Chrimson, and then a bright red light to drive Chrimson, we could differentially control two independent populations.
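The logic of that two-color trick can be sketched in a few lines. Everything numeric here is an assumption (hypothetical relative sensitivities, intensities, and a spike threshold), meant only to show why dim blue light recruits the fast blue-sensitive opsin without crossing the red-shifted opsin's threshold, while bright red light does the reverse.

```python
# Toy model: photocurrent is approximated as sensitivity(wavelength) * intensity,
# clipped at saturation. All numbers are illustrative assumptions, not measured spectra.
def photocurrent(sensitivity, intensity, saturation=1.0):
    return min(sensitivity * intensity, saturation)

# Hypothetical relative sensitivities (0-1) at two wavelengths.
sens = {
    "Chronos":  {"blue": 1.0, "red": 0.05},
    "Chrimson": {"blue": 0.2, "red": 1.0},   # note the partial blue cross-activation
}

spike_threshold = 0.3  # arbitrary units

for label, (wavelength, intensity) in {
    "dim blue":   ("blue", 0.4),   # enough for Chronos, below Chrimson's threshold
    "bright red": ("red",  1.0),   # drives Chrimson; Chronos barely responds
}.items():
    for opsin in sens:
        i = photocurrent(sens[opsin][wavelength], intensity)
        print(f"{label:10s} -> {opsin:8s}: current={i:.2f}, spikes={i >= spike_threshold}")
```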
Groups have now used this to look at multiple synaptic inputs to the same cell, or at how a neuromodulatory pathway might affect a given excitatory pathway in the brain, because it gives you differential control over these pathways. So much of the work so far has focused on searching genomes for interesting properties such as extreme color shifts or extreme kinetics. But what about reaching the limits of spatial resolution? Could you gain single-cell, single-synaptic-event, or single-spike control over circuits? So we started collaborating with an expert in holographic neural stimulation, Valentina Emiliani of the Vision Institute in Paris.
This is work that we co-led. She builds microscopes that look like this, where you take a laser, bounce it off a spatial light modulator, and basically project a hologram onto the brain: a three-dimensional sculpture of light, if you will. So we decided, well, what if we could try to create opsins optimized for this purpose? Importantly, we also wanted to ensure that the opsins are localized only to the cell body and not to all the axons and dendrites. This is an idea that several groups had tried, taking the original channelrhodopsin and fusing peptides to it to target it to the cell body.
Now why is that? Well, even if you drive one cell holographically, you will hit the axons and dendrites of its neighbors passing by, and by fusing a peptide to the opsin, you can localize it to the cell body. So we thought, well, we have all these new opsins; what if we do a double screen and look both for new opsins that allow very powerful control when only the cell body is activated and for peptides that target them there? It turns out that one of the molecules Nathan had found in his screen, CoChR (which, if we'd known it was going to be important, we would have given a better name), is a very powerful molecule, with roughly an order of magnitude more photocurrent than channelrhodopsin-2.
And so we thought that if we localized it to just the cell body, that could help compensate, because you are depriving the axons and dendrites of all the current they would normally carry. We then found a peptide that, when fused to CoChR, directs it to the cell body only. Here is a sea of green in this piece of cortex, and here you can see cells with darker spaces between them. So why is that useful? Well, if you record from a cell and then scan your holographic laser over the surrounding tissue, when the opsin is everywhere, about a third of the time Valentina's team saw direct activation of the recorded cell.
But if you confined the molecules to the cell body, the effect dropped to essentially zero. In short, a lot of this search has been a matter of luck, right? Essentially off-the-shelf molecules had speeds, amplitudes, and profiles that made them suitable for controlling neuronal activity. And in recent years we have really tried to push the toolbox to its physical performance limits: maximizing amplitude, accelerating kinetics, shifting colors, and improving spatial precision. But it's interesting to think about the opposite: can we learn from this experience and do something to achieve the opposite goal of imaging neural activity?
Can we get neurons to light up when they are active? In this case, of course, the natural world has not made us so lucky: there is no single natural molecule that converts neuronal activity into light with the appropriate speed, safety profile, and efficacy. The luck we had in optogenetics did not carry over to the inverse problem of imaging. So naturally we started thinking: if the natural world did not evolve these things, why don't we build a robot that does the evolution in the lab? When Erica Jung and Kiryl Piatkevich were postdocs in my group, we decided to try to build, basically, a robotic scientist.
Why can't we build a robot that does what we do when looking for optogenetic tools, but in an automated way? So how do you do that? Well, say you have a set of genes. They can come from nature, or they could be mutants of a parent gene that you want to evolve in some direction. Some of the mutants may be closer to your target property and others less so. We transfect these genes into cultured mammalian cells so that each cell gets one copy of a different mutant. Then an automated microscope can scan and look for cells, and therefore molecules, that have the right speed, safety profile, efficacy, and everything else we want from an indicator: the kinds of properties that, for optogenetics, the natural world happened to provide.
Then we bring in a robotic arm, developed by a collaborator, and we can extract the cells, and therefore the genes, that are interesting. Now, it turns out that Adam Cohen's group here at Harvard had made the serendipitous discovery that archaerhodopsin-3, the molecule I mentioned before that we had found to be a very powerful neuronal silencer, was also a weakly fluorescent voltage indicator. His team went on to create a mutant called QuasAr2 that was brighter than archaerhodopsin-3 but still quite dim and not very well localized to the membrane. Even so, it has proven useful for voltage imaging in cultured neurons.
So we thought, why don't we try to do this at scale? This may be one of the largest directed-evolution screens ever performed in mammalian cells: we made almost 10 million mutants over two rounds of evolution. And we screened for multiple parameters; we wanted the molecule to be bright, well localized, safe, and photostable. Why screen for multiple parameters? Well, if you take a molecule, mutate it, and then select for better mutants along only one axis (for example, brightness), you can drive it away from the other properties you care about.
Evolution doesn't care; it just gets the job done. Here you can see brightness on the y-axis and membrane localization, which is a kind of proxy for safety and function, on the x-axis. Each circle is a different cell containing a different mutant. You can see cells there whose molecules are very well localized but, you know, not so bright, and then molecules that are much brighter but no better localized than their parents. With this approach, we did a couple of rounds of directed evolution and found a molecule we called Archon, which localizes well to the membrane.
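For intuition, here is a minimal sketch of the kind of multi-parameter selection such a screen performs: keep the mutants that are not beaten on both axes at once, a simple Pareto-front filter. The data, scores, and mutant names below are entirely synthetic.

```python
import random

random.seed(0)

# Each simulated cell carries one mutant, scored on two of the screen's axes
# (brightness and membrane localization, arbitrary 0-1 units). Purely synthetic.
cells = [{"mutant": f"m{i}",
          "brightness": random.random(),
          "localization": random.random()} for i in range(1000)]

def dominates(a, b):
    """True if mutant a is at least as good as b on both axes and better on one."""
    return (a["brightness"] >= b["brightness"]
            and a["localization"] >= b["localization"]
            and (a["brightness"] > b["brightness"] or a["localization"] > b["localization"]))

# Keep the Pareto front: mutants that cannot be improved on one axis
# without giving something up on the other.
front = [c for c in cells if not any(dominates(o, c) for o in cells if o is not c)]
front.sort(key=lambda c: c["brightness"], reverse=True)
for c in front[:5]:
    print(c["mutant"], round(c["brightness"], 2), round(c["localization"], 2))
```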
At the bottom left, it has good kinetics, which it inherited from its parent, QuasAr2, and on the right, it has good fluorescence changes and signal-to-noise. So we gave it to groups like Bernardo's, who made measurements of synaptic events in brain slices: they would stimulate one layer of the cortex and image synaptic events in a different layer. Focus on the lower left: black is what you see when you record with a ground-truth electrode, so to speak, and magenta is the unaveraged fluorescence traces from the microscope.
It turns out to be a red fluorescent molecule: you can shine 630-nanometer light, roughly the color of a laser pointer, and it will emit redder light around 660 nanometers or longer. More recently, Seth and Xue Han's group have expressed this in awake, behaving mammals and have been able to visualize normal activity in multiple brain regions: motor cortex, visual cortex, striatum. These look like electrode traces, but they are optical recordings made under a microscope, in this case the epifluorescence, or one-photon, microscope shown at the top left. You can even image a population of neurons in an awake animal, here in the mouse hippocampus, and observe the dynamics of these cells in a local network.
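As a rough illustration of how such optical traces get summarized, here is a small sketch that computes delta-F/F and a crude signal-to-noise ratio on a synthetic fluorescence trace. The baseline, transient size, and noise level are all assumptions, not real indicator data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic fluorescence trace: baseline plus a few brief spike-like transients
# plus noise. Purely illustrative; the numbers are not real voltage-imaging data.
t = np.arange(0, 2.0, 0.001)                     # 2 s sampled at 1 kHz
f = 100.0 + rng.normal(0.0, 1.0, t.size)         # baseline fluorescence with noise
for spike_time in (0.3, 0.9, 1.5):
    f += 8.0 * np.exp(-((t - spike_time) / 0.003) ** 2)   # ~3 ms transients

f0 = np.percentile(f, 20)                        # robust baseline estimate
dff = (f - f0) / f0                              # delta-F over F
snr = dff.max() / dff[:200].std()                # peak response vs baseline noise
print(f"peak dF/F = {dff.max():.3f}, crude SNR = {snr:.1f}")
```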
Importantly, since this is a red fluorescent molecule, you can use it together with blue-light optogenetics and drive neurons while imaging them. So you can imagine perturbing neural activity in a closed-loop fashion while also watching the cells' voltage. In short, we wanted to do for imaging what the natural world had done for optogenetics, and it turns out that an optogenetic tool can be mutated into a quite useful fluorescent indicator of neural activity, in this case membrane voltage. But ideally you would be able to image a circuit and perturb it while also knowing something about how these cells are connected to other cells: upstream cells that provide inputs, downstream cells that receive outputs.
How do you know what the network looks like? So in the last few minutes, I want to talk about some newer work we've been doing, which I think will be very useful for building a pipeline that generates new hypotheses to be tested with optogenetics. This is a method we developed to map brain circuits. Now, why is this difficult? Well, many people are using electron microscopy to map brain circuits; some here at Harvard have been pioneers in the field of connectomics, mapping the brain with large-scale electron microscopy. But it is very difficult to see molecular information with electron microscopy.
There is also super-resolution microscopy; STORM microscopy was invented here at Harvard. But it is difficult to extend it to large 3D structures because of the physical constraints of super-resolution imaging. So, starting with two graduate students, Fei Chen and Paul Tillberg, and now with half our group working on this, we decided: what if, instead of zooming in on the brain, we could physically expand it? What if you could install a dense, spiderweb-like mesh of swellable material, like the material in baby diapers, around and between all the biomolecules in a cell, soften the sample by treating it with chemicals, add water, and literally blow up the brain and make it bigger?
This owes a debt to many older lines of research. My MIT colleague Toyoichi Tanaka, who unfortunately died relatively young of a heart attack, was studying the physics of these highly swellable polymers in the early 1980s. In this cartoon, you see the white polymer mesh; add water and it is taken up by osmosis, and the polymer swells. More importantly, it is a highly charged polymer, so the physical growth can be enormous in a very short time. He published a beautiful paper studying the phase-transition physics as the polymer increases 1,000 times in volume in a matter of minutes.
Embedding samples in polymer also has a long history. People like Peter were using uncharged hydrogels, like polyacrylamide, embedding samples in them to improve their images. So if you could synthesize this dense, spiderweb-like mesh but make it a charged polymer, you could try to take a brain cell like the one on the left and separate the building blocks of life from one another, to make something more like what is on the right: a constellation of biomolecules floating in space, but with their relative organization preserved. So how do we do it?
Well, we had to invent a couple of chemistries. In this cartoon, the proteins are shown in brown. We had to invent handles that would bind to DNA, RNA, and proteins (and now we are even working with sugars and lipids), putting small anchors or handles on all of them so we can apply force and separate them. Then we have to make the polymer. We use free-radical polymerization to synthesize the polymer hydrogel mesh, except that we use a charged monomer, sodium acrylate, to form a polyacrylate mesh. The spacing between the polymer chains is very small, about the size of a biomolecule.
When these chains meet the handles or anchors, they form a bond. Finally, we soften the tissue by adding detergents, heat, or even enzymes to break things down, and then we add water. The polymer swells, following the physics Tanaka had worked out so beautifully long ago, but this time the biomolecules go along for the ride. We published the initial discovery that we could uniformly expand biological specimens in 2015. Panel B is a piece of mouse brain. The polymer is very, very dense, so the spacing is, again, at the biomolecular scale. After the process, this piece of tissue grows to be like the one on the right: approximately 100 times larger in volume.
Now, by design we made the mesh dense and uniformly synthesized because we wanted a uniform expansion process. But this is biology; it is not enough to design it, you have to test it. That's why we, and many others, have been doing very detailed control experiments in which we take a pre-expansion image with a classic nanoimaging method such as STORM, then take a post-expansion image after expanding the sample, and compare them. The distortion is not zero, but it is really small, maybe a couple of percent over length scales of tens to hundreds of microns.
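Two small calculations make these numbers concrete: the linear expansion implied by a roughly 100-fold volumetric expansion, and a toy version of the pre- versus post-expansion landmark comparison used to estimate distortion. The landmark coordinates below are synthetic, assumed for illustration only.

```python
import numpy as np

# A ~100-fold volumetric expansion implies roughly a 100**(1/3) ~ 4.6-fold
# linear expansion.
linear_factor = 100 ** (1 / 3)
print(f"linear expansion factor ~ {linear_factor:.2f}x")

# Toy distortion check: compare matched landmark positions from a pre-expansion
# image and a rescaled post-expansion image. The landmark data are synthetic.
rng = np.random.default_rng(0)
pre = rng.uniform(0, 100, size=(50, 2))                        # positions in microns
post = pre * linear_factor + rng.normal(0, 0.5, size=(50, 2))  # expanded + small error

post_rescaled = post / linear_factor
errors = np.linalg.norm(post_rescaled - pre, axis=1)
rms = np.sqrt(np.mean(errors ** 2))
print(f"RMS registration error ~ {rms:.2f} um over a ~100 um field")
```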
Here on the left, you can see a part of the mouse brain: the cortex and the hippocampus. This is a Thy1-YFP mouse that Guoping Feng, Josh Sanes, and their colleagues made many years ago. We are going to zoom from top to bottom; we expand that white square. You can see two cell bodies and some purple dots, which are synapses we stained with antibodies. Zoom in again, and the purple dots become blurry because you have reached the resolution limit of our confocal microscope. That is all before expansion, but after we expand, you can now clearly see the pre- and postsynaptic sides of these neural connections.
Blue is an antibody against the presynaptic protein Bassoon. Magenta is the image taken with an antibody against Homer1a, a postsynaptic protein. And the distance between these two protein densities is the same one that Catherine Dulac and Xiaowei Zhuang measured many years ago with STORM microscopy, except that now you can use hardware that most groups already have. We worked with Eric Betzig's group to apply the light-sheet microscopes they had invented to expanded brain tissue, and the result is that we are now several orders of magnitude faster than competing technologies of equivalent resolution.
It's just a matter of engineering to make microscopes go faster. This was work co-led by our group and Eric Betzig's group. Images of mitochondria and lysosomes are at the top, and myelin at the bottom. We can observe synapses, dendritic architecture, and axonal architecture throughout the thickness of the cortex. Our hope is that we can get another 50,000-fold in speed simply with more engineering, hopefully not too many months from now. So the nice thing here is that you can really image at scale across extended neural circuits without losing sight of the nanoarchitecture of what's in a brain.
Here is the same color code as before: synaptic proteins in blue and magenta, and now YFP in yellow. We are now at the millimeter scale, but we can zoom in and get very close to individual synapses. This is a rather long movie, so for the sake of time I'm going to skip to the part where we start zooming out and looking at more context, and then you can zoom back in and see the details. This is a movie we made of a whole fruit fly brain in which dopaminergic neurons express a protein.
I just like it because it feels like a roller coaster as you fly through. We go through the ellipsoid body there, and now we dive into the more lateral parts of the fruit fly brain. I hope you can see that we can resolve individual axons and dendrites, but we can also zoom out and see the entire brain. So why is this useful? Well, you can actually start looking at brain wiring. Here is Brainbow, another Harvard technology, from Jeff Lichtman, Josh Sanes, and others, in which fluorophores are expressed in combinations in brain cells.
This blue cell received a fluorophore delivered by a virus; this green cell has a different one; this aquamarine one could have received a copy of each. If you zoom in on two axons (we are in the mouse hippocampus here), you quickly reach the resolution limit of the microscope, and it's hard to see these axons. It's blurry, right? What shape is this green banana? But after expansion, we can cleanly resolve the individual axons in this bundle. So now we and others are trying to build machine-learning methods to automatically trace neural circuits that are color-coded with a strategy like Brainbow, using expansion to give resolution at scale.
To summarize, we discovered that biological systems can be physically magnified. This technique has really started to catch on, with discoveries being published every week in a wide variety of species: not just brain cells, but Giardia parasites, E. coli in the lower left, planarian and kidney specimens in the upper right, and the list goes on and on. My last slide is what I would really like to see: can we integrate these tools into a pipeline? Suppose that with expansion-based mapping you can create complete maps of brain circuits, and then you can go in and look at neural activity using fluorescent voltage indicators and other signals that neurons make.
And then you bring in optogenetics and use it to do a causal test of what a pattern of activity means. Can we assemble this into a pipeline that could, who knows, maybe even produce computational models of how neural circuits work, or how they fail in dysfunction? So, along the way, I would like to acknowledge all the people who have led specific projects. I'll put up this slide, which I don't have time to go through; I will just acknowledge the people in our group and among our alumni, at the top, who helped with these projects, and the even longer list of people in the middle who collaborated with us to make this a reality.
Neuroscience is truly an omnidisciplinary field today. So I hope you can use these techniques in your group; we have a great teaching culture, and feel free to email me if you have any questions. Thank you very much, Ed. Now we have a break of about 20 minutes, and we will return with the second session. Well, if everyone can take a seat please, we're ready to start again. Welcome back to the second half of our Alpert Prize Symposium. We will follow the same format, with each of the two awardees' talks paired with a companion talk. So our first talk is by Gero Miesenbock.
Gero is the Waynflete Professor of Physiology at the University of Oxford and director of the Centre for Neural Circuits and Behaviour. He is from Austria and received his medical degree in Innsbruck, where he studied really classical physiology, and then took a remarkable reductionist turn and came to the United States to work at Yale with Jim Rothman on the mechanisms of the vesicular secretory pathway in cells. And what I really like about Gero is that he is very motivated by biological problems, and when he faces a problem in biology, he creates a tool to solve it.
So when he was in Jim's lab, he invented what I think was really the first GFP-based sensor of a cellular process, or at least one of the first, synaptopHluorin, in which he exploited and modified the pH sensitivity of GFP to create a protein whose fluorescence changes as it moves from the acidic environment of the secretory pathway to the extracellular space. This has been very important as a tool that is still used today to monitor, among other things, the release of neurotransmitters from neurons. When he set up his own laboratory, he chose something between the secretory pathway and mammalian physiology: he studied the nervous system of Drosophila and tried to understand how the brain of that smaller animal controls its behavior.
Again, faced with problems, he invented various forms of optogenetics. One was to reconstitute, as we've heard before, the fly's entire visual transduction pathway in neurons to make those cells sensitive to light. It was a great demonstration, though I don't think he really used it for biological discoveries. So he went on to invent a second approach, which was to exploit ion channels not present in the fruit fly for which he could design light-activated ligands, and he has used it extensively to make fundamental discoveries about the relationship between activity and behavior in the fruit fly. I have great admiration for his work.
I think he's going to talk to us about some biology today, including some wonderful work on the basis of the sleep drive in fruit flies. Thank you. Thank you very much, Bernardo, for this very, very kind introduction. Obviously it is a huge honor and a huge pleasure to be here. In fact, the honor and pleasure are so great that I decided to share them with my double. This is Dr. Gero. We have more in common than just a name: he is also a scientist, and a mad one, in the Japanese comic Dragon Ball, and he fights for world domination, just like me.
And if you look closely, you can see that his skull has been replaced by a transparent plexiglass dome, so that the function of certain genetically engineered neural circuits in his brain can be controlled with light. And that's what today is about. Now, what motivated the invention of optogenetics about 20 years ago was the idea that a technology like this would open three experimental doors for neuroscience that until then had been closed. The first of these doors was the ability to pinpoint the neural causes of behavior with much greater precision than had previously been practical. And this idea really reflects my scientific upbringing as a postdoc with Jim Rothman, where the mantra I was exposed to on a daily basis was reconstitution, reconstitution, reconstitution.
In other words, if you are a biochemist and want to understand how a biochemical process works, what you do is purify the responsible actors and then put them back together, reconstituting the biological process from these pure components. So when I started my own lab, I thought: what would be the equivalent of biochemical reconstitution for a neuroscientist? And, of course, that equivalent is to metaphorically purify the patterns of electrical activity that sustain our mental life and replay them in a nervous system. If, in this way, you can reconstitute perception, action, emotion, and thought, then you have a credible claim that you really understand how these mental events arise from the physics of the nervous system.
The second experimental door that I thought optogenetics would unlock was the probing of neural connections, which is classically done painstakingly with paired electrode recordings and, in more modern but equally painstaking approaches, through large-scale reconstructions of neural circuits. An alternative approach, of course, would be to replace one of the stimulating electrodes with a beam of light that can be projected through the tissue, and then simply listen with a recording electrode for each time the light beam hits a connected partner, thereby unraveling synaptic connectivity. And the third experimental door, of course, is the testing of mechanistic ideas: if you have a guess about how a system works, the only way to determine whether that guess is right or wrong is to interfere with the process specifically.
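The connection-probing idea behind that second door can be written down very compactly: scan a light spot over candidate presynaptic sites while recording from one postsynaptic cell, and call a site connected when the evoked response clears a threshold. The sketch below uses simulated responses and assumed numbers, not data from any real experiment.

```python
import random

random.seed(42)

# Hypothetical ground truth: which candidate presynaptic sites on an 8 x 8 grid
# are actually connected to the recorded cell (about 10% of them).
true_connections = {(x, y): random.random() < 0.1 for x in range(8) for y in range(8)}

def evoked_response_mv(site):
    """Simulated postsynaptic response to photostimulating one grid site.
    Connected sites evoke a ~1.5 mV PSP plus noise; unconnected sites give noise only."""
    return (1.5 if true_connections[site] else 0.0) + random.gauss(0.0, 0.1)

threshold_mv = 0.5
detected = [site for site in true_connections if evoked_response_mv(site) > threshold_mv]
print(f"detected {len(detected)} putative inputs; "
      f"ground truth has {sum(true_connections.values())}")
```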
So for much of the rest of my talk today, I'll relate some of our recent work on a biological problem where optogenetics has really opened these three experimental doors for us. And that problem is the biological function and neural control of sleep. Sleep is one of the great biological mysteries. Every night, we disconnect from the world for seven or eight hours, a state that leaves us vulnerable and unproductive. And yet, despite these risks and costs, we still have no idea what sleep is good for. We are trying to get at the biological role of sleep by understanding its neural regulation based on the premise that somehow the brain's sleep control systems must respond to molecular changes that are closely related to the central function of sleep.
It is widely believed that there are two such control systems in our brain which are symbolized in this classic diagram by two different forms of oscillation. The sine wave represents the well-known circadian clock that oscillates in sync with predictable external changes caused by the Earth's rotation. As such, it is a purely adaptive mechanism that ensures that we sleep when it best suits our lifestyle. But understanding the clock is unlikely to clarify the deeper mystery of why we need sleep in the first place. We believe the solution to that mystery will come from understanding the second control system: the sawtooth oscillation that is superimposed on the circadian clock.
And that sawtooth-shaped oscillation represents the sleep homeostat. The homeostat measures something that happens in our brain or body while we are awake. That something accumulates or is depleted (the two are logically equivalent) during wakefulness, and when a certain threshold is reached, we go to sleep. The process reverses while we sleep, and the cycle begins again when we wake up the next morning. We know a lot about the circadian clock, and the Rosetta Stone that opened up that problem was the discovery by Seymour Benzer and his graduate student Ron Konopka, almost 50 years ago, of fruit flies whose circadian clocks ran abnormally fast or slow.
From that discovery followed, through the work of many laboratories over the past five decades, a fairly complete molecular, cellular, and systems-level understanding of circadian timing. This slide, by contrast, summarizes virtually everything we know about the molecular basis of sleep homeostasis. It is an exaggeration, but perhaps not too serious a one. My goal for the next 20 minutes will be to draw at least some outlines on that blank canvas. Conceptually, we know how a homeostat should work. It is a relaxation oscillator, a bistable system that switches between a filling mode and a discharging mode, where waking corresponds to the filling mode, in which something called sleep pressure builds up.
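The discharging half of the cycle is described just below; as a minimal numerical sketch of the whole fill-and-discharge picture, one can integrate a single "sleep pressure" variable with two switching thresholds. All rates and thresholds here are arbitrary assumptions, chosen only to produce the sawtooth.

```python
# Minimal relaxation-oscillator sketch of the sleep homeostat: pressure rises
# during wake, dissipates during sleep, and the state flips at two thresholds.
dt_hours, total_hours = 0.1, 48
pressure, awake = 0.0, True
rise_rate, decay_rate = 1.0, 2.0     # pressure units per hour (assumed)
upper, lower = 8.0, 1.0              # switching thresholds (assumed)

t = 0.0
trace = []
while t < total_hours:
    pressure += (rise_rate if awake else -decay_rate) * dt_hours
    if awake and pressure >= upper:
        awake = False                # tipping point: switch into discharge (sleep)
    elif not awake and pressure <= lower:
        awake = True                 # pressure dissipated: wake up again
    trace.append((round(t, 1), round(pressure, 2), "wake" if awake else "sleep"))
    t += dt_hours

# Print one sample per simulated hour to see the sawtooth cycle.
for row in trace[::10]:
    print(row)
```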
Once a tipping point is reached, the system switches into discharging mode, and the built-up sleep pressure dissipates. Toward the end of my talk, I hope to propose a molecular interpretation of what sleep pressure is, where it builds up in the brain, and what processes underlie this bistability, this switch between a filling mode and a discharging mode. The story begins, like that of Seymour Benzer and Ron Konopka, in fruit flies, with the discovery by a former postdoc in the lab, Jeff Donlea, when he was actually a graduate student with Paul Shaw at Washington University in St. Louis, of neurons in the fruit fly brain that exert a powerful influence on sleep and wakefulness. There are about two dozen of these neurons. They are labeled here by a genetic driver element called R23E10, so every time you see that string of symbols, you know that a genetic manipulation is selectively targeting these two dozen of the roughly 100,000 cells that make up the fruit fly brain. The neurons project to this inverted-V structure at the midline, a particular layer of the fan-shaped body of the central complex. Why such a small number of neurons can exert such a powerful influence on what are probably the most dramatic global state transitions we experience every day is another mystery, but a topic for a different talk.
Together, Jeff and I discovered that these neurons represent the output arm of the sleep homeostat. The neurons themselves, I should say, were originally identified in an activation screen in which flies' brains were randomly peppered with activator molecules. So this is an example of an optogenetic, or in this case thermogenetic, application in which the neural substrates of behavior can be identified in an almost classical forward genetic screen. The way we normally do our experiments is to head-fix a fly and let it walk on a spherical treadmill, a polystyrene ball, whose rotations we read out with an optical computer mouse.
And since there are no documented cases of sleepwalking in flies, we know that whenever the ball spins, the fly must be awake. What you can't see is that the head capsule is actually open: we inserted a patch electrode into one of these 24 sleep-control cells and expressed an optogenetic actuator in the entire population. In this way we can optically control the electrical activity of these neurons and, at the same time, record from one member of the population as a readout. And this is now an experiment that lasts half an hour.
You will see that the fly starts out awake, walking along happily; the ball is spinning, these are the rate traces, and the sleep-control neuron is completely silent. After three or four minutes we turn on the light, and you can see that the neuron whose activity we are recording begins to fire electrical impulses, and all movement stops practically instantly. At about 19 minutes, we turn the light off again; the sleep-control neuron falls silent, and movement quickly resumes. So we have isolated a switch in the animal's brain that allows us to put it to sleep and wake it up on command.
Now, in many of these recordings, we found that when we targeted one of these sleep-inducing cells with our patch electrode, the neuron was generally in one of two states. In one state, shown here on the left, the neuron behaves as a well-behaved neuron should: it responds to injections of depolarizing current with action potentials whose frequency increases gradually with the amplitude of the injected current. The neuron on the right, by contrast, initially didn't look like a neuron at all. You can see that we can depolarize this cell to positive membrane voltages and still not squeeze a single electrical impulse out of it.
It is not just the active properties of the membrane that have changed; so have the passive properties. If you compare the size of the voltage steps caused by current injections of a standard size, you can see that the voltage deflections on the left are very large, suggesting that this neuron opposes the injected current with a large resistance, while the voltage deflections on the right are much, much smaller. Furthermore, the neuron on the right takes much less time to settle at a new equilibrium membrane potential after a current step, while the neuron on the left takes much longer.
This combination of a short membrane time constant and low input resistance on the right is almost diagnostic of the opening of a current leak, or current shunt, and I'll show you in a few minutes what the molecular basis of that shunt is. Now, when we saw this, we immediately thought: maybe this is the homeostatic control mechanism of sleep. Perhaps these neurons naturally switch between the electrically active state and the silent state depending on whether the fly is asleep or awake. And our sampling of flies whose sleep histories had been manipulated confirmed this prediction anecdotally.
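Before moving on, a small worked example of the passive-membrane arithmetic behind that diagnosis may help: with the membrane time constant given by tau = R_in x C_m, opening an extra leak conductance lowers the input resistance, the time constant, and the voltage deflection for a fixed test current all at once. The capacitance, resistance, and conductance values below are assumed, round numbers, not measurements.

```python
# All values are assumed, round numbers chosen only to illustrate the arithmetic.
C_m = 100e-12          # membrane capacitance: 100 pF
R_active = 500e6       # input resistance in the excitable state: 500 MOhm
g_shunt = 8e-9         # extra leak conductance when the shunt opens: 8 nS

R_shunted = 1.0 / (1.0 / R_active + g_shunt)    # conductances in parallel add

for label, R in [("active state", R_active), ("shunted (silent) state", R_shunted)]:
    tau_ms = R * C_m * 1e3          # membrane time constant, tau = R * C
    dV_mV = 10e-12 * R * 1e3        # deflection for a 10 pA test current step
    print(f"{label:24s} R_in = {R/1e6:5.0f} MOhm, tau = {tau_ms:5.1f} ms, dV = {dV_mV:4.1f} mV")
```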
But of course, to make this point, to demonstrate that a neuron transitions between these two states based on its sleep history, one would like to be able to control that transition directly. To do that, one needs to know a signal that normally acts on these neurons and flips the switch. What could that signal be? Well, a clue to its identity had come from the first experiments in which the behavior of an animal was controlled optogenetically, performed by my then graduate student Susana Lima at Yale in 2004.
What Susana had done was express light-activated ion channels in all the dopaminergic neurons of the flies' brains and then record their movement trajectories for two minutes before and after activating the dopaminergic system. Examples of these trajectories are shown here, in a circular arena, for four animals before and after activation, and it is clear that dopamine has a very arousing effect in flies, as it does in mammals. Most psychostimulants (cocaine, amphetamine), of course, act by inhibiting the reuptake of dopamine at the synapse and therefore elevating synaptic dopamine levels.
So one potential signal that might act on these sleep-inducing cells is an arousal-promoting dopaminergic projection. And, in fact, there is a class of dopaminergic neurons that extend their processes to exactly the region of the brain where the sleep-inducing neurons live. The two sets of processes track each other so closely that the question naturally arises whether they are synaptically connected. Optogenetics gives us the tool to probe these connections: we record, with a patch electrode, from one of the sleep-control neurons while we manipulate the activity of the putative presynaptic partners, the dopaminergic neurons.
So this is an experiment in which we start with the sleep-inducing neurons in the electrically active state and then optogenetically activate the dopaminergic projections that innervate them. We can even predict what the effect of such an arousing dopaminergic signal should be: it should silence the neurons and switch them off, and this should be the mechanism underlying awakening. And that is exactly what happens. You can see that after dopamine is released, the neuron falls silent and the action potentials disappear. Furthermore, the passive membrane properties change: the input resistance drops and the membrane time constant becomes shorter. Importantly, if we hold the recording long enough, which we can do in some cases, we see that these changes are completely reversible over a prolonged time frame.
So this suspension of electrical activity is temporary; it is part of the normal duty cycle of these neurons and not an artifact caused by our experimental manipulations. If we use the passive membrane properties, the input resistance and the time constant, as a readout of the kinetics of these changes, we can see that the switch occurs rapidly, with a time constant of about one minute, which is of course too fast to be explained by the production of new ion channels and must instead involve modulation of the existing channel repertoire of these neurons. We can also demonstrate that the action of dopamine on the sleep-control neurons is direct, because we have identified the dopamine receptor on these neurons that mediates the effect.
If we selectively remove that receptor from these neurons using RNAi, the neurons become resistant to the dopaminergic signal and the flies become unable to wake up: they literally doze their existence away, spending 23 and a half hours a day asleep. Now, the ability to control this excitability switch of the sleep-control neurons also gave us the means to analyze the underlying biophysical mechanism. For reasons of time, I will only summarize the results. What we discovered is that two potassium channels are modulated antagonistically between the electrically active state, which corresponds to sleep, and the electrically silent state, which corresponds to wakefulness.
There is the classic voltage-gated Kv1 channel, Shaker, which is upregulated in the electrically active state, and a potassium channel we discovered and named Sandman, which translocates to the membrane of the sleep-inducing neurons when dopamine shuts off their electrical activity. It is the potassium current through this leak channel that underlies the shunt you saw in the electrophysiology; it is responsible for the short membrane time constant and low input resistance of these neurons. Knowing the biophysical basis of this transition between sleep and wakefulness allows us to reframe the biological question of sleep versus wakefulness, what is the biological purpose of sleep, as a mechanistically well-defined problem.
We can ask what signal or process activates the sleep-inducing neurons of the dorsal fan-shaped body. In fact, we can make the question even more mechanistically precise because we know the crucial roles these two potassium channels play: any sleep-inducing signal detected by these neurons must ultimately act by upregulating the Shaker current and driving internalization of the Sandman channel that acts as a brake on their electrical output. For the rest of my time, I will focus on our understanding of Shaker regulation, where there has been more recent progress than on the control of Sandman.
Now, Shaker, like many voltage-gated potassium channels, is a beautiful structure composed of two different types of subunit. Shown here in gray is the pore-forming alpha subunit, to which a beta subunit, shown in blue, is attached on the cytoplasmic side. If you zoom in on the beta subunit, you will see that it has a small-molecule cofactor bound to it, shown here in red: the nicotinamide cofactor NADPH. The enzymatic nature revealed by this structure, solved by Rod MacKinnon, was not unexpected, because when the first of these potassium channel beta subunits was cloned about 25 years ago, the sequences already suggested that they were enzymes, specifically oxidoreductases.
And that raised the question: Are these molecules voltage-gated enzymes or are they redox-gated ion channels? I will present you with evidence that they are indeed the latter and that their ability to detect changes in cellular redox chemistry is an integral component of sleep regulation, and perhaps even causally related to the biological function of sleep. I will also argue that it may actually be the interaction between the channel pore and the active site of the enzyme that is the fundamental accounting principle underlying sleep homeostasis. It was also noted early on that even though these molecules, these potassium channel beta subunits, clearly resemble aldo-keto reductases, they are terrible enzymes.
They have very, very low turnover numbers, and one of the structural reasons is evident here: if you look closely, you can see that the binding cleft in which NADPH sits is almost latched shut by a tryptophan residue that locks the cofactor in place. It is this obstacle to cofactor exchange that slows the turnover of the enzyme, and we believe it is an essential feature for the ability of these neurons to monitor changes in sleep pressure. Now, in fruit flies, the Kv beta subunit is the product of a gene called Hyperkinetic, which Chiara Cirelli and Giulio Tononi discovered more than 10 years ago causes insomnia when mutated, just as mutations in the Shaker alpha subunit produce insomniac flies. Here, we have reproduced those experiments, showing that hyperkinetic mutant flies are indeed insomniacs and that we can rescue their insomnia by restoring the function of the wild-type Hyperkinetic protein only in these 24 sleep-regulating neurons. This indicates that these cells are the site of action of the sleep-relevant protein. Surprisingly, if the rescue construct carries a single point mutation that allows normal expression, folding, and association of the beta subunit with the channel but abolishes its catalytic activity, the rescue no longer works.
The insomniac flies remain sleepless. This suggested to us that Hyperkinetic's role in sleep regulation must be linked to its ability to bind the NADPH cofactor and detect changes in the cellular redox state. Two predictions followed from that inference: first, changes in redox chemistry should accompany changes in sleep pressure; and second, if we could somehow perturb the redox chemistry of these sleep-control neurons, that should have consequences for sleep. From this inference, these two predictions, and our knowledge of intermediary metabolism also follows the conclusion that the dFB neurons probably monitor these redox processes as an indicator of energy metabolism, because that is, of course, where cellular redox chemistry is ultimately determined: specifically, in how electrons derived from food are handled in the mitochondrial electron transport chain.
When we stumbled into this particular area of research, we certainly needed a bit of a refresher on mitochondrial electron transport, and I suspect the same may be true for you. So here is a very simple review. There are three proton-pumping complexes in the inner mitochondrial membrane: complexes I, III, and IV. Electrons derived from food are accepted in the form of NADH, mainly from the Krebs cycle but also from the oxidation of fatty acids. These electrons are then transferred in a very carefully controlled manner (combustion with oxygen is, after all, explosive by nature) from one complex to the next, using two mobile carriers: ubiquinone, or Q, between complexes I and III, and cytochrome c between complexes III and IV.
The proton gradient that builds up across the inner mitochondrial membrane is then, of course, used by the proton-driven turbine, ATP synthase, seen on the right, which phosphorylates ADP to ATP. So what you see here is a condition where the demand for ATP is high. There is a high level of ADP and there is a sufficient supply of NADH fuel. Therefore, supply and demand are in balance. But when that is not the case, that is, when there is an excess of NADH, but the ATP stores are full and the proton motive force is large, then ATP synthase slows down.
Electrons are still fed into the transport chain at complex I, but they have nowhere to go. They accumulate, mainly in the ubiquinone pool, and begin to transfer directly to molecular oxygen, producing the oxygen free radical superoxide. So consider these sleep-inducing neurons during the waking state. Sandman, as you may recall, is then inserted into the membrane and shunts their electrical activity, sparing them the cost of energetically expensive action potentials. But the animal, being awake, has just had breakfast or lunch and has ample caloric reserves. That leads to exactly these conditions: electrons are fed into the mitochondrial transport chain, yet there is little demand for ATP synthesis, which should make these neurons particularly prone to mitochondrial oxidative stress.
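That supply-and-demand argument can be caricatured in a few lines of bookkeeping: electrons enter the chain at a rate set by fuel supply, leave at a rate capped by ATP demand, and any backlog in the ubiquinone pool leaks to superoxide. The rates below are arbitrary assumptions, intended only to show that a well-fed cell with low ATP demand accumulates far more superoxide than one with high demand.

```python
# Toy supply-and-demand bookkeeping for the electron transport chain. Electrons
# enter from NADH, leave at a rate capped by ATP demand, and any backlog in the
# ubiquinone pool leaks to superoxide at a small rate. All parameters are assumed.
def superoxide_after(nadh_supply, atp_demand, hours=8.0, dt=0.01):
    q_pool, superoxide = 0.0, 0.0
    for _ in range(int(hours / dt)):
        inflow = nadh_supply
        outflow = min(q_pool / dt, atp_demand)   # electron flux limited by ATP demand
        leak = 0.02 * q_pool                     # leak to superoxide (assumed rate)
        q_pool += (inflow - outflow - leak) * dt
        superoxide += leak * dt
    return superoxide

print("well fed, low ATP demand (quiet, awake):", round(superoxide_after(1.0, 0.2), 2))
print("well fed, high ATP demand (active cell):", round(superoxide_after(1.0, 1.5), 2))
```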
To test this idea, we filled the mitochondria of the sleep-inducing neurons with a protein called MitoTimer, a derivative of green fluorescent protein whose chromophore irreversibly converts from green to red as it is oxidized. It is a kind of integrating indicator of mitochondrial oxidative burden. We expressed this protein in these 24 sleep-inducing neurons and then visualized their dendritic fields. What you see here are two-photon stacks through the dendritic fields of flies whose sleep histories differ. And you can see that in the flies that have been forcibly kept awake (that's the top row) there is a clear red shift of the MitoTimer fluorescence, suggesting that these sleep-deprived animals really do suffer a greater degree of oxidative stress than the well-rested flies in the bottom row.
Now, what we also noticed, when we measured the baseline sleep of the flies that expressed this reporter protein in the mitochondria of just these two dozen sleep-controlling neurons, is that there was a small but significant observer effect: the flies that had MitoTimer in their mitochondria lost a small but significant amount of sleep, on average about two hours daily. And we suggest that this reflects the fact that as MitoTimer is oxidized, it actually acts as a buffer for oxygen free radicals, and it is the consumption of these oxygen free radicals that is reflected in reduced sleep.
To test this notion more cleanly, we looked for better tools, and probably the best one in existence is a molecule of plant origin. I guess this is another theme today: many of the best tools come from unexpected parts of the living world. Many plants have branched mitochondrial electron transport chains with a second terminal oxidase. Our terminal oxidase is complex IV. Plants have an alternative oxidase, called AOX, that draws directly on the ubiquinone pool and acts as an overflow valve when too many electrons accumulate in that pool. So it is not an uncoupler; it does not interfere with energy metabolism.
It simply takes the surplus electrons and detoxifies them by transferring them to molecular oxygen to produce water. So when we introduced this particular molecule into the inner mitochondrial membrane of these 24 neurons, we saw dramatic sleep loss, almost eight hours per day. Limiting mitochondrial reactive oxygen species production at the source therefore appeared to relieve the pressure to sleep. Now, in animals, the typical antioxidant defenses are two enzymes, the superoxide dismutases. We manipulated both; I am showing you the results with just one, the cytoplasmic form, SOD1. Expressing it in its antioxidant form has the intended effect, namely a reduction in sleep, but there is also a point mutation that converts the antioxidant into a pro-oxidant, and introducing this particular variant has the opposite effect: it increases sleep.
But the increase in sleep is blocked if we remove the Hyperkinetic beta subunit or the Shaker alpha subunit of the potassium channel from these neurons. So, taken together, we believe these behavioral and imaging results suggest that the potassium channel beta subunit indeed couples mitochondrial electron transport to sleep. Now, you are probably wondering how an extremely short-lived agent like superoxide or hydrogen peroxide, short-lived because it is so highly reactive, can serve as a signal that is transmitted from the mitochondrial electron transport chain all the way to a potassium channel beta subunit suspended from the plasma membrane.
What we think is that we are probably missing a crucial biochemical link in this signaling chain, and we also have a hypothesis about what this intermediary might be. We believe it is lipid peroxidation products derived from mitochondrial membrane lipids. These lipids are among the most susceptible targets for oxygen free radicals. More importantly, they give rise to compounds like the 4-oxo-2-nonenal you see here, which are established substrates of the potassium channel beta subunit. The beta subunits are oxidoreductases or, specifically, aldo-keto reductases: they take carbonyl compounds and reduce them to alcohols.
And 4-oxo-2-nonenal is obviously an aldehyde, so it has the right chemistry to be reduced to an alcohol, and that reduction would then be coupled to the oxidation of NADPH to NADP+. There are several additional pieces of evidence suggesting that this is a likely candidate. One is that Rod MacKinnon, analyzing the structure, notes that the active site of the beta subunit is unusually hydrophobic. He also notes that there is an ill-defined electron density in the active site. To me, this suggests that it is a lipid, or a mixture of lipids, bound in the crystal. And of course, since fatty acids are heterogeneous, the degradation products they produce through peroxidation will also be heterogeneous and will not produce a clear diffraction pattern in the crystals.
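As a worked example of the proposed chemistry (a generic aldo-keto reductase reaction, with R standing in for the lipid-derived carbonyl compound; this is textbook chemistry, not a quotation from the talk), the aldehyde would be reduced to an alcohol at the expense of the bound cofactor:

```latex
\mathrm{R\text{-}CHO \;+\; NADPH \;+\; H^{+} \;\longrightarrow\; R\text{-}CH_{2}OH \;+\; NADP^{+}}
```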
So, the idea is that the molecular signal that conveys rising sleep pressure in these neurons is the progressive oxidation of the cofactor in the potassium channel beta subunit from NADPH to NADP+, and that this is somehow related to sleep induction. So, of course, we looked for ways to test this idea causally, and we found, or rather adapted, an optogenetic tool that would allow us to do just that and, with a pulse of light, flip the redox state of the cofactor bound to the potassium channel beta subunit.
The tool we used was developed by the late Roger Tsien, who has also been mentioned several times today, as a genetically encoded contrast agent for electron microscopy. The tool is called miniSOG, for mini singlet oxygen generator. It is an engineered flavoprotein that, in this case, we have anchored, via a lipid modification, to the inner leaflet of the plasma membrane, very close to the potassium channel. Upon illumination, we expect this tool to oxidize the NADPH cofactor, either directly or through lipid peroxidation products generated locally. And if our idea is correct, that should lull the flies to sleep.
And as you can see in these experiments, that was indeed the case. The crucial column to look at is the center one. In all cases, we measured the sleep of individual flies (each row is one fly) for 30 minutes after an initial 9-minute exposure to blue light. And you can see that, compared to their parental controls, the flies that carry miniSOG fall asleep at a much higher rate and for longer. Once again, the effect is blocked by removing Hyperkinetic; that's the fourth column from the left. But it is not blocked by removing an unrelated, bystander potassium channel, KD4.
Now, the ability to set the cofactor's redox chemistry directly with a genetically encoded tool also opens the door to biophysical studies of what actually happens to the excitability of these neurons as we flip the cofactor's state. So we can patch one of these sleep-inducing neurons and then, again after 9 minutes of illumination, measure either, in current clamp, its spiking behavior or, in voltage clamp, the characteristics of the voltage-dependent potassium currents. This is an example of one neuron. You can see that, clearly, after illumination the spike rate increases and the input-output function becomes steeper.
The interspike interval contracts. In other words, the neuron becomes much more vigorously electrically active. And the biophysical change in the voltage-dependent A-type potassium current that underpins all of this is a lengthening of the inactivation time constant: the potassium channel inactivates more slowly with an oxidized cofactor than with a reduced one. When Chuck Stevens and John Connor defined the A-type current in 1971, they included a modeling study in which they proposed that the A-type current is the primary determinant of the interspike interval of persistently active neurons. And conceptually, or intuitively, the way to link inactivation kinetics to firing rate is this: a strong A-type current is needed to restore the membrane potential to its resting level after a spike.
If your neuron is persistently active, each spike will push a certain fraction of its potassium channels into the inactivated state and therefore render them unavailable during the next repolarization event. By slowing inactivation, you keep a larger fraction of your channel population in an activatable, conducting state, and that allows faster repolarization and therefore higher spike rates and, in this particular physiological context, deeper or longer sleep.
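A minimal toy simulation can illustrate this argument; it is a sketch with invented parameters, not the Connor-Stevens model or the study's analysis. An inactivation gate h decays toward zero during brief depolarized windows and recovers between them; slowing inactivation leaves a larger fraction of A-type channels available on average:

```python
import numpy as np

def available_fraction(tau_inact_ms, tau_recover_ms=50.0, spike_rate_hz=20.0,
                       spike_width_ms=2.0, t_total_ms=2000.0, dt_ms=0.05):
    """Time-averaged availability of a toy A-type inactivation gate h.

    During each brief 'spike' (depolarized window) h relaxes toward 0 with the
    inactivation time constant; between spikes it recovers toward 1 with a
    fixed recovery time constant. All numbers are arbitrary illustrations.
    """
    isi_ms = 1000.0 / spike_rate_hz
    t = np.arange(0.0, t_total_ms, dt_ms)
    h = np.ones_like(t)
    for i in range(1, len(t)):
        depolarized = (t[i] % isi_ms) < spike_width_ms
        target, tau = (0.0, tau_inact_ms) if depolarized else (1.0, tau_recover_ms)
        h[i] = h[i - 1] + dt_ms * (target - h[i - 1]) / tau
    return h[len(t) // 2:].mean()  # average over the second half (near steady state)

# Reduced cofactor (fast inactivation) vs. oxidized cofactor (slow inactivation):
for tau in (2.0, 6.0):
    print(f"tau_inact = {tau:.0f} ms -> mean available fraction = {available_fraction(tau):.2f}")
```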
If we express only GFP and not miniSOG, light has no effect. And the changes we saw in the within-cell illumination experiments were also reflected in between-cell recordings, where we simply compared the properties of neurons (this is the bottom row now) expressing the catalytically active or the catalytically dead rescue transgene. You can see that the catalytically active rescue transgene, again, produces high spike rates and slowly inactivating A-type currents. And the same goes for manipulations that either prevent the production of reactive oxygen species or induce it with the pro-oxidant version of SOD1. So this suggests that there may in fact be a direct mechanistic connection between the rate of living and sleep, which is not entirely unexpected given the epidemiological evidence: many things that cause oxidative stress have been implicated in aging and degenerative disease, and chronic sleep deprivation has, of course, also been implicated as a cause of shortened life expectancy.
Possibly this is the mechanism that links these two important phenomena. So we have reached a stage where I can go back to this conceptual animation and try to replace it, for you, with a molecular and mechanistic one. Quite a lot happens in the next animation, but I will walk you through it slowly. The crucial regulator that determines whether this sleep-inducing neuron is in charging or firing mode, and whether the animal is awake or asleep, is the Sandman channel, shown here in yellow, which can be either in the plasma membrane or in intracellular vesicles. We know that dopamine drives its insertion into the membrane.
And we are working feverishly to find the signal that causes endocytosis of the Sandman channel. When Sandman is in the plasma membrane, spiking is blocked, and the cofactor pool of the potassium channel beta subunits is progressively oxidized to NADP+ as a reflection of how fast the mitochondrial electron transport chain is running. Now, I mentioned before that these beta subunits are probably the lousiest enzymes known to man, and of course that is exactly the property you would want if you were building a system like this. Because what is needed is a biochemical memory that retains each oxidation event and, from many such events, constructs an analog measure of accumulated sleep pressure.
If the enzyme were catalytically active, each oxidation would be fleeting and ephemeral, and your accumulated sleep pressure would evaporate. Now, through the ability of the beta subunit to communicate with the channel's inactivation gate and regulate the inactivation time constant, the same process also automatically determines the corresponding corrective action. Because it is the fraction of the Hyperkinetic pool that has been oxidized that determines the kinetics of the A-type current and, therefore, the spike rate of the neuron. Now, a particularly important property of a system like this is, of course, that the built-up sleep pressure must somehow dissipate when the animal actually goes to sleep.
And we think a particularly beautiful way to achieve this is to couple the beta subunit's enzymatic activity to voltage-driven rearrangements of the alpha subunit. As you can see in the animation here, when Sandman leaves the membrane, the neuron becomes electrically active. The voltage sensors start to move. We believe these conformational changes are transmitted to the beta subunit, and suddenly an escape route opens up for the oxidized cofactor: NADP+ is expelled and replaced by NADPH, and the animal wakes up refreshed, with its cofactor reserve replenished.
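The charge-and-discharge logic just described can be captured in a toy integrator; the rates and thresholds below are invented for illustration and are not the authors' quantitative model:

```python
import numpy as np

# Toy sketch: while awake (dFB neurons silent, Sandman in the membrane),
# oxidation events slowly convert the bound NADPH pool to NADP+; the oxidized
# fraction is the stored "sleep pressure". During sleep (neurons firing),
# voltage-driven exchange of NADP+ for NADPH discharges the pressure.
dt = 0.1                 # hours
k_oxidize = 0.15         # per hour, oxidation while awake (arbitrary)
k_exchange = 0.6         # per hour, cofactor exchange while asleep (arbitrary)
threshold_sleep = 0.6    # pressure at which the fly falls asleep
threshold_wake = 0.1     # pressure at which the fly wakes up

pressure, asleep = 0.0, False
trace = []
for step in range(int(48 / dt)):                       # simulate two days
    if asleep:
        pressure -= k_exchange * pressure * dt         # NADP+ -> NADPH exchange
        if pressure < threshold_wake:
            asleep = False
    else:
        pressure += k_oxidize * (1.0 - pressure) * dt  # slow, cumulative oxidation
        if pressure > threshold_sleep:
            asleep = True
    trace.append((step * dt, pressure, asleep))

print(f"fraction of time asleep: {np.mean([s for _, _, s in trace]):.2f}")
```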
Before finishing, I would like to return, for a moment, to the Stone Age of optogenetics itself. When we started working on the first optogenetic actuators, I learned, through a citation alert on the synapto-pHluorin paper that Ben Heidel mentioned, of a quote that he also cited in his introduction. And that showed me that I wasn't the only scientist who had seen the need for technologies like this. Francis Crick, in an essay titled "The Impact of Molecular Biology on Neuroscience," published in the millennial issue of The Philosophical Transactions of the Royal Society, wrote, as Ben Heidel already said: "The next requirement is to be able to turn the firing of one or more types of neurons on or off rapidly in the behaving animal.
The ideal signal would be light. This seems rather far-fetched, but it is conceivable that molecular biologists could engineer a particular cell type to be sensitive to light in this way." So then we had the first experiments that turned that wild possibility into reality, and these are those experiments. These are hippocampal neurons grown in culture and transfected with an opsin protein taken from the fly's eye, because the right channelrhodopsin gene had not yet been identified. GFP marked the transfected cells, so we then patched one of these transfected neurons. You can see that in the dark it sits around the resting potential.
But as soon as we turn on the light, there is a depolarizing step, and the neuron responds with a barrage of action potentials. So when we did these experiments, I sent Crick a preprint of our paper. And if you have read that wonderful book, The Eighth Day of Creation, which chronicles the early days of molecular biology, you know that Crick was a prolific letter writer who guided the development of many fields through a vast correspondence. His stylistic characteristics as a correspondent were twofold: he was always encouraging and also constructively critical. And that is exactly what I got.
He wrote back and told me that he had read the paper I had sent him with great interest and that he was excited to see that the system now worked, at least to some extent. However, he noted, as did I, that it still needed improvement and that this would require more work. Unfortunately, Crick did not live to hear how not only our experiments but those of many others progressed. But I think it is fair to say that the improvements have come largely thanks to my fellow award winners, and that as a result of all these efforts, the way neuroscientists approach our business has changed fundamentally.
With that, thank you very much. And thanks, too, to my group members; I would just like to mention some of the key people. The current crew is aligned on the left, and some of the notable alumni have been moved one tab to the right. Boris Zemelman was the postdoc who created the first optogenetic actuators, Susana Lima the graduate student who used them in animal behavior. And the recent work on sleep was done, initially, by two postdocs, Jeff Donlea and Diogo Pimentel, and the more recent work on the redox control of sleep by a postdoc, Anissa Kempf, and a graduate student, Michael Song.
Thank you. Thank you, Gero, for that wonderful talk. So our next talk is from another postdoc, Dr. Charlotte Arlt. She is originally from Germany, did her undergraduate work at the University of Cologne, went from there to do her PhD with Michael Häusser at University College London, and then came to Harvard and joined Chris Harvey's lab in the department of neurobiology. And she will tell us how she uses light to read and manipulate the activity of neurons in an effort to understand decision-making processes in the mouse brain. Thank you very much for the opportunity to share our recent work.
It is truly an honor, and I'm excited to share our work on this occasion. Going back to the theme of racquet sports: we make decisions in our daily lives all the time. If you think about a tennis player like Roger Federer, for example, which I do every day, he has to decide whether to hit the ball left or right over and over again. But the process by which you arrive at a decision can depend greatly on the context in which you make it. So in this case, imagine you are Roger Federer and you are in a training situation.
Your coach is on the other side of the net, and the coach tells you to always hit the ball to exactly the same place. So in this case, the mapping from sensation to action is very simple for you: it is guided by a cued instruction. But now imagine you are Roger Federer again and you are playing the Wimbledon final against Rafael Nadal. The ball comes toward you in exactly the same way as in training, and you might hit it to exactly the same place. But now what guides your decision is a complex model of your environment, including a model of your opponent and the statistics of a match like this.
So this combination of sensory information with some internal knowledge or experience to guide action is what we consider cognition, and at the Harvey lab we would love to understand how this process is implemented in the nervous system. We realize this is a very ambitious question, so in this talk we try to address it by asking two specific questions. First, we want to know which brain circuits actually mediate decisions as seemingly simple as hitting a ball left or right. And once we have identified those circuits, we can ask: how does context affect the implementation of that decision-making process in those same circuits?
And what I want to tell you today is how we looked for circuits that mediate simple decisions and, to our surprise, found that the identity and number of brain areas that mediate a decision change depending on the context and experience of the animal. So for the rest of the talk, I'd like to tell you how we came to this conclusion. Wanting to study brain circuits for decision making, we don't have tennis players, but we train mice to make decisions by running left or right in mazes. We think this is quite naturalistic behavior for the animals, because it is what they also need to do in the wild to survive.
Once we train the mice to make decisions that way, we can inhibit, or silence, different parts of their brain and ask: which areas of the brain are they actually using to guide their decisions? And you can imagine that experimental setups to manipulate different brain regions in the same mouse can be quite heavy and difficult for an animal to carry. So instead of having the mouse move freely through the environment, we actually move the world around the mouse and keep the mouse stationary. Here you see an animal in a virtual reality setup: it is running while its head is fixed.
The movement of the polystyrene ball on which it runs is translated into movement of the virtual world that we project onto a screen surrounding the animal's field of vision. So the animals use the visual feedback they receive from this world to update their running patterns. We can now use this system to train mice to turn left or right in a very simple Y-shaped maze, and we train them to associate visual cues they see on the walls of the maze with rewarded turn directions. So in the example on top, the mouse sees horizontal cues on the walls of the maze.
In this case, it has to run to the left at the end of the maze to receive a reward once it gets there. In the lower example, you see the opposite trial type, with vertical cues, and then it has to run to the right. So let's see what this actually looks like in a trained mouse. The animal in the left case encounters the horizontal bars, successfully runs to the left, where it is supposed to go, and receives visual feedback about the correctness of its choice. And then a drop of milk is dispensed as a reward.
A few trials later, it encounters the opposite trial type, the vertical bars, chooses to run to the right, which is the correct decision in this case, and again is rewarded at the end. Now that we have animals making decisions in this virtual reality setup, we can expose their brains by removing the skin over the skull and thus gain optical access to the brain surface just below. We are using mice that express channelrhodopsin, which we have heard a lot about so far, specifically in the GABAergic neurons of the neocortex. These are the small interneurons here, depicted in green, that inhibit the pyramidal population.
And pyramidal neurons are normally the ones that carry information from the local circuit to other regions of the brain. So when we shine light of the appropriate wavelength onto the skull, we can activate interneurons through the skull, and those interneurons, in turn, inhibit the local population, thereby silencing a certain volume of the brain. And we can do this with very high precision, again, as you've heard in previous talks. Here the light is on, indicated by the blue bar. The interneuron shown in the top row responds reliably and strongly for just a few hundred milliseconds, and the simultaneously recorded pyramidal neuron below falls silent over the same period and then ramps back up.
And at this point, I would like to thank the pioneers of optogenetics whom we honor here today, because none of the experiments I am going to describe would have been possible without their contributions. And even after running these experiments for quite some time, it is still amazing to be able to remotely control the brain activity of a live, decision-making mouse. So which areas do we actually want to inhibit? We focused on a few candidate areas for decision making. One of them is the parietal cortex, which receives sensory information from many different modalities and, in turn, projects to different areas involved in action selection.
So it sits right at the intersection of sensation and action, and it has been implicated in decision making across species. Another region we focused on is the retrosplenial cortex, because we use a navigation-based decision-making task and the retrosplenial cortex lies at the interface between subcortical navigation systems, such as the hippocampus and entorhinal cortex, and other cortical regions, including the parietal cortex. We can couple our blue laser light to a system of mirrors whose position we can change very quickly to direct the laser beam to different target locations on the skull. So in one trial, for example, we may be inhibiting the parietal cortex in both hemispheres as the animal runs through the maze and chooses to turn left or right.
Then, in the next trial, we can quickly change the mirror positions and direct the laser beam to a different region, here the retrosplenial cortex. We choose the order of these target locations randomly, and we also intersperse them with control trials in which we direct the laser beam away from the cortex. We have another control target in the somatosensory cortex, where we inhibit a local volume that is presumably not involved in making these kinds of visual-association decisions. So, given that we now have a system where animals make decisions in virtual reality and we can inhibit different areas of the brain, we can finally ask: which area is really necessary for this simple kind of navigation decision?
We do this by sub-selecting trials in which the laser beam was at a particular location and then quantifying the animal's average performance on those trials. We quantify performance as the fraction correct, where 1, or 100%, means the animal makes no errors, and 0.5, or 50%, means the animal performs at chance level, whether by simply guessing at random or by always running to the same side. As you see here, in the control case performance is very high, very far from chance. The animal makes very few mistakes, which tells us, one, that it knows this task very well and, two, that it is not distracted by the blue light in general.
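The analysis described here amounts to grouping trials by inhibition target and computing the fraction of correct choices; a minimal sketch, with illustrative column names rather than the study's actual data format, could look like this:

```python
import pandas as pd

# Hypothetical trial table: one row per trial, with the inhibition target
# ("control", "somatosensory", "retrosplenial", "parietal") and whether the
# mouse chose the rewarded side. Column names are placeholders, not from the study.
trials = pd.DataFrame({
    "laser_target": ["control", "retrosplenial", "parietal", "control", "retrosplenial"],
    "correct":      [True,      False,           True,       True,      False],
})

# Fraction correct per target: 1.0 = no errors, 0.5 = chance for a two-choice task.
performance = trials.groupby("laser_target")["correct"].mean()
print(performance)
```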
When we inhibit the somatosensory cortex, we see a very similar picture, indicating that the animal is not relying on intact activity in that area to guide its navigation decision. When we inhibit the retrosplenial cortex, we see a very different picture: on average, every time we inhibit this area of the brain, we cause the animal to make errors in this decision-making task, indicating that it really is relying on activity in this area to guide its decisions. And now, surprisingly, when we inhibit the parietal cortex, the animal can still perform the task very well, indicating that activity in this area appears to be dispensable in this setting, and the animal can do quite well without it.
So, having identified the retrosplenial cortex as an area that mediates these simple kinds of decisions, we can now go ahead and change the context in which the simple decision is made. We create a flexible context where, in addition to the two associations I showed you before, the animal sometimes has to make the opposite decision given the same visual cue. In a given experimental session, we present these two pairs of associations in blocks of dozens of trials. And once an animal has been trained in this setting for quite some time, it performs quite well, making mistakes mostly at the switching points between blocks.
So after the association block changes, the animal makes some mistakes, because it is still using the old association. Then it realizes that it is not being rewarded that way, updates its strategy, and uses the new pair of associations. Toward the end of each block, the animals perform with a very high success rate, and their decisions, outwardly, look very similar to those made by animals in the simple context. Let me show you how similar those decisions look. In the case on the left, you see an animal that was trained in the simple context.
It encounters the horizontal cue and has to turn left. On the right you see an animal that was trained in the flexible context; it encounters the same trial type and also has to turn left. And when you watch these movies side by side, they really do look identical, indicating that the decisions these animals make are, outwardly, very similar. So now, as a sanity check, we first wanted to see whether the retrosplenial cortex is actually mediating the decisions in the case on the right. We inhibit the same targets I showed you before, but now specifically toward the end of these blocks, where the animal performs with a very high fraction correct, which means it knows the association well.
Again, in the control condition, or with inhibition of the somatosensory cortex, the animal performs the task very well. Once again, inhibition of the retrosplenial cortex induces many errors, but now you see that the drop is quite large, actually close to 50%, which means the animal is performing almost at chance level. So in this flexible context, it seems to rely especially on activity in this part of the brain to guide its decisions. But now, surprisingly, when we inhibit the parietal cortex, we also see a very large drop in performance.
And again, we did not see such a performance drop in the simple context. So in the flexible context, it is not just the retrosplenial cortex that guides the animal's decisions; the animal also depends on this additional brain area, the parietal cortex, which, just to remind you, it did not need in the simple context for the same outward decision. And since this result surprised us quite a bit, we wanted to make sure we understood what was going on. It seems that the current cognitive context dictates whether this parietal area is also necessary for decision making. Now, if that is true, we should be able to take the exact same mouse, change the cognitive context it is experiencing, and thereby change the number of brain areas involved in the decision.
So to test this, as a sanity check, we took an animal that was trained in the flexible context. Again, to remind you, with inhibition of the parietal cortex we see a very large drop in performance, so the animal is using this part of the brain. We then transitioned this animal to the simple context for 14 days, during which it experienced no association switches. In the control case, of course, performance is still very high. But now, surprisingly, when we inhibit the parietal cortex, we still see the strong effect on performance, suggesting that the brain persists in using this decision-making area for this very simple decision, even weeks after we switched the animal from the flexible to the simple context.
And we thought this was quite surprising, especially when compared with the lack of effect in the animals that were trained only in the simple context. Because, again, animals trained on the simple context that have never seen the flexible context do not rely at all on this brain area, the parietal cortex, to guide their decisions. So, in addition to the current context dictating which brain areas are used to make decisions, context can also have a very lasting impact on which brain areas are used to guide the same simple decision. Finally, we wondered whether we see this large effect of context simply because we are comparing a rather extreme case.
Or maybe it is a special case in which we compare the simple context with a flexible one; it could be that the flexible context is especially demanding on the nervous system, because the same cue has to be mapped onto opposite choices. So we created another context in which the animal does not have to reverse its choices; we simply made the context more diverse. We kept the two associations you saw before and then added two more associations with different visual cues. But now, importantly, the mapping from visual cue to rewarded choice is consistent.
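To make the three task designs concrete, here is a minimal sketch of the cue-to-reward mappings just described; the cue names and block structure are illustrative placeholders rather than the exact stimuli used in the study:

```python
# Simple context: one fixed mapping from cue to rewarded side.
simple_context = {"horizontal": "left", "vertical": "right"}

# Flexible context: two association sets presented in alternating blocks of
# dozens of trials; the same cue maps onto opposite choices across blocks.
flexible_context = [
    {"horizontal": "left",  "vertical": "right"},   # block type A
    {"horizontal": "right", "vertical": "left"},    # block type B
]

# Diverse context: more cue-choice pairs, but every mapping stays consistent.
diverse_context = {
    "horizontal": "left",
    "vertical": "right",
    "cue_3": "left",     # placeholder names for the two added cues
    "cue_4": "right",
}
```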
Finally, we asked which brain areas the animal relies on in this kind of decision making. And again, we see that the animal uses not only the retrosplenial cortex but also the parietal cortex to guide its decisions. So it seems that context, in general, has a very strong impact on which brain areas are used for decision making, and the context can be changed in several ways: you can increase the diversity of the context or you can make the context more flexible. And probably, in addition to these two variations that we've shown you, there are many other manipulations that would change the number of brain regions involved in decision making.
So we think we have seen something quite interesting about the brain, namely that it can implement exactly the same outward decision using quite different brain circuits. And this suggests that the brain is tremendously flexible and shaped very deeply by context and experience. Because here we are not just talking about changing the synaptic connections between individual neurons; we are talking about using different sets of brain regions for the same decision. And we think this could have some important implications for how we want to study cognition as systems neuroscientists, because we have shown that behavioral task design, its details, and the training history really matter.
If we have two animals that perform this seemingly identical task, they may actually rely on different brain regions to do it, depending on their experience. Therefore, we must control for experience much more carefully. But in addition, we also suggest that perhaps we should take advantage of this diversity to create more diverse and naturalistic laboratory settings in which to study decision making. Now, where are we taking this work? With Sofia Soares, in the Harvey lab, we have built a special microscope that allows us to image different brain areas simultaneously, and we are currently using this approach to look at activity in all of those decision-making areas I told you about earlier.
And we are asking how activity within those areas, but also between them, might differ depending on the cognitive context in which the animals apparently make the same decisions. Another interesting research direction, taken by other members of the lab, is to try to get at the inner workings of those individual areas. All of the optogenetic approaches I've shown you today were fairly crude and silenced entire regions of the brain. But Selmaan, in the lab, has developed a method by which he can activate individual neurons one at a time while monitoring the activity of the surrounding tissue.
Here you see, from left to right, that he is intentionally activating, with light, one neuron at a time in the living brain. And he can use this technique to ask questions about the function and architecture of the microcircuit and about the computations a given circuit may be performing. And Dan Wilson, in the Harvey lab, has taken this approach a step further by activating 10 neurons simultaneously, as shown here with the blue arrows, and he does this while the animals perform decision-making tasks. He can then ask what the causal link is between the activity of individual neurons that he can functionally identify, for example neurons that respond to a visual cue that the animal is using to guide its choice.
So now you can link the activity of those neurons with the activity of the network, but also with the performance of the animal, trying to establish causal links between the activity of individual neurons and cognition. And with that, I would like to thank everyone who contributed to this work, first and foremost Roberto Barroso-Luque. He was a research technician in the Harvey lab with whom I collaborated very closely on this project and who is now in graduate school, but he really helped push it forward, in all different directions, to the scale you see today.
I would also like to thank Chris Harvey, who has been an excellent advisor on all aspects of the project, from training mice to preparing this talk. I really value his contributions, his advice, and his passion for science in general. I would also like to thank Selmaan, who originally designed the flexible decision-making task that inspired the entire project, and the whole Harvey lab community for their fun scientific discussions and excellent feedback, our funding sources, the research instrumentation core, and the broader medical school community. Thank you, Charlotte. That was wonderful. Our last speaker is Dr.
Karl Deisseroth from Stanford. He is the Chen Professor in the departments of psychiatry and bioengineering, an interesting combination that really defines his career. Karl did his undergraduate work here at Harvard, concentrating in biochemistry, and then went to Stanford for a combined MD-PhD. He also worked with Dick Tsien, as did Ed, and studied the coupling between neuronal activity, calcium influx, and cellular processes. And this was where I met him for the first time, because he was doing similar work to mine here with Wade Regehr's group. After completing graduate school and his doctorate, he completed clinical training in psychiatry and still practices as a psychiatrist.
As we've already mentioned, he, along with Ed and Feng, was the first to introduce channelrhodopsin into mammalian neurons and show that the excitability of those cells could be controlled with light. Since then, his lab has led a steady stream of developments in optogenetics over the years, producing literally dozens of different optogenetic actuators that we can use to manipulate the activity of cells. His lab has also produced light-activated G protein-coupled receptors, step-function opsins, and many, many other tools. Furthermore, his lab invented the CLARITY brain-clearing method, which is now used ubiquitously to observe brain structure in intact organs that do not need to be sectioned.
This has become a very powerful technology. And as you'll see in a minute, in addition to inventing technologies, Karl's lab has consistently used them to make fundamental discoveries about the organization of the mouse brain and, I think guided by his own experience as a psychiatrist, has really begun to reveal not only how an animal makes decisions in a normal state, but also how this goes wrong in pathological states. Karl, thank you. Very good, thank you, Bernardo. I am very grateful for this tremendous honor, and my sincere congratulations to my fellow award winners. This is a wonderful moment. So thank you for everything you've done over the years.
By this point you have already heard a lot, so I'm going to move, as quickly as I can, to the present without spending too much time on the past. I want to talk a little bit more, in somewhat greater detail, about the inner workings of the channelrhodopsin protein itself. Progress in this field has been very rapid. In 2011, we didn't know much about the internal structure of channelrhodopsin, but things progressed very quickly over the following six years, and now we know a lot. Here is the retinal-binding pocket that Peter mentioned. This is the ion pore, lined with polar and charged residues, including these five glutamates, E1, E2, E3, E4, E5.
Reaching this level of protein understanding is, of course, exciting in itself for people who care about proteins and molecules and about incredibly elegant natural machines like this, but it has also allowed us to change them, to fundamentally change their properties in ways that really matter and are useful. For example, we were able to make them faster, as Peter mentioned, here driving spiking up to 200 Hertz with a very fast mutant, as we described in 2010; to achieve red light-driven spiking, as we did in 2011 together with Peter Hegemann and Ofer Yizhar, who is here today; to achieve bistable operation with the step-function tools, switching cells into and out of excitable states; and then to make the channelrhodopsins inhibitory, and to make that inhibition, in turn, bistable as well.
And all of this came from molecular modeling, structural determinations, and a great deal of work, much of it in collaboration with Peter Hegemann and many other very talented colleagues. I want to address two aspects of this scientific journey that were particularly useful and relevant to modern neurobiology. Of course, a big part of it was getting these three crystal structures. When we obtained the 2012 structure in collaboration with Hideaki Kato, Feng Zhang, Ofer Yizhar, Charu Ramakrishnan, and Osamu Nureki, we immediately saw that it was a dimer of two 7-transmembrane proteins, each with its own retinal and its own pore.
But seeing the pore also gave us, for the first time, the opportunity to change it. It had been hypothesized that the pore might be found not within each monomer but at the interface of a dimer, or even a trimer. This turned out to be wrong. But of course, without even knowing where the pore was, it would have been very difficult to redesign it; now we could. Looking at the pore lining in our structure, we could see that it was largely lined with polar residues, but also with residues predicted to give rise to a negative surface electrostatic potential in the pore lining and in the inner and outer vestibules.
And this led to the idea of changing the ion selectivity. Wild-type channelrhodopsin is a non-selective cation channel, as you may have heard, that fluxes sodium, potassium, calcium, and protons. Andre Berndt and Soo Yeun Lee, in my lab, worked hard to change that inner lining and make it more positive, and they did it. Against all odds, this came out back to back with a beautiful paper from Peter Hegemann's group, a similar result with different mutations, both ending in the creation of chloride-conducting channelrhodopsins, which allow blue light-driven inhibition of spiking.
And together with Peter, we optimized these further and created step-function versions of these chloride-conducting inhibitory channelrhodopsins. These have now been widely used. For example, together with Will Allen in my lab (Will is now here as a Harvard Fellow), we were able to use the fast iC++, the next-generation version of the chloride channel, to identify the causal functions of neurons involved in the fundamental survival drive of thirst. Now, this is just one example. Getting to the inhibitory, chloride-conducting channelrhodopsins in 2014 was one step. But then something very interesting happened.
The following year, John Spudich's laboratory identified naturally occurring chloride-conducting channelrhodopsins from Guillardia theta. And just last year we were able to obtain the crystal structures of both the natural chloride channel and the one we had engineered together with Peter. This gave us a very interesting insight into both the natural and the engineered chloride-conducting channelrhodopsins: both the engineered one and the one nature had evolved in fact used this principle of surface electrostatic potential inside the pore, and also in the vestibules of the channel pore, to determine which ions are admitted or excluded and, in this case, to create an anion-conducting pore.
Having learned this, we were able to convert anion-conducting channelrhodopsins to give them cation selectivity, and to take cation-conducting channelrhodopsins and give them anion selectivity, all based on this structure-based analysis of the pore. So at this point, we understand the pore, at least to some extent. And as you'll see later, that even helped us detect, identify, and understand new types of opsins with new kinds of functionality. But first I want to talk about color selectivity as a very important step. The red light-driven spiking depended, in part, on the discovery of a red light-driven channelrhodopsin, which was work, again, in collaboration with Peter and Ofer Yizhar, but led by Feng Zhang in my lab.
In 2008, he found this red light-driven channelrhodopsin in a multicellular green alga called Volvox carteri. And although we didn't realize it would have this impact at the time, this ultimately allowed us to go even beyond Crick's initial concept of what light control might be useful for. This has already been shown a couple of times, but I want to focus on a different aspect of Crick's initial statement. He focused very clearly on types, types of neurons: engineering a cell type. And actually, this is very useful; in fact, this is how the vast majority of optogenetics has been done around the world, allowing genetically specified cell types to be activated or deactivated.
But even in this paper he did not describe control of multiple single neurons, which is what the red light-driven channelrhodopsin has ultimately gone a long way toward enabling. Of course, in the first experiments we controlled individual cells, but that was with a patch-clamp readout, a pipette, in culture. And here, showing some of those early experiments, is the small group from back then: here's me, Ed, and Feng, back in the good old days. With Mike Greenberg in the audience, it is good to note that the initial readout of membrane depolarization was the phosphorylation of CREB Ser133.
At first I worked as a patch clamper, but I also did a lot of work that built on Mike's identification of this very interesting phosphorylation event and his creation of reagents that allowed us to study it. So let the word go out from the Warren Alpert Symposium that Mike Greenberg helped launch optogenetics. So thanks, Mike. But then, of course, came Ed's magnificent recordings, Feng's elegant virus design and his design of fiber-optic interfaces that let us work with behaving animals, and that led to this initial control of mammalian behavior in 2007, where unilateral illumination of the M2 supplementary motor cortex causes the animal to turn in the opposite direction.
As soon as the little blue light goes off, the animal stops turning. Now, even this was done at the level of cell types; in this case, the population of light-sensitive cells was layer 5 cortical neurons. From there we went on to target the deep hypocretin neurons of the lateral hypothalamus (again, work led by Feng Zhang and Antoine Adamantidis), but all of this was still at the level of cell types. What ultimately proved particularly useful in opening the door to single-cell control was a derivative of the initial Volvox channelrhodopsin that Ofer, Peter, I, and several members of our group described in 2011, which we called C1V1.
It is a chimera of channelrhodopsin-1 from Chlamydomonas and channelrhodopsin-1 from Volvox. Rohit Prakash, in my lab, was able to express this in 2012, here using a patch-clamp electrode in a loose-patch configuration in a live, awake mouse, and doing a raster scan of two-photon illumination just above the cell (no spikes), on the cell (spikes), and just below the cell (no spikes), thereby achieving single-cell resolution control in vivo, in a mammal. This was collaborative work with Adam Packer and Rafa Yuste, and it was back to back with another collaborative paper between our two groups showing the first spatial light modulator-based, liquid-crystal holographic control of single cells.
But that was in culture. It took several years, from 2012 to the present, to translate this into control of mammalian behavior by controlling multiple individually specified cells. The path there was through all-optical experiments, using the red light-driven aspect of the Volvox-derived channelrhodopsins together with blue light-driven calcium sensors, like the GCaMP series you've already heard quite a bit about today. In an experiment together with David Tank, we had combined a Volvox-derived opsin with blue light-driven calcium readouts, the GCaMPs, in 2014. But that was not with behavior as the readout; it simply demonstrated that all-optical interrogation of neural circuits could be performed.
It was in behaving animals, but without behavior as the readout. It took some time, even from that point, to get to where we could exert control over mammalian behavior at the level of multiple individual cells. This was work from earlier this year led by Josh Jennings, Tina Kim, and Jim Marshall in the lab. What we did was target the orbital frontal cortex. This is a part of the mammalian brain that, in humans, if injured, gives rise to a syndrome called orbital frontal syndrome, in which serious dysregulation of eating behavior and social behavior can occur.
This is a structure that, of course, also does many other things; it is involved in value-based decision making. But we were interested in using our new capability for single-cell resolution control to study the interaction and competition of two primary survival drives, feeding and social interaction, within this structure. Cells here are very often observed, as is common in systems neuroscience, to be active during behaviors, but they are not necessarily known to be causally involved in those behaviors, and that was a big, open question. So we used GRIN lens-based optics to give us access to this somewhat deep cortical structure and to exert control over individual cells while reading out, through calcium imaging, the activity of those same cells.
Under this system, we can quite easily identify the feeding cells. As the trial progresses, at each of the gray bars a small drop of a high-calorie reward is delivered, and we can see cells responding quite reliably to the delivery of the feeding droplets. These are what we call feeding cells: cells that always, or almost always, respond when this high-calorie reward is given. This is their natural activity, and it is very useful. They are scattered among other cells that do not have these properties. And then we can come in and apply optogenetic control, taking advantage of our ability to exert single-cell resolution control.
And we can do this as shown here. The non-target cells, here called NT, are cells that sit right next to the stimulated cells, and this gives you an idea of the spatial resolution. You can see that the non-target cells, the NT cells, are active in their own right, doing their own thing, but they are not activated when we optically drive the feeding cells right next to them, absolutely shoulder to shoulder with them in this structure. So we have single-cell resolution control over these cells. The question then is: are these cells causally involved in behavior?
And here is the result of that experiment. We found that if we drove just 20 to 25 of these feeding cells, we could enhance and extend the feeding response to the drop, where each little tick is a lick delivered by the animal. We ran several controls. For example, if you leave out the Volvox-derived opsin, you don't get that response. So this is good news for the experiment; it shows that it is not a light artifact, for example. But of course you might ask: does it matter that you targeted the feeding cells?
What if you had targeted other cells? What if they were other cells important to the animal, cells involved in an appetitive drive? Here we wanted to identify socially responsive cells, and we did that experiment as shown here. Now, all of this is done in a head-fixed configuration, so one might ask: what really is head-fixed social behavior? It may not be exactly the same as natural social behavior. But in this case, there is definitely a conspecific social interaction, an interaction with another member of the species, a juvenile mouse of the same sex that can move freely around this chamber and occasionally comes up here, where there is a prolonged period of whisking and sniffing.
And indeed, there are social cells that are activated when this happens, and they are not the same as the feeding cells. You might also ask: maybe these are just surprise cells or novelty cells? That was an important question. When I am very surprised or startled, I may not eat as much, and that is an important distinction to resolve: are these cells truly social, which is what we're interested in, or something else? So we 3D-printed a mouse and made it appear in a quite startling way, almost like a horror movie. I'll play that movie here.
So the mouse is here, waiting. Oh. And the striking thing is that the social cells do not respond at all, not even to that startling stimulus. There are other, novelty-responsive cells that do respond. It's going to pop up again here. And other controls give us confidence that these cells are related, in some way, to social interaction. And then the question was: if you drive those cells, what happens? Does it increase, decrease, or not affect the feeding response? What we found, at least for the first few minutes, was that there was in fact a suppression of the feeding response when we drove the social cells.
So here, in the orbital frontal cortex, there is an interaction between cells that report on, or relate to, the primary survival drives. If you drive other cells that are neither feeding nor social cells, you see no effect. So it seems to be, at least in some respects, a specific effect of the social cells on feeding. This was exciting, because we were able to take these Volvox-derived opsins and finally do an experiment we had wanted to do all along, which was to exert individually specified, multi-cell control over mammalian behavior.
But of course we wanted to take this further and further. One limitation that at first seems almost physical is that you have to be very careful with the amount of light energy delivered to living tissue. These are powerful lasers that we use to generate these spots and drive these cells, and if we want to control more and more cells, we can, if we're not careful, get into regimes where we deliver too much heat or damage the cells in other ways. So we've been working hard on this, on ways to generate spots of light across wider fields of view, which is also good in itself, and on finding, identifying, and using opsins that need much less light to give rise to responses that are still fast and reliably spike-generating. And this, too, has moved quite quickly.
Along with developing devices, we have been producing very large spatial light modulators that can project holograms onto very large swaths of the mammalian brain, up to an area of 1 by 1 millimeter, which, for example, can cover most of the visual cortex. These look like this. And this has now allowed us to do the next kind of experiment. What is shown here are six (1, 2, 3, 4, 5, 6) squares representing 1 by 1 millimeter planes of primary visual cortex in an awake, behaving mouse, and these six planes are arranged this way because we go from superficial to deep. So this is in three dimensions.
All the cells circled in red are cells that, first of all, co-express both GCaMP and a new opsin, which I'll tell you about in a moment, that allows us to deliver much less light and at the same time control many cells. But these are also cells that we have selected by virtue of their natural activity. We can individually specify these visual cortex cells by first identifying the ones we want to control, presenting visual stimuli to the animal and selecting the cells that respond the way we want. In this case, all the circled cells are cells we have selected because they respond to one orientation of a drifting grating but not the other.
So with this setup, we can do the following kinds of experiments. For example, here are two cells that are more than a millimeter apart. By shaping our holograms, we can stimulate, simultaneously, two cells that far apart, or dozens, tens, or even hundreds of individually specified cells at once, in three dimensions. And the opsin we used is very interesting and strange; it is the first to have this unusual property. It is a cation-conducting channelrhodopsin, but its primary-sequence phylogeny places it closer to the pumps, actually, than to the other channelrhodopsins, which, in itself, was interesting.
We found it in a collaboration with Susumu Yoshizawa and Hideaki Kato. It comes from a marine organism. And we call it ChRmine, because we used a structure-guided genome-mining approach and it is a channelrhodopsin. Carmine, I found out from my lab manager, is a deep red color, something I didn't know before. You learn a lot in this business, and I have learned a lot about colors. It was a name that, although it sounds a bit sinister, actually turned out to be a pretty good choice. Don't say "crime"; it's ChRmine. Its key properties: it is activated by red light.
And it has extraordinarily high photocurrents, more than 5 nanoamps per cell. Of course, very often that is too much, so we can reduce the light power, which is what we ultimately want, and still elicit currents that drive spikes at extremely low light power densities. That in turn allows us to control many cells, from dozens to hundreds. And I'll show you how we can use this to control individually specified cells, observe the population dynamics elicited in the cortex, and observe behavior. Although we don't have a structure for ChRmine yet, it has these very large red light-driven photocurrents.
But also, through homology modeling, we believe there are some interesting features in its probable internal structural design, not yet proven. What is interesting is that most of its predicted surface electrostatic potential is located, we believe, toward the inner and outer vestibules and not so much along the channel lining itself, which may reduce the effective electrostatic attraction inside the channel and allow higher ion flux, while still conferring selectivity by governing the access of ions to the vestibules. We are working hard to get the structure; we don't have that yet. But very large photocurrents and sensitivity to very low light levels... and this allows us to do the next type of experiment, where we can have an animal awake and alert.
We present visual stimuli, for example, vertical or horizontal drifting gratings. We can find all the cells that respond to vertical stimuli and all the cells that respond to horizontal stimuli, show that they are selective, and select them in 3D throughout the visual cortex. We can also identify non-selective cells (we call them the random population) to test whether it matters that the selected cells have a particular orientation preference. And then we can go in and stimulate cells of one orientation or the other, or random cells, while imaging thousands of cells across the visual cortex, looking at the elicited population dynamics, the internal representations, if you will, of the visual stimuli, and also, later, as I will show you, observing behavior.
The first thing we found was something we didn't know would be the case. By stimulating some cells, we didn't know what would happen to the rest of the cortex. Would there be a broad, generalized response? If one were to stimulate 10, 20, or 100 individual cells, would those generally be the only cells activated? Or would we recruit large numbers of other cells? And if we recruited many other cells, which ones would they be? What would the elicited patterns be? Would they look natural, naturalistic, as if the animal were seeing the vertical or horizontal stripes?
Or would it be some other, aberrant pattern of activity? To address this, we look at the population dynamics. In principle, these are very high dimensional, but they can be reduced to a lower-dimensional space with principal component analysis. And here are the trajectories that these thousands of cells in the visual cortex take when the animal looks at vertical or horizontal stimuli. Here is one mouse, and here is another mouse. What we can see is that this is the natural response to visual stimuli, and this is the response to optogenetic stimulation of only about 20 individually specified cells of the same orientation.
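To illustrate the dimensionality-reduction step described here, a minimal sketch using principal component analysis might look like the following; the simulated data and the number of components are assumptions for the example, not the lab's actual pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA

# Illustrative sketch: reduce high-dimensional population activity to a few
# principal components and look at the trajectory over time. Data are simulated.
rng = np.random.default_rng(1)
n_timepoints, n_cells = 200, 2000
activity = rng.standard_normal((n_timepoints, n_cells))   # time x cells (e.g. dF/F)

pca = PCA(n_components=3)
trajectory = pca.fit_transform(activity)    # (n_timepoints, 3) low-dimensional trajectory

# Trajectories from visual trials and from optogenetic trials can then be
# projected into the same PC space and compared.
print(trajectory.shape, pca.explained_variance_ratio_)
```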
And what can be seen is that the trajectories in this principal component space resemble those seen during natural visual stimuli. That is not seen with random cell stimulation or without stimulation. And it's not just us looking at this and saying, hey, they look kind of similar. You can train a classifier and see that it can automatically identify which orientation was being stimulated based on the population dynamics of the response. This was reassuring in many ways. It was pleasing to see that the population dynamics at this regional level caused by appropriately targeted optogenetics resembled those generated by natural stimuli.
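Here is a hedged sketch of the classifier idea, assuming simulated single-trial population responses and a simple linear classifier rather than whatever model was actually used in the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Illustrative sketch: decode which orientation was shown (or stimulated)
# from single-trial population response vectors. Data are simulated, so the
# cross-validated accuracy here should sit near chance (~0.5).
rng = np.random.default_rng(2)
n_trials, n_cells = 200, 2000
X = rng.standard_normal((n_trials, n_cells))   # trial x cells responses
y = rng.integers(0, 2, n_trials)               # 0 = vertical, 1 = horizontal

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print("decoding accuracy:", scores.mean())
```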
It was also quite interesting that this works with only about 20 stimulated cells, in animals that have not been trained intensively with light patterns over long periods of time, which could have induced plasticity. These are animals that have not been trained to behave on a task; we are only looking at the responses of the population. And stimulating only about 20 cells can lead to this broad recruitment of hundreds of neurons among the thousands that we're imaging in this three-dimensional volume of the visual cortex, which, in and of itself, was interesting. And in this paper, together with Surya Ganguli, a leading computational neuroscientist at Stanford, we have begun to explore what it means that cortical circuits appear to exist in this critically excitable regime.
But of course we also wanted to see if we could affect behavior. For these types of experiments, we take animals and train them to respond to one orientation or the other of the visual stimulus. We can make the task harder by reducing the contrast of the grating, for example, and generate psychometric curves. We train the animals at high contrast; they learn the task very well and perform at high levels. You've seen this d-prime measure before. But even after training, they don't do well at low contrast, at 2% contrast. And 10% contrast is fairly intermediate; they can do reasonably well at that.
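For readers unfamiliar with the d-prime measure, here is a minimal sketch of how it is computed from hit and false-alarm rates; the contrast levels and rates below are made-up numbers for illustration only.

```python
from scipy.stats import norm

# Illustrative sketch: d' = z(hit rate) - z(false-alarm rate).
def d_prime(hit_rate, fa_rate, eps=1e-3):
    # Clip to avoid infinite z-scores when rates are exactly 0 or 1.
    hit = min(max(hit_rate, eps), 1 - eps)
    fa = min(max(fa_rate, eps), 1 - eps)
    return norm.ppf(hit) - norm.ppf(fa)

# Made-up psychometric data: performance improves with grating contrast.
for contrast, hit, fa in [(0.02, 0.55, 0.45), (0.10, 0.75, 0.30), (1.00, 0.95, 0.05)]:
    print(f"contrast {contrast:.0%}: d' = {d_prime(hit, fa):.2f}")
```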
We found a couple of things. First, by optogenetically stimulating a population of cells in a pattern consistent with a weak-contrast visual stimulus, we were able to improve the animal's behavioral performance. We could help it detect, much more reliably, the correct orientation if we gave it stimulation consistent with the orientation of the visual stimulus. But then we went further and eliminated the visual stimulus completely. So in the dark, we asked: by stimulating just a few cells, can we get the animal to respond as if it were seeing the visual stimulus? And the answer is yes. What's more, we could reduce the number of stimulated cells to remarkably low levels.
And this holds both in terms of behavioral measures and in terms of the population dynamics, where the classifier automatically detects and classifies the nature of the stimulus. You can see the number of neurons stimulated here. In fact, we can reduce that even below 20, to just a handful of cells, and still detectably see both the correct behavior and the population dynamics response. It appears that layer 5 cells are somewhat more potent than those in layers 2 and 3: fewer of them are needed to obtain a comparable level of behavioral response. So this, in itself, is also quite interesting and raises a lot of interesting questions about how the cortex is set up to allow this kind of ignition, or critical dynamics, to be present without causing problems.
This, of course, was also interesting. We were looking at naturalistic dynamics caused by a few cells, and they were broad, three-dimensional, and covered most of the visual cortex. But of course we would like to know: does properly targeted optogenetics also elicit naturalistic responses across the whole brain? And although we can't see, in real time, activity throughout the mammalian brain with optical tools, we can obtain those measurements with electrical tools. This is work led by Will Allen in the lab, who, as you can see, has many talents. He led this experiment in which we used Neuropixels probes, which are very long-shanked, high-density electrical recording devices, placed along different trajectories in different animals.
In this experiment, there was one Neuropixels probe per animal. But by using a temporally precise task, we can combine the results from many different animals. With the trajectories known, we clear the brains, see where each probe trajectory was, and align it to the Allen Institute brain atlas. In this way, we can develop a brain-wide understanding of the cell populations that are active during behavior. And as a first step, we chose probably the simplest behavior we could imagine, which is just an animal licking water when it's thirsty. This is, by design, the simplest possible task.
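As a rough sketch of what combining results across animals using a temporally precise task can look like in practice (simulated spike and event times, not the actual analysis), one can align each recorded neuron's spikes to shared task events:

```python
import numpy as np

# Illustrative sketch: each animal contributes one probe, so recordings from
# different animals are pooled by aligning spikes to shared task events
# (e.g. odor onset). All times below are simulated.
rng = np.random.default_rng(5)

def peri_event_counts(spike_times, event_times, window=(-1.0, 3.0), bin_size=0.05):
    """Spike counts around each event; returns a trials x bins array."""
    edges = np.arange(window[0], window[1] + bin_size, bin_size)
    return np.stack([np.histogram(spike_times - t, bins=edges)[0] for t in event_times])

# One simulated neuron from one animal: 10 minutes of spiking,
# aligned to 40 odor-onset events.
spikes = np.sort(rng.uniform(0, 600, 3000))
odor_onsets = np.sort(rng.uniform(10, 590, 40))
psth = peri_event_counts(spikes, odor_onsets).mean(axis=0) / 0.05  # firing rate in Hz
print(psth.shape)
```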
Because we wanted to see, throughout the brain, what the representation of this task would be, starting with the simplest possible task was a good starting point. In this go/no-go task, the animal has learned that one odor means water will come and another odor means no water will come. There is an odor onset, an odor offset, and then a reward onset, marked by three vertical lines. And here is just the behavior: animals learn to lick in response to the go odor and not the no-go odor. This continues over many trials, until the animal is finally satiated and no longer licks, not even to the go odor.
The first question is... what's wrong with my computer? We'll see. Here we go. The first question is: what happens in the brain without optogenetics? What is the brain-wide representation? And an interesting finding: these are all the different brain regions, color-coded. Here are those three dotted lines that indicate the different phases of the task. Red means more active and blue means less active. You can see that virtually the entire brain is recruited by this very simple task. In fact, more than half of all the neurons we recorded were statistically significantly modulated during performance of this task.
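One simple way to quantify "statistically modulated by the task" is sketched below, with simulated spike counts and an assumed paired test and window choice; the actual criteria used in the study may differ.

```python
import numpy as np
from scipy.stats import wilcoxon

# Illustrative sketch: call a neuron "task-modulated" if its spike count in a
# window after odor onset differs from a pre-odor baseline across trials.
rng = np.random.default_rng(3)
n_neurons, n_trials = 1000, 60
baseline = rng.poisson(5.0, (n_neurons, n_trials))                       # pre-odor counts
task = rng.poisson(5.0, (n_neurons, n_trials)) + rng.integers(0, 2, (n_neurons, 1))

modulated = 0
for b, t in zip(baseline, task):
    if np.any(b != t):                       # wilcoxon needs non-zero differences
        if wilcoxon(b, t).pvalue < 0.01:     # paired test across trials
            modulated += 1
print(f"{modulated}/{n_neurons} neurons significantly task-modulated")
```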
This was an interesting finding in itself, with its own implications, which perhaps we can talk about later. But then the question was: what happens if we appropriately drive, optogenetically, a deep population of thirst neurons? Here we targeted the same pathway we had previously identified using iC++, the engineered chloride-conducting channelrhodopsin, to implicate a particular population of thirst-promoting neurons. And so, by driving the input from the subfornical organ to the MnPO thirst neurons, you can get this kind of behavioral result: here is an animal that was licking at the start cue and is now satiated. And then, when you drive the thirst neurons, this small population deep in the brain, you can restore this survival-drive behavior, licking for water.
So the question is: what is happening in the brain in this context? And the answer, surprisingly, is that it is very similar to the natural state. Throughout the brain, across tens of thousands of neurons... here is the natural, thirsty, licking-for-water state. Here is the satiated state. And here is the optogenetically induced state, which resembles the natural state to a quite remarkable degree. So this is good news, of course, for everyone who uses optogenetics: if you target things properly, you elicit naturalistic dynamics both locally and throughout the brain. And it also raises a lot of interesting questions, analogous to the ones I mentioned about the visual cortex.
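As a minimal illustration of comparing an optogenetically evoked state to the natural one (the actual comparison in the study was more sophisticated than this), one can correlate trial-averaged population vectors; all values below are simulated.

```python
import numpy as np

# Illustrative sketch: does the opto-driven brain-wide state look more like the
# natural thirsty state or the satiated state? Simulated mean responses per neuron.
rng = np.random.default_rng(4)
n_neurons = 24000
natural_thirsty = rng.standard_normal(n_neurons)
natural_sated = 0.1 * rng.standard_normal(n_neurons)
opto_driven = natural_thirsty + 0.3 * rng.standard_normal(n_neurons)

def similarity(a, b):
    # Pearson correlation between two population vectors.
    return np.corrcoef(a, b)[0, 1]

print("opto vs thirsty:", similarity(opto_driven, natural_thirsty))
print("opto vs sated:  ", similarity(opto_driven, natural_sated))
```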
What does it mean for the controllability of the brain? How is it structured to allow small populations of cells to elicit such broad responses? This is clearly a very elegant design, one that can probably go wrong, and that is beautiful when it goes right. And I'll end there. But I'll show this last slide, which, coincidentally, was the first slide that Peter showed at the beginning. I think it's a useful slide to reflect on, because all of the exciting advances we have made in understanding the brain, mammalian behavior, and the behavior of many species and circuits in biology are, in many ways, deeply rooted in botany and in the basic science of studying plants.
I think it's a nice story to keep in mind when thinking about the value, importance, and necessity of supporting basic science. And at the end, I'll take a moment to thank all of my incredibly talented students and postdoctoral fellows; I mentioned many of them along the way. The crystal structure work, the material that came out recently, was led by Yun Kim, a graduate student in my lab, whom I think I forgot to mention at the beginning: extremely talented. Maybe he'll come up here too; we'll see. There were many other very talented students and postdocs along the way. His work was also important for the identification of ChRmine, and the paper on eliciting visual responses in mice was led by Jim Marshel, along with Tim Machado and Sean Quirin in the laboratory. And thanks to all my many collaborators around the world over the years, first of all Peter Hegemann, but also many others, and, again, to the incredible and talented people who worked with me at Stanford, Ed and Feng and many others. These have been wonderful times, with much surprising progress, and it has been a pleasure to share it with you. So thanks.

Wow. Well, thank you very, very much. An incredible afternoon of great science. I want to thank Bernardo for moderating this. I want to recognize, again, the members of the Warren Alpert Foundation. I had forgotten to mention earlier that our former dean, Joe Martin, is here and is also a member of the foundation board.
And finally, I congratulate the winners on this spectacular exhibition of science... thank you very much. And I hope to see you all here again next year.
