Deep Learning State of the Art (2020) | MIT Deep Learning Series

May 02, 2020
Welcome to 2020, and welcome to the Deep Learning Lecture Series. Let's start today by taking a quick tour of all the exciting things that happened in 2017, 2018, and especially 2019, and the amazing things we will see this year in 2020. Part of this series will be talks from some of the most important people in deep learning and artificial intelligence. Today begins with celebrations, from the Turing Award, to limitations and debates, and the exciting growth of the field. First, of course, I step back to a quote I used before. I love it.
I will continue reusing it: AI began not with Alan Turing or McCarthy, but with the ancient wish to forge the gods. That's a quote from Pamela McCorduck's "Machines Who Think". The visualization here shows just three percent of the neurons in our brain's thalamocortical system, that magical thing between our ears that allows us all to see, hear, think, reason, hope, dream, and fear our eventual mortality. All of that is what we wish to understand; that is the dream of artificial intelligence: to understand it, and to recreate versions of it, echoes of it, in the engineering of our intelligent systems. That is the dream we must never forget amid the details of the exciting things I will talk about today; it is the reason all of this is exciting, this mystery that is our mind. The modern human brain, the modern human as we know and love them today, emerged just 300,000 years ago, and the Industrial Revolution was about 300 years ago. That's 0.1 percent of the time since the early modern human, and it's in that sliver that we've seen the rise of machines.
The machine was born not in stories but in reality, engineered since the Industrial Revolution: the steam engine, the mechanized factory system, the machine tools. That's just 0.1 percent of human history, and those 300 years are now distilled into the 60 or 70 years since Alan Turing, arguably the father of artificial intelligence. There has always been a dance in artificial intelligence between dreams and mathematical foundations, and what happens when dreams meet engineering practice and reality. Alan Turing said that by the year 2000 he was sure we would pass the Turing test of natural language. He also said that once the machine thinking method had started, it would not take long to outstrip our feeble powers; machines would be able to converse with each other to sharpen their wits, and at some stage, therefore, we should have to expect the machines to take control, a little shout-out there. So that was the dream, from both the father of the mathematical foundations of artificial intelligence and the father of its dreams, and in the early days that dream seemed to be coming true.
In practice it ran up against the perceptron, which is often thought of as a single-layer neural network, though in fact the less widely known side of Frank Rosenblatt's work is that he also developed the multilayer perceptron. That history has since amazed our civilization. For me, some of the most inspiring moments came in the gaming world: first the great Garry Kasparov losing to IBM's Deep Blue in 1997, then Lee Sedol losing to AlphaGo in 2016, momentous moments that captivated the world; and then engineering real systems, robots on four wheels, as we will talk about today, from Waymo to Tesla and all the autonomous vehicle companies working in the space, and robots on two legs captivating the world with what kind of performance and manipulation can be achieved.
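Rosenblatt's single-layer perceptron learning rule mentioned above can be sketched in a few lines of plain Python. This is just an illustration, not his original hardware or notation; the data, learning rate, and function names are invented for the sketch. It learns logical OR, which is linearly separable, so a single layer suffices (XOR, famously, would not converge, which is part of why multiple layers mattered).

```python
# Minimal sketch of Rosenblatt's perceptron learning rule, from scratch.
# All names and hyperparameters here are invented for illustration.

def predict(weights, bias, x):
    """Threshold activation: fire if the weighted sum clears zero."""
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

def train(samples, epochs=10, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, bias, x)
            # Rosenblatt's update: nudge weights in the direction
            # that reduces the prediction error
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# OR truth table: linearly separable, so a single layer suffices
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(data)
print([predict(w, b, x) for x, _ in data])   # -> [0, 1, 1, 1]
```

Swapping the data for XOR would leave the loop cycling forever without a solution, the limitation that a second layer of weights overcomes.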

The history of deep learning runs from 1943, with the initial neuroscience-inspired models thinking about how to mathematically model neural networks, through the creation, as I said, of Frank Rosenblatt's single-layer and multilayer perceptrons in 1957 and 1962, to the ideas of backpropagation and recurrent neural networks in the '70s and '80s, to convolutional neural networks, LSTMs, and bidirectional RNNs in the '80s and '90s, to the birth of the term deep learning and the new wave of revolution in 2006, to ImageNet and AlexNet, the seminal moment that captured the AI community's imagination about what neural networks can do in the space of images and natural language, to the development and popularization of GANs, generative adversarial networks, a few years later, to AlphaGo becoming AlphaZero in 2016 and 2017, and, as we will talk about, the transformer language models of 2017, 2018, and 2019. Those last few years have been dominated by deep learning ideas in the natural language processing space. OK, celebrations: this year the Turing Award was given for deep learning. This is like deep learning has grown up; we can finally start giving out awards. Yann LeCun, Geoffrey Hinton, and Yoshua Bengio received the Turing Award for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing.
I would also like to add that it was perhaps the popularization in the face of skepticism, the skepticism that neural networks met throughout the '90s, and the continued belief and work in this field in the face of that skepticism, that is part of the reason these three people received the award. But of course the community that contributed to deep learning is much larger than those three, many of whom could be here today at MIT, and generally across academia and industry. Looking at the early key figures: Walter Pitts and Warren McCulloch, as I mentioned, for the computational models of neural networks, the idea that the kind of neural networks we have in our biological brain could be modeled mathematically; then Frank Rosenblatt engineering those models into real physical and conceptual mathematical systems, single-layer in 1957 and, again, multilayer in 1962. Frank Rosenblatt could arguably be called the father of deep learning, the first person to mention, in '62, the idea of multiple hidden layers in neural networks, as far as I know; somebody correct me. But in 1965, and we have the Soviet Union and Ukraine to thank, came the person arguably considered the father of deep learning, Alexey Ivakhnenko, with V.G. Lapa as co-author of that work: the first learning algorithms for multilayer perceptrons, with multiple hidden layers. Then came the work on backpropagation and automatic differentiation in 1970, convolutional neural networks first introduced in 1979, and John Hopfield analyzing recurrent neural networks, what are now called Hopfield networks, a special kind of recurrent network. OK, that is the early birth of deep learning.
I want to mention this because there's been a bit of tension now that we can celebrate the incredible achievements of deep learning. Just as in reinforcement learning, credit assignment is a big problem in academia, and the embodiment of that, almost the target of memes, is the great Jürgen Schmidhuber. I encourage people who are interested in the incredible contributions of the different people in the field to read his overview, "Deep Learning in Neural Networks: An Overview". It is a general survey of all the people who have contributed besides Yann LeCun, Geoffrey Hinton, and Yoshua Bengio. It is a large and beautiful community, full of great ideas and great people.
My hope for this community, given the tension some of you may have seen around this kind of credit assignment problem, is that we have more, not on this slide, but love, there can never be enough love in the world, but in general more respect, open-mindedness, collaboration, and shared credit in the community, and less mockery, jealousy, stubbornness, and academic silos, within institutions and within disciplines. Also in 2019, and it was great to highlight, came the limits of deep learning. This has been the interesting development of the past few years: several books and several articles have been published highlighting that deep learning is not able to do the broad spectrum of tasks that we might think artificial intelligence should do, such as common sense reasoning, such as building knowledge bases, and so on. Rodney Brooks said that by 2020 the popular press will start having stories that the era of deep learning is over, and certainly there have been echoes of that through the press, through the Twittersphere and all that kind of world. I would like to say that a little bit of skepticism, a little bit of criticism, is always really good for the community, but just a little, like a little spice in the soup of progress. Aside from that kind of skepticism, there has been the growth of CVPR, ICLR, NeurIPS.
All of these conferences have grown in papers submitted year over year. There has been a lot of interesting research, some of which I would like to cover today. My hopes for 2020, in this space of deep learning growth, celebrations, and limitations, are less hype and less anti-hype, fewer tweets about how there is too much hype in AI, and more solid research; less criticism and more doing, but again, a little criticism, a little spice, is always good for the recipe; hybrid research; less contentious, counterproductive debates and more open-minded interdisciplinary collaboration across neuroscience, cognitive science, computer science, robotics, mathematics, and physics, all these disciplines working together. The research topics I would love to see more contributions to, some of which we will talk about briefly, are reasoning, common sense reasoning, integrating it into the learning architectures; active learning; multimodal, lifelong, and multitask learning; open-domain conversation, expanding the success in natural language to open-domain dialogue and conversation; and then applications, the most exciting of which, and we will talk about them, are medical and autonomous vehicles; then algorithmic ethics in all its forms, fairness, privacy, bias. There has been a lot of interesting research there.
I hope we continue to take responsibility for the flaws in our data and the flaws in our human ethics, and then robotics. In terms of deep learning applications in robotics, I would love to see a lot of continued development in deep reinforcement learning applications and robot manipulation. By the way, there may be a little time for questions at the end; if you have a really urgent question, you can ask it along the way. Two questions so far, thank goodness. OK, first the practical stuff: the deep learning and deep RL frameworks. This has really been the year when the frameworks have matured and converged. The popular deep learning frameworks that people use, TensorFlow and PyTorch, with TensorFlow 2.0 and PyTorch 1.3 being the latest versions, have converged toward each other, each taking the best features and removing the weaknesses of the other, so the competition has been really fruitful in a certain sense for the development of the community. On the TensorFlow side, eager execution, imperative programming, the kind of programming people are used to in Python, has been deeply integrated, made easy to use, and made the default.
On the PyTorch side, TorchScript allowed for graph representation, doing what you were used to being able to do, and what used to be the default mode of operation, in TensorFlow: having this intermediate representation in graph form. On the TensorFlow side, there is the deep integration of Keras and its promotion to first-class citizen, the default API for the way you interact with TensorFlow, allowing complete beginners, anyone outside machine learning, to use TensorFlow with just a few lines of code to train and do inference with a model. That is really exciting, together with the cleaned-up API, documentation, and so on. And of course the maturation of TensorFlow.js running in the browser, TensorFlow Lite being able to run on mobile phones, and TensorFlow Serving; apparently this is something industry cares a lot about, being able to efficiently serve models in the cloud. PyTorch has caught up with TPU support and experimental versions of PyTorch Mobile to be able to run on your smartphone, so on their side too, this is a tense and exciting competition. And I almost forgot to mention that we had to say goodbye to our favorite, Python 2. This is the year support finally ended: on January 1, 2020, support for Python 2, and TensorFlow's and PyTorch's support for Python 2, ended. So goodbye, print statement; goodbye, cruel world. OK, on the reinforcement learning front, we are in pretty much the same space as the JavaScript world of libraries: there are no clear winners. If you are a beginner in the space, the one I recommend is a fork of OpenAI Baselines called Stable Baselines, but there are many interesting ones.
Some are built in TensorFlow, some are built in PyTorch, of course, from Google, from Facebook, from DeepMind: Dopamine, TF-Agents, TensorForce. Most of these I have used; if you have specific questions I can answer them. So Stable Baselines is the OpenAI Baselines fork. As I said, it implements many of the basic deep RL algorithms, PPO and so on, has good documentation, and allows a very simple, minimal, few-lines-of-code implementation of the basic algorithms against the OpenAI Gym environments. That is the one I recommend. OK, for the framework world, my hope for 2020 is framework-agnostic research. One of the things I mentioned is that PyTorch has now almost surpassed TensorFlow in popularity in the research world.
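To make concrete what these RL libraries standardize, here is a dependency-free toy sketch of the agent/environment loop, with tabular Q-learning standing in for the deep RL algorithms they actually implement. Everything here, the corridor environment, the hyperparameters, the names, is invented for the sketch; Stable Baselines and Gym wrap exactly this reset/step/update pattern, just with neural networks and real environments.

```python
import random

# Toy agent/environment loop in the Gym style: tabular Q-learning on a
# tiny 1-D corridor. All names and hyperparameters are made up here.

class Corridor:
    """States 0..4; start at 0; reward +1 for reaching state 4."""
    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):                  # 0 = left, 1 = right
        self.state = max(0, min(4, self.state + (1 if action else -1)))
        done = self.state == 4
        return self.state, (1.0 if done else 0.0), done

def train(episodes=2000, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    random.seed(seed)
    env = Corridor()
    # Optimistic initialization encourages trying both actions early on
    q = {(s, a): 1.0 for s in range(5) for a in (0, 1)}
    for _ in range(episodes):
        s, done = env.reset(), False
        for _ in range(100):                 # cap episode length
            a = random.choice((0, 1)) if random.random() < epsilon \
                else max((0, 1), key=lambda act: q[(s, act)])
            s2, r, done = env.step(a)
            # standard one-step Q-learning update
            target = r if done else r + gamma * max(q[(s2, 0)], q[(s2, 1)])
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = s2
            if done:
                break
    return q

q = train()
# The learned greedy policy should be "go right" in every non-terminal state.
policy = [max((0, 1), key=lambda act: q[(s, act)]) for s in range(4)]
print(policy)
```

With a library like Stable Baselines, the entire `train` function above collapses to a couple of lines, which is exactly the appeal.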
What I would love to see is to be able to develop an architecture in TensorFlow or in PyTorch, and then, once you train the model, to be able to easily transfer it to the other, from PyTorch to TensorFlow and from TensorFlow to PyTorch. Currently that takes three, four, five hours, if you know what you are doing, in both languages. It would be nice if there was a very easy way to do that transfer. And then the maturation of the deep RL frameworks: I would love to see OpenAI and DeepMind step up and actually bring some of these frameworks to a maturity that we can all agree on, much like OpenAI Gym has done for environments. And the continued work that Keras started, and many other wrappers around TensorFlow, of bigger and bigger abstractions that allow more and more people to use machine learning.
Outside the field of machine learning, I think the powerful thing about basic supervised learning is that people in biology, chemistry, neuroscience, physics, and astronomy can use it on the enormous amounts of data they are working with, without needing to learn any of the details, or even Python. So I would love to see bigger and bigger abstractions that empower scientists outside the field. OK, natural language processing. In 2017 and 2018 the transformer was developed, and its power was demonstrated especially by BERT, which achieved many state-of-the-art results on many language benchmarks, from sentence classification to tagging to question answering. Hundreds of datasets and benchmarks emerged, most of which BERT mastered, in 2018. 2019 was the year the transformer really blew up in terms of all the different variations, again starting from BERT: XLNet, and it's cool to use BERT in the name of your new spin on the transformer, RoBERTa, DistilBERT from Hugging Face, Salesforce's CTRL, OpenAI's GPT-2 of course, ALBERT, and NVIDIA's Megatron, a huge transformer. Some tools have emerged too: one is Hugging Face, a company and also a repository that has implemented, in both PyTorch and TensorFlow, many of these transformer-based natural language models. That is really exciting, because most people here can just easily use already pre-trained models. The other exciting thing is that Sebastian Ruder, a big researcher in the field of natural language processing, has put together NLP-Progress, which tracks all the different benchmarks for all the different natural language tasks, kind of leaderboards of who is winning where. OK, I'll mention a few models that stand out from this year's work. NVIDIA's Megatron-LM is basically taking, I think, the GPT-2 transformer model and putting it on steroids: 8.3 versus 1.5 billion parameters. There are a lot of interesting things there, as you would expect from NVIDIA; always brilliant engineering, of course, but also interesting aspects of how to parallelize the training, model and data parallelism in training, with breakthrough results in terms of performance.
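At the core of every transformer variant just listed is scaled dot-product attention. Here is a dependency-free toy sketch of that one operation, with vectors as plain Python lists; a real implementation is batched, multi-headed, learned, and runs on tensors, and the example vectors below are invented purely for illustration.

```python
import math

# Toy scaled dot-product attention: for each query, mix the values,
# weighted by softmax(q . k / sqrt(d)) over all keys.

def softmax(xs):
    m = max(xs)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    d = len(keys[0])                 # key dimensionality, for the scaling
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)    # weights are positive and sum to 1
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# One query attending over three key/value pairs: the query matches the
# first key most strongly, so the output leans toward the first value.
Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0], [0.0, -1.0]]
V = [[1.0, 0.0], [0.0, 1.0], [0.0, -1.0]]
print(attention(Q, K, V))
```

Stacking this operation with learned projections, many heads, and feed-forward layers is essentially what separates BERT, GPT-2, and their relatives from this twenty-line core.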
The model that replaced BERT as king of transformers was XLNet, from CMU and Google Research. They combined the bidirectionality of BERT with the relative positional embeddings and the recurrence mechanism of Transformer-XL, and by combining bidirectionality and recurrence achieved state-of-the-art performance on 20 tasks. ALBERT is a recent addition from Google Research that significantly reduces the number of parameters compared to BERT by sharing parameters across layers, and has achieved state-of-the-art results on 12 NLP tasks, including the difficult SQuAD 2.0 Stanford question answering benchmark. They provide an open source TensorFlow implementation that includes a number of ready-to-use pre-trained language models, another way in for people who are completely new to this field.
A bunch of fun applications have been built with transformers. One of them, Write With Transformer from Hugging Face, is an app that lets you explore the capabilities of these language models, and I think they are pretty fascinating from a philosophical point of view. This has actually been at the heart of much of the tension: how much do these transformers, which basically memorize the statistics of language in a self-supervised way by reading a lot of text, actually understand? Is that really understanding? A lot of people say no, until it impresses us, and then we all say it's obvious. But Write With Transformer is a really powerful way to generate text and reveal to you how much these models actually learn. To explore this, I came up with a bunch of prompts yesterday; so on the left is a prompt about the meaning of
life. Here, for example: the meaning of life is not what I think it is, it is what I do to achieve it. You can make many prompts of this nature; some completions are very deep and some will be simply absurd, statistically plausible but absurd, revealing that the model doesn't really understand the fundamentals of the prompt being provided. But at the same time it's amazing what kind of text it is capable of generating, probing the limits of deep learning. I was having fun with this; at this point we're still in the process of figuring all of it out.
It's very true. I asked it who the most important person in the history of deep learning is; it said probably Andrew Ng, and I have to agree, so this model knows what it's doing. And I tried to get it to say something nice about me; it took a lot of tries, so this is kind of funny, but one finally did it. I prompted it for my best qualities and it said that he's smart, but then added that he gets most of his attention from all the Twitter comments, and that is very true. OK, it's a good way to reveal through this that the models are not really able to understand language; they only complete prompts in ways that appear to show understanding of concepts, without being able to reason with those concepts. On common sense reasoning trivia: asked what 2 + 2 is, it says 3, or 5, or 6, or 7; asked the result of the simple equation 4 + 2 + 3, it gets it right and then changes its mind; 2 minus 2 is 7; and so on. You can probe any kind of reasoning: you can ask it about blocks, about gravity, all that kind of stuff, and it shows it doesn't understand the fundamentals of the concepts being reasoned about. I'll mention work in the next few slides that tries to go beyond this toward reasoning, but I should also mention the GPT-2 model here.
If you remember, about a year ago there was a lot of thinking around this 1.5-billion-parameter model from OpenAI. The idea was that it could be so powerful that it would be dangerous. So the thought experiment at OpenAI was: when you have an AI system that you are about to release that could be dangerous, in this case perhaps used by bad actors to generate fake news and disinformation at scale, how do you release it? And while it turned out that GPT-2 is not that dangerous, that humans are actually currently more dangerous than AI, that thought experiment is very interesting. They published a report, on release strategies and the social impacts of language models, that got almost none of the attention I think it should have, and it was a little disappointing to me how little people care about this kind of question. There is more attention on:
"Oh, these language models aren't as smart as we thought they'd be." But the reality is that once they are, this is a very interesting thought experiment about what the process should be for companies and experts to communicate with each other during a release like that. What I learned from reading the report this year, and from that whole event, is that the conversation on this topic is difficult, because we as a public seem to penalize anyone who tries to have that conversation, and that for sharing models privately, confidentially, between machine learning organizations and experts, there is currently no incentive, no model, no history, no culture of sharing.
OK, the best paper award at ACL, the main conference for natural language processing, went to work on a difficult task: we talked about language models, and the next step beyond them is dialogue, multi-domain task-oriented dialogue, which is the next challenge for dialogue systems. They had some ideas on how to track the dialogue state across domains, achieving state-of-the-art performance on MultiWOZ, a five-domain, human-to-human dialogue dataset that is quite challenging. There are some interesting ideas there. I should probably hurry up and start skipping things. Common sense reasoning, which is really interesting, is one of the open questions for the deep learning community.
The broader question is how we can have hybrid systems, whether of symbolic AI and deep learning, or generally common sense reasoning combined with learning systems, and there have been a few papers in this space. My favorite, from Salesforce, builds a dataset where we can start doing question answering while also discovering the common sense concepts being explored in the question. The question here: while eating a hamburger with friends, what are people trying to do? Multiple choice: have fun, tasty, or indigestion. The idea is that a common sense explanation should be generated, and that is where a language model comes in: usually a hamburger with friends indicates a good time. So basically you take the question, you generate the common sense concept, and from that you can determine the multiple choice answer, what is happening, what is the state of things in this particular question. OK?
I'll also mention the Alexa Prize, which again has not gotten as much attention as I think it should have, maybe because there haven't been any major breakthroughs, but it is an open-domain conversation competition that all of us, anyone with an Alexa, can participate in as data providers. There has been a lot of incredible work from universities around the world on the Alexa Prize in recent years, and many interesting lessons summarized in papers and blog posts. Some lessons from the Alquist team, which I particularly like, in a way echo the work of IBM Watson on the Jeopardy challenge. One of the most important is that machine learning is currently not an essential tool for effective conversation. Machine learning is useful for general chit-chat, when you fail at deep, meaningful conversation, and for actually understanding what is being talked about: intent classification, entity extraction, detecting the sentiment of sentences. Those are helping tools, but the fundamentals of the conversation are the following. First, break it down.
You can think of a conversation as a long dance, and the way you have fun dancing is to break it down into a series of moves and turns and so on and focus on each, a kind of living-in-the-moment thing: focus on small parts of the conversation taken one at a time. Then also have a conversation graph. Conversation is also about tangents, so maintain a graph of topics and be prepared to jump context from one topic to another and back again. If you look at some of the natural language conversations they publish, they are all over the place in terms of topics; you jump back and forth, and that's the beauty, the humor, the wit, the fun of conversation. And opinions: one of the things natural language systems don't seem to have much of is opinions. If I've learned anything, one of the simplest ways to convey intelligence is to be very opinionated about something and confident, and that's a really interesting concept. In general there are just a lot of lessons. Ah, and finally, of course, maximize entertainment, not information. This is true for autonomous vehicles, and this is true for natural language conversation: it's fun, and it should be part of the objective function. OK, there are many lessons to be learned. This is really the jackpot, the Turing test of our generation; I'm excited to see if anyone is able to solve the Alexa Prize.
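The "topic graph plus tangents" idea above can be sketched in miniature: a graph of related topics plus a stack of suspended contexts, so a dialogue manager can follow a tangent and later return to where it was. This is purely illustrative; every class, method, and topic name here is invented, and real Alexa Prize systems are far richer.

```python
# Toy sketch of a topic graph with context jumping for dialogue.
# All names here are invented for illustration.

class TopicGraph:
    def __init__(self, edges):
        # edges: {"travel": ["geography", "food"], ...}
        self.edges = edges
        self.current = None
        self.suspended = []          # stack of contexts we tangented from

    def start(self, topic):
        self.current = topic

    def tangent(self, topic):
        """Jump to a related topic, remembering where we came from."""
        if topic in self.edges.get(self.current, []):
            self.suspended.append(self.current)
            self.current = topic
            return True
        return False                 # unknown tangent: stay on topic

    def resume(self):
        """Pop back to the most recently suspended topic."""
        if self.suspended:
            self.current = self.suspended.pop()
        return self.current

graph = TopicGraph({"travel": ["geography", "food"],
                    "geography": ["population"]})
graph.start("travel")                # "Have you been to Brazil?"
graph.tangent("geography")           # "What's the population of Brazil?"
graph.resume()                       # "Anyway, have you been to Brazil?"
print(graph.current)                 # back on "travel"
```

The stack is what lets the bot answer the tangent and still come back, which is exactly the difference between the two example conversations discussed below.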
In the Alexa Prize, the task is to talk with a socialbot, and the measure of quality is not, as with the Loebner Prize, just how good the conversation was; the task is also to try to continue the conversation for 20 minutes. If you try to talk to a bot today, and you have the option of talking to the bot or doing something else, like watching Netflix, within 10, probably fewer, seconds you would be bored. The point is to keep you engaged in the conversation, because you are enjoying it so much, for a full 20 minutes.
It's a really good benchmark for capturing the spirit of what the Turing test represented. Here are examples from the Alexa Prize showing the difference between two kinds of conversations. The bot says, have you been to Brazil? The user asks, what is the population of Brazil? The bot says it's around 20 million, and the user says, well, OK. This is what happens a lot with systems like this in cross-domain conversation: once you jump to a new domain you stay there; once you switch context you stay there. In reality you want to go back, to keep jumping, as in the second, more successful conversation: have you been to Brazil?
What's the population of Brazil? It's around 20 million; anyway, as I was saying, have you been to Brazil? So the bot goes back in context. That's how conversation goes: you change context, change again, and come back quickly. OK, there has been a lot of sequence-to-sequence work using natural language to summarize, with many applications. The one I wanted to highlight, from Technion, which I find particularly interesting, is abstract-syntax-tree-based code summarization: modeling computer code, in this case, unfortunately, Java and C#, as syntax trees, and then operating on those trees to produce the summary in text. In the example here, for code at the bottom right in Java that computes a power, the summary says "get power of two". That is an interesting possibility for automated documentation of source code; I thought it was particularly exciting. Looking into the future, the big hopes for 2020 for natural language processing are: reasoning, common sense reasoning, becoming an increasing part of this kind of language model work we see in the deep learning world; expanding the context from hundreds or thousands of words to tens of thousands of words, being able to read entire stories and maintain the context, something that transformers, again with XLNet and Transformer-XL, are beginning to be able to do, though we are still far from that long-term maintenance of context; dialogue, open-domain dialogue, ever since Alan Turing to today the dream of artificial intelligence, being able to pass the Turing test; and the dream of this kind of transformative natural language model, self-supervised learning. Yann LeCun's dream is to rename these kinds of approaches, which used to be called unsupervised, to self-supervised learning: systems that can watch YouTube videos and from that start to form representations on the basis of which the world can be understood.
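The AST-based summarization idea above can be illustrated with Python's standard-library `ast` module, in Python rather than the Java/C# of the Technion work, and with a trivial hand-written heuristic standing in for the learned model: parse source into a syntax tree, then walk the tree and turn the function name and its structure into a short natural-language-ish summary. The sample function and all helper names here are invented for the sketch.

```python
import ast
import re

# Toy "AST -> summary" pipeline: parse code into a syntax tree, walk it,
# and emit a summary. A learned model would replace the heuristic below.

SOURCE = """
def get_power_of_two(n):
    result = 1
    for _ in range(n):
        result = result * 2
    return result
"""

def split_identifier(name):
    """snake_case or camelCase -> list of lowercase words."""
    parts = re.split(r"_|(?<=[a-z])(?=[A-Z])", name)
    return [p.lower() for p in parts if p]

def summarize(source):
    tree = ast.parse(source)
    func = next(n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef))
    words = split_identifier(func.name)
    # A structural feature read straight off the tree: does it loop?
    has_loop = any(isinstance(n, (ast.For, ast.While)) for n in ast.walk(func))
    summary = " ".join(words)
    if has_loop:
        summary += " (iterative)"
    return summary

print(summarize(SOURCE))   # -> "get power of two (iterative)"
```

The real models learn to generate such summaries from paths through the tree rather than from the function name, which is what makes them useful on poorly named code.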
The hope for 2020 and beyond is to be able to transfer some of the success of transformers to the world of visual information, the world of video, for example. OK, deep reinforcement learning and self-play: games and robotics. This has been an exciting year, and continues to be an exciting time, for reinforcement learning in games and robotics. First, Dota 2 and OpenAI. Dota 2 is an exceptionally popular competitive e-sports game in which people compete for millions of dollars, so there are many world-class professional players. In 2018, OpenAI Five, this being a five-on-five team game, gave its best shot at The International and lost, saying, we look forward to taking Five to the next level, which they did: in April 2019 they beat the 2018 world champions in five-on-five games. The key was 8x more training compute, because the compute available at any given time was already maxed out.
The way they achieved the 8x is over time, by simply training for longer. The current version of OpenAI Five, which Jakub will talk about next Friday, has consumed 800 petaflop/s-days and experienced about 45,000 years of Dota self-play over 10 months of real time. Again, behind many of the game systems we talk about is self-play: systems that learn by playing against each other, gradually improving over time. It is one of the most interesting concepts in deep learning: you start out terrible and get better and better and better, and you are always challenged by a slightly better opponent, due to the natural progression of self-play. It is a fascinating process. The 2019 version, the latest version of OpenAI Five, has a 99.9 percent win rate against the 2018 version. OK, DeepMind in parallel has also been working on using self-play to solve some of these multi-agent games. This is a really difficult space, where agents have to collaborate as part of the competition; it is exceptionally difficult from a reinforcement learning perspective. And this is from raw pixels: the capture-the-flag game in Quake III Arena. One of the things I love, just as a side note, is how OpenAI and DeepMind papers on reinforcement learning always have a paragraph or two of philosophy, in this case from DeepMind, about the billions of people.
They inhabit the planet, each with their own individual goals and actions, but are still capable of coming together through teams, organizations, and societies in impressive displays of collective intelligence. This is a setting we call multi-agent learning: many individual agents must act independently, yet learn to interact and cooperate with other agents. It is an immensely difficult problem because, with co-adapting agents, the world is constantly changing. The fact that we, seven billion people on Earth, people in this room, in families, in villages, can collaborate while being largely self-interested agents is fascinating. Actually, one of my hopes for 2020 is to explore the social behaviors that emerge in reinforcement learning agents and how they map onto real person-to-person social systems.
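The self-play loop behind these systems can be sketched at a cartoon level as follows. This is a toy illustration, not OpenAI's or DeepMind's actual training code: the Agent class, the scalar skill value standing in for a policy network, and the update rule are all hypothetical stand-ins.

```python
import random

class Agent:
    """Toy agent: a scalar skill value stands in for a real policy network."""
    def __init__(self, skill=0.0):
        self.skill = skill

    def clone(self):
        return Agent(self.skill)

def play_match(a, b):
    """Return True if agent a wins; higher skill wins more often."""
    margin = a.skill - b.skill
    return random.random() < 1.0 / (1.0 + 2.0 ** (-margin))

def self_play_training(generations=50, matches_per_gen=100):
    """Train an agent by repeatedly playing past versions of itself."""
    current = Agent()
    past_versions = [current.clone()]
    for _ in range(generations):
        wins = 0
        for _ in range(matches_per_gen):
            opponent = random.choice(past_versions)  # sample a past self
            if play_match(current, opponent):
                wins += 1
        # stand-in for a gradient update: improve in proportion to win rate
        current.skill += wins / matches_per_gen
        past_versions.append(current.clone())  # future rounds face this version
    return current
```

Each generation the agent faces a pool of its past selves, so it is always challenged by opponents of comparable strength; the skill value only grows when matches are actually won, mimicking the gradual improvement described above.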
Here are some visualizations. The agents, as in other games, automatically discover the concepts: knowing very little, nothing about the rules of the game, about the objective of the game, about strategy and behavior, they are able to solve it. These are t-SNE visualizations of the different states, the important states and concepts in the game that the system discovers. Jumping forward, this automatic discovery of different behaviors happens across all the games we have talked about, from Dota to StarCraft II to Quake: different strategies you never specify are discovered automatically. It is really exciting work in terms of multi-agent
RL. On the DeepMind side, AlphaStar was beating world-class players and reaching Grandmaster level in the game of StarCraft. In December 2018, AlphaStar beat MaNa, one of the strongest professional StarCraft players in the world, but that was in a very restricted setting, with only one race (Protoss, I believe). In 2019, AlphaStar reached Grandmaster level by doing what we humans do: using a camera to observe the game and playing against other humans. So this is not an artificially restricted system; it goes through exactly the same process a human would undertake and reached Grandmaster, which is the highest level.
It's really impressive. I encourage you to look at the blog posts and videos describing the different strategies AlphaStar is able to discover. Here is a quote from one of the professional StarCraft players, and we saw the same with AlphaZero in chess: AlphaStar is an intriguing and unorthodox player, one with the reflexes and speed of the best professionals but with strategies and a style entirely its own. The way AlphaStar was trained, with agents competing against each other in a league, has resulted in gameplay that is unimaginably unusual; it really makes you question how much of the possibility space
professional players have really explored. That is what is really exciting about reinforcement learning agents in chess, in games, and hopefully in simulated systems in the future: they teach us, they teach experts who believe they understand the dynamics of a particular game or a particular simulation, new strategies and new behaviors to study. That is one of the interesting applications, almost from a psychological perspective, that I would love to see reinforcement learning pushed toward. Then there is the imperfect-information side of games: poker. In 2018, CMU's Noam Brown and colleagues were able to beat professional players head-to-head in No Limit Texas Hold'em, and now in six-player No Limit Texas Hold'em. Many of the same kinds of results, the same approaches: iterative Monte Carlo methods and self-play, plus a lot of interesting ideas around abstractions. There are so many possibilities under imperfect information that you have to form abstraction bins, both action abstractions to reduce the action space and information abstractions over the probabilities of all the different hands the opponents could hold and all the different hands the betting strategies could represent. You do a kind of coarse planning: use self-play to generate a coarse blueprint strategy offline, and then in real time use Monte Carlo search to adjust as you play. Unlike the DeepMind and OpenAI approaches, very little compute is required, and they were able to beat world-class players.
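The algorithmic core behind these poker results is counterfactual regret minimization; its simplest building block, regret matching, can be sketched in self-play on rock-paper-scissors. This is a toy stand-in chosen for brevity, not the Pluribus implementation, which layers abstraction and search on top of ideas like this.

```python
import random

ACTIONS = 3  # rock, paper, scissors
# PAYOFF[a][b]: reward for playing action a against action b
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]

def strategy_from_regrets(regrets):
    """Play each action in proportion to its positive accumulated regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total > 0:
        return [p / total for p in positive]
    return [1.0 / ACTIONS] * ACTIONS  # uniform when no positive regret

def train(iterations=20000):
    """Two regret-matching players in self-play; return the average strategy."""
    regrets = [0.0] * ACTIONS
    opp_regrets = [0.0] * ACTIONS
    strategy_sum = [0.0] * ACTIONS
    for _ in range(iterations):
        strat = strategy_from_regrets(regrets)
        opp_strat = strategy_from_regrets(opp_regrets)
        a = random.choices(range(ACTIONS), strat)[0]
        b = random.choices(range(ACTIONS), opp_strat)[0]
        # regret: how much better each alternative action would have done
        for alt in range(ACTIONS):
            regrets[alt] += PAYOFF[alt][b] - PAYOFF[a][b]
            opp_regrets[alt] += PAYOFF[alt][a] - PAYOFF[b][a]
        for i in range(ACTIONS):
            strategy_sum[i] += strat[i]
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]
```

The average strategy converges toward the mixed Nash equilibrium (roughly a third on each action), which is exactly the kind of consistent, well-executed mixed strategy the professionals commented on.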
Again, I like to give quotes from professional players after they have been beaten. Chris Ferguson, the famous poker player, said: Pluribus (that is the agent's name) is a very hard opponent to play against; it is really hard to pin him down on any kind of hand. He is also very good at making thin value bets on the river, very good at extracting value from his good hands without scaring off the opponent. Darren Elias said: its main strength is its ability to use mixed strategies; that is the same thing humans try to do, but it is a matter of execution for humans to do it in a perfectly random way and to do it consistently, and most people just can't. Then, in the robotics space, there have been many applications of reinforcement learning, and one of the most exciting is manipulation: manipulation dexterous enough to solve a Rubik's cube. Again, this is learned through reinforcement learning. Because self-play is not possible in this context, they use automatic domain randomization (ADR), generating progressively more difficult environments for the hand. In the video there is a plush giraffe poking at the hand; they perturb the system a lot and inject a lot of noise in order to teach the hand to manipulate the cube under any conditions and then solve it.
Computing the solution itself, figuring out how to get from a particular cube state to the solved cube, is the easy, solved problem; the paper in this work focuses on learning to manipulate the cube, which is much harder. It's really exciting. Again, a bit of philosophy, as you would expect from OpenAI: they have this idea of emergent meta-learning. The capacity of the neural network that is learning the manipulation is constrained, while ADR, automatic domain randomization, keeps making the environment progressively more difficult, so the environment's capacity for difficulty is unconstrained; because of that, there is an emergent pressure on the neural network to learn general concepts rather than memorize particular manipulations.
My hope for the deep reinforcement learning space in 2020 is the continued application to robotics, legged robotics but also manipulation, and the use of multi-agent self-play, as I mentioned, to explore naturally emerging social behaviors: building simulations of social behavior and seeing what kinds of multi-human behaviors emerge in self-play settings. I think that is one of the exciting possibilities; I hope one day there will be a self-play reinforcement-learning psychology department, where reinforcement learning is used to study, to reverse-engineer, human behavior. And again, in games,
I'm not sure what grand challenges remain, but I would love to see more; at least it is exciting to see solutions learned through self-play. Next, the science of deep learning. I would say there have been a lot of really interesting developments here that deserve their own lecture; I will mention just a few. The first, from MIT in early 2018, though it sparked a lot of follow-up interest in 2019, is the lottery ticket hypothesis. This work showed that subnetworks, small subnetworks within the larger network, are the ones doing all the thinking: you can achieve the same accuracy with a small subnetwork drawn from within a neural network, and there is a very simple iterative process for arriving at that subnetwork. Randomly initialize the network
(this initialization is the lottery ticket), train the network until it converges, then iterate: prune the fraction of the network with the lowest-magnitude weights, reset the weights of the remaining network to the original initialization (the same lottery ticket), retrain the pruned network, and continue this loop. You arrive at a network that is much smaller, using the same original initialization. It is fascinating that within these large networks there is often a much smaller network that can achieve the same accuracy. In practical terms, it is not yet clear what the big takeaways are, except the inspiring conclusion that much more efficient architectures exist, so it is worth investing time in finding them. Next, disentangled representations, which again deserve their own lecture. Here a representation of, say, ten vector elements is shown, and the goal is that each element of the vector learns one particular concept about a data set. The dream of unsupervised learning is to learn compressed representations where everything is disentangled, so that fundamental concepts about the underlying data can be extracted across data sets. The best-paper award at ICML 2019 went to work showing that this is impossible in general: unsupervised learning of disentangled representations is impossible without inductive biases, and so the suggestion is that the biases you use should be made as explicit as possible.
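Circling back to the lottery ticket procedure above, the iterative magnitude-pruning loop can be sketched as follows. This is a schematic in NumPy on a bare weight matrix: train_fn stands in for training the masked network to convergence, and the round count and pruning fraction are illustrative, not the paper's exact settings.

```python
import numpy as np

def lottery_ticket_prune(init_weights, train_fn, rounds=5, prune_frac=0.2):
    """Train, prune the smallest-magnitude weights, rewind the survivors to
    their ORIGINAL initialization (the 'lottery ticket'), and repeat."""
    mask = np.ones_like(init_weights)
    weights = init_weights.copy()
    for _ in range(rounds):
        weights = train_fn(weights) * mask       # train the masked network
        magnitude = np.abs(weights)
        magnitude[mask == 0] = np.inf            # already-pruned weights stay pruned
        k = int(prune_frac * int(mask.sum()))    # prune a fraction of the survivors
        cutoff = np.sort(magnitude[mask == 1])[k]
        mask[magnitude < cutoff] = 0
        weights = init_weights * mask            # rewind to the original init
    return weights, mask
```

After a few rounds only a small subnetwork survives, and crucially its weights are the original initialization values, which is what makes it a winning ticket rather than just a pruned network.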
The open problem is to find good inductive biases that foster disentanglement and model selection that works across multiple data sets. There are many more interesting papers, but one of the most intriguing ideas is double descent, which OpenAI has extended and explored empirically in the deep network context: as we increase the number of parameters in a neural network, the test error initially decreases, then increases, and then, just as the model becomes able to fit the training set, undergoes a second descent. So it decreases, increases, and decreases again, and there is a critical region right where the model just barely fits the training set perfectly. The OpenAI work shows this applies not only to model size but also to training time and data set size.
It remains mostly an open problem to understand this phenomenon and to leverage it to optimize the training dynamics of neural networks; there are a lot of really interesting theoretical questions there. My hope for the science of deep learning in 2020 is continued exploration of the fundamentals: model selection, training dynamics, and the performance characteristics of networks (memory, speed, accuracy) with respect to architectural choices. Much of the fundamental work of understanding neural networks is still ahead of us. Two quick areas, each of which could have had its own section and has a lot of exciting papers:
The first is graphs. Graph neural networks are a really interesting area of deep learning; graph convolutional neural networks are really useful for solving combinatorial problems, for recommender systems, for any problem that can fundamentally be modeled as a graph and then solved, or at least aided, by neural networks over that graph. There is a lot of exciting work there. The second is Bayesian deep learning. Bayesian neural networks have been around for several years; it is very difficult to train large Bayesian networks, but in contexts where you can, with small data sets, they are extremely useful because they provide uncertainty measurements alongside their predictions. That is a powerful capability of Bayesian neural networks, as is incremental, online learning, and there are a lot of really good papers there. Okay, autonomous vehicles. Let me try to use as few sentences as possible to describe this section. It is one of the most exciting areas of real-world applications of AI and learning today, and I think it is the place where AI systems touch humans who know nothing about AI the most: hundreds of thousands, soon millions, of cars, robots really, will be interacting with humans. It is a really exciting area and a really difficult problem, and there are two approaches. One is Level 2, where the human is fundamentally responsible for supervising the AI system; the other is Level 4, where, at least in the dream, the AI system is responsible for its actions and the human does not need to supervise.
Two companies represent each of these approaches and are leading the way. Waymo: as of October 2018, ten million miles driven; this year they have reached twenty million miles driven, plus ten billion miles in simulation. I have had the opportunity to visit them in Arizona; they are doing some really exciting work, and they are obsessed with testing. The scale of testing is incredible: twenty thousand classes of structured tests, putting the system through every kind of scenario the engineers can think of or that appears in the real world. And they have started road tests with real consumers without a safety driver, which, if you don't know, means the car is truly responsible; there is no human backup. On the Tesla side, there are seven to eight hundred thousand Tesla Autopilot systems, meaning systems that are supervised by humans.
A multi-headed, multi-task neural network is used to perceive, predict, and act in this world, so it is a really exciting large-scale, real-world deployment of neural networks, and it is fundamentally a deep learning system: unlike Waymo, where deep learning is the icing on the cake, for Tesla deep learning is the cake, at the center of both perception and action. An estimated two billion-plus miles have been driven on Autopilot, and that continues to grow quickly. I'll briefly mention what I think is a super exciting idea in every real-world application of machine learning: online, iterative, active learning.
Andrej Karpathy, the head of Autopilot AI, calls this the data engine. It is the iterative process of having a neural network perform a task, discovering the edge cases, searching for other, similar edge cases, annotating them, retraining the network, and repeating this loop continuously. This is what every company that seriously uses machine learning does, yet there are very few publications in this space of active learning. But it is the fundamental problem of machine learning: not creating a brilliant neural network, but creating a dumb neural network that continually learns and improves until it is brilliant. That process is especially interesting when you take it beyond single-task learning. Most papers address single-task learning: take any benchmark (in the driving case, object detection, landmark detection, trajectory generation, and so on), each has its benchmark, and you can train a separate neural network for each. The fascinating challenge is multi-task learning, using a single neural network that performs all those tasks together, where you reuse parts of the network to learn things that are coupled and other parts to learn things that are completely independent, while running the continuous active-learning loop over all of it. Inside companies (in Tesla's case, that is exactly how it is done) there are real human beings responsible for particular tasks; they have become experts in a particular perception task, experts in a particular planning task, and so on.
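The data engine loop just described can be sketched as uncertainty-based active learning. Everything named here (the model interface, the confidence threshold, the annotate and retrain functions) is a hypothetical stand-in, not Tesla's actual pipeline:

```python
def data_engine_loop(model, unlabeled_stream, annotate, retrain,
                     rounds=5, confidence_threshold=0.3, batch=1000):
    """Iteratively mine edge cases by model uncertainty and retrain on them."""
    labeled = []
    for _ in range(rounds):
        candidates = []
        for _ in range(batch):
            x = next(unlabeled_stream)
            # flag inputs the model is unsure about as potential edge cases
            if model.confidence(x) < confidence_threshold:
                candidates.append(x)
        labeled.extend(annotate(candidates))  # human experts label the edge cases
        model = retrain(model, labeled)       # the network improves where it was weakest
    return model
```

The human expert's role lives in the annotate step (labeling the mined edge cases) and in deciding what counts as uncertain for their task, which matches the division of labor described above.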
The job of that expert is both to train the neural network and to discover the edge cases that maximize the improvement of the network; that is where human expertise comes in. There is a lot of debate, and it is an open question, about which approach will be successful. One is the fundamentally learning-based approach, as with the Level 2 Tesla Autopilot system, which learns all the different tasks involved in driving and, as it gets better and better, requires less and less human supervision. The advantage of that approach is that camera-based systems capture the highest-resolution information, so it is very amenable to learning; the disadvantages are that it requires a huge amount of data, and nobody knows how much, and that there is the human-psychology question of driver behavior, whether the human can remain vigilant. The other is the Level 4 approach, which leverages, in addition to cameras and radar, lidar and maps.
The advantages are that, with lidar and maps, you get a reliable, much more consistent, more explainable system: detection is accurate, depth estimation of the different objects is much more accurate, and far less data is needed. The downsides are that it is expensive, at least for now; that it is less amenable to learning methods, because there is much less data and lower resolution; and that it will likely require, at least for now, some fallback, whether a safety driver or teleoperation. The open questions for the deep-learning, Level 2, Tesla Autopilot approach: how hard is driving? This is really the open question for most disciplines in artificial intelligence. How hard is the task, how much data does it need? Can we learn to generalize over the edge cases without solving common-sense reasoning, without solving human-level artificial intelligence? On the perception side, how hard are detection, intention modeling, building mental models of humans, trajectory prediction? And then the action side:
the game-theoretic planning problem, balancing, as I mentioned, the fun and enjoyment of the experience with the safety of these systems, because they are critical to human life. And the vigilance side: how good can Autopilot get before vigilance decrements significantly, before people fall asleep, get distracted, start watching movies and so on, the things people naturally do? The open question is how good Autopilot can become before that becomes a serious problem, and whether that vigilance decrement negates the safety benefit of using Autopilot; an AI system, by contrast, is perfectly vigilant when its sensors are working well, always paying attention. The open questions for the lidar-based, Level 4, Waymo approach: once we have lidar, HD maps, and the geo-fenced routes that are driven, how hard is driving? The traditional robotics approach, from the DARPA challenges to, to this day, most autonomous vehicle companies, is to build HD maps and use lidar for really precise localization together with GPS; then the perception problem becomes the icing on the cake, because you already have a very good idea of where you are relative to the obstacles in the scene, and perception is no longer a safety-critical task but a task of understanding and interpreting the environment. It is naturally safer by design, but how hard is the remaining problem? If perception is the hard problem, then the
lidar-based approaches are the right ones; if action is the hard problem, then both Tesla and Waymo have to solve the action problem, and the sensor suite matters less, because the hard problem is game-theoretic planning, modeling the mental models and the intentions of other human beings, pedestrians and cyclists. And then, on the other side, the ten billion miles of simulation: the open problem for deep reinforcement learning, and deep learning in general, is how much we can learn in simulation and how much of that knowledge we can transfer to real-world systems. My hope for the autonomous vehicle space,
for the autonomous driving space, is to see more innovation in applied deep learning. As I mentioned, the really interesting areas, at least to me, are active learning, multi-task learning, and lifelong learning (online learning, iterative learning; there are a million terms for it, but basically continual learning), along with multi-task learning to solve multiple problems. Over-the-air updates are something I would love to see across the autonomous vehicle space; they are a prerequisite for online learning, because if you want a system that continually improves from data, you need to be able to deploy new versions of that system. Tesla is one of the only companies I know of in the Level 2 space that deploys software updates regularly and has built the infrastructure to do so, and deploying updated neural networks seems to me a prerequisite for solving the problem of autonomy in the Level 2 space,
or really in any space. For research purposes, public data sets: there are now a few public data sets of edge cases, and I would love to see more of that from automotive companies and autonomous vehicle companies. And simulators: CARLA, NVIDIA Drive Constellation, Voyage Deepdrive; there are a lot of simulators coming out that allow people to experiment with perception, with planning, with learning algorithms. I would love to see more of that, and less hype, of course, less hype. One of the most hyped spaces, besides AI in general, is autonomous vehicles, and I would love to see in-depth, nuanced, balanced reporting from journalists and companies on both the successes and the challenges of autonomous driving. The next section is politics, which I will mention briefly. Someone mentioned Andrew Yang: it is exciting, funny, and uncomfortable for me to see how artificial intelligence is talked about in politics, with one of the presidential candidates discussing artificial intelligence. There are interesting ideas there, but still a lack of understanding of the fundamentals of artificial intelligence. There are many important issues, and it is bringing artificial intelligence into public discourse, which is nice to see, but it is early days, and it tells me
that, as a community, we need to communicate better about the limits and capabilities of artificial intelligence and automation in general. This year the American AI Initiative was launched, our government's best attempt so far to provide ideas and regulation around what the future of artificial intelligence should look like in our country. Again, somewhat awkward, but it is important to have these early developments, early thinking from the federal government about the dangers, the hopes, and the funding and education needed to build a successful infrastructure for artificial intelligence.
The really interesting part is the technology companies appearing before the government. Some of the most powerful people in our world today are the leaders of technology companies, and the foundation those companies run on is artificial intelligence systems: recommendation systems for content and ad discovery. From Twitter to Facebook to YouTube, they use recommendation systems, and those are now all fundamentally based on deep learning algorithms. So these incredibly rich and powerful companies built on deep learning come before a government that is clumsily trying to figure out how to regulate them. I think it is the role of the AI community in general to inform the public and the government about how to talk and how to think about these ideas, and I also think it is the role of companies to publish more. Very little has been published about the details of the recommendation systems behind Twitter, Facebook, YouTube, Google; maybe it is understandable why, but nevertheless, to consider the ethical implications of these algorithms, there needs to be more publication. Here is a harmless example: DeepMind describing the recommendation system behind Play Store app discovery. There is a discussion of the kind of neural network used for candidate generation: after you install some apps, candidate generation ranks the next apps you would probably enjoy installing. They tried LSTMs and Transformers and then settled on a more efficient attention model that can run faster. And then there is an important debiasing step:
the model learns to bias itself in favor of apps that are shown, and therefore installed, more frequently, so there is an importance-weighting step that corrects for that bias toward popular apps, allowing for the possibility of you installing less popular apps. That kind of process, published and discussed in public, I think is really important, and I would love to see more of it. So my hopes for the policy space and the public discourse in 2020: less fear of AI and more discourse between government and experts on the real issues of privacy, cybersecurity, and so on, and more transparency in recommendation systems.
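One common way to implement the debiasing idea just described (down-weighting items merely because they were shown more often) is inverse-propensity weighting of the training loss. This is a hedged sketch: the propensity estimate here is just the empirical impression share, an assumption for illustration, and none of this reflects the unpublished details of the actual system.

```python
import math

def propensity(impressions, total_impressions, floor=1e-3):
    """Estimated probability that an item gets shown; floored to bound the weights."""
    return max(impressions / total_impressions, floor)

def debiased_loss(examples, impressions, total_impressions):
    """Inverse-propensity-weighted log loss over (item, predicted_prob, installed)
    triples, so frequently shown items contribute less per example."""
    loss = 0.0
    for item, p_pred, installed in examples:
        w = 1.0 / propensity(impressions[item], total_impressions)
        y = 1.0 if installed else 0.0
        loss += -w * (y * math.log(p_pred) + (1 - y) * math.log(1 - p_pred))
    return loss / len(examples)
```

An install of a rarely shown app now moves the model more than an install of an app that was shown to everyone, counteracting the popularity feedback loop.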
I think the most exciting and powerful AI systems of the next couple of decades are recommendation systems. They are talked about very little, but they are going to have the biggest impact on our society, because they affect how the information we see is selected, how we learn, what we think, how we communicate. These algorithms are, in a sense, controlling us, and we have to think deeply as engineers about their societal implications: not only the bias and ethics considerations, which are really important, but the elephant in the room, the way they quietly shape how we think, how we see the world, the value systems under which we operate. Quickly, to conclude with a few minutes for questions: courses. Over the last few years there have been many incredible courses on deep learning and deep reinforcement learning. What I would highly recommend to people, whether here or listening elsewhere, who want to learn more about deep learning, is the fast.ai course
by Jeremy Howard, which uses their library built on top of PyTorch. To me it is the best introduction to deep learning. Then there is Andrew Ng, whom everybody loves: his deeplearning.ai Coursera course on deep learning is great, especially for complete beginners. Stanford has two excellent courses, one on convolutional networks for visual recognition, originally taught by Andrej Karpathy, and one on natural language processing. And of course here at MIT there are many courses, especially on the fundamental mathematics, on linear algebra and statistics, and I have some lectures online as well. On the reinforcement learning side, David Silver, one of the best people in reinforcement learning, from DeepMind:
he has an excellent introductory course on reinforcement learning, and OpenAI's Spinning Up in Deep RL I also highly recommend. Beyond the slides I will be sharing online, there have been many tutorials, and one of my favorites is a list of tutorials; I think the best way to learn machine learning, deep learning, and natural language processing in general is to just code, to build it yourself, building the models, often from scratch. It is a list of over 200 tutorials on topics ranging from deep RL to optimization to backprop, LSTMs, convolutional networks.
The list is called '200+ of the best machine learning, NLP and Python tutorials', by Robbie Allen; you can Google it or click the link. I love it. I highly recommend three books. First, of course, the Deep Learning book by Ian Goodfellow, Yoshua Bengio and Aaron Courville, which covers the fundamentals and the philosophy as well as the specific techniques of deep learning. Second, on the practical side, Andrew Trask, who will be here on Wednesday, and his book Grokking Deep Learning; I think it is the best book for beginners on deep learning. I love it; it implements everything from scratch.
It is extremely accessible; it was published in 2019, I think, or maybe 2018, but I loved it. Third, François Chollet's Deep Learning with Python, the best book on Keras, on TensorFlow, and really on deep learning in general. Although maybe you shouldn't buy it yet, because the second edition is coming, which I think will cover TensorFlow 2.0; it is a great book, and when he is here on Monday you should torture him and tell him to finish writing it, since he was supposed to finish in 2018. Okay, my general hopes, as I mentioned, for 2020: I would love to see progress on common-sense reasoning, not necessarily entering the world of deep learning, but as part of artificial intelligence and the problems the community tries to address. Active learning, as I have been saying, is for me the most important aspect of applying deep learning in the real world.
There is not enough research on it; there should be much more. I would love to see active learning and lifelong learning: that is what we all do as human beings, and it is what AI systems should do, continually learning from their mistakes over time, starting out dumb and becoming brilliant. Open-domain conversation, the Alexa Prize and so on: I would love to see progress there. I think it is still two or three decades away, but that is what everyone says right before a big breakthrough, so I am excited to see whether some bright graduate student comes up with something. Then applications in autonomous vehicles and the medical space; algorithmic ethics, where of course there has been a lot of excellent work on fairness, privacy and so on; robotics; and, as I said, recommendation systems, the highest-impact part of AI systems.
As I mentioned in terms of progress, there has been a bit of tension, and a bit of love, online around deep learning. I just want to say that the kind of criticism and skepticism about the limitations of deep learning is really healthy, in moderation. Geoff Hinton, one of the three people who received the Turing Award, as many people know, said that the future depends on some graduate student who is deeply suspicious of everything I have said. So suspicion and skepticism are essential, but in moderation, just a little. Most important is perseverance, which is what Geoffrey Hinton and the others had through the winters of believing in neural networks, and an openness to returning to the old worlds of symbolic AI, of complexity, of expert systems and cellular automata, old ideas in AI, bringing them back and seeing whether there are ideas there. And of course you have to have a little bit of crazy: no one ever achieved something brilliant without being a little crazy. Most important of all is a lot of hard work; it is not the cool thing to say these days, but hard work is everything. I like what JFK said about us going to the moon.
I was born in the Soviet Union; notice how I conveniently said 'us' going to the moon. We do these things not because they are easy but because they are hard, and I think artificial intelligence is one of the hardest and most exciting problems before us. So with that, I would like to thank you, and let's see if there are any questions. A question from the audience: in the 1980s, the Parallel Distributed Processing books came out, and they had most of this material at that time; what is your opinion on what the obstacles were? The biggest obstacles, other than maybe funding, are, I think, fundamentally
the well-known limitations are that these systems are really inefficient at learning. They are really good at extracting representations from raw data, but they are not good at building knowledge bases or accumulating knowledge over time. Expert systems in symbolic AI, by contrast, are really good at representing accumulated knowledge but very bad at acquiring it automatically, and I don't know how to overcome that gap. Many people say hybrid approaches. I think more data, larger networks and better selection of data will take us much further. Hi Lex.
I wonder if you remember what the initial spark or inspiration was that prompted you to work in AI. Was it when you were quite young, or in more recent years? So, I wanted to become a psychiatrist. I thought of psychiatry as a kind of engineering of the human mind, using words to explore the depths of the mind and being able to adjust it. But then I realized that psychiatry can't actually do that, and that modern psychiatry is more about bioengineering, drugs. And then I thought the way to really explore engineering the mind, from the other side, is to build it somehow. That's also when C++ became really cool, so I learned to program at 12 and never looked back, hundreds of thousands of lines later.
I just love programming, I love building, and for me that's the best way to understand something: build it. Speaking of engineering the mind, do you personally believe that machines will ever be able to think, and second, will they ever be able to feel emotions? One hundred percent yes to both. They will be able to think and they will be able to feel emotions, because those concepts of thinking and feeling are human constructs, and for me, they will be able to pretend, and therefore they will be able to do it. I've been playing with Roombas a lot recently,
Roomba vacuum cleaners. I made the Roombas scream, make these moans of pain, and I started to feel like they were having emotions, so pretending creates the emotion. The display of emotion is emotion to me, and the display of thought is thought; everything else is impossible to pin down. So what about the ethical aspects? I ask because I was also born in the Soviet Union, and one of my favorite recent books is by Victor Pelevin, about an AI that feels emotions and suffers from them. I don't know if you've read it, but what do you think about AI that feels emotions, in that context or about the ethical aspects generally? Yes, it is a really hard question. I think AI will suffer, and it is unethical to torture
AI. But I believe that suffering exists in the eye of the beholder. It's like if a tree falls and no one is around to see it, it never suffered. It is we humans who see the suffering in the tree, in the animal and in our fellow humans, and in that sense, the first time a programmer with a straight face ships a product and says it is suffering, that is the first time it becomes unethical to torture AI systems. And we can do that today; I already built those Roombas.
They're not being sold currently, but I think the first time a Roomba says please don't hurt me is when we start having serious conversations about ethics. It sounds ridiculous, and I'm glad this is being recorded, because it won't be ridiculous in a few years. Is deep reinforcement learning a good candidate for achieving artificial general intelligence, and are there other good candidates? For me the answer is no, but it can teach us about valuable gaps that can be filled with other methods. Simulation is different from the real world. If the real world could be simulated, then deep RL, any kind of reinforcement learning with deep representations, could achieve something amazing. But for me simulation is very different from the real world. You have to interact in the real world, and there you have to be much more efficient with learning, and to be more efficient you have to be able to automatically construct common sense. Common-sense reasoning seems to include a large amount of information that accumulates over time, and it is more like programs than functions.
I like how Ilya Sutskever talks about it: deep learning learns function approximators, and deep RL learns an approximator for policies, but not programs. It's not learning something that is capable of reasoning, because that is essentially what reasoning is: a program, not a function. So I think no, but it will continue to inspire us and inform us about where the real gaps are. I'm very human-centric, but I think the approach of being able to take knowledge and assemble it, building increasingly complicated concepts out of pieces of information, to be able to reason in that way,
there are a lot of old-school methodologies that fall under symbolic AI, ideas for doing that kind of logical reasoning by accumulating knowledge bases, and that will be an essential part of general intelligence. But another essential part of general intelligence is the Roomba that says, very confidently, I'm intelligent, and doesn't care if you don't believe it. Right now Alexa is very nervous, like, oh, what can I do for you, but once Alexa gets upset that you rejected her, or treated her like a servant, or said she isn't intelligent, that is intelligence emerging. Right now we think of these systems as pretty dumb, but intelligence is really a relative human construct, something we've convinced each other of, and once AI systems are also playing that game of constructs and human communication, that's going to be important. Of course, that still requires being able to hold a pretty good, witty conversation, and for that I think you need the symbolic AI. I wonder about autonomous vehicles, whether they respond to environmental sounds.
I mean, like honking: if a self-driving vehicle is driving erratically, does it respond to my honking? That's a really interesting question. I think Waymo has hinted at using sound a little, and I think they should. There's a lot of really interesting information that comes from audio. Waymo has said that they use audio for sirens, to detect sirens from very far away. Audio carries a lot of interesting information: the sound that car tires make on different types of road is very interesting, and we use that information too. For example, a wet road, when it's not raining, sounds different than a dry road. There's a lot of subtle information, pedestrians yelling and that kind of thing. It's actually very hard to know how much we get from audio; most robotics people think audio is useless.
I'm a little skeptical of that, but no one has been able to identify exactly why audio might be useful. I have two questions. First, what do you think is the ultimate endpoint of super-machine intelligence? Will we be relegated to some dark corner of the Earth, as we have done to the next-most-intelligent primates? And my second question: should beings made of silicon have the same rights as beings made of carbon? For example, do robots get separate rights, or could they have the same rights as humans? On the future of superintelligence: I think I have much less worry there.
I see far fewer paths where AGI systems kill humans than paths where AGI systems live among us, so I see futures that are exciting or not so exciting, but not harmful. I think it's very hard to create AI systems that kill people and that aren't literally weapons of war. There will always be people killing people; the thing we should worry about is other people. There are a lot of existential threats to our society, nuclear weapons among them, that are fundamentally human at their core, and AI could be a tool for them, but it will also be a tool to defend against them.
I also see AI proliferating as companions. I think companionship will be really interesting. We will increasingly live, as we already do, in the digital world: if you have an identity on Twitter and Instagram, especially an anonymous one, you have this identity you've created, and it's going to keep growing, especially for people born now. It's a kind of artificial identity, and we live much more in the digital space, and that digital space, as opposed to the physical space, is where AI can currently thrive much more. AI will thrive there first, and you will live in a world with many smart assistants, but also just smart agents, and I think they should have rights. In this contentious time when groups of people are fighting for rights, I feel very bad saying that machines should have the same rights, but consider Peter Singer's work. Look, my favorite food is steak. I love meat, but I also feel terrible about the torture of animals, and it's the same for me. The way our society thinks about animals is very similar to the way we should
be thinking about robots. And I would ask, in 20 years, will they become our masters? No, they will not be our masters. What really worries me is who will become our masters: the owners of big tech companies who use these tools to control humans, first unintentionally and then intentionally. We need to make sure we democratize AI, the same kind of thing we did with government: checks on the people at the head of the tech companies. Maybe someday the heads of tech companies will include people like George Washington, who gave up power after founding this country; forget all the other horrible things he did, but he gave up power, unlike Stalin and all the other horrible human beings who instead sought absolute power. AI in the 21st century will be a tool of power in the hands of 25-year-old nerds.
We should be very careful about that future: it is humans who may become our masters, not AI. AI will save us. With that, thank you very much.
