
Where AI is today and where it's going. | Richard Socher | TEDxSanFrancisco

May 31, 2021
I am very excited to talk to you today about the present and future of artificial intelligence. Whenever there is a buzzword and a complex topic, it is usually good to start with a definition, but here that is actually a bit complicated, because the definition of artificial intelligence seems to be constantly moving: every time we solve a problem, we stop calling it artificial intelligence.

It started with chess. A lot of smart researchers looked at other smart people and thought: we are very good at math, logic, and complex games like chess. So they started working on those kinds of problems, thinking that once they solved them, many other things would fall into place. But they didn't, because those were simulated environments that weren't subject to the same kind of noise we have in the real world. So research has now largely shifted away from games, although they remain an important area that can still teach us some things.

There are also things we didn't used to consider signs of high intelligence. Just understanding spoken words seems relatively simple, since we can all do it, but it was actually a very difficult problem until around 2010, when deep learning pushed it much further. Now we don't call it AI anymore; it's just Siri, it's just speech recognition software. Yet it was a genuinely hard problem that we couldn't solve before, and there are still difficult open questions in the research.
Another area where deep learning has made great progress in recent years is computer vision, that is, image classification, and for once I will try to explain a little what these kinds of models do and how they work. One of the biggest ideas in AI in recent years is so-called end-to-end trainable models, where we take raw inputs, for example the pixels of an image, and try to predict the final result, for example whether there is a cat, a dog, a house, or a clock in that image.

When we feed the raw pixels into these models, they keep trying to learn more and more complex representations. When they start looking at the pixels, the first layer only identifies simple edges and blobs, which actually turns out to correlate well with the early visual cortex in the human brain. In the next layer they combine those blobs, colors, and edges into more complex textures; as they go further into the deeper layers, they identify parts of objects, and eventually they combine those parts to identify complete objects.

That used to be very difficult. Initially, people tried to identify features manually: if there is a cat, then maybe whiskers here would increase the probability that there is a cat, and so on. Now this whole process, all the visualizations you see on this slide, is learned automatically just by giving the model a lot of supervised data: here is an image and its pixels, here is the result we care about.

In recent years we have actually been able to combine computer vision with some language processing and do some pretty amazing things. Here is a visualization from a recent paper by my collaborators, in which we color-code where the algorithm pays attention as it tries to generate a description of an image. For "a girl sitting on a bench holding an umbrella," you see that when it generates the word "girl" it is looking at the girl, and when it generates "bench" it focuses its attention on the bench. The same happens with "a zebra standing next to a zebra in a dirt field."
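To make the layer-by-layer feature story above concrete, here is a minimal sketch (not from the talk) of what a single first-layer filter does: a hand-built vertical-edge kernel convolved over a tiny synthetic image. In a trained network, filters like this emerge automatically from the data rather than being written by hand; the image and kernel values here are illustrative.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution (really cross-correlation, as used in deep learning)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny "image": dark left half, bright right half -> one vertical edge.
image = np.zeros((5, 5))
image[:, 3:] = 1.0

# A Sobel-like vertical-edge kernel, the kind of filter the first layer
# of a trained CNN tends to rediscover on its own.
kernel = np.array([[-1.0, 0.0, 1.0],
                   [-2.0, 0.0, 2.0],
                   [-1.0, 0.0, 1.0]])

response = conv2d(image, kernel)
# The response is strongest at the columns straddling the edge,
# and zero in the flat region.
print(response)
```

Deeper layers then combine many such filter responses into textures, object parts, and finally whole objects, as described above.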

The captions in these examples are fairly objective descriptions; we won't get very creative ones from these models. But notice that the first zebra the model focuses its attention on is the one in the foreground, and the next zebra is the one in the background, and you can see that in the color coding too. So computer vision algorithms have become much more sophisticated, and these visualizations also tell us a little about where they are paying attention as they translate visual data into text.

It goes even further: we can also do so-called visual question answering. This is an interesting task where, as training data, you give the model an image, a question, and an answer, and at test time you can ask many kinds of questions about a new image and see whether it still works. The example at the top here, "What color are the bananas?", is a good question because without looking at the image you would probably be correct about 90% of the time just by saying "yellow."
But the visualization shows where the algorithm is paying attention while trying to answer this particular question: it focuses on the brighter areas of the image, notices that those bananas are actually green, and gives the correct answer. Another fun question: "What is the pattern on the cat's fur?" The model focuses most of its attention on the cat's tail and correctly identifies the pattern as striped. And in another fun example, to find out what the kid is holding, it has to focus its attention, based on the question, on the arm and the object under the arm, and it correctly classifies it as a surfboard.

So those are some of the great applications we can now build, as long as we have enough training data for a certain domain. If the model is never shown an image of baseball at training time, it won't be able to answer any questions about baseball.

There are even more powerful applications we can now build with computer vision. One that I'm really excited about is in the medical field, particularly oncology. A small startup, Athelas, is automating blood cell counting: from a small finger-prick sample, it counts blood cells with the same kind of architecture I showed you before, a convolutional neural network. That makes the process much cheaper. It used to cost a few hundred dollars to have people sit there and count how many red or white blood cells there are in each blood sample; now you can do it for much less, and you can improve cancer care, identify infections, help leukemia patients, and so on.
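The visual question answering behavior described above is commonly built around attention: the question vector scores each image region, and the normalized scores decide which regions feed the answer classifier. Here is a much-simplified sketch of that one step, with random made-up features rather than a real CNN or question encoder.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)

# Hypothetical features: 4 image regions and one question vector,
# each embedded in a shared 8-dimensional space.
regions = rng.normal(size=(4, 8))   # e.g. from a CNN over image patches
question = rng.normal(size=(8,))    # e.g. from an encoder over the words

# Attention: score each region against the question, then normalize.
scores = regions @ question
attn = softmax(scores)

# Attention-weighted summary of the image -- what the model "looks at"
# when answering, like the highlighted bananas in the example.
context = attn @ regions

print("attention over regions:", np.round(attn, 3))
```

The attention weights are exactly what gets rendered as the bright highlighted areas in the visualizations from the talk.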
I think AI will also have a big impact on radiology. The problem with radiology is that we need a lot of training data, because unlike a blood test or a pathology analysis, you are looking for a thousand different things that could be wrong on a head CT scan, and it will take us a while to automate the whole process. So for a long time, AI will work together with radiologists to improve that process. In fact, we already know that we can very quickly identify certain things that can kill you, for example a stroke or a so-called intracranial hemorrhage, a brain bleed. We can identify those very quickly and, without knowing all the other things that could be wrong on a head CT, move those patients to the front of the line in an emergency room, and that alone can already save lives.
Now, we've talked about computer vision and speech recognition as two successes of AI. There are still areas we struggle with, and one of them is motor control. This is from the DARPA Robotics Challenge: a bunch of examples of some very expensive robots trying to walk, open doors, turn levers, and so on, and as you can see, as a community we are still pretty far away. In fact, you could say we are not even at the level of a bee when it comes to motor control. A bee is quite complex: it has about a million neurons and has to handle many different situations, and we are not there yet, so this is a very active area of research.

One of the most interesting manifestations of human intelligence, I think, is language, and a lot of progress is being made in language right now, but there is still a long way to go. So here is an example.
I think we could do better now, but this is from 2011, when people noticed that every time the famous actress Anne Hathaway appeared at the Oscars or starred in a movie whose reviews came out, the stock of the company Berkshire Hathaway went up by a significant amount. So it was already clear in 2011 that people were trying to use natural language processing for algorithmic trading, and in this case they made a mistake in so-called entity disambiguation: they disambiguated "Hathaway" to the company instead of the actress, and then made some pretty substantial monetary decisions based on it. We have gotten better at that.

There is more to NLP than just text classification, but even there, here are a couple of example sentences that until two years ago almost all algorithms would have classified incorrectly. The first is "In its ragged, cheap and unassuming way, the movie works." Traditional algorithms would have said: well, it's "ragged" and "cheap," so it's probably a negative sentence, because they didn't have the ability to capture the whole context. What you see here is that we now make two passes over the sentence, and on the second pass the algorithm focuses much more on "works": the movie really works despite its flaws, so it correctly classifies the sentence as positive. The second is the opposite kind of example.
"The best way to have any chance of enjoying this movie is to lower your expectations" is again quite complicated, because algorithms in the past would just look at individual words, see "best," "chance," and "enjoying," and call it a positive sentence. But you would only write a sentence like that if you already think the movie is pretty bad. Those are examples we can now handle thanks to the advances in deep learning.

There are still some active areas of research we are working on, and one of them is text summarization.
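To see concretely why word counting fails on the two sentiment sentences above, here is a toy lexicon scorer of the kind the older algorithms amounted to. The lexicon and its scores are made up for illustration; real systems used larger lexicons but shared the same failure mode.

```python
# A toy word-level sentiment scorer: sum per-word scores, ignore context.
# (Lexicon values are invented for this illustration.)
LEXICON = {
    "ragged": -1, "cheap": -1, "works": +1,
    "best": +1, "chance": +1, "enjoying": +1, "lower": -1,
}

def naive_sentiment(sentence):
    words = sentence.lower().replace(",", "").replace(".", "").split()
    return sum(LEXICON.get(w, 0) for w in words)

positive_despite_words = "In its ragged, cheap and unassuming way, the movie works."
negative_despite_words = ("The best way to have any chance of enjoying "
                          "this movie is to lower your expectations.")

# Word counting gets both wrong: the first scores negative even though
# the sentence is positive, and the second scores positive even though
# the sentence is negative.
print(naive_sentiment(positive_despite_words))   # -1
print(naive_sentiment(negative_despite_words))   # 2
```

Context-aware models that re-read the sentence, as described above, are what finally fixed these cases.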
Summarization is actually a really complicated problem. Virtually every natural language processing model you've seen in the past could generate at most one coherent sentence; when we, as a community, tried to generate longer sequences fully automatically with these end-to-end deep learning models, one sentence after another, it generally didn't work very well. So this is a result from just a couple of months ago, where our group worked on summarization. At the bottom you see a longer document, and at the top you see the summary. The fascinating thing is that the summarization algorithm learned in some cases to copy and paste particular words, sometimes full sentences, but sometimes it also picks and chooses which words to take from each part of the longer document to generate the summary.
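The copy-and-paste behavior described above is often implemented as a mixture of two distributions: with probability p_gen the model generates a word from its vocabulary softmax, and otherwise it copies a word from the source using its attention weights. Here is a single decoding step sketched with toy numbers (the vocabulary, probabilities, and attention values are invented; this is the general pointer/copy idea, not the exact model from the talk).

```python
import numpy as np

# One toy decoding step of a copy-mechanism summarizer.
vocab = ["the", "cat", "sat", "<unk>"]
source_tokens = ["cat", "sat"]               # words present in the input document

p_gen = 0.3                                   # prob. of generating from vocabulary
vocab_dist = np.array([0.5, 0.1, 0.1, 0.3])   # decoder's vocabulary softmax
attention = np.array([0.8, 0.2])              # attention over the source tokens

# Copy distribution: scatter the attention mass onto the source words.
copy_dist = np.zeros(len(vocab))
for tok, a in zip(source_tokens, attention):
    copy_dist[vocab.index(tok)] += a

# Final word distribution mixes generation and copying.
final_dist = p_gen * vocab_dist + (1 - p_gen) * copy_dist
print(dict(zip(vocab, np.round(final_dist, 3))))
```

Because both components are probability distributions, the mixture still sums to one; here the attention on "cat" makes copying it the most likely choice even though the vocabulary softmax preferred "the".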
In many cases it actually generates coherent summaries of longer documents, but doing this really well remains an open research problem.

The area of NLP that I'm personally most excited about is question answering, because you can think of question answering as a task that encompasses virtually all other NLP tasks. You can ask what the translation of a sentence is in French; you can ask what the sentiment is; you can ask what the right summary is. Somehow it all becomes a question answering problem if you have a really powerful question answering model. That's why we worked on this question answering dataset that Stanford collected, called SQuAD, the Stanford Question Answering Dataset. It takes a lot of Wikipedia articles, asks crowd workers to write questions about them, and then has other crowd workers collect the answers. The models that now do well on this task are much more complex than the model I showed you at the beginning.
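On SQuAD, the answer is a span of the source passage, so models typically score every token as a possible start and end of the answer and pick the best valid span. Here is that final step in isolation, with made-up scores standing in for what a trained network would produce.

```python
import numpy as np

# Toy extractive QA step: pick the best (start, end) span from per-token
# scores, as SQuAD-style models do. Scores here are invented; a real
# model computes them from the passage and the question.
tokens = ["the", "eiffel", "tower", "is", "in", "paris"]
start_scores = np.array([0.1, 0.2, 0.1, 0.0, 0.1, 3.0])
end_scores   = np.array([0.1, 0.1, 0.2, 0.0, 0.1, 3.2])

best, best_span = -np.inf, (0, 0)
for s in range(len(tokens)):
    # Only consider valid spans (end >= start) up to 5 tokens long.
    for e in range(s, min(s + 5, len(tokens))):
        if start_scores[s] + end_scores[e] > best:
            best = start_scores[s] + end_scores[e]
            best_span = (s, e)

answer = " ".join(tokens[best_span[0]:best_span[1] + 1])
print(answer)   # paris
```

For the hypothetical question "Where is the Eiffel Tower?", the highest-scoring valid span is the single token "paris".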
Those models have the same kind of layered structure, learning more and more abstract representations of language on their own, but they also seem to require a lot of distributed computation, just as our brain requires many different parts and components.

What I'm really excited about now, and looking forward to over the next couple of years, is the number of people entering the field. There is a lot of excitement in AI. In just one class that I taught with Chris Manning earlier this year, we had over 660 students at Stanford, even though it is a graduate-level class, and there are hundreds of thousands of views on YouTube of quite technical material. That is very exciting. But as we see that AI really works, we also have to recognize that it is just a tool, and it will continue to be a tool for the foreseeable future.
We don't really have to worry about Skynet or Terminator scenarios, but it is important to understand that tools can be used in good and bad ways. In some ways AI is like the Internet, a hammer, or a car: you can use them as weapons, or you can use them to transport sick people. It's important for us to recognize that tools are only as good as the people and the political systems that end up using them. In fact, if we use it well, I think AI, and especially its powerful language capabilities, will allow us to improve our communication.
I'm pretty sure that in the next few years we can eventually have the Babel fish, where we speak in one language and what comes out on the other end is another. We can improve access to information; question answering is a great example of that. And we can generally make work much more efficient. In fact, I think we will be able to automate most basic human needs: for food, we can automate agriculture with computer vision and some simpler robotic control; we can build houses automatically; and so on. Ultimately, as human intelligence and productivity improve, I hope that will lead us to a future where we can focus on unique and creative tasks, the kinds of tasks that require empathy and where we take care of each other, and we can automate a lot of the boring drudgery out there.
What's important to recognize is that AI is only as good as the training data we give it. If your training data is sexist or racist, then it will pick up on those patterns and in some cases repeat or even amplify them. We are applying AI to more and more areas, simple things like loan applications but also more complex things like the financial system, the judicial system, medical applications, political advertising, and so on; AI will eventually be in all of these areas, and I think it is important that we think about regulations, or at least guidelines, to prevent the negative effects that could occur and that may already be present in our training data. And lastly, it is not just the data that can have biases, but also the community itself: the AI community at this point is quite aware that we have a diversity problem, and that is something we continue to work on. Thank you.
