
Understanding Artificial Intelligence and Its Future | Neil Nie | TEDxDeerfield

Jun 06, 2021
Thank you, thank you, thank you. So, during World War II the first computers were invented; they broke German communications and helped ensure a successful landing in Normandy. The father behind this unprecedented machine, Alan Turing, wrote the article "Computing Machinery and Intelligence" in 1950, and the article begins with the words, "I propose to consider the question: can machines think?" Today, inspired by his thoughtful question, we will try to answer the following: how can we create an intelligent computer, and what will the future be like with intelligent machines? Well, in fact, AI has been growing exponentially in the last decade and has already been affecting our lives in ways you may not notice. For example, every time you perform a search on Google, some type of AI is used to show you the best results, and every time you ask Siri a question in natural language,
speech processing and recognition are being used. So, artificial intelligence will likely be one of the biggest scientific advances of the 21st century. It will give us the power to explore the universe and our humanity with a different approach; AI has the potential to forever change our humanity. The backbone of artificial intelligence is machine learning, and I think the term is self-explanatory: we want machines to learn based on their knowledge and make decisions. Machine learning can be understood in two main components: one is using algorithms to find meaning in random, messy data, and the second is using learning algorithms to find the relationships in that knowledge and improve the learning process. So the general goal of machine learning is quite simple: improve the performance of machines on certain tasks, and those can range from predicting the stock market to complicated ones like translating articles between languages. The screenshot you see now is actually a representation of Google Translate's neural network.
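As a concrete illustration of that "learn from data, then make a decision" idea, here is a minimal sketch, not from the talk: it fits a simple model to made-up numbers with scikit-learn and then predicts a new value. The library, the data, and the model choice are all assumptions made purely for illustration.

```python
# A toy "learning from data" example: fit a line to noisy observations,
# then use the learned pattern to predict a new point. All numbers are made up.
import numpy as np
from sklearn.linear_model import LinearRegression

days = np.arange(10).reshape(-1, 1)                      # training inputs
prices = 3.0 * days.ravel() + np.random.normal(0, 1, 10) # noisy "prices" to learn from

model = LinearRegression()
model.fit(days, prices)          # the learning algorithm finds the relationship

print(model.predict([[10]]))     # use that knowledge on an input it has not seen
```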
Speaking of translation, does anyone here speak a second or third language? That's amazing. Well, I was born in China and I speak Chinese and also English, plus a couple of programming languages, because when my family and I travel the world we often need something called Google Translate, and by looking at the artificial intelligence behind Google Translate we can get a greater understanding of how most AI works. Well, first of all, have you ever wondered how much data Google has? It turns out that Google carries around 10 to 15 exabytes of data. What does that mean? Let me put that into perspective for you: if a personal computer holds 500 gigabytes, then Google's 15 exabytes would be 30 million personal computers, and data happens to be one of the fuels that powers the magical technology of Google Translate. On the surface, Google Translate hasn't changed since 2007, when it was first released, but what you will notice is that the translator is increasingly faster and more accurate. It turns out that Google Translate's learning process is inspired by ours.
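The 30-million figure checks out as simple arithmetic (using decimal units, an assumption on my part):

```python
# Back-of-the-envelope check of the talk's data comparison.
pc_bytes     = 500 * 10**9        # a 500 GB personal computer
google_bytes = 15 * 10**18        # roughly 15 exabytes
print(google_bytes // pc_bytes)   # -> 30000000, i.e. 30 million personal computers
```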
We, as human beings, get better at doing things by practicing, just like our math and music teachers always tell us, and the same happens with Google Translate: it can get better at translating by reading more articles. So how do computers learn? We can actually create this flowchart, which will give us a summary and a good idea of how artificial intelligence actually works. It turns out that we have to take some training inputs and put them into a learning algorithm, which will give us some knowledge, and that knowledge will live in a computer that knows about that specific topic. Then you and I, the users, will give the computer some inputs and, hopefully, some output will come out. In our case, Google's 15 exabytes of data are the training inputs. This whole process is actually the learning algorithm; this is what drives computers to learn and be smart. So today we will focus on two parts: one is image processing and the second is neural networks. Let's start by talking about image processing.
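To show the shape of that flowchart in code, here is a toy sketch of my own, not the speaker's program: the "learning algorithm" below is just a lookup built from example pairs, standing in for a real algorithm, and every name in it is hypothetical.

```python
# Toy version of the flowchart: training inputs -> learning algorithm ->
# knowledge -> (user input) -> output. The "learning" here is only a lookup,
# used as a placeholder to show how the pieces connect.
def learning_algorithm(training_pairs):
    return dict(training_pairs)                 # the computer's "knowledge"

knowledge = learning_algorithm([("hello", "bonjour"), ("cat", "chat")])

def computer(user_input):
    return knowledge.get(user_input, "unknown") # answer using what was learned

print(computer("cat"))   # -> "chat"
```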
We can't talk about computer vision without talking about human vision. The visual signal from our retina is transmitted through our brain to our primary visual cortex in the back of the brain, which is right here, and the visual information is separated and processed in three different processing systems: one system mainly processes information about color, the second about shape, and the third about the location and organization of movement. So with all that in mind, today we will try to create an application that can identify a Coca-Cola logo. First of all, you have to understand that most of the images we see on a computer screen are made of what we refer to as pixels, tiny little things that represent color, which is why Steve Jobs called his company Pixar, since everything in that world is made of pixels, which is cool. So when the computer is trying to understand this image, it will first separate it into the different feature objects that we can easily see in the still image, and then each of these features will provide the computer with some information about that image. Today we will mainly focus on the area, the perimeter, and the skeleton, and some details about these features. Now the computer has those things in memory, so that when the user gives some input to the computer, it will be able to process that input, compare it with what is in memory, and then give some result: whether the image matches the template or not. Here is that technology in action. I have created an application on this iPad that will be able to identify the Coca-Cola logo, and this application actually works with OpenCV, a great computer vision framework. So today we're going to learn a Coca-Cola logo, so let's click on that.
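For readers who want to see what that kind of processing looks like, here is a rough sketch in Python with OpenCV 4, not the speaker's iPad app: it extracts a contour from a stored template and from a camera frame, reports the area and perimeter features mentioned above, and compares the two shapes. The file names and threshold values are assumptions for illustration.

```python
# Sketch of template-style logo matching with OpenCV: extract the largest
# contour in each image, report area/perimeter, and compare shapes.
import cv2

template = cv2.imread("coke_logo_template.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
frame    = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)        # hypothetical file

def largest_contour(gray):
    _, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)

t, f = largest_contour(template), largest_contour(frame)

# The features the talk focuses on: area and perimeter of the detected region.
print("area:", cv2.contourArea(f), "perimeter:", cv2.arcLength(f, True))

# A small shape-distance score suggests the frame matches the stored template.
score = cv2.matchShapes(t, f, cv2.CONTOURS_MATCH_I1, 0.0)
print("match" if score < 0.1 else "no match")
```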
We just learned this wonderful image, and as you can see, the image above has little green rectangles and squares around it; those are the regions the computer is processing. The image below shows one of the most important features in that image, and in a table, as you can see, there are the details that the computer remembers. So let's dismiss that and click start tracking. Oh look, that's quite sensitive; it successfully followed the paper right in front of me, which has a cool Coke logo on it. And also, this is live, so you know I'm not faking anything. Wonderful, thank you. So now let's recap. We can summarize everything we did with this simple flowchart: we had some input data, we used some algorithm to find some meaning in that data, and in the future we will use neural networks to improve this whole process and hopefully learn more and more images. The pixels, in our case, are the input data, and the meaning is things like the skeleton, the area, and the perimeter, the details the computer focused on. Hopefully, in the future we will be able to classify any image we want.
Remember at the beginning we talked about the two parts of learning algorithms; the second part is neural networks, so let's talk a little about that. Our brain is made of millions and millions of neurons, and those little things communicate with each other and process information, and that is how we become intelligent. It took thousands and thousands of years of evolution, and it is an incredible process. Scientists thought: what would happen if we actually converted that and put it into a computer? So first of all, let's understand the differences and similarities between an artificial neuron and a biological one. On the left is a biological neuron, and it has a cell body, an axon, axon terminals, dendrites, and things like that, and those parts take in information, process it, and give an output, similar to the diagram on the right. As you can see, we have a bunch of x's, and from our algebra class you might know that x is our input; f is the function of those inputs that helps us get a better understanding of things, and the synapses are represented as the lines on the right. This is an animated version of what scientists think our neurons look like.
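In code, the artificial neuron described here is only a few lines. The sketch below is a generic example, assuming a standard sigmoid as the function f and made-up numbers for the inputs and weights:

```python
# A minimal artificial neuron: inputs x, "synapse" weights on the connecting
# lines, and an activation function f applied to their weighted sum.
import math

def f(z):                       # a common choice of activation: the sigmoid
    return 1.0 / (1.0 + math.exp(-z))

def neuron(xs, weights, bias):
    z = sum(x * w for x, w in zip(xs, weights)) + bias
    return f(z)                 # the neuron's output

print(neuron([0.5, 0.2, 0.9], [0.4, -0.6, 0.3], bias=0.1))
```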
In the old days, in the 1970s, before most of us were born, when scientists wanted to do something like image recognition or speech recognition, what they had to do was sit around a table with papers and pens and start doing math; they had to create lookup tables, and this was a bother because it was labor-intensive and time-consuming. So scientists thought: what would happen if we gave computers their own power to learn? That would be magic, because lookup tables would never have to exist if we could make computers learn on their own. We would have computers with all the knowledge about a specific topic, and that is what this diagram represents: the computer's own knowledge about something. This is really empowering, because scientists no longer have to create lookup tables over days and years; all they have to do is write a simple program, train the computer, and then it will be able to do things like image recognition and speech recognition in a matter of seconds. So, with the help of the Google Cloud Platform, we are going to do another demonstration showing the power of combining image processing and neural networks. Once again, this is live, and we have a big audience here tonight. We're going to take a photo, say, of my phone, and see what the computer thinks. Oh, it's a mobile phone, it's a product, a device. That is wonderful. So what if we took a photo of the audience?
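For reference, a label-detection call like the one in this demo can be made with the Google Cloud Vision client library. This is a minimal sketch, assuming the google-cloud-vision 2.x Python package, valid credentials already configured, and a hypothetical photo file:

```python
# Ask Google Cloud Vision for labels describing a photo (e.g. "mobile phone").
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("photo.jpg", "rb") as fh:             # hypothetical photo of the phone
    image = vision.Image(content=fh.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    print(label.description, round(label.score, 2))   # label and confidence
```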
It's a performance, there is an audience, and we're waving at the camera. Great, thanks. So, all the things we just talked about may seem intangible, like art and music and language and all that, but this technology plays an important role in our daily lives. For example, Google's autonomous vehicle project uses image processing to identify the difference between a police vehicle and a normal passenger vehicle, and this is another image from Google's autonomous vehicle project: they combine image processing with laser and ultrasonic sensors to form three-dimensional models of the cars around them, so that the car can navigate safely and without delays. And this might surprise you.
In the 1990s, scientists implemented this technology on fishing boats: a well-trained computer can identify the difference between a tuna and a cod, so the next time you are served fish, you can appreciate the technical journey that fish took to reach your plate. So what's next? To try to answer the question of what the future looks like with AI, let's actually go back in history and talk about one of the biggest breakthroughs we have had with AI. Many of you will remember this historic event between Garry Kasparov and IBM's Deep Blue computer.
Deep Blue became the first computer program to defeat a reigning world chess champion under tournament rules in a classical game. It was a very significant victory, a milestone. However, later analysis actually downplayed its intellectual value: chess is a game that can be beaten simply by brute force, which means that if you have enough calculation and enough computing power, chess can be defeated. That means calculation does not equal intelligence, and this is a very important understanding. Google, however, took a different approach and created AlphaGo, a program to learn the game of Go, and, I don't mean to make any puns, Go is a game with far fewer rules, but it requires much more intuition; you can't just calculate your way through Go. That is why Google's AlphaGo defeating the South Korean Go champion, Lee Sedol, in a 2016 match was a breakthrough, another breakthrough, because the program used reinforcement learning as well as neural networks that resemble our own decision-making process. So AI will not only change our lives in small ways, as we have talked about; it will probably bring us tremendous change, a change like we saw 200 years ago with the Industrial Revolution, when humans first harnessed the power of coal and steam engines, and a change like we saw in the 1990s, when millions upon millions of computers came into homes all over the world.
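To give a flavor of the reinforcement learning mentioned here, below is a generic tabular Q-learning update, not AlphaGo's actual method (AlphaGo combines deep neural networks with tree search and is far more sophisticated); the states, actions, and rewards are placeholders:

```python
# Generic Q-learning: nudge the estimated value of (state, action) toward the
# reward received plus the best value available from the next state.
from collections import defaultdict

Q = defaultdict(float)           # estimated value of taking an action in a state
alpha, gamma = 0.1, 0.9          # learning rate and discount factor

def update(state, action, reward, next_state, next_actions):
    best_next = max((Q[(next_state, a)] for a in next_actions), default=0.0)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

update("s0", "play_move", reward=1.0, next_state="s1", next_actions=["a", "b"])
print(Q[("s0", "play_move")])
```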
AI will give us an unprecedented amount of power, as well as the opportunity to change. Imagine ten years from now, when we are autonomously building a space station on Mars. Your car takes you to work while you are talking on the phone with a friend who works on Wall Street, and he no longer has to worry about stock losses because AI will ensure a fair and safe trading environment. Also, in hospitals around the world, scientists are using AI to find mutations in human DNA databases and also cures for diseases. These are just some of the possibilities, and the sky is no longer the limit.
The power and freedom we gain from artificial intelligence strengthens us but also humbles us as human beings: we are able to create machines that can learn and think like us. In the long term, AI will not replace biological intelligence, but it will improve our lives and improve our future, and I think most AI researchers will agree with me on that. So, after all, you and I are all together on this journey, and we all have the opportunity to witness, and also to decide, how artificial intelligence will shape our future. Thank you.
