
Elon Musk: Tesla Autopilot | Artificial Intelligence (AI) Podcast

Feb 27, 2020
- The following is a conversation with Elon Musk. He is the CEO of Tesla, SpaceX, Neuralink and co-founder of several other companies. This conversation is part of the Artificial Intelligence Podcast. This series features leading researchers from academia and industry, including CEOs and CTOs of automotive, robotics, artificial intelligence and technology companies. This conversation occurred after the publication of our group's paper at MIT on functional driver monitoring while using Tesla Autopilot. The Tesla team reached out to me and offered me a podcast conversation with Musk. I accepted, with full control of the questions I could ask and the choice of what is made public.
I ended up editing nothing substantial. I had never spoken to Elon before this conversation, either publicly or privately. Neither he nor his companies have any influence on my opinion, nor on the rigor and integrity of the scientific method that I practice in my position at MIT. Tesla has never financially supported my research; I have never owned a Tesla vehicle, and I have never owned Tesla stock. This podcast is not a scientific paper; it is a conversation. I respect Elon as I do every other leader and engineer I've talked to. We agree on some things and we disagree on others.

My goal, as always in these conversations, is to understand the way the guest sees the world. A particular point of disagreement in this conversation was the extent to which camera-based driver monitoring will improve outcomes and how long it will remain relevant for AI-assisted driving. As someone who works in and is fascinated by human-centered artificial intelligence, I believe that, if implemented and integrated effectively, camera-based driver monitoring is likely to be beneficial in both the short and long term. Instead, Elon and Tesla focus on improving Autopilot so that its statistical safety benefits override any concerns about human behavior and psychology.
Elon and I may not agree on everything, but I deeply respect the engineering and innovation behind the efforts he leads. My goal here is to catalyze a rigorous, nuanced and objective debate in industry and academia about AI-assisted driving, ultimately contributing to a better, safer world. And now, here is my conversation with Elon Musk. What was the vision, the dream, of Autopilot at the beginning? The overall level of the system when it was first conceived and began to be installed in 2014, the automotive hardware? What was the vision, the dream? - I wouldn't characterize it as a vision or a dream, it's just that there are obviously two big revolutions in the automobile industry.
One is the transition to electrification and the other is autonomy. And it became clear to me that, in the future, any car that didn't have autonomy would be as useful as a horse. Which isn't to say it's useless, it's just weird and somewhat idiosyncratic for someone to have a horse right now. It is obvious that cars will drive themselves, it is just a matter of time. And if we did not participate in the autonomy revolution, then our cars would be of no use to people, compared to autonomous cars. I mean, you could say that an autonomous car is worth five to ten times more than a car that is not autonomous. - Long-term. - It depends on what you mean by long term, but let's say at least for the next five years, maybe 10 years. - So there are a lot of very interesting design options with Autopilot right from the start.
It first displays, in the instrument cluster, or in the Model 3 on the center display, what the combined sensor suite sees. What was the thinking behind that choice? Was there a debate, what was the process? - The point of the visualization is to show the vehicle's perception of reality. The vehicle receives information from a bunch of sensors, primarily cameras, but also radar, ultrasonics, GPS and so on. That information is then represented in a vector space as a bunch of objects with properties, like lane lines, traffic lights, and other cars. And then, from vector space, it is re-rendered onto the display, so you can confirm whether the car knows what's going on by looking out the window. - Right, I think it's an extremely powerful thing for people to understand, to become one with the system and understand what the system is capable of.
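The vector-space world model Elon describes, objects with properties that get re-rendered for the driver to confirm, can be sketched in a few lines. This is a purely illustrative toy; the names `TrackedObject` and `to_display` are invented and have nothing to do with Tesla's actual software:

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    """One object in the car's vector-space world model."""
    kind: str   # e.g. "car", "lane_line", "traffic_light"
    x: float    # meters ahead of the vehicle
    y: float    # meters left (+) / right (-) of the vehicle

def to_display(objects):
    """Re-render the vector-space scene as text the driver could sanity-check."""
    return [f"{o.kind} at ({o.x:.0f} m, {o.y:.0f} m)" for o in objects]

# A minimal scene fused from the sensors: one lead car, one lane line.
scene = [TrackedObject("car", 30.0, 0.0), TrackedObject("lane_line", 0.0, 1.8)]
```

Calling `to_display(scene)` yields `["car at (30 m, 0 m)", "lane_line at (0 m, 2 m)"]`, the textual analogue of the on-screen confirmation described above.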
Now, have you considered showing more? If we look at the computer vision underlying the system, road segmentation, lane detection, vehicle detection, object detection, there is some uncertainty at the edges. Have you considered revealing the parts, the uncertainty in the system, the sort of... - Probabilities associated with, say, image recognition or something like that? - Yes. Right now it shows the vehicles in the vicinity as a very clean, crisp image, and people do confirm that there's a car in front of me and the system sees there's a car in front of me. But to help people build an intuition of what computer vision is, have you considered showing some of that uncertainty? - Well, in my car I always look at this with the debug view.
And there are two debug views. One is augmented vision, which I'm sure you've seen, where we basically draw boxes and labels around the objects that are recognized. And then there's what we call the visualizer, which is basically a vector-space representation summarizing the input from all the sensors. It doesn't show any pictures; it basically shows the car's view of the world in vector space. But I think this is very difficult for normal people to understand; they wouldn't know what they're looking at. - So it's almost an HMI challenge: what is currently displayed is optimized for the general public to understand what the system is capable of. - If you have no idea how computer vision works or anything, you can still look at the screen and see if the car knows what's going on.
And then if you're a development engineer, or if you have the development build like me, you'll be able to see all the debugging information. But this would be complete gibberish to most people. - What is your opinion on how to best distribute effort? So I would say there are three technical aspects of Autopilot that are really important. So, there are the underlying algorithms, such as the neural network architecture, there is the data it is trained on, and then there is the hardware development and perhaps others. So, look, algorithm, data, hardware. You have a money limit and a time limit.
What do you think is most important to allocate resources to? Or do you see it distributed fairly evenly between those three? - We automatically obtain large amounts of data because all our cars have eight external cameras and radar, and generally 12 ultrasonic sensors, GPS obviously and IMU. And we have about 400,000 cars on the roads that have that level of data. Actually, I think you're following it pretty closely. - Yes. - Yes, we are approaching half a million cars on the roads that have the full suite of sensors. I'm not sure how many other cars on the road have this set of sensors, but I would be surprised if it was more than 5,000, which means we have 99% of all the data. - So there is a huge influx of data. - Of course, a massive influx of data.
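As a quick sanity check on the "99% of all the data" claim, using only the two figures quoted in the conversation (400,000 Teslas with the full sensor suite, and Musk's guessed upper bound of 5,000 comparably equipped cars from everyone else):

```python
# Figures quoted above; 5,000 is Musk's guessed upper bound, not a measurement.
tesla_cars = 400_000
other_cars = 5_000

share = tesla_cars / (tesla_cars + other_cars)
print(f"Share of full-sensor-suite fleet data: {share:.1%}")  # 98.8%
```

That works out to roughly 98.8%, consistent with the rounded "99%" in the conversation.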
And then it took us about three years, but now we have finally developed our full self-driving computer, which can process an order of magnitude more than the NVIDIA system we currently have in the cars. To use it, you unplug the NVIDIA computer and plug in the Tesla computer, and that's it. In fact, we are still exploring the limits of its capabilities. We can run the cameras at full frame rate, full resolution, without even cropping the images, and there is still headroom, even on just one of the systems. The full self-driving computer is actually two computers, two systems on a chip, that are fully redundant.
So you could put a bolt through basically any part of that system and it would still work. - The redundancy, are they perfect copies of each other, or... - Yes. - Oh, so it's purely for redundancy rather than an arguing-machines-style architecture where both make decisions; this is purely for redundancy. - Think of it more like a twin-engine commercial aircraft. The system will operate best if both systems are running, but it's capable of operating safely on one. So as it is now, we can just run, we haven't even hit the performance limit, so there is no need to distribute functionality between the two SoCs.
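The twin-engine-style redundancy described above, two SoCs each running a full duplicate of the computation, can be sketched as two channels that both compute and the vehicle surviving the loss of either. A toy illustration only: `plan_path` and `redundant_step` are invented names, and real automotive redundancy involves lockstep comparison, watchdogs, and much more:

```python
def plan_path(frame):
    """Stand-in for the driving computation each SoC runs (a dummy 'plan')."""
    return sum(frame) / len(frame)

def redundant_step(frame, soc_a=plan_path, soc_b=plan_path):
    """Run the full computation on both channels; keep going if one fails."""
    results = []
    for soc in (soc_a, soc_b):
        try:
            results.append(soc(frame))
        except Exception:
            pass  # one channel may die (the "put a bolt through it" scenario)
    if not results:
        raise RuntimeError("both channels failed")
    # With both channels up they are exact duplicates, so either output serves.
    return results[0]

def broken_soc(frame):
    """Simulates a failed channel."""
    raise RuntimeError("channel down")
```

For example, `redundant_step([1, 2, 3])` and `redundant_step([1, 2, 3], soc_b=broken_soc)` both return the same plan; only losing both channels raises an error.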
In fact, we can run a full duplicate on each one. - So you really haven't explored or reached the limits of the system. - Not yet, not the limit, no. - So the magic of deep learning is that it improves with data. You said there is a huge influx of data, but driving... Yes. - The really valuable data to learn from is the edge cases. I've heard you talk somewhere about Autopilot disengagements being an important source of data. Are there other edge cases, or maybe you can talk about those edge cases, what aspects of them might be valuable, or if you have other ideas on how to uncover more and more edge cases in driving? - Well, there are many things that are learned.
Certainly, there are edge cases where, let's say, someone is on Autopilot and takes over, and then there's a trigger that goes to our system and says, okay, did you take over for convenience, or did you take over because Autopilot wasn't working properly? There's also, let's say we're trying to figure out what the optimal spline is for traversing an intersection. Then the ones where there are no interventions are the right ones. So you say, okay, when it looks like this, do the following. And then you get the optimal spline for navigating a complex intersection. - So there is the common case, where you try to capture a large number of samples from a particular intersection where things went well, and then there is the edge case where, as you said, the driver took over not for convenience, but because something didn't go exactly right. - So if someone were to initiate manual control from Autopilot. And really, the way to look at this is to view all input as error. If the user had to intervene, then every input is an error. - That's a powerful way to think about it, because it very well could be a mistake, but if you want to get off the highway, or if it's a navigation decision that Autopilot is not currently designed to make, and the driver takes over, how do you know the difference? - Yes, that's going to change with Navigate on Autopilot, which we just released, and without stalk confirmation.
Taking over to change lanes, exit a freeway, or take a freeway interchange, the vast majority of that will disappear with the release that just came out. - Yeah, so I don't think people really understand how big of a step that is. - Yes, they don't. If you drive the car, then you will. - Though you should still keep your hands on the wheel when performing an automatic lane change. There are these big leaps through the development of Autopilot, through its history. What stands out to you as the big leaps? - I'd say Navigate on Autopilot without having to confirm is a big leap. - It's a huge leap. - What are the...
It also automatically overtakes slow cars. So it's as much about navigation as it is about finding the fastest lane. So it will overtake slow cars, exit the highway, take highway interchanges, and then we will have traffic light recognition, which was initially introduced as a warning. I mean, in the development version I'm driving, the car comes to a complete stop at the traffic light. - So those are the steps, right? You just mentioned some things that are an indication of a step towards full autonomy. What would you say are the biggest technological obstacles to fully autonomous driving? - Actually, the full self-driving computer that we just did, what we at Tesla call the FSD computer, is now in production. So if you order any Model S or X, or any Model 3 with the full self-driving package, you will get the FSD computer.
It is important to have sufficient base computation. Then it's a matter of refining the neural network and the control software. All of that can be provided simply as an over-the-air update. What is really profound, and what I will emphasize on the investor day where we are focusing on autonomy, is that the car currently being produced, with the hardware currently being produced, is capable of full self-driving. - But capable is an interesting word because... - The hardware is. - Yes, the hardware. - And as we improve the software, the capabilities will increase dramatically, and then the reliability will increase dramatically, and it will then receive regulatory approval.
Basically, buying a car today is an investment in the future. I think the bottom line is that if you buy a Tesla today, I think you're buying an appreciating asset, not a depreciating asset. - That is a very important statement, because if the hardware has enough capacity, it is usually the most difficult thing to upgrade. - Yes, exactly. - Then the rest is a software problem... - Yes, software really has no marginal cost. - But, what is your intuition regarding software? How difficult are the remaining steps to get to a point where the experience, not just the safety, but the entire experience, is something that people enjoy? - I think people really enjoy it on the roads.
It's a total change in quality of life, using Tesla Autopilot on the highways. So it's really just a matter of extending that functionality to city streets, adding traffic light recognition, navigating complex intersections, and then being able to navigate complicated parking lots so the car can pull out of a parking space and come find you, even if it's in a complete maze of a parking lot. And then you can drop it off and it will find a parking spot on its own. - Yes, in terms of enjoyment, and something that people would really find very useful: parking. It's very annoying when you have to do it manually, so there are a lot of benefits to be had from automation there.
So let me start injecting a little bit of the human into this discussion. Let's talk about full autonomy. If you look at the current level-four vehicles being tested on the road, like Waymo and so on, they are only technically autonomous; they are really level-two systems with a different design philosophy, because in almost all cases there is a safety driver monitoring the system. - Right. - Do you think that Tesla's fully autonomous driving will continue to require human supervision for some time? That is, will its capabilities be powerful enough to drive but still require a human to supervise, just as a safety driver is present in other fully autonomous vehicles? - I think it will require detecting hands on the wheel for at least six months or so from here.
It's really a question of, from a regulatory standpoint, how much safer than a person Autopilot needs to be for it to be okay not to monitor the car. And this is a debate that one can have, but you need a lot of data to be able to show, with high confidence, statistically speaking, that the car is dramatically safer than a person, and that adding the monitoring person does not materially affect safety. So you might need to be 200-300% safer than a person. - And how do you prove that? - Incidents per mile. - Incidents per mile. - Yes. - So, accidents and deaths... - Yes, deaths would be a factor, but there are simply not enough deaths to be statistically significant, at scale.
But there are enough accidents; there are many more accidents than deaths. So you can assess what the probability of an accident is. Then there is another step, which is the probability of injury, the probability of permanent injury, and the probability of death. And all of them must be much better than a person, at least, perhaps, by 200%. - And do you think there is a possibility of having a healthy discourse with the regulatory bodies on this issue? - I mean, there is no question that regulators pay disproportionate attention to whatever generates press; this is just an objective fact.
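The incident-rate comparison Musk sketches can be made concrete as a crash-rate ratio with a confidence interval. This is a generic statistical sketch with invented numbers, not Tesla data; the function name `rate_ratio_ci` is made up for this illustration:

```python
import math

def rate_ratio_ci(crashes_a, miles_a, crashes_b, miles_b, z=1.96):
    """Ratio of crash rates (B relative to A) with a ~95% confidence interval.

    Uses the standard log-rate-ratio normal approximation:
    SE(log RR) = sqrt(1/crashes_a + 1/crashes_b).
    """
    rr = (crashes_b / miles_b) / (crashes_a / miles_a)
    se = math.sqrt(1 / crashes_a + 1 / crashes_b)
    lo = rr * math.exp(-z * se)
    hi = rr * math.exp(z * se)
    return rr, lo, hi

# Invented illustration: a human baseline of 1 crash per 500k miles versus
# a system at 1 crash per 1.5M miles, over large fleet samples.
rr, lo, hi = rate_ratio_ci(crashes_a=2000, miles_a=1e9,
                           crashes_b=400, miles_b=6e8)
```

Here the rate ratio is 1/3, i.e. the hypothetical system is roughly "200% safer" in the phrasing used above, and with these (invented) sample sizes the whole confidence interval stays well below 1, which is the kind of statistical evidence the conversation says regulators would need.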
And it also generates a lot of press. So in the United States there are, I think, almost 40,000 automobile deaths a year. But if there are four at Tesla, they'll probably get a thousand times more press than anyone else. - So the psychology of that is really fascinating, I don't think we have enough time to talk about that, but I have to talk to you about the human side of things. So our MIT team and I recently published a paper on functional monitoring of drivers while using Autopilot. This is the work we've been doing since Autopilot first launched publicly more than three years ago, collecting videos of drivers' faces and bodies.
I saw you tweeted a quote from the abstract, so I can at least assume you read it. - Yes, I read it. - Can I tell you about what we found? - Sure. - Okay, it looks like, in the data we've collected, drivers are maintaining functional vigilance. We looked at 18,900 Autopilot disengagements and annotated whether drivers were able to take control in a timely manner. So they were there, present, looking at the road, ready to take control. Okay? So this goes against what many would predict from the automation vigilance literature. Now the question is: do you think these results hold across the entire population?
So ours is just a small subset. One criticism is that there may be a small minorityity of drivers whose vigilance decrement when using Autopilot would be significant. - I think this is all really going to be a moot point; I mean, the system is improving so much, so fast, that this will be moot very soon. If something is many times safer than a person, then adding a person monitoring has a limited effect on safety. And in fact, it could be negative. - That's really interesting. So the fact that a percentage of the population may exhibit a vigilance decrement won't affect the overall statistics or safety numbers? - No. In fact, I think it will very soon be the case, maybe even toward the end of this year, and I would be surprised if it's not next year at the latest, that having a human intervene will decrease safety. - Decrease safety? - Like, imagine if you're in an elevator. There used to be elevator operators, and you couldn't just get in an elevator and work the lever to move between floors yourself. And now nobody wants an elevator operator, because the automated elevator that stops at the floors is much safer than the elevator operator. In fact, it would be quite dangerous to have someone with a lever who could move the elevator between floors. - So that's a really powerful statement and really interesting. But I also have to ask, from a user experience and from a safety perspective: one of the things that fascinates me algorithmically is camera-based detection of the human, detecting what the driver is looking at, the cognitive load, the body pose. From a computer vision point of view, it's a fascinating problem.
And there are many in the industry who believe you need camera-based driver monitoring. Do you think there could be benefit from tracking drivers? - If you have a system that's at or below human-level reliability, then driver monitoring makes sense. But if your system is dramatically better, more reliable than a human, then driver monitoring doesn't help much. And, like I said, if you're in an elevator, do you really want someone with a big lever, some random person, operating the elevator between floors? I wouldn't trust that. I would rather have the buttons. - Well, you are optimistic about the rate of improvement of the system, from what you have seen with the full self-driving computer. - The rate of improvement is exponential. - So one of the other very interesting design choices early on that connects to this is the operational design domain of Autopilot.
So where can you turn on Autopilot? By contrast, another vehicle system we were studying is the Cadillac Super Cruise system, which, in terms of ODD, is very restricted to particular kinds of roads; it's well mapped and tested, but it is much narrower than the ODD of Tesla vehicles. - It's like ADD (both laugh). - Yeah, that's good, that's a good line. What was the design decision behind that different philosophy? There are pros and cons. What we see with a wide ODD is that Tesla drivers are able to explore more of the limitations of the system, at least early on, and, along with the instrument cluster display, they begin to understand what the capabilities are. So that's a benefit.
The downside is that you allow drivers to use it basically anywhere... - Anywhere it can confidently detect lanes. - Was there a philosophy, were there design decisions being made there that were challenging? Or was it done with intention from the beginning? - Frankly, it's crazy to let people manually drive a two-ton death machine. That's crazy. In the future, people will say: I can't believe anyone was allowed to drive one of these two-ton death machines, and that they could just drive wherever they wanted. Just like elevators: you could move the elevator with that lever wherever you wanted, and stop it halfway between floors if you wanted.
It's pretty crazy, so in the future it will seem crazy that people drove cars themselves. - So I have a lot of questions about human psychology, about behavior and so on... - That's moot, it's totally moot. - Because you have faith in the AI system, not faith, but, both on the hardware side and in the deep learning approach of learning from data, that it will be far safer than humans. - Yes, exactly. - Recently, some hackers tricked Autopilot into acting in unexpected ways with adversarial examples. We all know that neural network systems are very sensitive to minor perturbations at the input, these adversarial examples.
Do you think it is possible to defend against something like this, for the industry? - Sure (both laugh), yes. - Can you explain the confidence behind that answer? - A neural network is basically a bunch of matrix math. You have to be very sophisticated, somebody who really understands neural networks, and basically reverse-engineer how the matrix is being built, and then create a little thing that makes the matrix math just slightly off. But it's very easy to block that by having what is basically a negative recognition: if the system sees something that looks like a matrix hack, it excludes it.
It is something very easy to do. - So learn on both valid and invalid data; basically, learn on the adversarial examples so you can exclude them. - Yes, basically you want to know what is a car and what is definitely not a car. And you train for: this is a car, and this is definitely not a car. Those are two different things. People really have no idea about neural networks. They probably think a neural network involves a fishing net or something (Lex laughs). - So, as you know, going one step beyond Tesla and Autopilot: current deep learning approaches still seem, in some ways, to be far from general intelligence systems.
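The "this is a car, this is definitely not a car" training described above can be illustrated with a toy nearest-centroid classifier that includes an explicit adversarial class. This is a sketch with invented 2-D features, nothing like a real vision model, and the perturbation here merely stands in for the "matrix hack" idea:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D features: "car" examples near (1, 1), clean "not car" near (0, 0),
# and adversarially perturbed cars shifted off the car cluster.
cars        = rng.normal(1.0, 0.1, size=(200, 2))
not_cars    = rng.normal(0.0, 0.1, size=(200, 2))
adversarial = cars + np.array([0.45, -0.45])   # stand-in for a "matrix hack"

# Train on all three labels, including the explicit "definitely not a car"
# adversarial class, so hacked inputs get excluded rather than misread.
centroids = {
    "car": cars.mean(axis=0),
    "not_car": not_cars.mean(axis=0),
    "adversarial": adversarial.mean(axis=0),
}

def classify(x):
    """Label x by its nearest learned centroid."""
    return min(centroids, key=lambda k: np.linalg.norm(x - centroids[k]))
```

A clean point near (1, 1) is classified as "car", while the same point after the perturbation lands nearest the learned adversarial centroid and is excluded, which is the negative-recognition behavior the conversation describes.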
Do you think current approaches will lead us to general intelligence or will it be necessary to invent entirely new ideas? - I think we are missing some key ideas for artificial general intelligence. But it will come to us very quickly and then we will have to decide what we will do, if we have that option. It's amazing how people can't differentiate between, say, the narrow AI that allows a car to figure out what a lane line is and navigate streets, versus general intelligence. As if these were very different things. Like your toaster and your computer are machines, but one is much more sophisticated than the other. - Are you sure that with Tesla you can create the best toaster in the world... - The best toaster in the world, yes.
The best autonomous driving system in the world... yes. To me, right now, this seems like game, set and match. I mean, I don't want us to be complacent or overconfident, but that's literally what it looks like right now. I could be wrong, but it seems like Tesla is way ahead of everyone else. - Do you think we will ever create an AI system that we can love, and that loves us back in a deep and meaningful way, like in the movie Her? - I think AI will be capable of convincing you to fall in love with it very well. - And is that different from us humans? - You know, we start to get into a metaphysical question: do emotions and thoughts exist in a realm other than the physical?
And maybe yes, maybe no, I don't know. But from a physics point of view, I tend to think of things, you know, physics was my main kind of training, and from a physics point of view, essentially, if it loves you in a way that you can't tell whether it's real or not, it is real. - That is a physicist's view of love. - Yes (laughs), if you cannot prove that it is not real, if there is no test you can apply that allows you to tell the difference, then there is no difference. - Right, and it's similar to seeing our world as a simulation: there may not be a test to tell the difference between the real world - Yes. - and the simulation, and therefore, from a physics perspective, they might as well be the same thing. - Yes, and there may be ways to check whether it is a simulation; there may be some, I'm not saying there aren't any.
But you could certainly imagine that a simulation could correct for that: once an entity in the simulation found a way to detect the simulation, it could pause the simulation, start a new simulation, or do one of many other things that then correct for that mistake. - So when, maybe, you or someone else creates an AGI system, and you get to ask it one question, what would that question be? - What's outside the simulation? - Elon, thank you so much for talking today, it was a pleasure. - Alright. Thank you.
