Tesla FSD Computer (Hardware 3) Detailed for Full Self Driving & Autopilot - Autonomy Day

Feb 27, 2020
Welcome to our first Autonomy Day for analysts. I really hope this is something we can do a little more regularly, to keep you informed about the development work we are doing on autonomous driving. About three years ago we wanted to find the best possible chip for full autonomy, and we discovered that there were no chips designed from scratch for neural networks, so we invited my colleague Pete Bannon, vice president of silicon engineering, to design such a chip for us.

They hired me in February 2016. I asked Elon if he was willing to spend all the money necessary to do a completely custom system design, and he said, "Well, we're going to win." So we got started, hired a group of people, and began thinking about what a custom-designed chip for full autonomy would look like.
We spent eighteen months doing the design, and in August 2017 we released the design for manufacturing. We got it back in December, it was powered on, and it actually worked very, very well on the first try. We made some changes and released a Rev B in April 2018. In July 2018 the chip was qualified, and we started full production of qualified parts in December 2018. In 2018 we had the autonomous driving stack running on the new hardware, and we were able to start retrofitting employee cars and testing the hardware and software in the real world. Last March we started shipping the new computer in Model S and X cars. All in all it took just over three years, and it is probably the fastest systems development program I have ever been involved in.

I've been a partner at some point, and you really talk a lot about the benefits of having a tremendous amount of vertical integration that allows you to do simultaneous engineering and accelerate implementation. I will introduce you to the full self-driving computer and tell you a little about how it works. We will dive into the chip and go over some of the details, I will describe how the custom neural network accelerator we designed works, and then I will show you some results. One of the goals was to keep the power under 100 watts so we could fit the new machine into existing cars.
We also wanted a lower part cost so we could afford redundancy for safety. At the time, with a thumb-in-the-wind estimate, I presented that at least 50 trillion operations per second of neural network performance would be needed to drive a car, so we wanted to get at least that, and really as much as possible.

In terms of doing the chip design, as Elon alluded to earlier, there really was no off-the-shelf neural network accelerator in 2016. Everyone was adding instructions to their CPU, GPU, or DSP to improve inference, but nobody was doing it natively. For the other components of the chip, we bought industry-standard IP for the CPU and GPU, which allowed us to minimize the design time and also the risk to the program.

This is what the board looks like. Over there on the right you can see all the connectors for the video that comes from the cameras in the car, you can see the two self-driving computers in the middle of the board, and to the left is the power supply and some control connections. I really love it when a solution is stripped down to its most basic elements: it has video, compute, and power, and it is straightforward and simple. Here is the original Hardware 2.5 enclosure that the previous computer was in and that we have been shipping for the last two years.
Here is the new FSD computer design. It is basically the same form factor, which of course is driven by the constraints of having a retrofit program for existing cars. I would like to point out that this is actually a fairly small computer. It fits behind the glove box, between the glove box and the firewall of the car; it does not take up half the trunk.

As I said before, there are two totally independent computers on the board. You can see them highlighted in blue and green. On either side of the large SoC you can see the DRAM chips that we use for storage, and on the bottom left you see the flash chips that hold the file system. So these are two independent computers that boot and run their own operating systems.

Yes, if I can add something: the general principle here is that any part of this can fail and the car will keep driving. You could have cameras fail, you could have power circuits fail, you could have one of Tesla's full self-driving computer chips fail, and the car keeps driving. The probability of this computer failing is substantially lower than the probability of someone passing out; that is the key metric, by at least an order of magnitude. To have an autonomous car or robotaxi, you really need redundancy throughout the vehicle at the hardware level. So as of October 2016, all cars made by Tesla have redundant power steering: there are two motors on the power-assisted steering, so any single motor failure still leaves the car able to steer.
All power and data lines also have redundancy, so you can cut any power line or any data line and the car will still drive. There is an auxiliary power system, so even if the main pack loses all power, the car is capable of steering and braking using the auxiliary power system. You can completely lose the main pack and the car is still safe.

In terms of driving the car, the basic sequence is to collect a lot of information about the world around it. Not only do we have cameras, we also have radar, GPS, maps, the IMU, and ultrasonic sensors around the car; we have the steering angle, and we know what the acceleration and deceleration of the car is supposed to be. All of that is integrated to form a plan. Once we have a plan, the two machines exchange their independent versions of the plan to make sure they are the same, and assuming we agree, we then act and drive the car. Once you have actuated with some new control, you want to validate: validate that what we transmitted was what we intended to transmit to the actuators in the car, and then use the sensor suite to make sure it actually happens. If you ask the car to accelerate, brake, or turn right or left, you can look at the accelerometers and make sure it is actually doing it. So there is a tremendous amount of redundancy and overlap in both our data acquisition and our data monitoring capabilities.
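As a rough illustration of the sense, plan, exchange, compare, actuate, validate loop described above, here is a minimal sketch in Python. The names, tolerances, and interfaces are hypothetical and the logic is heavily simplified; it is not Tesla's software, just a way to picture the cross-check between the two independent computers.

```python
# Hypothetical sketch of the dual-computer cross-check described above.
# All names, tolerances, and interfaces are made up for illustration.
from dataclasses import dataclass

@dataclass
class Plan:
    steering_angle: float   # commanded steering angle, degrees
    acceleration: float     # commanded acceleration, m/s^2

def plans_agree(a: Plan, b: Plan, angle_tol=0.5, accel_tol=0.1) -> bool:
    """Both independent computers must produce effectively the same plan."""
    return (abs(a.steering_angle - b.steering_angle) <= angle_tol
            and abs(a.acceleration - b.acceleration) <= accel_tol)

def drive_step(plan_from_a, plan_from_b, actuate, read_imu_accel):
    plan_a = plan_from_a()              # computer A senses and plans independently
    plan_b = plan_from_b()              # computer B does the same

    if not plans_agree(plan_a, plan_b): # exchange and compare the two plans
        return "plans disagree: re-plan / fall back"

    actuate(plan_a)                     # agreed, so command the actuators

    # Validate: use the sensor suite (here just an accelerometer) to confirm
    # the car actually did what was commanded.
    if abs(read_imu_accel() - plan_a.acceleration) > 0.5:
        return "actuation mismatch: flag a fault"
    return "ok"
```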
Now let's talk a little about the full self-driving chip itself. It comes in a 37.5-millimeter BGA package with 1,600 balls; most of them are used for power and ground, but there are plenty left over for signals. If you remove the lid it looks like this: you can see the package substrate and the die in the center. If you remove the die and flip it over, it looks like this: there are 13,000 C4 bumps scattered across the top of the die, and underneath the bumps are 12 metal layers. That obscures all the details of the design, so if you strip that away it looks like this. Now let's move on and talk about the neural network accelerator.
On the left there is a cartoon of a neural network, just to give you an idea of what is happening. The data arrives at the top and visits each of the boxes, flowing along the arrows between them. The boxes are typically convolutions or deconvolutions with ReLU activations; the green boxes are pooling layers.

We had the goal of staying under 100 watts. This is measured data from cars running the full Autopilot stack, and we are dissipating 72 watts, which is a little more power than the previous design, but with the dramatic improvement in performance it is still a pretty good answer. Of those 72 watts, about 15 watts are consumed running the neural networks.
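To make the network-cartoon description above a little more concrete, here is a tiny generic example of a stack of convolution, ReLU, and pooling layers, written in PyTorch. It is purely illustrative and bears no relation to Tesla's actual networks or tooling.

```python
# Generic illustration of the kind of graph described above: data flows from
# the top through convolution "boxes" with ReLU activations into pooling layers.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # convolution box
    nn.ReLU(),                                   # activation
    nn.MaxPool2d(2),                             # pooling box ("green" in the cartoon)
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
)

frame = torch.randn(1, 3, 128, 128)   # a dummy camera frame (batch, channels, H, W)
features = net(frame)                 # data visits each box in turn
print(features.shape)                 # torch.Size([1, 32, 32, 32])
```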
In terms of cost, the silicon cost of this solution is about 80 percent of what we paid before, so we are saving money by switching to it. In terms of performance, we took the narrow-camera neural network I have been talking about, which has 35 billion operations in it, ran it on the old hardware in a loop as fast as possible, and got 110 frames per second. We took the same network, compiled it for the new FSD computer's hardware, and using all four accelerators we can process 2,300 frames per second, a factor of 21.
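As a back-of-the-envelope check on those numbers (my own arithmetic from the figures quoted above, not part of the presentation):

```python
# Rough arithmetic from the figures quoted in the talk.
old_fps, new_fps = 110, 2300          # narrow-camera network, old vs. new hardware
ops_per_frame = 35e9                  # "35 billion operations" per run of the network

speedup = new_fps / old_fps                      # ~20.9x, i.e. the "factor of 21"
sustained_tops = ops_per_frame * new_fps / 1e12  # ~80 TOPS delivered on this network

print(f"speedup ~{speedup:.1f}x, sustained throughput ~{sustained_tops:.0f} TOPS")
```

So the 2,300 frames per second corresponds to roughly 80 TOPS of sustained work on this particular network, comfortably within the peak figure quoted next.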
I think this is perhaps the most significant slide; it is night and day. I have never worked on a project where the generational performance increase was more than a factor of three, so this was quite fun. If you compare it to, say, Nvidia's Xavier solution, a single Xavier chip delivers 21 TOPS, while our full self-driving computer with two chips has 144 TOPS.

So to conclude, I think we have created a design that offers exceptional performance: 144 TOPS of neural network processing. It has exceptional power efficiency; we managed to fit all of that performance into the thermal budget we had. It allows for a fully redundant computing solution, it has a modest cost, and the really important thing is that this FSD computer will enable a new level of safety and autonomy in Tesla vehicles without affecting their cost or range, something I think we are all looking forward to.
The strategy here really started, you know, a little over three years ago: design and build a computer that is completely optimized for full self-driving, then write software that is designed to run specifically on that computer and get the most out of it. It is tailored to the hardware; it is a master of one craft, autonomous driving. Nvidia is a great company, but it has many customers, so when they apply their resources they need to make a generalized solution. We care about one thing, which is autonomous driving, so our hardware was designed to do that incredibly well, the software is designed to run on that hardware incredibly well, and I think the combination of the software and the hardware is unbeatable.
We finished this design about a year and a half ago and started the design of the next generation. We are not talking about the next generation today, but we are about halfway through it. Doing all the things that are obvious for a next-generation chip, it will be at least, let's say, three times better than the current system. At first it seems unlikely: how could it be that Tesla, which has never designed a chip before, designed the best chip in the world? But that is objectively what happened. It is not the best by a small margin; it is better by a huge margin, and it is in cars right now. All Teslas being produced right now have this computer. We switched over from the Nvidia solution for the Model S and X about a month ago, and switched the Model 3 about 10 days ago. All the cars being produced have all the hardware needed for fully autonomous driving.
