
Dojo: Secret Tesla Program for Full Self Driving

Feb 27, 2020
"We don't have enough time to talk about it today. I'm not ready to talk about the details of that project. It's still a major program at Tesla, a project we call Dojo: a super powerful training computer. Do you know what a training computer on a chip is?"

Hello, this is Warren Redlich. I wanted to give a talk today about full self-driving and the Dojo. There was a presentation with Elon Musk and Andrej Karpathy, the great Autonomy Day presentation, and along with Autonomy Day there was another presentation, and between the two of them they gave some really interesting hints about what the Dojo is, what full self-driving does, and what's coming with full self-driving version 4. So in this video I'm going to talk in depth about what the Dojo is, what's coming with full self-driving hardware version 4, and what exactly the Tesla software does to learn driving. Are you ready?
This is going to be fun. The first thing I think we need to talk about is how many miles Tesla is learning from. In part of the discussion, whether it's HyperChange or a couple of other people I've seen talking about this, they'll talk about how many miles Tesla has driven on Autopilot, and say that the software is learning from each mile driven with Autopilot. I think that's wrong. Tesla is learning from every mile driven by every Tesla that was built with full self-driving hardware, version 2 or version 3. If you bought a Tesla and did not pay for the full self-driving option, the hardware is there anyway, and every time you drive, the software watches how you drive. The idea people have is that Autopilot drives, and when you turn the wheel or hit the brake, that's the important thing the software sees.
People say: okay, we have to learn from the moment you turn off Autopilot, because the software was not working properly, and Tesla re-analyzes that footage. You turn off Autopilot, and Tesla uses that moment as a really important window into difficult cases. I think that underestimates the learning opportunities, and I think Tesla knows this. If you look very closely, they talk about it in a couple of videos. Whether it's Elon or Karpathy, essentially everyone is training the network all the time. Whether Autopilot is on or off, every mile you drive, the network is being trained. The software is running whether you have Autopilot engaged or not: if you are driving your Tesla, the Autopilot software is running and it predicts a route. It says, this is the route I would take if I were driving, and it compares that route to the route you actually choose. If there's a significant difference, if you brake when it wouldn't have braked, if you turn significantly when it wouldn't have turned, that's another thing it can learn from. It doesn't just learn from when you intervene in the operation of Autopilot, when you disengage; that's not the only thing it's looking at, and there's more. Keep in mind that with Hardware 3 there are two chips at work, and each chip is capable of running the car on its own. So now imagine there's an update they're working on. Tesla downloads the updated version of the self-driving software to your car: the computer runs the current version of Autopilot on one chip, runs the updated version on the other chip, and can now compare while you're driving.
Keep in mind that you are the one driving. Now Tesla can compare how you drive to how the existing software would drive to how the unreleased updated software would drive, and it's able to look for variations and make comparisons. They test the software in shadow mode, which means they see how the new software compares to the current software by running it in parallel on cars and looking for disagreements. This is a tremendous difference in understanding how Tesla works. This is not a prediction, this is not an assumption: if you look at what Elon and Andrej Karpathy say, this is what they are doing. They don't just look at Autopilot disengagements; they look for any opportunity to learn, and that feeds the Dojo.
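The shadow-mode comparison described here can be sketched roughly as follows. This is a hypothetical illustration, not Tesla's actual code: assume each "model" is a callable that returns a list of (x, y) waypoints, and a clip gets flagged whenever two trajectories diverge beyond a threshold.

```python
import math

def trajectory_divergence(path_a, path_b):
    """Mean pointwise distance between two paths (lists of (x, y) waypoints)."""
    return sum(math.dist(p, q) for p, q in zip(path_a, path_b)) / len(path_a)

def shadow_mode_step(human_path, current_model, candidate_model, frame, threshold=0.5):
    """Run the shipped model and a candidate update on the same frame,
    and flag this moment for upload if anyone disagrees significantly."""
    current_path = current_model(frame)
    candidate_path = candidate_model(frame)
    flags = []
    if trajectory_divergence(human_path, current_path) > threshold:
        flags.append("human_vs_current")      # driver diverged from shipped software
    if trajectory_divergence(current_path, candidate_path) > threshold:
        flags.append("current_vs_candidate")  # update diverged from shipped software
    return flags

# Toy usage: straight-line models versus a human who swerves.
def straight(frame):
    return [(i, 0.0) for i in range(10)]

def swerve(frame):
    return [(i, 0.3 * i) for i in range(10)]

human = [(i, 0.3 * i) for i in range(10)]
print(shadow_mode_step(human, straight, swerve, frame=None))
# ['human_vs_current', 'current_vs_candidate']
```

The threshold and the divergence metric are stand-ins; the point is only that both comparisons (human versus shipped software, shipped software versus candidate update) fall out of the same mechanism.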
Everything that the full self-driving hardware sees is an opportunity for the Dojo to learn, and that's the most important thing I think we need to understand. If you bought a Tesla, whether you paid for full self-driving or not, whether you paid for Enhanced Autopilot or not, whatever you paid for: if they were able to put the full self-driving hardware in the car, it's there, and it's watching how the car is driven. If it's running Autopilot, it's watching for disengagements; and while it's running Autopilot, the second chip may be running a possible update, comparing how the current software drives with how the potential update would drive. Whether Autopilot is on or off, there are so many learning opportunities, and all of that feeds into how Tesla uses the hardware and software at the Fremont headquarters, where the Autopilot team is working, where the software is learning from how you drive, from how anyone who drives a Tesla drives. All of that feeds in, and that brings us to another topic. If you look at Andrej Karpathy's PyTorch talk and what he said on Autonomy Day: the car watches you drive, it drives itself, and while driving it looks for things that are out of the ordinary. On Autonomy Day he talked at length about a bicycle on the back of a car. Originally the software saw a bicycle and saw a car and tracked them as two separate objects, and that wasn't the right way for the software to think about it. So they asked the fleet to look for similar images, and they used those images to train the software to recognize that when there's a bike attached to a car, it's just a car with a bike; it doesn't need to track two separate objects. The neural network originally would actually create two detections, a car and a bike.
"We take this image and we have a mechanism, a machine-learning mechanism, by which we can ask the fleet to give us examples that look like this. As an example, these six images could come from the fleet. They all contain bicycles on the backs of cars." Karpathy also talked about how it's not just about image recognition and using neural networks to improve image recognition; it's about observing the situations presented to the car and making decisions, and not making decisions on a single still image. Karpathy talked at one point about having a series of frames from all eight cameras adding up to 4,096 images: "Maybe we have eight cameras that we unroll for 16 time steps, with a batch size of, say, 32, so they're going to keep 4,096 images in memory, and all their activations, in a single forward pass," and how difficult it is to process all that. But that's what they're designing this hardware for. This hardware is designed to process full video from all eight cameras, plus the radar and sonar, and to use all that information to make decisions, instead of looking at a static moment in time: what do I see on all eight cameras right now?
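Those three numbers multiply out directly to the 4,096 figure. A quick check (the 1280x960 camera resolution below is my assumption for illustration, not a figure from the talk):

```python
# 8 cameras, unrolled for 16 time steps, at batch size 32.
cameras, time_steps, batch = 8, 16, 32
images_in_memory = cameras * time_steps * batch
print(images_in_memory)  # 4096

# At an assumed 1280x960 RGB resolution with one byte per channel,
# that many raw frames alone would occupy roughly:
bytes_per_frame = 1280 * 960 * 3
total_gb = images_in_memory * bytes_per_frame / 1e9
print(round(total_gb, 1))  # ~15.1 GB of raw pixels, before any activations
```

That back-of-the-envelope size is why holding all of it, plus activations, in a single forward pass is hard.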
It's looking at a series of frames together. Maybe it's 12 frames, maybe it's three seconds of frames; we don't really know how much they're able to process. And then when there's a disengagement, or when you're driving and Autopilot is just watching and the software notices a substantial difference between how you drive and how it would drive, Tesla can query the fleet. So you're driving down the road, and someone to your left or right cuts in front of you into your lane. Here's what Karpathy showed: a video of Autopilot detecting that a car is cutting into our lane.
"We ask the fleet to send us data every time they see a car go from the right lane to the center lane, or from the left lane to the center lane, and then what we do is rewind time, and we can automatically annotate that: hey, that car will cut in; in 1.3 seconds it will cut in front of you. And then we can use that to train the network." So your Tesla uploads data to headquarters. They get that data and say, okay, here's a moment when something happened that we need to study, and they can query the fleet, sending a request to all the cars out there:
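The "rewind and auto-label" trick in that quote can be sketched like this. The 1.3-second horizon mirrors the example above, but the code and names are hypothetical: once a cut-in has actually been observed, earlier frames of the same track can be labeled "will cut in" automatically, with no human annotator.

```python
def autolabel_cut_in(track, cut_in_time, horizon=1.3):
    """Given a track of (timestamp, lane) observations and the moment a
    cut-in was detected, retroactively label every frame within `horizon`
    seconds before the event as a positive training example."""
    labels = []
    for t, lane in track:
        will_cut_in = 0.0 <= cut_in_time - t <= horizon
        labels.append((t, lane, will_cut_in))
    return labels

# A car observed in the right lane, cutting into the center lane at t=3.0 s.
track = [(1.0, "right"), (2.0, "right"), (2.5, "right"), (3.0, "center")]
for t, lane, label in autolabel_cut_in(track, cut_in_time=3.0):
    print(t, lane, label)
```

The frame at t=1.0 stays negative (too far before the event), while the frames at t=2.0 and t=2.5 become positives even though the car was still fully in its own lane: that is the free supervision hindsight provides.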
Have you seen anything similar? Here's what we want to look at. So they can compare multiple similar situations and see how the software handles them and how human drivers handle them. And one of the things, if you look, there are posts on Reddit where people talk about how many gigabytes their car is uploading. So when Tesla queries the fleet, it looks for certain things, and if it finds something in your car, your car uploads additional data. If your car has a lot of disengagements, if it sees many unusual situations, your car uploads that data to the head office, and the head office then queries the fleet to look for situations similar to what you encountered. So there's a lot of data being uploaded, and there are over-the-air updates: your Tesla is getting new software from head office, but at the same time head office is learning from you; it's learning from every car out there. "You might not want to imitate all the drivers equally; you may want to just emulate the best drivers, and there are many technical ways in which we actually select that data." And not all the data is uploaded, by the way. Your car generates a lot of data, but as they've said, they don't need to upload footage of you driving down the middle of a lane on a road they already have. They're only uploading data from incidents where the human driver differs significantly from the software, or where the existing software diverges significantly from the other software they're testing on the second chip. I mean, the crazy thing is that the network predicts routes it can't even see, with incredibly high accuracy.
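The upload policy described here, skip routine driving but send clips that either match an active fleet query or show a big human/software disagreement, can be sketched as a simple on-car rule (all names hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Clip:
    clip_id: str
    human_software_divergence: float  # how far the driver's path was from the model's
    matches_active_trigger: bool      # e.g. a "car cutting into lane" fleet campaign

def should_upload(clip: Clip, divergence_threshold: float = 1.0) -> bool:
    """Upload a clip only when it is informative: it matches a trigger the
    fleet was asked to watch for, or the human drove very differently from
    the software. Routine lane-keeping on known roads is skipped."""
    if clip.matches_active_trigger:
        return True
    return clip.human_software_divergence > divergence_threshold

clips = [
    Clip("boring_highway", 0.1, False),  # routine driving: not uploaded
    Clip("cut_in_event", 0.2, True),     # matches a fleet query: uploaded
    Clip("hard_brake", 2.5, False),      # big disagreement: uploaded
]
print([c.clip_id for c in clips if should_upload(c)])
# ['cut_in_event', 'hard_brake']
```

This is consistent with the Reddit reports of wildly different upload volumes: a car that hits many unusual situations trips the rule far more often than one commuting the same route daily.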
The big question I wanted to talk about when I started thinking about making this video: what is the Dojo? Elon has talked about the Dojo. We have heard Andrej Karpathy talk about the Dojo. There is a whole mystery around this major program at Tesla "that we don't have enough time to talk about today," called Dojo, which is a super powerful training computer. "The hardware team is also working on a project we call Dojo, and Dojo is a neural network training computer and a chip, so we hope to do the same thing for training that we did for inference: basically improving efficiency at a lower cost. But I'm not ready to talk about more details on that project yet." So I have some theories about what the Dojo might be. My wild theory, which I'm pretty sure is wrong, is that they've developed a quantum computer to do neural network processing, because the idea of quantum computing is that you can test multiple possibilities at once. So the idea of being able to test multiple paths the software could take, and then compare them to what the human driver does, would be very interesting.
Now, I don't think they're using quantum computing. It's a crazy theory; maybe it's something they'll do in the future. One of the things I looked at was Tesla's hiring. I looked to see what they're hiring for, and I looked on LinkedIn at people who work for Tesla. I couldn't find any example of anyone working at Tesla, or any job Tesla is hiring for, that involves quantum computing. I still think it would be a really cool idea, and someday someone will come up with some kind of neural network that uses quantum computing, there will be some brilliant breakthrough, and it will make a big difference in everything.
But I don't think we're there yet, and I don't think that's what Tesla is doing now. Another idea for what Tesla might be doing: just as there are clusters built from a bunch of GPUs, graphics processing units, which they have been using to learn from how we drive, how you drive the car, how other drivers drive, they might be taking the existing Hardware 3 chips and clustering those. There's also the reference to the Dojo chip. I don't think the Dojo is just a chip. It was either Elon or Andrej Karpathy;
one of them said that the Dojo is a neural network training computer and a chip. But it's not just a chip. I think what's happening is that a cluster of neural-network chips forms a larger neural network that is learning. So the simplest theory would be that they're taking the existing Hardware 3 chips used in the current Tesla Model 3 and Model S. Recall the Autonomy Day numbers: "If we use the 600-gigaflop GPU, the same network gets 17 frames per second; the on-chip neural network accelerators can deliver 2,100 frames per second. We ran it on the old hardware in a loop as fast as possible and got 110 frames per second; with the new FSD computer we get 2,300 frames per second, so it's a factor of twenty-one. This is perhaps the most significant slide; it's night and day." So the theory is that they've built some sort of cluster that uses the neural network accelerators on each chip, forms a larger neural network, and is capable of processing at a much higher level. I think that's a pretty reasonable theory about where Tesla could go with this, and it fits. It's not easy in terms of software, but in terms of hardware they're already manufacturing hundreds of thousands of these chips; making a thousand more for a cluster is simple. And that raises another interesting question: how many chips are in the cluster?
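The "factor of twenty-one" quoted above is just the ratio of those frame rates. Checking the arithmetic on the Autonomy Day figures:

```python
# Frames-per-second figures quoted from the Autonomy Day presentation.
gpu_fps = 17            # 600-GFLOP GPU running the network
accelerator_fps = 2100  # on-chip neural network accelerator
old_hw_fps = 110        # previous FSD hardware, looped as fast as possible
new_hw_fps = 2300       # new FSD computer (Hardware 3)

print(round(new_hw_fps / old_hw_fps))    # 21: the "factor of twenty-one"
print(round(accelerator_fps / gpu_fps))  # 124: the accelerator versus the GPU
```

So the twenty-one-fold claim is the whole-computer comparison (2,300 versus 110), while the per-chip accelerator-versus-GPU comparison is even more lopsided.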
Because this is supposed to be a super powerful computer. Can they do it with 1,024 chips? Maybe they're doing it with a hundred thousand chips; who knows, they can make as many as they want. It would mean very high power consumption, but for the purpose they're trying to achieve, it might make sense to use a lot. I don't think we're talking about a cluster of a million chips, but there's a good chance that cluster has a thousand or ten thousand chips. So that's a theory, and I think it's the most sensible one so far. We've gone from the crazy theory about quantum computing to the sensible theory that they just took a bunch of Hardware 3 chips.
We also know from the videos that Tesla has been developing the next hardware: "We finished this design maybe two years ago and started designing the next generation. We're not talking about the next generation today, but we're about halfway through it. It will be at least, say, three times better than the current system." So another theory is that they already have full self-driving version 4 prototypes, and they're using the first FSD version 4 chips to build a Dojo cluster. Note that Elon has said that version 4 of the hardware will be approximately three times better than version 3 in all important respects.
I don't think power consumption is the thing they most needed to improve three times over, but the processing speed and the ability to process large-scale video make sense to me. I'm not saying this theory is more sensible than the Hardware 3 one, because there's a real question of whether version 4 of the hardware is ready. But yes, they are developing a Dojo, and the Dojo is still, as Elon calls it, a major program. If they're developing a Dojo for this purpose, it could be that they're using it, on the one hand, to process more information quickly and, on the other hand, for testing.
I think it's optimistic to say that version 4 is so far along that they could do this, but at the same time they've been working on it for almost three years, and in chip-development time that's a long time; it may be ready. So that's another theory. Regardless, the idea is that whatever the Dojo is, it's taking in information at the video level. Now, this is something else I wanted to talk about. If you watch Karpathy's talks, and there are times when Elon has mentioned this too, there are references to vector space, and Karpathy uses a video where you're looking at a top-down view of cars passing by.
There's an example with Smart Summon, driving through a parking lot: "We are connecting three cameras simultaneously to a neural network, and the network predictions are no longer in image space, they are in top-down space. Here we're making predictions on the image, and then of course we're projecting them and stitching them together across space and time to understand the layout of the scene around us. So here's an example of this occupancy grid; we show just the edges of the road and how they're projected to exist." At least we think this is the high level at which it operates: it operates on what it sees and processes that into action. The view is that Tesla takes the visual information it gets from the cameras and reduces it to a 2D or 3D space, and every object in that space becomes a vector with properties. So there's the Tesla itself, its size, what direction it's going and how fast. There's an object here: it's a stop sign, it doesn't move, it's red. There's a traffic light. There's a car that's moving, and this is the approximate size of that car, this is the direction it's moving, and this is its speed.
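The object-as-a-vector-of-properties idea can be made concrete with a small sketch. The field names here are my own invention for illustration, not Tesla's schema:

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    """One entry in a hypothetical vector-space scene description."""
    kind: str           # "ego", "car", "stop_sign", "traffic_light", ...
    position: tuple     # (x, y) in metres, in a top-down frame
    size: tuple         # (length, width) in metres
    heading_deg: float  # direction of travel
    speed_mps: float    # 0.0 for static objects

scene = [
    TrackedObject("ego",       (0.0, 0.0),   (4.7, 1.9), 0.0, 13.0),
    TrackedObject("stop_sign", (40.0, 3.5),  (0.3, 0.3), 0.0, 0.0),
    TrackedObject("car",       (12.0, -3.2), (4.5, 1.8), 2.0, 11.5),
]

# A planner operating in this space never touches pixels: it reasons over
# a handful of numbers per object, e.g. which objects are dynamic.
moving = [obj.kind for obj in scene if obj.speed_mps > 0]
print(moving)  # ['ego', 'car']
```

The efficiency argument in the transcript falls out of this: a scene of a few dozen such vectors is vastly smaller than the raw eight-camera video it was distilled from.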
I mean, these are all properties of the objects the Tesla sees, and they can be reduced to a kind of vector space, and the information can be processed at the vector level. That makes sense to me from an efficiency standpoint, since it would be easier to do the processing and decision-making at that lower-dimensional level. But the Dojo idea, if you take Elon at his word, is that the goal of the Dojo is to be able to take in large amounts of data and train at the video level, doing massive unsupervised training on large amounts of video. With the Dojo computer processing all this information at the video level, the idea is that the software would make decisions based on what it sees without taking the intermediate step of putting everything into vector space and operating in vector space.
I relate to this personally. I speak several languages: Spanish and Japanese in particular, and a little bit of French, though I'm not that good at French. I speak Spanish and Japanese pretty well; I lived in Japan, I traveled through Europe, I live in Florida, and I've lived in Texas and California, so I've had many opportunities to speak Spanish. When you're in a country that speaks another language you know, at first, when you hear something in that language, say I hear something in Japanese, you mentally translate it into English, then you think about how you would respond in English, and then you translate that back into Japanese. But at a certain point you reach a level where you're able to think in the other language, and everything becomes a lot easier. I think that's really what fluency is: being able to think and operate in that language without having to translate into your native language. That's an analogy that I think makes sense for what Elon said about the Dojo. Instead of the computer processing the information it takes in by reducing it to vector space, making decisions in vector space, and then sending commands to the car (turn left, speed up, brake, whatever), if the computer is able to process at the video level, skip the vector-space step, and translate what it sees directly into action, the car could be more responsive; it could have a shorter reaction time; it could make better decisions, just as I think I make better decisions when I'm in Japan actually thinking in Japanese instead of translating. So again, that's a theory.
I don't know how else you would interpret what Elon said about processing information at the video level; I think that's the only reading that makes sense. So I hope this was helpful. Thanks for watching. I hope you liked this video, that it was informative, and that it gave you something to think about, especially if you have your own ideas about how Tesla's software learns from people driving, from the computers watching people drive, and in particular if you have your own thoughts about what the Dojo is and what the plan is for it.
I'd love to see your comments. I'd love to hear other ideas about what the Dojo could be, so in the comments below the video, be sure to let me know; tell me, tell the world, what you think the Dojo is and what you think the hardware is for. There could be other ideas about how they're learning and how they decide which are the best drivers they should imitate. That was a really interesting clip earlier in the video, where Karpathy said that not only are they scoring all the drivers, they're looking for the best drivers and they have ways of measuring that. So how are they doing it?
Let me know what you think. There's a lot in this video and I'd love to hear your thoughts. And of course, if you're not already subscribed, please subscribe, check out some of my other videos, and let me know what you think of those too. Thanks again for watching.
