
Machine Learning Zero to Hero (Google I/O'19)

So the first question that comes up, of course, is: every time you see or hear about machine learning, it seems to be like a magic wand. Your boss says "put machine learning in your application," or a startup puts machine learning somewhere in its pitch and suddenly it becomes a viable company. But what is machine learning? What is it really about, particularly for programmers? All of us here are coders. I do talks like this all the time, and sometimes I ask how many people are coders and three or four hands show up, so it's nice that we get to show a lot of code today. So I wanted to talk about what machine learning is from a coding perspective by picking a scenario.
Can you imagine if you were writing a game to play rock, paper, scissors? You would want to write something so that you could make your hand into a rock, a paper, or a pair of scissors, and the computer would recognize it and could play against you. Think about what it would be like to write the code for that: you would have to take frames from the camera and start looking at the content of those images, and how would you differentiate between a rock and a scissors, or between a scissors and a paper? That would end up being a lot of code, and really complicated code. And it's not just the difference in the shapes; think about the differences in skin tones, male and female hands, big hands and small hands — you know, people with knobby knuckles like me and people with smooth, pretty hands like Karmel. How would you end up writing all the code to do this? It would be really complicated and ultimately not very feasible to write, and this is where we start to bring in machine learning.
This is a very simple scenario, but you can think about it: there are a lot of scenarios where it's really difficult to write code to do something, and machine learning can help with that. I always like to think about machine learning this way. Think about traditional programming, something that has been our bread and butter for many years — all of us here are programmers, we know what it is. We express rules in a programming language like Java, Kotlin, Swift, or C++, those rules act on data, and out of that we get answers. In rock, paper, scissors, the data would be an image, and my rules would be all of the code that I
end up writing to look at the pixels in that image and try to determine whether something is a rock, a paper, or a scissors. Machine learning flips the axes on this. We say, hey, instead of doing it this way — where we have to think about all these rules and write and express all these rules in code — what if we could provide a lot of answers, label those answers, and then have a machine figure out the rules that map one to the other? For example, in something like rock, paper, scissors, we could say: these are the pixels of a rock, this is what a rock is. We could get hundreds or thousands of images of people making a rock, so we have diverse hands, different skin tones, that kind of thing, and we say, hey, this is what a rock looks like, this is what a paper looks like, and this is what scissors look like. If a computer can figure out the patterns between these — if it can be taught and it can learn what the patterns are — now we have machine learning; now we have a computer that has determined these things for us. So if we look at this diagram again and replace what we've been talking about with creating rules, and say, okay, this is machine learning: we'll feed in the answers,
we're going to feed in the data, and the machine will infer the rules. What will that look like at runtime? How can I run an application that looks like this? This part we call the training phase, and the thing we get once we've trained it is called a model. That model is basically a neural network — I'll talk a lot about neural networks in the next few minutes, but for now let's just call it a model. Then at runtime we'll pass in data and it will give us back something called predictions.
For example, if I trained it on a lot of rocks, a lot of papers, and a lot of scissors, and then I hold my fist up to a webcam, it's going to get the data from my fist and return what we like to call a prediction — something like, hey, there's an 80% chance it's a rock, a 10% chance it's a paper, and a 10% chance it's a scissors. So a lot of the terminology in machine learning is a little different from traditional programming: instead of coding we call it training, instead of compiling and executing we call it inference, and we get predictions back from inference. So when you hear us use terms like that, that's where they come from — it's pretty similar to things you've already been doing in traditional coding, just with slightly different terminology. So now I'm going to start a demo where I'm going to train a model for rock, paper, scissors.
The demo takes a few minutes to train, so I'm going to start it before I get back to the slides. As it starts to run, I just want to show something as it goes. If you imagine a computer: I'll give it a bunch of rock, paper, scissors data and ask it to see if it can figure out the rules for rock, paper, scissors. For any individual piece of data that I give it, there's a one in three chance of it getting it right the first time if it guesses purely at random — if I say, what is this?, there's a one in three chance that it gets it correct. So when I start training, one of the things I want you to see here is the accuracy, which the first time through was actually exactly 0.333.
Sometimes when I run this demo it's a little bit more, but the idea is that when it starts training it's basically random — it's just guessing, so it was about one in three. As we continue, we will see that it actually becomes more and more accurate: by the second epoch it's now 53% accurate, and as it goes it will get more and more accurate. But I'll go back to the slides and explain what it's doing before I come back to watch the end. Can we go back to the slides, please? Okay, so the code to write something like this looks like this. This is a very simple piece of code to create a neural network, and what I want you to focus on first is the things I've outlined in the red box: these are the inputs to the neural network and the output coming from the neural network. I love talking about neural networks at I/O because of the input and output in I/O, and you'll see that I talk a lot about inputs and outputs. The input to this is the size of the images.
The images that I'm going to feed to the rock, paper, scissors neural network are 150 by 150, and there are three channels for color depth, and that's why you see 150 times 150 times 3. The output of this is going to be three things, because we're classifying into three different classes: rock, paper, scissors. So when you look at a neural network, those are really the first things to look at: what are my inputs, what are my outputs, what do they look like? But then there's a bit in the middle where we've created this tf.keras layers Dense, and there's a 512 there, and many people wonder, what are those 512 things?
Well, let me try to explain it visually. What's happening is that I'm going to consider those 512 things in the center of this diagram as 512 functions, and all those functions have internal variables. Those internal variables will just be initialized to some random state, but what we want is that when we start passing the pixels from the images through them, they figure out, based on those inputs, how to give me the desired output at the bottom. So function 0 will take all those pixels, function 1 will take all those pixels, function 2 will take all those pixels, and if those pixels are in the shape of a rock, then we want the outputs of functions 0, 1, 2, all the way up to 511, to feed into the left box at the bottom and put a 1 in that box. Similarly for paper: when the pixels look like this, I want the outputs of those functions to go into this box. That's the learning process. All that's happening — all that learning, when we talk about machine learning — is setting those internal variables in those functions so that we get the desired output. Now, those internal variables, just to confuse things a little bit more, are what machine learning language tends to call parameters, and for me as a programmer that was difficult to understand at first, because to me parameters are something that I pass to a function. But in this case, when you hear a machine learning person talk about parameters, those are the values within those functions that will be set, and that change as the machine tries to learn how to map those inputs to those outputs. So if I go back to the code and show this in action: remember my input shape that I talked about earlier, 150 by 150 by 3 — those are the pixels that I showed with the gray boxes in the diagrams above. My functions are that dense layer in the middle, those 512, so that's 512 randomly (or semi-randomly) initialized functions that I'm going to try to train to match my inputs to my outputs. And then, of course, the three below are the three neurons that will be my outputs. I just said the word neuron for the first time — ultimately, when we talk about neurons and neural networks, it really has nothing to do with the human brain; it's a very rough simulation of how the human brain does things. These internal functions that try to figure out how to match the inputs to the outputs, we call them neurons, and the three at the bottom that form my output are neurons too, and that's what gives this the name neural networks.
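To make that concrete, a minimal sketch of the kind of model being described looks like this — the 150x150x3 input, the 512 dense units, and the 3 outputs come from the talk, while the Flatten layer and the activation choices are assumptions:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    # Input: 150x150 images with 3 color channels, flattened into one vector.
    tf.keras.layers.Flatten(input_shape=(150, 150, 3)),
    # The 512 "functions" (neurons) whose internal variables get learned.
    tf.keras.layers.Dense(512, activation='relu'),
    # Three outputs, one per class: rock, paper, scissors.
    tf.keras.layers.Dense(3, activation='softmax'),
])
```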
It tends to sound a little mysterious and special when we call it that, but ultimately, think of them as functions with randomly initialized internal variables, where we'll eventually try to change the values of those variables so that the inputs map to our desired outputs. Then this line is the model.compile line, and what is it going to do? It's a fancy term — it's not actually compiling in the sense of converting code to bytecode as before — but think about the two parameters it takes, because these are the most important part to learn in machine learning: the loss and the optimizer. The idea is — remember before, I said it's going to randomly initialize all of those functions, and if they're randomly initialized and I pass in something that looks like a rock, there's a one in three chance it gets it right as a rock, a paper, or a scissors. So what does the loss function do?
What it does is measure the results across all the thousands of times it guesses: it figures out how well or how badly it did, and then, based on that, it passes that data to the other function, which is called the optimizer. The optimizer then generates the next guess, where the guess is the set of parameters for those 512 little functions, those 512 neurons. If we keep repeating this — we pass in our data, we take a look, we guess, we see how well or how badly we did, and based on that we optimize and make another guess, and repeat, repeat, repeat — our guesses get better and better, and that's what happens in model.fit.
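A hedged sketch of that compile-and-fit step (the particular optimizer and loss here are assumptions, and train_images / train_labels stand in for the rock, paper, scissors training data):

```python
# The loss measures how well or badly the current guesses did; the optimizer
# uses that measurement to generate the next set of parameter values.
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Run the guess / measure / optimize loop 100 times over the data.
model.fit(train_images, train_labels, epochs=100)
```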
Here you can see our model.fit, where epochs — epochs, epochs, however you pronounce it — is one hundred. All you're doing is running that loop a hundred times: for each image, take a look, make a guess with the current parameters, measure how well or how badly you did, adjust those parameters, and repeat, and keep going, and the optimizer will get better and better. So you can imagine it gets it right about one in three times at first, and over the next few epochs it gets closer and closer, better and better. Okay — those of us who know a little about images and image processing might say, okay, that's good, but it's a little naive.
I'm just throwing all the pixels in the image into a neural network, and maybe many of those pixels aren't even needed, so I'm setting up a neural network and trying to figure things out from raw pixel values — can I make it a little bit smarter than that? The answer is yes, and one of the ways we can make it a little smarter is to use something called convolutions. Now, convolution is a convoluted term, excuse the pun, but the idea behind convolutions is that if you've ever done any kind of image processing — the way you can sharpen or smooth images with things like Photoshop — it's exactly the same. With a convolution, the idea is to take a look at each pixel of the image — for example, this image of a hand, where I'm just looking at one of the pixels on the fingernail, and that pixel has the value 192 in the box on the left here. You take a look at each pixel in the image and at its immediate neighbors, and then you have something called a filter, which is the grid on the right, and you apply it: you multiply the value of the pixel by the corresponding value in the filter, do the same for all the neighboring pixels, and sum them up to get a new value for the pixel. That's what a convolution is. Now, if you've never done this before, you might be sitting there thinking, why on earth would I do that? Well, the reason is that when you combine convolutions and filters, it becomes really good at extracting features from an image, so let me give you an example.
If you look at the image on the left here and I apply a filter like this, I'll get the image on the right. What has happened is that much of the noise has been removed from the image and it has been able to detect vertical lines: by simply applying a filter like this, only the vertical lines survive the multiplication by the filter. Similarly, if I apply a filter like this one, the horizontal lines survive. There are many filters available; they can be initialized randomly and they can be learned, so that they do things like pick out elements in an image — eyes or ears or fingers or nails and things like that. So that's the idea behind convolutions.
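As a rough illustration of that multiply-and-add operation, here is a plain NumPy sketch (not the talk's code) that applies a 3x3 filter to a grayscale image; the example filter is a common vertical-edge detector:

```python
import numpy as np

def convolve(image, filt):
    """Apply a 3x3 filter to a grayscale image (no padding at the edges)."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Multiply the pixel's 3x3 neighborhood by the filter and sum.
            out[y - 1, x - 1] = np.sum(image[y - 1:y + 2, x - 1:x + 2] * filt)
    return out

# A filter that tends to let vertical lines survive.
vertical_filter = np.array([[-1, 0, 1],
                            [-2, 0, 2],
                            [-1, 0, 1]])
```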
Now the next thing is: okay, if I'm going to do a lot of processing on my image like this, and I'm going to train, and I'm going to need hundreds of filters to pick out different features in my image, there's going to be a lot of data to deal with. Wouldn't it be nice if I could compress my images? That compression is achieved through something called pooling, and it's something very, very simple — it sometimes seems like a complex term for something simple. When we talk about pooling, I'm going to apply, for example, a 2x2 max pooling to an image, and what that does is take the pixels in two-by-two blocks. If you look to my left here, if I have 16 pixels, I'll take the first four in the top left corner and, of those four, choose the largest value; then the next four in the top right corner, and of those four choose the largest value, and so on. So what that will do is effectively throw away 75% of my pixels and just keep the maximum value in each of those small 2x2 blocks.
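Here is a tiny NumPy sketch (again, not the talk's code) of that 2x2 max pooling, keeping only the largest value from each 2x2 block:

```python
import numpy as np

def max_pool_2x2(image):
    """Keep the maximum of each 2x2 block, halving each dimension."""
    h, w = image.shape
    blocks = image[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    return blocks.max(axis=(1, 3))

pixels = np.arange(16).reshape(4, 4)
print(max_pool_2x2(pixels))  # a 2x2 array holding the four block maxima
```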
The impact of that is really interesting when we start combining it with convolutions. If you look at the image I created earlier, where I applied the filter to that image of a person walking up the stairs, and then pool it, I get the image on the right, which is one quarter the size of the original — but not only does it not lose any vital information, it even enhances some of the vital information that came from the filter. So pooling is your best ally when you start using convolutions, because if you have 128 filters, for example, that you apply to your image, you'll have one hundred and twenty-eight copies of your image — one hundred and twenty-eight times the data — and when you're dealing with thousands of images, that will slow down your training time very quickly. Pooling really speeds it up by reducing the size of your images. So now, when we want to start learning with a neural network, it's a case of: I've got my image at the top and I can start applying convolutions to it. For example, my image could be a smiley face, and one convolution might keep it as a smiley face, another might keep the circular outline of the head, another might change the shape of the head, things like that. As I apply more and more convolutions and pooling and get smaller images, instead of one big thick image that I'm trying to classify, I have many small images that highlight features to learn from. For example, in rock, paper, scissors, my convolutions might in some cases show five fingers, or four fingers and a thumb, and I know it's going to be a paper; or they may show none, and I know it's going to be a rock, and that starts to make the machine learning process a lot simpler. To show this quickly — by the way, I've been putting QR codes on these slides, and I've open-sourced all the code that I'm showing here as we talk; this is a QR code for a workbook where you can train a rock, paper, scissors model yourself. Once we do convolutions — and earlier on the slide you saw I had multiple convolutions stacked downwards — this is what the code would look like.
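A hedged sketch of what that stacked convolution-and-pooling model might look like, based on the description here and the shape walkthrough a little later (the filter counts and activations are assumptions):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    # Each convolution extracts features; each 2x2 max pool quarters the data.
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu',
                           input_shape=(150, 150, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(128, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(128, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    # Three outputs, one per class.
    tf.keras.layers.Dense(3, activation='softmax'),
])
```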
I just have one convolution layer followed by pooling, another convolution followed by pooling, and another convolution followed by pooling, etcetera, etcetera. Remember, at the top I still have my input shape, and I have my output at the bottom where the Dense is equal to three. So I'm going to go back to the demo now to see if it finished training, and we can see that we started at 33% accuracy, but as we went through the epochs — I ran this one for, I think, 15 epochs — it became steadily more and more accurate, so after 15 loops of doing this it now has an accuracy of 96.83 percent. So using these techniques — using convolutions like this — we've been able to train something in just a few minutes to be about 97% accurate at detecting rock, paper, scissors. And if I bring up a quick graph here, this is a graph of that accuracy: the red line shows the accuracy, where we started at about 33 percent and got closer and closer to 100%.
The blue line is a separate rock-paper-scissors data set that I test against, just to see how well it's working, and it's pretty close — it needs a little work to tune it. I can actually try an example to show you, so I'm going to upload a file. I'm going to choose a file on my computer; I gave that file a nice name so you can guess that it's a paper. If I open it, it will load it and then give me an output, and the output is 1 0 0. So you might think it got it wrong — that it detected a rock — but my neurons here follow the labels in alphabetical order, so the order is paper, then rock, then scissors. It actually classified it correctly by giving me a 1 in the first position, so it's actually a paper.
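A small sketch of how that prediction maps back to a label (the variable names here are hypothetical; the point is that the class indices follow alphabetical label order):

```python
import numpy as np

# Alphabetical label order: index 0 is paper, not rock.
class_names = ['paper', 'rock', 'scissors']

# img_batch is a batch containing one 150x150x3 image, preprocessed the
# same way as the training data.
prediction = model.predict(img_batch)          # e.g. [[1., 0., 0.]]
print(class_names[np.argmax(prediction[0])])   # -> 'paper'
```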
We can try another one at random. I'll choose a file on my machine — I'm going to choose some scissors — open it, run it, and again, in paper-rock-scissors order, we can see that it's classified correctly. So this workbook is online if you want to download it: try it, play with it, classify things yourself, and see how easy it is to train a neural network to do this. Then once you have that model, you can deploy it in your applications, and maybe play rock, paper, scissors in your applications. Can we go back to the slides, please?
So just to quickly show how convolutions really help you with an image, here's what that model looks like when I defined it. At the top here it might look like a bit of a bug at first if you're not used to this: remember we said my images come in at 150 by 150, but it actually says, hey, the first layer's output is 148 by 148. Does anyone know why? It's not a mistake. The reason is that if my filter is 3 by 3, then for a pixel to be filtered it needs neighbors above, below, and on both sides, so I have to start one pixel in and one pixel down, and I end up throwing away the pixels along the top, bottom, and both sides of my image. I'm losing a pixel on every edge, so my 150 by 150 becomes 148 by 148. Then when I pool that, each axis is halved to 74 by 74, and in the next iteration it becomes 36 by 36, then 17 by 17, and then 7 by 7. So if you think about all these 150-square images passing through all these convolutions, they're generating lots of little 7 by 7 images, and each of those little 7 by 7 things should highlight a feature — it could be a fingernail, it could be a thumb, it could be the shape of a hand. Those features that come out of the convolutions are passed into the neural network that we saw earlier to learn those parameters, and from those parameters, hopefully, you can make a really accurate guess about something being rock, paper, or scissors. And if you prefer an IDE instead of using Colab, you can do that too.
I really like using PyCharm for my development. Any PyCharm folks here, out of interest? Yeah, quite a few of you. So there's a screenshot of PyCharm from when I was writing this rock-paper-scissors code before pasting it into Colab, where you can run it. PyCharm is really nice, and you can do things like step-by-step debugging. If we can switch to the demo machine for a moment, I'll do a quick demo of PyCharm doing step-by-step debugging. Here we can see we're in rock-paper-scissors, and, for example, if I hit debug, I can even set breakpoints — so now I have a breakpoint in my code, and I can start by taking a look at what is happening in my neural network code.
This is where I'm preloading the data, and I can step through and do a lot of debugging to really make sure that my neural network is working the way I want it to work. One of the things I hear a lot from developers when they get started with machine learning is that their models feel like a black box: you have all this Python code to train a model and then you have to do some guesswork. But with TensorFlow being open source, I can actually step into the TensorFlow code in Python, like I'm doing here, to see how the training works, and that helps me debug
my models. Karmel will also show later how something called TensorBoard can be used to debug models. Can we go back to the slides, please? So with that, we've gone from not really understanding — or just beginning to understand — what neural networks and the basic hello-world code are all about, to looking at how we can use something called convolutions, which sounds really complicated and really difficult, but once you start using them you'll see they're actually very, very easy to use, particularly for image and text classification. And we saw that in just a few minutes we were able to train a neural network to recognize rock, paper, scissors with an accuracy of about 97 or 98 percent. So that's just the beginning, but now, to show us how to take this framework further, make it real, and do some really cool, production-quality stuff, Karmel is going to share with us. Thanks. Hello — quick show of hands: how many of you were brand new to this and are now paddling as fast as you can to keep your head above water?
Alright, many of you. I'm going to go over some of the tools and features that TensorFlow has to take you from having your model all the way through to production. Don't worry, there's no test at the end, so for those of you who are just trying to keep up right now, just note these words and store them somewhere in the back of your head so you know this is all available. For the rest of you who already have a model and are looking at what you can do with it, pay attention now. Okay, so Laurence went through an image classification problem.
We love image classification problems because they look good on slides, but maybe your data isn't an image classification problem. What if you have categorical data or text-based data? TensorFlow provides a number of tools that let you take different types of data and transform them before feeding them into a machine learning model. For example, here we have maybe some user clickstreams, and we have a user ID. If we fed it directly into a deep learning model, the model would expect it to be a real-valued number, and it might think that user number 125 has some relationship with user 126, even though that's not actually true. So we need to be able to take data like this and transform it into data our model can understand. How do we do that in TensorFlow? One of the tools that we use extensively within Google is feature columns. These are configurations that let you set up transformations on the incoming data, so here you can see we're taking our user ID and saying, hey, this is a categorical column, and when we pass data into it we don't want the model to use it as a raw categorical value.
We want to transform it, in this case into an embedding (it could also be a one-hot representation). Here we're going to make an embedding that is actually learned as we train our model. That embedding, and the other feature columns you have, can then be fed directly into Keras layers: here we have a DenseFeatures layer that takes all of these transformations and runs them when we pass our data in, and that is fed directly into a Keras model. So when we pass input data through, the transformations happen before we actually start learning from the data, and that ensures our model is learning what we want it to learn, using real-valued numerical data.
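A minimal sketch of that pattern (the column name, bucket count, embedding dimension, and downstream layers are made up for illustration):

```python
import tensorflow as tf

# Treat user_id as a categorical column, then learn an 8-dimensional
# embedding for it instead of feeding the raw ID into the model.
user_id = tf.feature_column.categorical_column_with_identity(
    'user_id', num_buckets=10000)
user_embedding = tf.feature_column.embedding_column(user_id, dimension=8)

model = tf.keras.Sequential([
    # DenseFeatures applies the feature-column transformations to the input.
    tf.keras.layers.DenseFeatures([user_embedding]),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
```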
So what do you do with that layer once it's in your model? In Keras we provide quite a few layers. Laurence explained convolutional layers and pooling layers — those are some of the most popular ones in image models — but we have lots of layers depending on what your needs are, so many that I couldn't fit them in a single screenshot here: there are dropout layers, batch normalization, all kinds of sampling layers. So no matter what kind of architecture you're building — a small image classification model for your own use case, whatever it is, or the latest and greatest research model — there is a set of built-in layers that will make it much easier for you. And if you have a custom use case that isn't covered by one of those layers — maybe you have custom algorithms or custom functionality — one of the beauties of Keras is that it makes it easy to subclass layers to build your own functionality.
Here we have a Poincaré normalization layer that implements a Poincaré embedding. It isn't provided out of the box with TensorFlow, but a community member has contributed this layer to the TensorFlow Addons repository, where we provide a number of special-case custom layers. It's useful if you need Poincaré normalization, but it's also a very good example of how you could write a custom layer to handle your needs when we don't provide one out of the box: you write the call method that handles the forward pass of the layer, and you can check the TensorFlow Addons repository to see more examples of layers like this. In fact, almost everything in Keras can be subclassed: metrics, losses, optimizers. If you need functionality that isn't provided out of the box, we try to make it easy for you to build on top of what Keras already provides while taking advantage of the whole TensorFlow and Keras ecosystem. Here I'm subclassing a model, so if I need a custom forward pass in my model, I can easily do it in the call method, and I can define custom training loops within the custom model. This makes it easy to do, in this case, something trivial like multiplying by a magic number, but in many models you need to do something different from the standard training step, and you can customize the training loop this way while still taking advantage of all the tools Keras provides.
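As a sketch of the subclassing pattern (this is not the Addons PoincareNormalize implementation — just a hypothetical layer that scales its inputs to unit length):

```python
import tensorflow as tf

class SimpleNormalize(tf.keras.layers.Layer):
    """A custom layer that normalizes each input vector along the last axis."""

    def __init__(self, epsilon=1e-5, **kwargs):
        super().__init__(**kwargs)
        self.epsilon = epsilon

    def call(self, inputs):
        # call() defines the forward pass of the layer.
        norm = tf.norm(inputs, axis=-1, keepdims=True)
        return inputs / (norm + self.epsilon)
```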
One of the problems with custom models and more complicated models is that it's hard to know whether you're really doing what you think you're doing and whether your model is actually training. One of the tools we provide for Keras, and TensorFlow more generally, is TensorBoard. This is a visualization tool; it's web-based and runs a server that receives data as your model trains, so you can see in real time, epoch by epoch or step by step, how your model is doing. Here you can see the accuracy and loss as the model trains and converges, and this lets you track your model while training it and make sure you're actually making progress toward convergence.
Anyone using Keras also gets the full graph of the layers they've used. You can drill down into them and get the op-level TensorFlow graph as well, and this is really useful for debugging, to make sure you've connected your model correctly and that you're actually building and training what you think you're training. In Keras, the way you add this is as easy as a few lines of code: here we have a TensorBoard callback that we define and add to our model during training, and it will write a bunch of different metrics to log files on disk, which are then read by the TensorBoard web GUI. As a bonus, you get performance profiling integrated with that.
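That callback setup is as small as it sounds — a hedged sketch, with the log directory, epoch count, and data names as placeholders:

```python
import tensorflow as tf

# Write training metrics to disk; point TensorBoard at the same directory:
#   tensorboard --logdir ./logs
tensorboard_cb = tf.keras.callbacks.TensorBoard(log_dir='./logs')

model.fit(train_images, train_labels, epochs=10, callbacks=[tensorboard_cb])
```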
One of the tabs in TensorBoard will show you where all your operations are running and where you have performance bottlenecks. This is extremely useful as you start building bigger and bigger models, because you'll find that training performance can become one of the bottlenecks in your process, and you really want to make it faster. Speaking of performance: this is a graph of how long it takes ResNet-50, one of the most popular machine learning models for image classification, to train using a GPU. Don't even ask how long it takes on a CPU, because no one likes to sit there and wait until it's done, but you can see that on a single GPU it takes almost a week.
One of the advantages of deep learning is that it's very easy to parallelize, so what we want to offer in TensorFlow are ways to take this training process and parallelize it. The way we do that in TensorFlow 2.0 is by providing a set of distribution strategies that make it much easier for you to reuse your existing model code. Here we have a Keras model that looks like many of the others you've seen throughout this talk, and we're going to spread it across multiple GPUs: we add MirroredStrategy, and with these few lines of code we can now distribute our model across multiple GPUs.
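A minimal sketch of that pattern (the model and dataset here are placeholders; the key part is building and compiling inside the strategy scope):

```python
import tensorflow as tf

# Replicate the model across all GPUs available on this machine.
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    # Variables created here are mirrored onto every device.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(512, activation='relu'),
        tf.keras.layers.Dense(3, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])

# Batches are split across the replicas during training.
model.fit(train_dataset, epochs=10)
```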
These strategies have been designed from the ground up to be easy to use, to scale across many different architectures, and to give you great performance out of the box. What are we actually doing here? You can see that with those few lines of code, by building our model under the strategy scope, we've taken the model and copied it across all of our different devices. In this image, let's say we have four GPUs: we copy our model onto those GPUs and we shard the input data, which means the input is actually processed in parallel on each of your devices, and that way you can scale model training roughly linearly with the number of devices you have — so if you have four GPUs, you can run about four times as fast.
Here's what it ends up looking like for ResNet: you can see that we get excellent scaling right out of the box. What you get with this is that your variables are mirrored and kept in sync across all available devices, and batches are prefetched. All of this contributes to your models training much faster, all without code changes when you're using Keras. And MirroredStrategy with multiple GPUs is just the beginning: as you scale models the way we do at Google, for example, you may want to use multiple nodes — multiple servers, each of which has its own set of GPUs.
You can use multi-worker mirrored strategy, which will take your model and replicate it on multiple machines, all working synchronously to train your model, mirroring variables across all of them. This lets you train your model faster than ever, and this API is still experimental as we develop it, but in TensorFlow 2.0 you'll be able to run it out of the box and get that great performance on large-scale clusters. So everything I've talked about so far falls under the heading of training models, and we find that a lot of model builders only think about the training part. But if you have a machine learning model that you're trying to put into production, you know that's only half the story; there's another half, which is:
how do I take what I've learned and actually serve it to customers, or whoever the end user is? In TensorFlow, the way we do that is you serialize your model to a SavedModel. The SavedModel becomes the serialized format of your model, which is then integrated with the rest of the TensorFlow ecosystem, allowing you to deploy that model to production. For example, we have several different libraries and utilities that can take this SavedModel: TensorFlow Serving can take that model and serve it behind web-based requests — this is what we use at Google for some of our larger-scale systems; TensorFlow Lite is for mobile development; and TensorFlow.js is a native web solution for serving your models.
I won't have time to go over all of these in the next few minutes, but I'll talk a little more about TensorFlow Serving and TensorFlow Lite. First, though, how do you write a saved model in TensorFlow 2.0? It's easy and works out of the box: you take your Keras model and call model.save, and this writes the TensorFlow SavedModel format. It's a serialized version of your model that includes the entire graph and all the variables and weights and everything it has learned, written to disk so you can hand it to someone else — or, say, load it back into Python and get back the full Python object state.
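A quick sketch of that round trip (the path is a placeholder):

```python
import tensorflow as tf

# Export the trained Keras model in the SavedModel format.
model.save('saved_model/rock_paper_scissors')

# Later, or on another machine, load it back and keep using it.
reloaded = tf.keras.models.load_model('saved_model/rock_paper_scissors')
```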
You could then continue training and keep using it, you could fine-tune it, or you could take that model and load it into TF Serving, so that TensorFlow Serving responds to gRPC or REST requests: it acts as an interface that takes requests, sends them to your model for inference, and gives you back the result. So if you're creating a web application for our rock, paper, scissors game, you can take a picture and send it to your server, and the server will ask the model: hey, what is this?
It sends the response back based on what the model found, and that way you get the full round trip. TensorFlow Serving is what we use internally for many of our larger machine learning models; it has been optimized for low latency and high throughput, and you can check it out at tensorflow.org. There is also a complete set of processing components and production pipeline tooling that we call TensorFlow Extended, or TFX, and you can learn more about it at tensorflow.org using that handy QR code right there. And maybe you have a model and your web app, but you really want it on a phone, because you want to be mobile-first and take it anywhere.
So TensorFlow Lite is the library we provide to convert your saved model into a very small footprint so it can fit on your mobile device. It can fit on embedded devices, on Raspberry Pis, on Edge TPUs — we now run these models on several different kinds of devices. The way to do this is to take the same SavedModel, from the same model code that you originally wrote, and use the TF Lite converter, which shrinks the footprint of that model; it can then be loaded directly onto the device, and that allows you to run it on-device, without internet, without a server in the background, whatever your model is, and take TensorFlow wherever you want it to be.
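A hedged sketch of that conversion step (paths are placeholders):

```python
import tensorflow as tf

# Convert the SavedModel exported earlier into a compact TFLite model.
converter = tf.lite.TFLiteConverter.from_saved_model('saved_model/rock_paper_scissors')
tflite_model = converter.convert()

# Write the flat buffer to disk so it can be bundled with a mobile app.
with open('rock_paper_scissors.tflite', 'wb') as f:
    f.write(tflite_model)
```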
So we've gone very quickly from some machine learning fundamentals, to building your first model, to some of the tools TensorFlow provides to take models and deploy them to production. What do you do now? Well, there's a lot more available: you can go to the Google developers site, you can go to tensorflow.org — we have a ton of tutorials — and you can go to GitHub; this is all open source, so you can see the different libraries there, ask questions, and submit PRs (we love PRs). And with that, we thank you.
