Docker Crash Course for Absolute Beginners [NEW]

May 09, 2024
In this video, I will teach you all the main concepts of Docker, including your first hands-on experience with it. So if you have to use Docker at work, or if you want to learn Docker to improve your engineering skills and need to get started fast, understand all the main concepts, and learn the basics of how to work with Docker, this crash course is exactly right for you. First, we will start by explaining what Docker is, why it was created, what problems it solves in engineering, and how it helps in the software development and deployment process, so you will understand exactly why Docker is so important and why it has become so popular and widely used in IT projects. As a virtualization solution, Docker is an improvement on virtual machines, or the next step of evolution.
I will also explain the difference between a virtual machine and Docker and what the advantages of Docker are in this comparison. After we have understood why we want to use Docker in the first place, we will install Docker and learn how to actually work with it. We will learn the concepts of Docker images and containers, public and private Docker registries, and we will run containers locally based on some of the images available in the public Docker registry called Docker Hub. We will also learn how to create your own images using a Docker image blueprint called a Dockerfile, and of course we will see all of this in action and learn all the Docker commands to pull images, run containers, create your own Docker image and so on. We will also learn how to version images with image tags. Finally, after you have learned how to work with Docker,
I will also explain, with animated graphics, how Docker fits into the big picture of the software development and deployment process, so that at the end of this video you will feel much more confident in your knowledge and understanding of Docker and can easily build on this basic knowledge to become a Docker power user if you want. In the video description below I will provide some resources to learn even more about Docker and advance further. But before we start: it seems like a lot of you watching the videos on this channel are not subscribed yet, so if you are getting value from the free tutorials I regularly post on this channel, be sure to subscribe so you don't miss any future videos or tutorials.
I would also love to connect with you on my other social media accounts, where I post behind-the-scenes content, weekly updates and so on, so I hope to connect with you there too. I'm really excited to show you all of this, so let's start with the most important question: what is Docker, why was it created, and what problem does it solve? In simple words, Docker is virtualization software that makes developing and deploying applications much easier compared to how it was done before Docker was introduced. Docker does this by packaging an application into something called a container that has everything the application needs to run, such as the application code itself, its libraries and dependencies, but also the runtime and environment configuration, so that the application and its runtime environment are packaged into a single Docker artifact that you can easily share and distribute.
Why is this a big deal, and how were applications developed and deployed before Docker was introduced? Let's look at that to understand the benefits of Docker more clearly. How did we develop applications before containers? Usually, when you have a team of developers working on some application, you would have to install all the services that the application depends on, such as database services and so on, directly on your operating system. For example, if you are developing a JavaScript application, you may need a PostgreSQL database, maybe Redis for caching and something like Mosquitto for messaging, because you are building a microservices application. You need all of these services locally in your development environment in order to develop and test the application properly, and each developer on the team would have to install, configure and run all those services in their local development environment. Depending on the operating system they are using, the installation process will be different, because installing a PostgreSQL database on macOS is different from installing it on a Windows machine, for example. Another thing about installing services directly on an operating system following some installation guide is that you typically have multiple steps of installing and then configuring the service, so with multiple commands to run to install and configure the service, the chances of something going wrong and an error occurring are quite high.
This process of setting up a development environment for a developer can be quite tedious depending on how complex your application is. For example, if your application uses 10 services, you would have to do that installation 10 times, once for each service, and again it will differ within the team based on the operating system each developer uses. Now let's look at how containers solve some of these problems. You don't actually have to install any of the services directly on your operating system, because with Docker you have that service packaged into an isolated environment, so you have PostgreSQL with a specific version packaged with all its configuration inside a container. As a developer you don't have to search for binaries to download and install on your machine; you just go ahead and start that service as a Docker container using a single Docker command that fetches the container package from the internet and starts it on your computer. The Docker command will be the same regardless of which operating system you are on, and it will also be the same regardless of which service you are installing, so if you have 10 services that your JavaScript application depends on, you only have to run one Docker command for each container and that's it. As you can see, Docker standardizes the process of running any service in your development environment and makes the whole process much easier, so you can basically focus more on development instead of trying to install and configure services on your machine, and this obviously makes setting up your local development environment much faster and easier than the option without Docker containers.
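As a concrete illustration (the image names, flags and version tags here are my own example, not taken from the video), starting a service this way really is a single command per service:

  docker run redis:7.2                                    # fetches the Redis image if needed and starts it as a container
  docker run -e POSTGRES_PASSWORD=example postgres:15     # same pattern for PostgreSQL; the official image expects a password via -e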
You can even have different versions of the same application running in your local environment without any conflicts, which is very difficult to do if you are installing the same application with different versions directly on your operating system, and we will see all of this in action in the demo part of this video. Now let's look at how containers can improve the application deployment process. Before containers, a traditional deployment process would look like this: the development team produces an application artifact or package, together with a set of instructions on how to install and configure that application package on the server. So you have something like a jar file for a Java application, or something similar depending on the programming language used, and of course you also have some kind of database service or other services that your application needs, again with a set of instructions on how to set them up and configure them on the server so that the application can connect to them and use them. The development team hands that artifact or application package over to the operations team, and the operations team takes care of installing and configuring the application and all its dependent services, like the database. Now the problem with this kind of approach is that, first of all, you need to install and configure everything again directly on the OS, which, as I mentioned in the development context, is quite error-prone and can run into various problems during the configuration process. You can also have conflicts with dependency versions, where two services depend on the same library, for example, but with different versions, and when that happens it makes the setup process much more difficult and complex. So basically a lot of things can go wrong when the operations team installs and configures services on a server.
Another problem that can arise from this type of process is miscommunication between the development team and the operations team, because everything is in a textual guide, like a list of instructions on how to configure and run the application, or maybe some kind of checklist. There may be cases where developers forget to mention some important configuration step, and when that part fails, the operations team has to go back to the developers and ask for more details and input, and this can lead to a lot of back-and-forth communication until the application is successfully deployed on the server. So basically you have this additional communication overhead where the developers have to describe, in some textual or graphic format, how the application should be run, and as I mentioned, this can lead to problems and miscommunication. With containers,
this process is actually simplified, because now the developers create an application package that includes not only the code itself but also all the dependencies and configuration of the application. Instead of having to write that down in some textual document format, they basically package all of that inside the application artifact, and since it's already encapsulated in one environment, the operations team doesn't have to configure any of this directly on the server, which makes the whole process much easier and leaves less room for the problems I mentioned above. The only thing the operations team has to do now is run a Docker command that gets the container package the developers created and runs it on the server. In the same way, the operations team will run any services that the application needs as Docker containers as well, and that makes the deployment process much easier on the operations side. Of course, the operations team will have to install and configure the Docker runtime on the server first, before
they'll be able to run containers, but that's just a one-time effort for one service or technology, and once you have the Docker runtime installed, you can simply run Docker containers on that server. At the beginning I mentioned that Docker is a virtualization tool, like a virtual machine, and virtual machines have been around for a long time, so why was Docker adopted so widely? What advantage does it have over virtual machines, and what is the difference between the two? For that, we need to look a little at how Docker works on a technical level. I also said that with Docker you don't need to install services directly on the OS, but in that case, how does Docker run its containers on an OS?
To understand all this, let's first look at how an operating system works. Operating systems have two main layers: the operating system kernel and the operating system applications layer. The kernel is the part that communicates with hardware components such as CPU, memory, storage and so on. When you have a physical machine with all these resources and you install the operating system on that physical machine, the operating system kernel is the part that communicates with the hardware components to allocate resources like CPU, memory and storage to the applications that then run on that operating system. Those applications are part of the applications layer and run on top of the kernel layer, so the kernel is kind of an intermediary between the applications you see when you interact with your computer and the underlying hardware of your computer. Now, since Docker and virtual machines are both virtualization tools, the question is: which part of the operating system do they actually virtualize? That is where the main difference between Docker and virtual machines lies. Docker virtualizes the application layer. This means that when you run a Docker container, it actually contains the application layer of the operating system and some other applications installed on top of that application layer, which could be a Java or Python runtime or whatever, and it uses the host's kernel because it doesn't have its own kernel. The virtual machine, on the other hand, has the application layer and its own kernel, so it virtualizes the entire operating system, which means that when you download a virtual machine image onto your host, it doesn't use the host's kernel, it boots its own.
So what is the difference between Docker and a virtual machine in practice? First of all, the size: Docker packages, or images, are much smaller because they only have to implement one layer of the operating system, so Docker images are typically a couple of megabytes large. Virtual machine images, on the other hand, can be a couple of gigabytes, which means that when working with Docker you actually save a lot of disk space. The second difference is speed: you can run and start Docker containers much faster than virtual machines, because a virtual machine has to boot up a kernel every time it starts, while a Docker container simply reuses the host kernel and just starts the application layer on top.
While a virtual machine needs a couple of minutes to start, Docker containers typically start within a few milliseconds. The third difference is compatibility: you can run a virtual machine image of any OS on any other OS host, so on a Windows machine you can run a Linux virtual machine, for example. You can't do that with Docker, at least not directly, so what's the problem here? Let's say you have a Windows operating system with the Windows kernel and its application layer, and you want to run a Linux-based Docker image directly on that Windows host. The problem is that the Linux-based Docker image cannot use the Windows kernel; it would need a Linux kernel to run, because you can't run a Linux application layer on a Windows kernel. So that's kind of a problem with Docker; however, when you're developing on Windows or macOS you do want to run those containers, because most containers for popular services are actually Linux-based. It's also interesting to note that Docker was originally written and created for Linux, but Docker later developed what is called Docker Desktop for Windows and Mac, which made it possible to run Linux-based containers on Windows and Mac computers as well. The way it works is that Docker Desktop uses a hypervisor layer with a lightweight Linux distribution on top of it to provide the necessary Linux kernel, and in this way makes it possible to run Linux-based containers on Windows and macOS. By the way, if you want to understand more about virtualization, how virtual machines work and what a hypervisor is, for example, you can watch my other video where I explain all of that in detail. So this means that for local development, as an engineer, you would install Docker Desktop on your Windows or macOS computer to run Linux-based images, which, as I mentioned, most images for popular services, databases and so on mostly
are, being based on Linux, so you would need that. And that brings us to installing Docker: to do some demos and learn Docker in practice, you need to install it first. To install Docker, you just need to go to the official page to get the installation guide and follow the steps there, because Docker is updated all the time and the installation changes, so instead of me showing you steps that may work now but may be outdated in the future, you should always refer to the latest documentation for the installation guide of any tool.
So if we look at installing Docker Desktop, click on one of those links, like install on Windows. That's Docker Desktop, the tool I mentioned that solves the problem of running Linux-based images on a different operating system, but it actually includes many other things when you install it. So what exactly are you installing with Docker Desktop, and what is included there? Basically, you get the Docker service itself, called the Docker engine, which is the main part of Docker that makes this virtualization possible. But when we have a service, we need to communicate with it, right? So we need a client that can talk to that service. Docker Desktop actually comes with a command line interface client, which means we can run Docker commands on the command line to create containers, start them, stop them, delete them and so on, and it also comes with a GUI client, so if you're not comfortable working with the command line, you can use the GUI, where you can do all of these things with a nice user-friendly interface. So you get all of these things when you install Docker Desktop, basically everything you need to get started with Docker. Of course, depending on which OS you are on, you will choose Mac, Windows or Linux, so click on one of them and basically just follow the instructions. There are some system requirements: you need to check things like the version of your macOS and how many resources it will need, and you also have options for Mac with an Intel chip or Mac with Apple Silicon, so you can toggle between them and choose the guide that matches the specifications of your computer. Once you have checked the system requirements, go ahead and click on one of them.
In my case, I have a Mac with an Intel chip, so I would click on this, and that is actually the Docker Desktop installer. If I click on it, it downloads this DMG image, and once downloaded, I basically follow the steps outlined here: double-click on it, open the application and so on. It's the same for Windows: if you have Windows, basically click on this and download Docker Desktop for Windows, and make sure to check the system requirements and prepare everything you need to start Docker. In general, for the latest versions of Windows, Mac or any operating system, it should be quite easy and simple to install Docker, so go ahead and do it. Once you are done with the installation, you can simply start the service by searching for Docker, and if I click on it, you will see here that it is actually starting the Docker service, the Docker engine, and there you go, it's running. This view that you're seeing in this window is actually the Docker GUI that I mentioned, so that's the client you can use to interact with the Docker engine. You have a list of containers currently running, and there are none; the same with the list of images if I switch to images. I cleaned my environment, so I am starting from scratch with an empty state, just like you, so we are ready to start using Docker.
But first you might wonder what images are, and that's what I will explain next, because it is a very important concept in Docker. I mentioned that Docker allows you to package the application with its environment configuration into a package that you can easily share and distribute, like an application artifact file, similar to when we create a zip, tar or jar file that we can upload to an artifact storage and then download to the server, or locally, when we need it. The package or artifact that we produce with Docker is called a Docker image, so it's basically an application artifact. But unlike a jar or other application artifacts, it not only has the compiled application code inside it, it also has information about the environment configuration: it has the application layer of the operating system, as I mentioned, plus tools like node and npm or the Java runtime installed on top, depending on which programming language your app was written in. For example, if you have a JavaScript app, you would need Node.js and npm to run your app properly, so in the Docker image you would already have node and npm installed. You can also add the environment variables that your application needs, create directories, create files or any other environment configuration that you need around your application, so that all that information is packaged in the Docker image along with the application code, and that is the big advantage of Docker that we talked about. Like I said, the package is called an image, so if that is an image, what is a container?
Well, we need to start that application package somewhere, right? When we take that package or image and download it to a server or laptop where we want to run it, the application actually has to run on that computer, and when we run that image on an OS and the application inside starts in its preconfigured environment, that gives us a container. So a running instance of an image is a container; a container is basically a running instance of an image, and from the same image you can run multiple containers, which is a legitimate use case if you need to run multiple instances of the same application to increase performance, for example. And that's exactly what we were seeing here.
So we have the images, which are basically the application packages, and from those images we can start containers, which will be listed here as running instances of those images. I also said that in addition to the GUI, we get a command line interface client, a Docker client that can communicate with the Docker engine, and since we installed Docker Desktop, we should have that Docker CLI available locally as well. That means if you open your terminal you should be able to run Docker commands. For example, we can check what images we have available locally: if I do docker images, it will give me a list of the images I have locally, which in this case is none, the same as we saw in the GUI, and I can also check containers using the docker ps command, and again, I don't have any containers running yet.
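For reference, those two check commands are simply:

  docker images   # list the images available locally
  docker ps       # list the currently running containers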
Before I continue, I want to thank Nethopper. Nethopper's cloud platform, called Kubernetes Application Operations, offers an easy way for development teams to deliver, manage, update, connect, secure and monitor applications on one or more Kubernetes clusters. With this platform, they basically create a virtual network layer that connects multiple environments; for example, if you have multiple cloud platforms and multiple Kubernetes clusters, even your own local data center where your application is deployed, you can connect all of this into one virtual network so you can deploy and operate your Kubernetes workloads as if they were in one cluster or infrastructure environment. The GitOps-centric approach they use provides the visibility to know who did what and when, for both their infrastructure and applications, so with Nethopper, enterprises can automate their operations, and instead of creating their own platform, development teams can focus on what matters most, which is releasing more application features faster. So give them a look: you can sign up for a free account and try it out to see if Nethopper is the right solution for you.
Now it is clear that we get containers by running images, but how do we get images to run containers from? Let's say we want to run a database container, a Redis container or some log collector service container; how do we get their Docker images? That's where Docker registries come in: there are ready-made Docker images available online in image storage, a registry. Basically, this is dedicated storage for Docker image artifacts, and these images are usually developed by the companies behind services like Redis, MongoDB and so on, as well as by the Docker community itself, which creates what are called official images. So you know that this MongoDB image, for example, was actually created by MongoDB itself or by the Docker community, and that it is a verified official image. Docker offers the largest Docker registry, called Docker Hub, where you can find any of these official images and many other images that different companies or individual developers have created and uploaded there. So if we search for Docker Hub, you will see the container image library from Docker Hub, and this is what it looks like.
You don't actually need to register or sign in to Docker Hub to find those official images, so anyone can go to this website and browse the container images. Here in the search bar you can type any service you are looking for, for example Redis, which I mentioned, and if I press Enter you will see a list of various images related to Redis, as well as the Redis service itself as a Docker image, and here you have this badge or label that says Docker Official Image. For example, for the Redis image that we are going to choose here, you will see that it is actually maintained by the Docker community.
The way it works is that Docker has a dedicated team responsible for reviewing and publishing all content in the official Docker images, and this team collaborates with the technology creators or maintainers, as well as with security experts, to create and manage those official Docker images. This ensures that not only are the technology creators involved in creating the official image, but also that Docker security best practices and production best practices are considered in creating the images. This is basically the description page with all the information on how to use this Docker image, what it includes and so on. And again, like I said, Docker Hub is the largest Docker image registry, so you can find images for any service you want to use on Docker Hub. Now, of course, technology changes and there are updates to those applications and technologies, so you get a new version of Redis or MongoDB, and in that case a new Docker image will be created. So images also have versions, and these are called image tags, and on each image page you have the list of versions or tags of that image listed here. This is for Redis, and if I search for Postgres, for example, you will see different image tags for the Postgres image.
They are also listed here, so when you are using a technology and you need a specific version, you can choose a Docker image that has that version of the technology. There is a special tag that all images have, called latest, so here you will see this latest tag. The latest tag basically points to the last image that was built, so if you don't specify or choose a version explicitly, you will get the latest image from Docker Hub. Now we have seen what images are and where you can get them, so the question is: how do we take an image from Docker Hub and download it locally to our computer so that we can start a container from it? First, we locate the image that we want to run
as a local container. For our demo, I will be using an nginx image, so go ahead and search for nginx, which is basically a simple web server with a welcome page, so we can access our container from the browser to validate that the container started correctly; that's why I chose nginx. Here you have a bunch of image tags to choose from, so the second step after locating the image is to choose a specific image tag, and keep in mind that selecting a specific image version is the best practice in most cases. Let's say we choose version 1.23, so we choose this tag here. To download an image, we go back to our terminal and run the docker pull command and specify the name of the image, which is nginx, so you have that full command here as well: that's the name of the image, nginx, and then we specify the image tag, separating it with a colon, and then the version 1.23 that we chose. That's the whole command for the Docker client to contact Docker Hub and say: I want to grab the nginx image with this specific tag and download it locally. So let's run it, and here we see that it is pulling the image from the Docker Hub image registry. The reason you don't have to tell Docker to look for that image on Docker Hub is that Docker Hub is actually the default location where Docker will look for any image that we specify here, so it is automatically set as the location to download images from. The download is done, and now if I run the docker images command again, like we did here,
we should actually see an image locally now, which is nginx with the 1.23 image tag, plus some other information like the size of the image, which is usually in megabytes, as I mentioned. So now we have an image locally. And if we pull an image without any specific tag, we basically just do docker pull with the name of the image. If I run this, you'll see that it's pulling the latest image automatically, and now if I do docker images again, we'll see two nginx images with two different tags, so they are actually two separate images with different versions.
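To recap the commands from this part (the 1.23 tag is the one chosen above):

  docker pull nginx:1.23   # pull a specific version (tag) from Docker Hub
  docker pull nginx        # no tag given, so Docker pulls nginx:latest
  docker images            # both nginx images are now listed locally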
Great, we now have images locally, but they're obviously only useful when we run them in a container. How do we do that? It is also very easy: we choose the image that we already have available locally, with its tag. Let's say we want to run this image as a container; we run the docker run command with the image name and tag, it's very easy. Let's run it, and that command actually starts the container based on the image. We know that the container started because we see the logs of the nginx service starting inside the container; what we see in the console are actually the container's logs. A couple of scripts run, and right here the worker processes start, and the container is running. So now, if I open a new terminal session like this and do docker ps, I should see a container, this one here, in the list of running containers, and we have information about the container: we have the ID, the image the container is based on including the tag, when it was created, and also the name of the container, so we have the ID and the name of the container.
This is a name that Docker automatically generates and assigns to a container when it is created, so it is a randomly generated name. Now, if I come back here, you will see that these container logs are actually blocking the terminal, so if I want to get the terminal back and I do Ctrl-C to exit, the container exits and the process actually dies. Now if I do docker ps, you will see that there is no container running. But we can start a container in the background, without it blocking the terminal, by adding a flag called -d, which means detached, so it detaches the Docker process from the terminal.
If I run this, you will see that it no longer blocks the terminal, and instead of showing the nginx logs from inside the container, it just prints the full container ID. So now if I do docker ps here in the same terminal, you should see that container running again, and this is basically the same ID, or part of the full ID string shown here. But when we start a container in the background in detached mode, you might still want to see the application logs inside the container; you might want to see how nginx started and what it actually logged.
For that, you can use another Docker command called docker logs with the container ID, like this, and it will print the application logs from the container. Now, to create the nginx container, we first pulled the image and then created a container from that image, but we can actually skip the pull command and run the run command directly, even if the image is not available locally. We have these two images available locally, but in docker run you can provide any image that exists on Docker Hub; it doesn't necessarily have to exist locally on your computer, so you don't have to pull it first. So if I go back, we can choose a different image version.
Let's choose 1.22-alpine, an image tag that we don't have locally, or of course this could be a completely different service, it doesn't matter. Basically, any image that we don't have locally can be run directly using the docker run command. What it does is first try to locate that image locally, and if it doesn't find it, it goes to Docker Hub by default and pulls the image from there automatically, which is very convenient as it basically does both with one command. So it downloaded the image with this tag and started the container, and now if we do docker ps, we should have two containers running with different versions of nginx. Remember I said that Docker solves the problem of running different versions of the same application at the same time; that's how simple it is to do with Docker. We can actually stop this container now, and again we have just that one nginx container with this version.
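As a quick sketch of this shortcut (the container IDs will of course differ on your machine):

  docker run -d nginx:1.22-alpine   # image is not local, so Docker pulls it from Docker Hub, then starts the container
  docker ps                         # shows both running nginx containers with their different versions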
Now the important question is: how do we access this container? We can't right now, because the container is running in Docker's isolated network, so we can't access it from our local computer's browser, for example. We first need to expose the container to our local network, which may sound a little difficult, but it's very easy. Basically, we're going to do what's called port binding. The container runs on some port, and each application has a standard port that it runs on: the nginx application always runs on port 80, Redis runs on port 6379, so these are the standard ports for these applications, and that is the port the container is running on. For nginx, we see the ports under the ports column here: the application is running on port 80 inside the container. So now if I try to access the nginx container on port 80 from the browser, typing localhost and the port and pressing Enter, you will see that there is nothing available on this port on localhost. So now we can tell Docker: hey, you know what, bind that container's
port 80 to our localhost on whatever specific port I tell you, like 8080 or 9000, it doesn't really matter, so I can access the container, or whatever is running inside the container, as if it were running on my localhost port 9000. We do this with an additional flag when creating a Docker container. So what we're going to do is first stop this container, with docker stop, which basically stops the running container, and then we're going to create a new container: we're going to do docker run nginx with the same version, run it in the background in detached mode, and now we're going to bind the port with an additional -p flag, and it's very easy.
We tell Docker the port of the nginx application inside the container, which is 80, and say: take that and bind it to localhost on port 9000, for example; that's the port I'm choosing. So this flag here will expose the container to our local network, or localhost, so we can access the nginx process running in the container on port 9000. Now if I run this, let's see, the container is running, and in the ports section we see a different value: instead of just 80 we have this port binding information. So if you forget which port you chose, or if you have 10 different containers, with docker ps you can see on which port each container can be accessed on your localhost. So this will be the port.
Now if I go back to the browser and, instead of localhost 80, type localhost 9000 and press Enter, there you have the "Welcome to nginx" page, which means we are actually accessing our application. We can also see that in the logs: docker logs with the container ID, and there you go, this is the log that the nginx application produced when it received a request from the Chrome browser of the macOS machine, so we see that our request actually reached the nginx application running inside the container. That's how easy it is to run a service inside a container and then access it locally.
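Putting the port-binding steps together as commands (replace <container-id> with the real ID shown by docker ps):

  docker stop <container-id>            # stop the previous container
  docker run -d -p 9000:80 nginx:1.23   # bind container port 80 to port 9000 on localhost
  docker ps                             # the PORTS column now shows 0.0.0.0:9000->80/tcp
  docker logs <container-id>            # the nginx access log shows the request from the browser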
Now, like I said, you can choose any port you want, but it's also pretty much a standard to use the same port on your host machine that the container uses, so if you were running a MySQL container that starts on port 3306, you'd bind it to localhost 3306; that's kind of a standard. Now there is one thing I want to point out here, and that is that the docker run command actually creates a new container every time; it does not reuse the container we created earlier, which means that since we ran the docker run command a couple of times, we should now
have multiple containers on our laptop. However, if I do docker ps, I only see the running container; I don't see the ones that I created but stopped. Those containers actually still exist, so if I do docker ps with the -a flag and run this, I actually get a list of all the containers, whether they're running or stopped. So this is the active container that is still running, and these are the stopped ones — it even says exited 10 minutes ago, six minutes ago — so we have four containers with different configurations. Previously I showed you the docker stop command, which basically stops an actively running container, so we can stop this one, and now it will show it as a stopped container that exited a second ago. In the same way, you can also restart a container that you created before, without having to create a new one with the docker run command. For that we have docker start, which takes the container ID and starts the container again.
You can start multiple containers at once if you want, and now we have two containers running. You've seen that we use the container ID in various Docker commands — to start the container, to stop it, to check logs and so on — but the ID is hard to remember and you have to look it up all the time. Alternatively, we can also use the container name for all these commands instead of the ID. Docker automatically generates a name, but we can actually override it and give our containers more meaningful names when we create them. So we can stop those two containers using the ID or the name, like this — these are two different containers, one referenced by ID, one by name — and we're going to stop both. There you have it.
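The lifecycle commands from this part, for reference:

  docker ps -a                  # list all containers, running and stopped
  docker stop <container-id>    # stop a running container
  docker start <container-id>   # restart an existing container without creating a new one
  docker start <id-1> <id-2>    # several containers can be started at once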
Now, when we create a new container, we can give it a specific name, and there is another flag for that, which is --name, and then we provide the name that we want to give our container — let's say it's a web application, so web-app is what we will call our container — and run it. If I do docker ps, you see that the name is not some randomly generated one; our container is called web-app, so now we can do docker logs with the name of our container, like this. Now, what we have learned about so far, Docker Hub, is actually what is called a public image registry, which means the images we use are visible and available to the public. But when a company creates its own images of its own applications, of course it doesn't want them to be publicly available; that's why there are what are called private Docker registries, and there are many of them. Almost every cloud provider has a service for a private Docker registry: for example AWS has ECR, the Elastic Container Registry, and Google and Azure have their own Docker registries too. Nexus, which is a popular artifact storage service, has a Docker registry, and even Docker Hub has a private Docker registry, so on the Docker Hub home page you saw this sign-up form.
Basically, if you want to store your private Docker images on Docker Hub, you can create a private repository on Docker Hub, or even create a public one and upload your images there. That's why I have an account: I've uploaded a couple of images to Docker Hub that my students can download for different courses. There's one more concept I want to mention related to registries, which is something called a repository — you often hear both Docker repository and Docker registry, so what is the difference between them? Explained very simply: AWS ECR is a registry, so basically it is a service that provides storage for images, and within that registry you can have multiple repositories for all the different images of your applications, so each application gets its own repository, and in that repository you can store different image versions, or tags, of that same application. In the same way, Docker Hub is a registry, a service for storing images, and on Docker Hub you can have public repositories for storing images that will be publicly accessible, or you can have private repositories for different applications, and again you can have a dedicated repository for each application. So that's a side note, so that you know the difference between these terms and concepts when you hear them.
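One way to see the registry/repository/tag split is in the full image name. The Docker Hub path below is the real default; the ECR address is a made-up example of what a private repository address can look like:

  # <registry>/<repository>:<tag>
  docker pull docker.io/library/nginx:1.23
  docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:1.0   # hypothetical private ECR repository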
Now, I mentioned that companies would want to create their own custom images for their own apps, so how does that actually work? How can I create my own Docker image for my application? The use case for this is: when I'm done with development, the app is ready, it has some features and we want to release it to the end users, so we want to run it on a deployment server. To make the deployment process easier, we want to deploy our application as a Docker container, along with the database and other services, which will also run as Docker containers. So how can we take the code of the application we developed and package it into a Docker image? For that, we need to create a definition of how to build an image from our application, and that definition is written in a file called a Dockerfile — that's literally what it has to be called.
Creating a simple Dockerfile is very easy, and in this part we will take a super simple Node.js application that I prepared and write a Dockerfile for that application to create a Docker image from it. Like I said, it is very easy to do. This is the application, and it is extremely simple: I just have a server.js file that basically starts the app on port 3000 and says welcome when you access it from the browser, and we have a package.json file that contains one dependency, the Express library we use to start the app — super simple and easy. That is the application from which we are going to create a Docker image and launch it as a Docker container. So let's go ahead and do that: in the root of the application, we create a new file called Dockerfile — that's the name — and you'll see that most code editors actually detect the Dockerfile and we get this Docker icon here. In this Dockerfile we're going to write a definition of how the image should be built from this application.
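For reference, a minimal sketch of what such an app might look like — the exact code isn't shown here, so treat the welcome text and file layout as assumptions:

  // src/server.js — minimal sketch, assuming Express as the single dependency
  const express = require('express');
  const app = express();

  app.get('/', (req, res) => {
    res.send('Welcome to my awesome app!');
  });

  app.listen(3000, () => {
    console.log('app listening on port 3000');
  });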
So what does our application need? It needs node installed, because node is what runs our application. If I wanted to start this application locally from my terminal, I would run node src/server.js, so the node command with the source folder and server.js starts the application, which means we need that node command available inside the image. That is where the concept of a base image comes in: every Docker image is actually based on a base image, which is mostly a lightweight Linux OS image that has node and npm, or whatever tool you need for your application, installed on top of it. So for a JavaScript application you would have a node base image; if you have a Java application, we would use an image that has the Java runtime installed — again,
a Linux operating system with Java installed on top, and that is the base image. We define the base image using a directive in the Dockerfile called FROM: we are saying, build this image from this base image. If I go back to Docker Hub and search for node, you will see that we have an image that has node and npm installed inside. Base images are just like other images, so basically you can stack and build on top of images in Docker; they are like any other image we saw and also have tags, or image versions. So we will choose the node image with a specific version, and we will use 19-alpine. That is our base image and our first directive in the Dockerfile. Again, this just ensures that when our Node.js application is started in a container, it will have the node and npm commands available inside to run our application. Now, if we started our application with just this, we would get an error, because we first need to install the dependencies of the application.
We only have one dependency, which is the Express library, which means we would have to run the npm install command, which checks the package.json file, reads all the dependencies defined inside it and installs them locally into the node_modules folder. So we are basically mapping the same things we would do to run the application locally, but doing them inside the container, which means we have to run the npm install command inside the container as well. As I mentioned before, most Docker images are based on Linux — Alpine is a lightweight distribution of the Linux operating system — so in the Dockerfile you can write any Linux command you want to run inside the container. Whenever we want to execute any command inside the container, whether it is a Linux command, a node command, an npm command, whatever, we execute it using the RUN directive. That is another directive — you see that the directives are written in uppercase and then comes the command — so RUN npm install will download the dependencies inside the container and create a node_modules folder inside the container before the application starts. Again, think of a container as its own isolated environment.
It has a simple Linux OS with node and npm installed, and we are running npm install in there. But we also need the application code inside the container: we need server.js inside, and we need package.json, because that's what the npm command needs in order to read the dependencies. That is where another directive comes in, one that takes files from our local computer and copies them into the container, a directive called COPY. You can copy individual files like package.json from here into the container, and we can say where in the container, at which location in the file system, it should be copied to; let's say it needs to be copied to a folder called /app inside the container.
So here we have the package.json on our machine, and this here is inside the container, which is a completely isolated system from our local environment. We can copy individual files, and we can also copy entire directories. We obviously also need our application code inside to run the application, so we can copy this entire src directory — we may have multiple files inside — into the container, again to the /app location, with a slash at the end, which is important so that Docker knows to create this folder if it doesn't already exist in the container. So at the root of the Linux file system inside the container there is an app folder, and now all the relevant application files, like package.json and the entire src directory, are copied into the container at this location.
The next thing we want to do, before we can run the npm install command, is change into that directory. On Linux we have the cd command to change into a directory so we can run the following commands inside it; in the Dockerfile, we have a directive for that called WORKDIR, the working directory, which is equivalent to changing into a directory so that all the following commands run in that directory. So we can put /app here, and that sets this path as the default location for whatever comes next. OK, so we copy everything into the container, then we set the working directory, or default directory, inside the container, and then we run npm install, again inside the container, to download all the dependencies that the application needs, which are defined here. Finally, we need to actually run the application, so after npm install, the node command needs to be run, and we learned that to run commands we use the RUN directive; however, this is the last command in the Dockerfile,
something that actually starts the application process itself inside the container, and we have a different directive for that, called CMD. That's basically the last command in the Dockerfile, the one that starts the application, and the syntax for it is the command, which is node, and the parameter, server.js. We copied everything into /app, so we have server.js inside the app directory, and we launch or run it using the node command. That's it — the complete Dockerfile that will create a Docker image for our Node.js application, which we can then start as a container. Now that we have the definition in the Dockerfile, it's time to build the image from this definition.
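Assembled from the directives described above, the complete Dockerfile looks like this:

  FROM node:19-alpine

  COPY package.json /app/
  COPY src /app/

  WORKDIR /app

  RUN npm install

  CMD ["node", "server.js"]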
I'm going to clear this, and switching to the terminal — we can reuse this one — we can run a Docker command to create a Docker image, which is very easy: we just do docker build, and then we have a couple of options that we can provide. The first is the name of the image — just like all those images have names like node, nginx and so on, and tags too — so we can name our image and give it a specific tag, and we do that using the -t option. We can call our app node-app, maybe, with a dash, it doesn't matter, and we can give it a specific tag like 1.0, for example. The last parameter is the location of the Dockerfile, so we are telling Docker to create an image with this name and this tag from the definition in this specific Dockerfile. This is the location of the Dockerfile; in this case we are in the directory where the Dockerfile is located, so it will be the current directory, and this dot basically refers to the current folder where the Dockerfile is located.
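The build command described here, in full:

  docker build -t node-app:1.0 .   # -t names and tags the image; the dot points at the directory containing the Dockerfile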
Now if we run this, as you can see, Docker is actually building the image from our Dockerfile, and it looks like it was successful. Where it started building the image, you see those steps, the directives that we defined here: the first directive runs, then we have the copy as a second step, then we copy the source folder, set the working directory and run npm install, and the last one just starts the application. So now if I do docker images, in addition to the nginx images that we downloaded earlier from Docker Hub, we should see the image we just created: this is the node-app image tagged 1.0, and some other information. That's our image, and now we can launch this image and work with it just like we work with any other image downloaded from Docker Hub. So we'll go ahead and run a container from this node-app image and make sure that the application inside is actually working. We are going to run docker run with the node-app image and tag 1.0, we are going to pass the -d flag to start it in detached mode, and we also want to expose the right port so we can access the node application from localhost. We know that the application inside the container will start on port 3000, because that is what we have defined here, so the application itself will run on port 3000 inside the container, and we can bind it to any
port we want on localhost, and we can use 3000, the same as in the container, so this is the port of the host and this is the port of the container. Now if I run the command and do docker ps, we should see our node application running on port 3000, and now the moment of truth: going back to the browser and opening localhost 3000, the welcome message from our application appears, and we can even check the logs by taking the ID of our node-app container and doing docker logs with the ID, and that is the output of our application inside the container. That's how easy it is to package your application into a Docker image using a Dockerfile and then run it as a container.
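The commands for running and checking the freshly built image (replace <container-id> with the ID printed by docker run or shown in docker ps):

  docker run -d -p 3000:3000 node-app:1.0   # host port 3000 -> container port 3000
  docker ps                                 # the node-app container is listed with its port binding
  docker logs <container-id>                # output of the app from inside the container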
Finally, let's go back to the GUI client that Docker Desktop provides. Here we can also see our containers and images, and this is what the UI looks like: it gives you a pretty good overview of which containers you have, which ones are currently running, which ones are stopped, with their names and so on, and it even has some controls here to start a stopped container like this, or stop it again, restart it, delete it, whatever. In the same way you have a list of images, including our own image, and you can also create containers directly from here using some controls. I personally prefer the command line interface for interacting with Docker, but some people are more comfortable using the visual UI, so you can choose to work with whichever you prefer.
We have now learned many of the basic building blocks of Docker; however, it is also interesting to see how Docker actually fits into the entire software development and deployment process together with many other technologies, and at which steps of that whole process Docker is relevant. So in this final part of the crash course we will look at Docker in an overview of the software development life cycle. Let's consider a simplified scenario where you are developing a JavaScript application on your laptop, directly in your local development environment. Your JavaScript application uses a MongoDB database, and instead of installing it on your laptop, you download a MongoDB Docker container from Docker Hub, then connect your JavaScript application with MongoDB and start developing.
Now let's say you developed the first version of the app locally and want to test it, or deploy it to the development environment where a tester on your team will test it. So you commit your JavaScript application to Git or some other version control system, which triggers a continuous integration build on Jenkins, or whatever you have configured, and the Jenkins build produces artifacts from your application: first it builds your JavaScript application and then creates a Docker image from that JavaScript artifact. So what happens to this Docker image once it is created by Jenkins?
It is pushed to a private Docker repository, so typically in a company you would have a private repository, because you don't want other people to have access to your images, so you push it there. Now, as a next step, which can be configured in Jenkins or with some other scripts or tools, the Docker image has to be deployed to a development server. So you have a development server that pulls the image of your JavaScript application from the private repository, and pulls the MongoDB that your JavaScript application depends on from Docker Hub, and now you have two containers — your custom container and a publicly available MongoDB container — running on the development server. They talk and communicate with each other — you have to configure that, of course — and run as one application, so now if a tester, for example, or another developer logs into the development server, they will be able to test the application. So this is a simplified workflow of how Docker works in a real-life development process. In a short time we have learned all the basic building blocks, the most important parts of Docker, so that you understand what images are, how to start containers, how they work and how to access them, as well as how to actually create your own Docker image and run it as a container. But if you want to learn more about Docker and practice your skills even more, such as how to connect your application to a Docker container, or learn about Docker Compose, Docker volumes and so on, you can watch my complete Docker tutorial. And if you want to learn Docker in the context of DevOps and really master it, with things like private registries, using Docker to run Jenkins, integrating Docker into CI/CD pipelines and using it with other technologies like Terraform, Ansible and so on, you can check out our complete DevOps bootcamp, where you will learn all of this and much more.
