Docker for Beginners: Full Free Course!

Jun 02, 2021
Hello and welcome to the Docker course for beginners. My name is Mumshad Mannambeth and I will be your instructor for this course. I am a DevOps and cloud trainer at KodeKloud.com, which is an interactive online hands-on learning platform. I have worked in the industry as a consultant for over thirteen years and have helped hundreds of thousands of students learn technology in a fun and interactive way. In this course, you'll learn Docker through a series of lectures that use animated illustrations and some fun, simplifying analogies for complex concepts. We have demos that will show you how to install and get started with Docker, and most importantly, we have hands-on labs that you can access right from your browser.
I'll explain more about this in a bit, but first let's look at the objectives of this course. In this course, we first try to understand what containers are, what Docker is, why you might need it, and what it can do for you. We will see how to run a Docker container, how to create your own Docker image, networking in Docker, and how to use Docker Compose; what a Docker registry is and how to deploy your own private registry. We then look at some of these concepts in depth and try to understand how Docker actually works under the hood. We look at Docker for Windows and Mac before finally getting a basic introduction to container orchestration tools like Docker
Swarm and Kubernetes. Here's a quick note on hands-on labs. First of all, to complete this course you don't need to set up your own labs. You may set them up if you wish to have your own environment, and we have a demo for that as well, but as part of this course we provide real labs that you can access right in your browser, anywhere, anytime, and as many times as you want. The labs give you instant access to a terminal, a Docker host, and an accompanying quiz portal. The quiz portal asks a set of questions, such as exploring the environment and gathering information, or you might be asked to perform an action such as running a Docker container.
The quiz portal then validates your work and gives you feedback instantly. Every lecture in this course is accompanied by such challenging, interactive quizzes that make learning Docker a fun activity. So I hope you're as thrilled as I am to get started, so let's begin by looking at a high-level overview of why you need Docker and what it can do for you. Let me start by sharing how I got introduced to Docker in one of my previous projects. I had a requirement to set up an end-to-end application stack that included several different technologies, like a web server using Node.js, a database such as MongoDB, a messaging system like Redis, and an orchestration tool like Ansible. We had a lot of issues developing this application stack with all these different components. First, their compatibility with the underlying operating system was an issue: we had to ensure that all these different services were compatible with the version of the operating system we were planning to use.
There were times when certain versions of these services were not compatible with the operating system, and we had to go back and look for a different operating system that was compatible with all of these different services. Secondly, we had to check the compatibility between these services and the libraries and dependencies on the operating system. We had issues where one service required one version of a dependent library while another service required a different version. The architecture of our application changed over time: we had to upgrade to newer versions of these components, or change the database, etc., and every time something changed we had to go through the same process of checking compatibility between these various components and the underlying infrastructure.
This compatibility matrix issue is usually referred to as the matrix from hell. Next, every time we had a new developer on board, we found it really difficult to set up a new environment. The new developers had to follow a large set of instructions and run hundreds of commands to finally set up their environment. We had to make sure they were using the right operating system and the right versions of each of these components, and each developer had to set all that up by themselves each time. We also had different development, test, and production environments: one developer may be comfortable using one OS and the others may be comfortable using another, and so we couldn't guarantee that the application we were building would run the same way in different environments. All of this made our life in developing, building, and shipping the application really difficult, so I needed something that could help us with the compatibility issue, something that would allow us to modify or change these components without affecting the other components, and even modify the underlying operating system as required. That search landed me on Docker.
With Docker, I was able to run each component in a separate container, with its own dependencies and its own libraries, all on the same VM and OS, but within separate environments or containers. We just had to build the Docker configuration once, and all our developers could now get started with a simple docker run command, irrespective of the underlying operating system they were running; all they had to do was make sure they had Docker installed on their systems. So what are containers? Containers are completely isolated environments: they can have their own processes or services, their own network interfaces, their own mounts, just like virtual machines, except that they all share the same operating system kernel.
We will look at what that means in a bit, but it's also important to note that containers are not new: containers have existed for about 10 years now, and some of the different types of containers are LXC, LXD, LXCFS, etc. Docker utilizes LXC containers. Setting up these container environments is hard, as they are very low level, and that is where Docker offers a high-level tool with several powerful functionalities, making it really easy for end users like us. To understand how Docker works, let us first revisit some basic concepts of operating systems. If you look at operating systems like Ubuntu, Fedora, SUSE, or CentOS, they all consist of two things: an OS kernel and a set of software. The OS kernel is responsible for interacting with the underlying hardware, and while the OS kernel remains the same, which is Linux in this case, it is the software above it that makes these operating systems different. This software may consist of a different user interface, drivers, compilers, file managers, developer tools, and so on. So you have a common Linux kernel shared across all OSes and some custom software that differentiates the operating systems from each other.
We said earlier that Docker containers share the underlying kernel, so what does that actually mean? Let's say we have a system with an Ubuntu OS with Docker installed on it. Docker can run any flavor of OS on top of it, as long as they are all based on the same kernel, in this case Linux. If the underlying OS is Ubuntu, Docker can run a container based on another distribution like Debian, Fedora, SUSE, or CentOS. Each Docker container only has the additional software, which we just talked about in the previous slide, that makes these operating systems different, and Docker uses the underlying kernel of the Docker host, which works with all of the OSes above. So what is an OS that does not share the same kernel? Windows. So you won't be able to run a Windows-based container on a Docker host with Linux on it; for that you would require Docker on a Windows server.
Now, when I say this, most of my students say, hey, wait, that's not true, and they install Docker on Windows, run a Linux-based container, and try to show that it's possible. Well, when you install Docker on Windows and run a Linux container on Windows, you are not really running a Linux container on Windows: Windows runs a Linux container on a Linux virtual machine under the hood, so it's really a Linux container on a Linux virtual machine on Windows. We discuss more about this in the Docker on Windows or Mac lectures later in this course. Now, you might ask, isn't that a disadvantage
then, not being able to run another kernel on the OS? The answer is no, because unlike hypervisors, Docker is not meant to virtualize and run different operating systems and kernels on the same hardware. The main purpose of Docker is to package and containerize applications, and to ship them and run them anywhere, any number of times, as and when you want. That brings us to the differences between virtual machines and containers, a comparison we tend to make, especially those coming from a virtualization background. As you can see on the right, in the case of Docker we have the underlying hardware infrastructure, then the operating system, and then Docker installed on the OS.
Docker then manages the containers that run with just the libraries and dependencies alone. In the case of virtual machines, we have the hypervisor, like ESX, on the hardware, and then the virtual machines on top of it. As you can see, each virtual machine has its own OS inside it, then the dependencies, and then the application. This overhead causes higher utilization of the underlying resources, as there are multiple virtual operating systems and kernels running. The virtual machines also consume more disk space, as each VM is heavy and is typically gigabytes in size, whereas Docker containers are lightweight and are typically megabytes in size. This allows Docker containers to boot up faster, usually in a matter of seconds, whereas virtual machines, as we know, take minutes to boot up, as they need to boot up the entire operating system. It is also important to note that Docker has less isolation, as more resources are shared between containers, like the kernel, whereas VMs have complete isolation from each other. Since VMs don't rely on the underlying OS or kernel, you can run different types of applications built on different OSes, such as Linux-based or Windows-based apps, on the same hypervisor. So those are some differences between the two. Having said that, it's not an either containers or virtual machines situation: it's containers and virtual machines. When you have large environments with thousands of application containers running on thousands of Docker hosts, you will often see containers provisioned on virtual Docker hosts. That way, we can make use of the advantages of both technologies: we can use the benefits of virtualization to easily provision or tear down Docker hosts as required, while at the same time making use of the benefits of Docker to easily provision applications and quickly scale them as required. But remember that in this case we will not provision as many virtual machines as we used to before, because earlier we provisioned a virtual machine for each application;
now you can provision a single virtual machine for hundreds or thousands of containers. So how is it done? There are lots of containerized versions of applications readily available today, so most organizations have their products containerized and available in a public Docker registry called Docker Hub, or Docker Store, already. For example, you can find images of the most common operating systems, databases, and other services and tools. Once you identify the images you need and you install Docker on your host, bringing up an application stack is as easy as running a docker run command with the name of the image. In this case, running a docker run ansible command will run an instance of Ansible on the Docker host.
Similarly, run an instance of MongoDB, Redis, and Node.js using the docker run command. If we need to run multiple instances of the web service, simply add as many instances as you need and configure a load balancer of some kind on the front end. In case one of the instances were to fail, simply destroy that instance and launch a new one. There are other solutions available for handling such cases, which we will look at later in this course. For now, don't focus too much on the commands; we will get to that in a bit, and a rough sketch of what they look like follows below.
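Purely as an illustration of the idea (the image names here follow the narration above; the actual names on Docker Hub may differ, for example the official MongoDB image is called mongo and the Node.js image is called node, and my-web-app is a hypothetical image name):

    # illustrative only: check Docker Hub for the exact image names and tags
    docker run ansible
    docker run mongodb
    docker run redis
    docker run nodejs
    # need more instances of the web application? just run it again
    docker run my-web-app
    docker run my-web-app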
We've been talking a lot about images and containers; let's understand the difference between the two. An image is a package or a template, just like a VM template that you might have worked with in the virtualization world. It is used to create one or more containers. Containers are running instances of images that are isolated and have their own environments and set of processes. As we have seen before, a lot of products have been dockerized already; in case you cannot find what you are looking for, you could create your own image and push it to the Docker Hub repository, making it available for the public. If you look at it, traditionally developers developed applications and then handed them over to the operations team to deploy and manage in production environments. They do that by providing a set of instructions, such as information about how the host must be set up, what prerequisites are to be installed on the host, how the dependencies are to be configured, and so on. Since the operations team did not
actually develop the application on their own, they struggle with setting it up; when they hit an issue, they work with the developers to resolve it. With Docker, the developers and the operations teams work hand in hand to transform the guide into a Dockerfile with both of their requirements. This Dockerfile is then used to create an image for their applications. This image can now run on any host with Docker installed on it and is guaranteed to run the same way everywhere, so the operations team can now simply use the image to deploy the application. Since the image was already working when the developer built it, and operations are not modifying it, it continues to work the same way when deployed in production. And that's one example of how a tool like Docker contributes to the DevOps culture. Well, that's it for now; in the next lecture we will see how to get started with Docker.
So we will now see how to get started with Docker. Docker has two editions: the Community Edition and the Enterprise Edition. The Community Edition is the set of free Docker products; the Enterprise Edition is the certified and supported container platform that comes with enterprise add-ons like image management, image security, and the Universal Control Plane for managing and orchestrating container runtimes, but of course these come at a price. We will discuss more about container orchestration later in this course, along with some alternatives. For now, we will go ahead with the Community Edition. The Community Edition is available on Linux, Mac, and Windows, or on cloud platforms like AWS or Azure.
In the upcoming demo we will see how to install and get started with Docker on a Linux system. Now, if you are on Mac or Windows, you have two options: either install a Linux VM using VirtualBox or some kind of virtualization platform and then follow along with the upcoming demo, which is really the easiest way to get started with Docker; the second option is to install Docker Desktop for Mac or Docker Desktop for Windows, which are native applications. If that is really what you want, check out the Docker for Mac and Windows sections toward the end of this course and then head back here once you are all set up. We will now head over to a demo and see how to install Docker on a Linux machine.
In this demo, we look at how to install and get started with Docker. First of all, identify a system, a physical or virtual machine or laptop, that has a supported operating system; in my case, I have an Ubuntu VM. Go to docker.com and click on Get Docker. You will be taken to the Docker Engine Community Edition page, which is the free version that we are looking for. From the menu on the left, select your system type; I choose Linux in my case, and then select your OS flavor; I choose Ubuntu. Read through the prerequisites and requirements.
Your Ubuntu system must be 64-bit and one of these supported versions, like Disco, Cosmic, Bionic, or Xenial. In my case, I have a Bionic version; to confirm, view the /etc/*release* file. Then uninstall any older versions, if they exist. So let's make sure there aren't any on my host: I will just copy and paste that command and confirm that there are no older versions on my system. The next step is to set up the repository and install the software. Now there are two ways to go about it. The first is to use the package manager: first update the repository using the apt-get update command, then install the prerequisite packages, then add Docker's official GPG key, and then install Docker. But I am not going to go that route;
there is an easier way. If you scroll all the way to the bottom, you will find instructions to install Docker using the convenience script. It is a script that automates the entire installation process and works on most operating systems. Run the first command to download a copy of the script, and then run the second command to execute the script and install Docker automatically. Give it a few minutes to complete the installation. The installation is now successful. Check the version of Docker using the docker version command; we have installed version 19.03.1. We will now run a simple container to ensure everything is working as expected. For that, head over to Docker
Go to Docker. hub in hub docker calm here you will find a list of most popular Docker images such as MongoDB engine eggs Alpine nodejs Redis etc. Let's find a funny image called We will say we will say it is the Dockers version of cows which is basically a simple application that shows a cow saying something in this case it turns out to be a whale copy the docker run command given here remember to add a sudo and we will change the message to hello world by running this command docker pulls the whales II application image from Docker Hub and runs it and we have our help to say hello.
Great, we are all set. Remember that, for the purposes of this course, you don't need to set up a Docker system on your own. We provide hands-on labs that you will have access to, but if you wish to experiment on your own and follow along, feel free to do so. We will now look at some of the Docker commands. At the end of this lecture, you will go through a hands-on quiz where you will practice working with these commands. Let's start by looking at the docker run command. The docker run command is used to run a container from an image. Running the docker run nginx command will run an instance of the nginx application on the Docker host, if it already exists. If the image is not present on the host, it will go out to Docker Hub and pull the image down, but this is only done the first time; for subsequent executions, the same image will be reused.
The docker ps command lists all running containers and some basic information about them, such as the container ID, the name of the image we used to run the containers, the current status, and the name of the container. Each container automatically gets a random ID and name created for it by Docker, which in this case is silly_sammet. To see all containers, running or not, use the -a option; this outputs all running containers as well as previously stopped or exited containers. We will talk about the COMMAND and PORTS fields shown in this output later in this course; for now, let's just focus on the basic commands. To stop a running container, use the docker stop command, but you must provide either the container ID or the container name in the stop command.
If you are not sure of the name, run the docker ps command to look it up. On success, you will see the name printed out, and running docker ps again will show no running containers. Running docker ps -a, however, shows the container silly_sammet, and that it is in an exited state a few seconds ago. Now, what if we don't want this container lying around consuming space? What if we want to get rid of it for good? Use the docker rm command to remove a stopped or exited container permanently. If it prints the name back, we are good. Run the docker ps -a command again to verify that it is no longer present. But what about the nginx image that was downloaded at first? We are not using that anymore, so how do we get rid of that image?
But first, how do we see a list of images present on our host? Run the docker images command to see a list of the available images and their sizes on our host. We have four images: nginx, Redis, Ubuntu, and Alpine. We will talk about tags later in this course when we discuss images. To remove an image that you no longer plan to use, run the docker rmi command. Remember, you must ensure that no containers are running off of that image before attempting to remove it; you must stop and delete all dependent containers to be able to delete an image.
When we ran the docker run command earlier, it downloaded the Ubuntu image as it couldn't find one locally. What if we simply want to download the image and keep it, so that when we run the docker run command we don't have to wait for it to download? Use the docker pull command to only pull the image and not run a container. So in this case, docker pull ubuntu pulls the Ubuntu image and stores it on our host. Let's look at another example: say you were to run a Docker container from an Ubuntu image.
When you run the docker run ubuntu command, it runs an instance of the Ubuntu image and exits immediately. If you were to list the running containers, you wouldn't see the container running. If you list all containers, including those that are stopped, you will see that the new container you ran is in an exited state. Now why is that? Unlike virtual machines, containers are not meant to host an operating system. Containers are meant to run a specific task or process, such as hosting an instance of a web server or application server or a database, or simply to carry out some kind of computation or analysis task. Once the task is complete, the container exits. A container only lives as long as the process inside it is alive. If the web service inside the container stops or crashes, the container exits.
This is why, when you run a container from an Ubuntu image, it stops immediately: Ubuntu is just an image of an operating system that is used as the base image for other applications, and there is no process or application running in it by default. If the image isn't running any service, as is the case with Ubuntu, you could instruct Docker to run a process with the docker run command, for example a sleep command with a duration of 5 seconds. When the container starts, it runs the sleep command and goes to sleep for 5 seconds. Once the sleep command exits, the container stops.
What we just saw was executing a command when we run a container, but what if we would like to execute a command on a running container? For example, when I run the docker ps command, I can see that there is a running container that uses the Ubuntu image and sleeps for 400 seconds. Let's say I would like to see the contents of a file inside this particular container. I could use the docker exec command to execute a command on my Docker container, in this case to print the contents of the /etc/hosts file.
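Here is a quick recap of the commands covered so far, roughly as you would type them; the container names are just examples of the random names Docker generates:

    docker run nginx           # run a container from the nginx image (pulls it the first time)
    docker ps                  # list running containers
    docker ps -a               # list all containers, including stopped or exited ones
    docker stop silly_sammet   # stop a running container by name (or ID)
    docker rm silly_sammet     # remove a stopped container permanently
    docker images              # list the images available on the host
    docker rmi nginx           # remove an image (stop and remove dependent containers first)
    docker pull ubuntu         # pull an image without running a container
    docker run ubuntu sleep 5                          # run a container with a command
    docker exec distracted_mcclintock cat /etc/hosts   # run a command in a running container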
Finally, let's look at one more option before we head over to the practice exercises. I am now going to run a Docker image I developed for a simple web application. The name of the repository is kodekloud/simple-webapp. It runs a simple web server that listens on port 8080. When you run a docker run command like this, it runs in the foreground, or in attached mode, meaning you will be attached to the console, or the standard out, of the Docker container, and you will see the output of the web service on your screen. You won't be able to do anything else on this console other than view the output until the Docker container stops, so it won't respond to your inputs. Press the Ctrl+C combination to stop the container; the application hosted on the container exits and you get back to your prompt.
Another option is to run the Docker container in detached mode by providing the -d option. This will run the container in the background, and you will be back to your prompt immediately; the container will continue to run in the backend. Run the docker ps command to view the running container. Now, if you would like to attach back to the running container later, run the docker attach command and specify the name or ID of the Docker container. Remember, if you are specifying the ID of a container in any Docker command, you can simply provide the first few characters alone, just enough to differentiate it from the other container IDs on the host; in this case, I specify just the first few characters of the ID. Now, don't worry about accessing the web server UI for now; we will look more into that in the upcoming lectures. For now, let's just focus on the basic commands.
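As a small sketch of attached versus detached mode, using the image from the demo above (the container ID shown is a hypothetical example):

    # attached mode: output streams to your console until you press Ctrl+C
    docker run kodekloud/simple-webapp

    # detached mode: the container runs in the background and you get your prompt back
    docker run -d kodekloud/simple-webapp

    # re-attach later using the container's name or the first few characters of its ID
    docker ps
    docker attach a043d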
In this case, I specify a 0 for 3 D. Now don't worry about accessing the web server UI, we'll see more for now. We'll talk about that in the next few lectures for now, let's understand that the basic commands will now get our hands dirty with the docker CLI, so let's take a look at how to access the practice lab environments. Next, let me guide you through the lab. environment the links to access the labs associated with this course are available in cold cloud en code cloud comm slash P slash docker dash labs this link is also provided in the description of this video once you are on this page use the links provided findthere to access the labs associated with your conference each conference has its own lab, so remember to choose the right lab for your conference.
The labs open up right in your browser; I would recommend using Google Chrome while working with the labs. The interface has two parts: a terminal on the left and a quiz portal on the right. The quiz portal on the right gives you challenges to solve. Follow the quiz and try to answer the questions asked and complete the tasks given to you. Each scenario consists of anywhere from 10 to 20 questions and is expected to be completed within 30 minutes to an hour. At the top you have the question numbers, below that is the time remaining for your lab, and below that is the question. If you are not able to solve a challenge, look for hints in the hints section. You may skip a question by hitting the skip button in the top right corner, but remember that you will not be able to go back to a previous question once you have skipped it. If the quiz portal gets stuck for some reason, click on the quiz portal tab at the top to open the quiz portal in a separate window.
The terminal gives you access to a real system running Docker. You can run any Docker command here and run your own containers or applications. You would typically run commands to solve the tasks assigned in the quiz portal. You may play around and experiment with this environment, but make sure you do that after you have gone through the quiz, so that your work does not interfere with the tasks provided by the quiz. So let me walk you through a few questions. There are two types of questions: each lab scenario starts with a set of exploratory multiple choice questions, where you are asked to explore and find information in the given environment and select the right answer.
This is to get you familiarized with the setup. You are then asked to perform tasks like running a container, stopping them, deleting them, building your own image, etc. Here, the first question asks us to find the version of the Docker server engine running on the host. Run the docker version command in the terminal, identify the right version, and then select the appropriate option from the given choices. Another example is the fourth question, where you are asked to run a container using the Redis image. If you are not sure of the command, click on hints and it will show you a hint.
We now run a Redis container using the docker run redis command. Wait for the container to run; once done, click on the check button to validate your work. We have now successfully completed the task. Similarly, go ahead and complete all the tasks. Once the lab exercise is complete, remember to leave a feedback and let us know how it went. A few things to note: these are public-access labs that anyone can access, so if you happen to get disconnected during peak hours, wait for a moment and try again. Also, remember not to store any private or confidential data on these systems.
Remember that this environment is for learning purposes only and is only alive for an hour, after which the lab is destroyed, and so is all your work; but you may start over and access these labs as many times as you want until you feel confident. I will also post solutions to these lab quizzes, so if you run into issues, you may refer to those. That's it for now; head over to the first challenge, and I will see you on the other side. We will now look at some of the other docker run commands. At the end of this lecture,
you will go through a hands-on quiz where you will practice working with these commands. We learned that we can use the docker run redis command to run a container running a Redis service, in this case the latest version of Redis, which happens to be 5.0.5 as of today. But what if we want to run another version of Redis, like, for example, an older version, say 4.0? Then you specify the version separated by a colon; this is called a tag. In that case, Docker pulls an image of the 4.0 version of Redis and runs that. Also notice that if you don't specify any tag, as in the first command, Docker will consider the default tag to be latest: latest is a tag associated with the latest version of that software, which is governed by the authors of that software.
So, as a user, how do you find information about these versions and what the latest version is? At hub.docker.com, look up an image, and you will find all the tags it supports in its description. Each version of the software can have multiple short and long tags associated with it, as seen here; in this case, version 5.0.5 also has the latest tag on it. Now let's look at inputs. I have a simple prompt application that, when run, asks for my name, and on entering my name it prints a welcome message. If I were to dockerize this application and run it as a Docker container like this, it wouldn't wait for the prompt; it would just print whatever the application is supposed to print on standard out. That is because, by default, the Docker container does not listen to standard input: even though you are attached to its console, it is not able to read any input from you. It doesn't have a terminal to read inputs from; it runs in a non-interactive mode. If you would like to provide your input, you must map the standard input of your host to the Docker container using the -i parameter. The -i parameter is for interactive mode, and when I input my name it prints the expected output. But there is something still missing from this: the prompt. When we ran the application at first, it asked us for our name, but when dockerized, that prompt is missing, even though it seems to have accepted my input. That is because the application prompts on a terminal, and we have not attached to the container's terminal. For this, use the -t option as well; -t stands for a pseudo terminal. So with the combination of -i and -t, we are now attached to the terminal, as well as in interactive mode on the container.
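To make that concrete, here is roughly how tags and the -i and -t flags are used; the prompt application image name below is hypothetical:

    docker run redis          # no tag given, so the 'latest' tag is used
    docker run redis:4.0      # the tag after the colon picks a specific version

    # hypothetical prompt application that reads your name from standard input
    docker run -i simple-prompt-app    # interactive: stdin is mapped into the container
    docker run -it simple-prompt-app   # interactive plus pseudo terminal: you also see the prompt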
We will now look at port mapping, or port publishing, on containers. Let's go back to the example where we run a simple web application in a Docker container on my Docker host. Remember, the underlying host where Docker is installed is called the Docker host or Docker engine. When we run a containerized web application, it runs, and we are able to see that the server is running, but how does a user access my application? As you can see, my application is listening on port 5000, so I could access my application by using port 5000, but what IP do I use to access it from a web browser? There are two options available: one is to use the IP of the Docker container.
Every Docker container gets an IP assigned by default; in this case it is 172.17.0.2, but remember that this is an internal IP and is only accessible within the Docker host. So if you open a browser from within the Docker host, you can go to http://172.17.0.2:5000 to access the application, but since this is an internal IP, users outside the Docker host cannot access it using this IP. For this, we could use the IP of the Docker host, which is 192.168.1.5, but for that to work you must have mapped the port inside the Docker container to a free port on the Docker host. For example, if I want the users to access my application through port 80 on my Docker host, I could map port 80 of localhost to port 5000 on the Docker container using the -p parameter in my run command, like this. The user can then access my application by going to the URL http://192.168.1.5:80, and all traffic on port 80 on my Docker host will get routed to port 5000 inside the Docker container. This way you can run multiple instances of your application and map them to different ports on the Docker host, or run instances of different applications on different ports. For example, in this case I run an instance of MySQL that runs a database on my host and listens on the default MySQL port, which happens to be 3306, or another instance of MySQL on another port, 8306. So you can run as many applications like this and map them to as many ports as you want, and of course you cannot map to the same port on the Docker host more than once. We will discuss more about port mapping and networking of containers in the networking lecture later. Let's now look at how data is persisted in a Docker container. For example, let's say you were to run a MySQL container. When databases and tables are created, the data files are stored in the location /var/lib/mysql inside the Docker container. Remember, the Docker container has its own isolated filesystem, and any changes to any files happen within the container. Let's assume you dump a lot of data into the database. What if you were to delete the MySQL container and remove it? As soon as you do,
the container, along with all the data inside it, gets blown away, meaning all your data is gone. If you would like to persist the data, you would want to map a directory outside the container on the Docker host to a directory inside the container. In this case, I create a directory called /opt/datadir and map that to /var/lib/mysql inside the Docker container, using the -v option and specifying the directory on the Docker host followed by a colon and the directory inside the Docker container. This way, when the Docker container runs, it will implicitly mount the external directory to a folder inside the Docker container, and all your data will now be stored in the external volume at /opt/datadir, and so will remain even if you delete the Docker container. The docker ps command is good enough to get basic details about containers, like their names and IDs, but if you would like to see additional details about a specific container, use the docker
inspect command and provide the container name or ID. It returns all the details of a container in JSON format, such as the state, mounts, configuration data, network settings, etc. Remember to use it when you are required to find details on a container. And finally, how do we see the logs of a container we ran in the background? For example, I ran my simple web application using the -d parameter, and it ran the container in detached mode. How do I view the logs, which are the contents written to the standard out of that container? Use the docker logs command and specify the container ID or name, like this.
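Putting those options together, here is a sketch of the commands from this lecture; the image and container names are placeholders standing in for the examples above:

    # map port 80 on the Docker host to port 5000 inside the web container
    docker run -p 80:5000 simple-webapp

    # run two MySQL instances on different host ports (MySQL listens on 3306 inside)
    docker run -p 3306:3306 mysql
    docker run -p 8306:3306 mysql

    # persist database data by mapping a host directory into the container
    docker run -v /opt/datadir:/var/lib/mysql mysql

    # inspect details of a container, and view the logs of a detached container
    docker inspect blissful_hopper
    docker logs blissful_hopper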
That's it for this lecture; head over to the challenges and practice working with Docker commands. We now move on to a simple web application written in Python. This piece of code is used to create a web application that displays a web page with a background color. If you look closely at the application code, you will see a line that sets the background color to red. That works just fine; however, if you decide to change the color in the future, you will have to change the application code. It is a best practice to move such information out of the application code and into, say, an environment variable called APP_COLOR.
The next time you run the application, set an environment variable called APP_COLOR to the desired value, and the application now has a new color. Once your application gets packaged into a Docker image, you would then run it with the docker run command followed by the name of the image. However, if you wish to pass the environment variable as we did before, you would now use the -e option of the docker run command to set an environment variable within the container. To deploy multiple containers with different colors, you would run the docker run command multiple times and set a different value for the environment variable each time. So how do you find the environment variable set on a container that is already running? Use the docker inspect command to inspect the properties of a running container; under the Config section, you will find the list of environment variables set on the container.
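For example, assuming the variable is named APP_COLOR and the image is called simple-webapp-color (both placeholders here), the commands would look something like this:

    # run the application directly with an environment variable set
    export APP_COLOR=blue; python app.py

    # pass the same variable into containers with the -e option
    docker run -e APP_COLOR=blue simple-webapp-color
    docker run -e APP_COLOR=green simple-webapp-color

    # find the environment variables set on a running container
    docker inspect blissful_hopper    # look under the Config -> Env section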
Well, that's it for this lecture on configuring environment variables in Docker. Hello and welcome to this lecture on Docker images. In this lecture we are going to see how to create your own image. Before that, why would you need to create your own image? It could either be because you cannot find a component or a service that you want to use as part of your application on Docker Hub already, or you and your team decided that the application you are developing will be dockerized for ease of shipping and deployment. In this case, I am going to containerize an application, a simple web application that I have built using the Python Flask framework. First we need to understand what we are containerizing, or what application we are creating an image for, and how the application is built. So start by thinking about what you might do if you wanted to deploy the application manually.
Write down the steps required in the right order. To create an image for a simple web application, if I were to set it up manually, I would start with an OS like Ubuntu, then update the source repositories using the apt command, then install dependencies using the apt command, then install the Python dependencies using the pip command, then copy over the source code of my application to a location like /opt, and finally run the web server using the flask command. Now that I have the instructions, I create a Dockerfile using this. Here is a quick overview of the process of creating your own image: first, create a Dockerfile named Dockerfile and write down the instructions for setting up your application in it, such as installing dependencies, where to copy the source code from and to, and what the entrypoint of the application is, etc. Once done, build your image using the docker build command and specify the Dockerfile as input, as well as a tag name for the image. This will create an image locally on your system. To make it available on the public Docker Hub registry, run the docker push
This will create an image locally on your system so that it is available in the public Docker Hub registry. Run Docker Push. command and specify the name of the image you just created, in this case the image name is my account name, which is M Amjad, followed by the image name, which is my custom application. Now let's take a closer look at that docker file. The docker file is a text file written in a specific format that Docker can understand is in a format of instructions and arguments, for example, in this Docker file, everything on the left in uppercase is an instruction, in this case from execute copy and entry point are all instructions, each of these instructions.
instructions instructs Docker to perform a specific action while creating the image. Everything on the right is an argument to those instructions. The first line, FROM ubuntu, defines what the base OS should be for this container. Every Docker image must be based on another image, either an OS or another image that was created before, based on an OS. You can find official releases of all operating systems on Docker Hub. It is important to note that all Dockerfiles must start with a FROM instruction. The RUN instruction instructs Docker to run a particular command on that base image; so at this point, Docker runs the apt-get update command to fetch the updated packages and installs the required dependencies on the image.
Then the COPY instruction copies files from the local system onto the Docker image; in this case, the source code of our application is in the current folder, and I will be copying it over to the location /opt/source-code inside the Docker image. And finally, ENTRYPOINT allows us to specify a command that will be run when the image is run as a container. When Docker builds the image, it builds it in a layered architecture: each line of instruction creates a new layer in the Docker image, with just the changes from the previous layer. For example, the first layer is a base Ubuntu OS, followed by the second instruction that creates a second layer, which installs all the apt packages; then the third instruction creates a third layer with the Python packages, followed by the fourth layer that copies the source code over, and the final layer that updates the entrypoint of the image. Since each layer only stores the changes from the previous layer, this is reflected in the size as well: if you look at the base Ubuntu image, it is around 120 MB in size, the apt packages that I install are around 300 MB, and the remaining layers are small.
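Here is a minimal sketch of what such a Dockerfile and the build and push steps might look like; the package names, paths, and account name are assumptions based on the description above, not the exact files from the course, and the package names may need adjusting for newer Ubuntu releases:

    # Dockerfile (saved alongside the application source code)
    FROM ubuntu

    RUN apt-get update && apt-get install -y python3 python3-pip
    RUN pip3 install flask

    COPY . /opt/source-code

    ENTRYPOINT FLASK_APP=/opt/source-code/app.py flask run --host=0.0.0.0

    # build the image, review its layers, and push it to Docker Hub (requires docker login)
    docker build -t mmumshad/my-custom-app .
    docker history mmumshad/my-custom-app
    docker push mmumshad/my-custom-app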
As shown, you could see this information by running the docker history command followed by the image name. When you run the docker build command, you can see the various steps involved and the result of each task. All the layers built are cached by Docker, so the layered architecture helps you restart a docker build from a particular step in case it fails, or if you were to add new steps in the build process, you wouldn't have to start all over again. In case a particular step were to fail, for example in this case step 3 failed, and you fixed the issue and re-ran docker build, it would reuse the previous layers from cache and continue to build the remaining layers. The same is true if you were to add additional steps in the Dockerfile. This way, rebuilding your image is faster and you don't have to wait for Docker to rebuild the entire image each time. This is helpful especially when you update your application source code, as it may change more frequently: only the layers above the updated layers need to be rebuilt.
The same goes if you were to add additional steps in the Docker file. This way rebuilding your image is faster and you don't have to wait for Docker. Rebuild the entire image each time, this is useful especially when you update your application source code as it may change more frequently, you only need to rebuild the layers on top of the updated layers. We've just seen a number of containerized products such as database development tools up and running. systems, etc., but that's not all, it can contain almost all applications, even the simplest ones like browsers or utilities like curl applications like Spotify, Skype, etc.
Basically you can containerize everything, and going forward I see that this is how everyone is going to run applications: nobody is going to install anything anymore; instead, they are just going to run it using Docker, and when they don't need it anymore, they will get rid of it easily without having to clean up too much. In this lecture we will look at commands, arguments, and entrypoints in Docker. Let's start with a simple scenario: say you were to run a Docker container from an Ubuntu image. When you run the docker run ubuntu command, it runs an instance of the Ubuntu image and exits immediately.
If you were to list the running containers, you wouldn't see the container running. If you list all containers, including those that are stopped, you will see that the new container you ran is in an exited state. Now why is that? Unlike virtual machines, containers are not meant to host an operating system. Containers are meant to run a specific task or process, such as hosting an instance of a web server or application server or a database, or simply to carry out some kind of computation or analysis. Once the task is complete, the container exits. A container only lives as long as the process inside it is alive. If the web service inside the container is stopped or crashes, the container exits. So who defines what process is run within the container?
If you look at the Dockerfile for popular Docker images like nginx, you will see an instruction called CMD, which stands for command, that defines the program that will be run within the container when it starts. For the nginx image it is the nginx command; for the MySQL image it is the mysqld command. What we tried to do earlier was to run a container with a plain Ubuntu operating system. Let's look at the Dockerfile for this image: you will see that it uses bash as the default command. Now, bash is not really a process like a web server or a database server; it is a shell that listens for inputs from a terminal, and if it cannot find the terminal, it exits. When we ran the Ubuntu container earlier, Docker created a
container from the Ubuntu image and launched the bash program. By default, Docker does not attach a terminal to a container when it is run, so the bash program does not find the terminal, and therefore it exits. Since the process that was started when the container was created finished, the container exits as well. So how do you specify a different command to start the container? One option is to append a command to the docker run command, and that way it overrides the default command specified within the image. In this case, I run the docker run ubuntu command with the sleep 5 command as the appended option. This way, when the container starts, it runs the sleep program, waits for 5 seconds, and then exits. But how do you make that change permanent?
Say you want the image to always run the sleep command when it starts; you would then create your own image from the base Ubuntu image and specify a new command. There are different ways of specifying the command: either the command simply as is, in shell form, or in a JSON array format like this. But remember, when you specify it in a JSON array format, the first element in the array should be the executable, in this case the sleep program. Do not specify the command and parameters together like this: the command and its parameters should be separate elements in the list. So I now build my new image using the docker build command and name it ubuntu-sleeper.
I could now simply run the docker run ubuntu-sleeper command and get the same result: it always sleeps for 5 seconds and exits. But what if I wish to change the number of seconds it sleeps? Currently it is hard-coded to 5 seconds. As we learned before, one option is to run the docker run command with the new command appended to it, in this case sleep 10, and so the command that will be run at startup will be sleep 10. But it doesn't look very good: the image name, ubuntu-sleeper, in itself implies that the container will sleep, so we shouldn't have to specify the sleep command again.
Instead, we would like it to be something like this: docker run ubuntu-sleeper 10. We only want to pass in the number of seconds the container should sleep, and the sleep command should get invoked automatically, and that is where the ENTRYPOINT instruction comes into play. The ENTRYPOINT instruction is like the CMD instruction, in that you can specify the program that will be run when the container starts, and whatever you specify on the command line, in this case 10, will get appended to the entrypoint. So the command that will be run when the container starts is sleep 10. So that is the difference between the two: in case of the CMD instruction, the command line parameters passed will get replaced entirely, whereas in case of ENTRYPOINT, the command line parameters will get appended. Now, in the second case, what if I run the ubuntu-sleeper image command without appending the number of seconds?
Then the command at startup will be just sleep, and you will get an error that the operand is missing. So how do you configure a default value for the command if one was not specified on the command line? That is where you would use both ENTRYPOINT as well as the CMD instruction. In this case, the CMD instruction will be appended to the ENTRYPOINT instruction, so at startup the command would be sleep 5 if you didn't specify any parameters on the command line; if you did, that will override the CMD instruction. And remember, for this to happen, you should always specify the ENTRYPOINT and CMD instructions in JSON format.
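A small sketch of the ubuntu-sleeper image described above; the Dockerfile contents follow the CMD and ENTRYPOINT behavior just explained:

    # Dockerfile for the ubuntu-sleeper image
    FROM ubuntu
    ENTRYPOINT ["sleep"]
    CMD ["5"]

    # build and run it
    docker build -t ubuntu-sleeper .
    docker run ubuntu-sleeper        # runs "sleep 5" (default argument from CMD)
    docker run ubuntu-sleeper 10     # runs "sleep 10" (argument appended to the entrypoint)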
Finally, what if you really want to modify the entrypoint during runtime, say from sleep to a hypothetical sleep2.0 command? In that case, you can override it by using the --entrypoint option in the docker run command; the final command at startup would then be sleep2.0 10. Well, that's it for this lecture, and I will see you in the next one. We will now look at networking in Docker. When you install Docker, it creates three networks automatically: bridge, none, and host. Bridge is the default network a container gets attached to. If you would like to associate the container with any other network, you specify the network information using the --network command line parameter, like this. We will now look at each of these networks.
The bridge network is a private, internal network created by Docker on the host. All containers attach to this network by default and get an internal IP address, usually in the 172.17.x.x series. The containers can access each other using this internal IP if required. To access any of these containers from the outside world, map the ports of these containers to ports on the Docker host, as we have seen before. Another way to access the containers externally is to associate the container to the host network. This takes out any network isolation between the Docker host and the Docker container, meaning if you were to run a web server on port 5000 in a web app container, it is automatically accessible on the same port externally, without requiring any port mapping, as the web container uses the host's network. This would also mean that, unlike before, you will now not be able to run multiple web containers on the same host on the same port, as the ports are now common to all containers in the host network. With the none network, the containers are not attached to any network and don't have any access to the external network or other containers; they run in an isolated network. So we just saw the default bridge network in the 172.17 series, and all containers associated to this default network will be able to communicate with each other. But what if we wish to isolate the containers within the Docker host, for example, the first two web containers on one internal network, 172, and the second two containers on a different internal network, like 182? By default, Docker only creates one internal bridge network.
We could create our own internal network using the docker network create command, specifying the driver, which is bridge in this case, and the subnet for that network, followed by the name of the custom isolated network. Run the docker network ls command to list all the networks. So how do we see the network settings and the IP address assigned to an existing container? Run the docker inspect command with the ID or name of the container, and you will find a section on network settings; there you can see the type of network the container is attached to, its internal IP address, MAC address, and other settings. Containers can reach each other using their names. For example, in this case I have a web server and a MySQL database container running on the same node. How can I get my web server to access the database on the database container?
One thing I could do is use the internal IP address assigned to the MySQL container, which in this case is 172.17.0.3, but that is not very ideal, because it is not guaranteed that the container will get the same IP when the system reboots. The right way to do it is to use the container name: all containers on a Docker host can resolve each other by the name of the container. Docker has a built-in DNS server that helps the containers resolve each other using the container name. Note that the built-in DNS server always runs at the address 127.0.0.11. So how does Docker implement networking? What's the technology behind it? How are the containers isolated within the host? Docker uses network namespaces, which create a separate namespace for each container; it then uses virtual ethernet pairs to connect the containers together. That's all we can talk about it for now; these are advanced concepts that we discuss in the advanced course on Docker at KodeKloud.
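Here is a sketch of the networking commands from this lecture; the subnet and the container name are examples:

    # attach a container to one of the built-in networks
    docker run --network=none ubuntu
    docker run --network=host ubuntu

    # create a custom isolated bridge network
    docker network create --driver bridge --subnet 182.18.0.0/16 custom-isolated-network

    # list networks and inspect a container's network settings
    docker network ls
    docker inspect blissful_hopper    # see the NetworkSettings section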
That's all for now on this networking lecture; head over to the practice tests and practice working with networking in Docker, and I will see you in the next lecture. Hello and welcome to this lecture, where we are learning advanced Docker concepts. In this lecture we talk about Docker storage drivers and file systems. We look at where and how Docker stores data and how it manages the file systems of containers. Let us start with how Docker stores data on the local file system. When you install Docker on a system, it creates this folder structure at /var/lib/docker, with multiple folders under it called aufs, containers, image, volumes, etc. This is where Docker stores all its data by default. When I say data, I mean files related to images and containers running on the Docker host. For example, all files related to containers are stored under the containers folder, and the files related to images are stored under the image folder.
Any volumes created by the Docker containers are created under the volumes folder. Well, don't worry about that for now; we will come back to that in a bit. For now, let's just understand where Docker stores its files and in what format. So how exactly does Docker store the files of an image and a container? To understand that, we need to understand Docker's layered architecture. Let's quickly recap something we learned: when Docker builds images, it builds them in a layered architecture. Each line of instruction in the Dockerfile creates a new layer in the Docker image, with just the changes from the previous layer. For example, the first layer is a base Ubuntu operating system, followed by the second instruction that creates a second layer, which installs all the apt packages; then the third instruction creates a third layer with the Python packages, followed by the fourth layer that copies the source code over, and then finally the fifth layer that updates the entrypoint of the image. Since each layer only stores the changes from the previous layer, it is reflected in the size as well: if you look at the base Ubuntu image, it is around 120 megabytes in size, the apt packages that I install are around 300 MB, and the remaining layers are small. To understand the advantages of this layered architecture, let's consider a second application.
This application has a different Dockerfile, but it is very similar to our first application: it uses the same base image, Ubuntu, and the same Python and Flask dependencies, but uses a different source code to create a different application, and so a different entrypoint as well. When I run the docker build command to build a new image for this application, since the first three layers of both applications are the same, Docker is not going to build the first three layers; instead, it reuses the same three layers it built for the first application from the cache and only creates the last two layers with the new sources and the new entrypoint. This way,
Docker builds images faster and efficiently saves disk space. This is also applicable if you were to update your application code: whenever you update your application code, such as the app.py in this case, Docker simply reuses all the previous layers from cache and quickly rebuilds the application image by updating the latest source code, thus saving us a lot of time during rebuilds and updates. Let's rearrange the layers bottom-up so we can understand it better. At the bottom we have the base Ubuntu layer, then the packages, then the dependencies, then the application source code, and then the entrypoint.
All of these layers are created when we run the docker build command to form the final Docker image, so all of these are the Docker image layers. Once the build is complete, you cannot modify the contents of these layers; they are read-only, and you can only modify them by initiating a new build. When you run a container based on this image using the docker run command, Docker creates a container based off of these layers and creates a new writable layer on top of the image layers. The writable layer is used to store data created by the container, such as log files written by the applications, any temporary files generated by the container, or just any file modified by the user on that container. The life of this layer, though, is only as long as the container is alive: when the container is destroyed, this layer and all of the changes stored in it are also destroyed.
Remember that the same image layers are shared by all containers created using this image. If I were to log into the newly created container and, say, create a new file called temp.txt, it would create that file in the container layer, which is read-write. We just said that the files in the image layers are read-only, meaning you cannot edit anything in those layers. Let's take the example of our application code: since we baked our code into the image, the code is part of the image layer and as such is read-only. After running a container,
what if I want to modify the source code to, say, test a change? Remember, the same image layer may be shared between multiple containers created from this image, so does that mean I cannot modify this file inside the container? No, I can still modify this file, but before saving the modified file, Docker automatically creates a copy of the file in the read-write layer, and I will then be modifying a different version of the file in the read-write layer. All future modifications will be done on that copy in the read-write layer. This is called the copy-on-write mechanism. The image layer being read-only just means that the files in these layers will not be modified in the image itself, so the image will remain the same
all the time, until you rebuild the image using the docker build command. What happens when we get rid of the container? All of the data that was stored in the container layer gets deleted as well. The change we made to app.py and the new temp.txt file we created will also be removed. So what if we want to persist this data? For example, if we were working with a database and we would like to preserve the data created by the container, we could add a persistent volume to the container. To do this, first create a volume using the docker volume create command. When I run the docker volume create data_volume command, it creates a folder called data_volume under the /var/lib/docker/volumes directory. Then, when I run the container using the docker run command, I can mount this volume inside the container's read-write layer using the -v option, like this: docker run -v, then the name of my newly created volume, followed by a colon and the location inside my container, which is the default location where MySQL stores data, /var/lib/mysql, and then the image name, mysql. This will create a new container and mount the data volume we created into the /var/lib/mysql folder inside the container, so all data written by the database is in fact stored on the volume created on the Docker host. Even if the container is destroyed, the data remains.
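A minimal sketch of those two commands together, using the data_volume name and the official mysql image as described above:

    # creates /var/lib/docker/volumes/data_volume on the Docker host
    docker volume create data_volume
    # mounts the volume into MySQL's default data directory inside the container
    # (note: the mysql image also needs e.g. -e MYSQL_ROOT_PASSWORD=... to start; omitted here as in the lecture)
    docker run -v data_volume:/var/lib/mysql mysql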
Now, what if you did not run the docker volume create command to create the volume before running the docker run command? For example, if I run the docker run command to create a new MySQL container instance with a volume named data_volume2, which I have not created yet, Docker will automatically create a volume named data_volume2 and mount it into the container. You should be able to see all of these volumes if you list the contents of the /var/lib/docker/volumes folder. This is called volume mounting, since we are mounting a volume created by Docker under the /var/lib/docker/volumes folder. But what if we already had our data at another location? For example, let's say we have some external storage on the Docker host at /data, and we would like to store the database data on that storage rather than in the default /var/lib/docker/volumes folder. In that case we would run the container using docker run -v, but this time we would provide the full path to the folder we would like to mount, which is /data/mysql, and so it will create a container and mount that folder into the container.
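For example, a bind mount of that external directory might be written like this, assuming /data/mysql already exists on the Docker host:

    # bind mount: full host path instead of a volume name
    docker run -v /data/mysql:/var/lib/mysql mysql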
This is called bind mounting. So there are two types of mounts: a volume mount and a bind mount. A volume mount mounts a volume from the volumes directory, and a bind mount mounts a directory from any location on the Docker host. One final note before I let you go: using -v is the older style; the newer way is to use the --mount option. The --mount option is the preferred way as it is more verbose, so you have to specify each parameter as a key=value pair. For example, the previous command can be written with the --mount option using the type, source and target options: the type in this case is bind, the source is the location on my host, and the target is the location on my container, as in docker run --mount type=bind,source=/data/mysql,target=/var/lib/mysql mysql. So who is responsible for doing all of these operations: maintaining the layered architecture, creating a writable layer, moving files across layers to enable copy-on-write, and so on?
It is the storage drivers. Docker uses storage drivers to enable the layered architecture. Some of the common storage drivers are AUFS, BTRFS, ZFS, Device Mapper, Overlay and Overlay2. The selection of the storage driver depends on the underlying operating system being used. For example, on Ubuntu the default storage driver is AUFS, whereas that driver is not available on other operating systems such as Fedora or CentOS; in that case Device Mapper may be a better option. Docker will automatically choose the best available storage driver based on the operating system. The different storage drivers also provide different performance and stability characteristics, so you may want to choose one that fits the needs of your application and your organization.
If you would like to read more about any of these storage drivers, check out the links in the attached documentation. That's it on Docker architecture concepts for now; see you in the next lecture. Hello and welcome to this lecture on Docker Compose. Going forward we will be working with configurations in YAML files, so it is important that you are comfortable with YAML. Let's quickly recap a few things first. We learned how to run a Docker container using the docker run command. If we needed to set up a complex application running multiple services, a better way to do it is to use Docker Compose.
With Docker Compose we could create a configuration file in YAML format called docker-compose.yml and put together the different services and the options specific to running them in this file. Then we could simply run a docker-compose up command to bring up the entire application stack. This is easier to implement, run and maintain, as all changes are always stored in the Docker Compose configuration file. However, all of this is only applicable to running containers on a single Docker host. For now, don't worry about the YAML file; we will take a closer look at it in a bit and see how to put it together. That was a really simple application I put together; let's look at a better example.
I'm going to use the same sample application that everyone uses to demonstrate Docker. It is a simple yet comprehensive application developed by Docker to demonstrate the various features available for running an application stack on Docker. So let's first get familiarized with the application, because we will be working with the same application in different sections throughout the rest of this course. This is a sample voting application which provides one interface for a user to vote and another interface to show the results. The application consists of various components, such as the voting app, which is a web application developed in Python that provides the user with an interface to choose between two options, a cat and a dog. When you make a selection, the vote is stored in Redis; for those of you who are new to Redis, Redis in this case serves as an in-memory database. The vote is then processed by the worker, which is an application written in .NET. The worker takes the new vote and updates the persistent database, which is PostgreSQL in our case. The PostgreSQL database simply has a table with the number of votes for each category, cats and dogs; in this case it increments the number of votes for cats, as our vote was for cats. Finally, the result of the vote is displayed in a web interface, which is another web application, developed in Node.js. This result app reads the count of votes from the PostgreSQL database and displays it to the user. So that is the architecture and data flow of this simple voting application stack. As you can see, this sample application is built with a combination of different services, different development tools and multiple different development platforms, such as Python, Node.js, .NET, etc. This sample application will be used to show how easy it is to set up an entire application stack consisting of diverse components in Docker. Let's keep Docker Swarm, services and stacks aside for a minute and see how we can put together this application stack on a single Docker engine, using first docker run commands and then Docker Compose.
Let's assume that all of the application images are already built and available in a Docker repository. Let's start with the data layer. First we run the docker run command to start an instance of Redis by running the docker run redis command. We will add the -d parameter to run this container in the background, and we will also name the container redis. Now, naming the containers is important. Why is it so important? Hold that thought; we will get to that in a bit. Next we deploy the PostgreSQL database by running the docker run postgres command. This time too we add the -d option to run it in the background, and we name this container db, for database.
Next, we will start with the application services. We deploy a front-end app for the voting interface by running an instance of the voting-app image. Run the docker run command and name the instance vote. Since this is a web server, it has a web UI running on port 80; we will publish that port to 5000 on the host system so we can access it from a browser. Next we deploy the result web application that shows the results to the user. For this we deploy a container using the result-app image and publish its port 80 to port 5001 on the host; this way we can access the web UI of the result app in a browser. Finally, we deploy the worker by running an instance of the worker image. Okay, now this is all good, and we can see that all the instances are running on the host, but there is some problem: it just does not seem to work.
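For reference, the commands up to this point would look roughly like this; voting-app, result-app and worker are assumed to be locally built image names:

    docker run -d --name=redis redis
    docker run -d --name=db postgres    # newer postgres images also need e.g. -e POSTGRES_PASSWORD=...
    docker run -d --name=vote -p 5000:80 voting-app
    docker run -d --name=result -p 5001:80 result-app
    docker run -d --name=worker worker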
The problem is that we have successfully run all the different containers, but we haven't actually linked them together. That is, we haven't told the voting web application to use this particular Redis instance; there could be multiple Redis instances running. We haven't told the worker and the result app to use this particular PostgreSQL database that we ran. So how do we do that? That is where we use links. Link is a command line option that can be used to link two containers together. For example, the voting app web service depends on the Redis service: when the web server starts, as you can see in this piece of the web server's code, it looks for a Redis service running on the host redis. But the voting app container cannot resolve a host by the name redis. To make the voting app aware of the Redis service, we add a link option while running the voting app container to link it to the Redis container: add a --link option to the docker run command and specify the name of the Redis container, which in this case is redis, followed by a colon and the hostname the voting app is looking for, which is also redis in this case. Remember, this is why we named the container when we ran it the first time, so we could use its name while creating a link.
What this actually does is create an entry in the /etc/hosts file on the voting app container, adding an entry with the hostname redis pointing to an internal IP of the Redis container. Similarly, we add a link for the result app to communicate with the database, by adding a --link option referring to the database container by the name db; as you can see in this source code, the application is trying to connect to a PostgreSQL database on the host db. Finally, the worker application requires access to both Redis and the PostgreSQL database, so we add two links to the worker application: one link for Redis and the other link for the PostgreSQL database.
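With the link options added, those run commands become something like this, again using the assumed image names from above:

    docker run -d --name=vote -p 5000:80 --link redis:redis voting-app
    docker run -d --name=result -p 5001:80 --link db:db result-app
    docker run -d --name=worker --link redis:redis --link db:db worker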
Please note that using links this way is deprecated, and the support may be removed in the future in Docker. This is because, as we will see in a bit, advanced and newer concepts in Docker Swarm and networking support better ways of achieving what we just did here with links. But I wanted to mention it anyway so you learn the concept from the very basics. Once we have the docker run commands tested and ready, it is easy to generate a Docker Compose file from them. We start by creating a dictionary of container names; we will use the same names we used in the docker run commands, so we take all of those names and create a key with each of them. Then, under each item, we specify which image to use: the key is image and the value is the name of the image to use. Next, inspect the commands and see what other options were used.
We publish ports, so let's move those ports under the respective containers: create a property called ports and list all the ports you would like to publish under it. Finally, we are left with links, so for whichever container requires a link, create a property under it called links and provide an array of links, such as redis or db. Note that you can also specify the link name without the colon and the target name, and it will create a link with the same name as the target: specifying db:db is the same as simply specifying db, and the same value is assumed to create the link. With that, we are done with our Docker Compose file.
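Put together, a version 1 style docker-compose.yml for this stack might look roughly like this; the image names are the assumed local images from the run commands above:

    redis:
      image: redis
    db:
      image: postgres
    vote:
      image: voting-app
      ports:
        - "5000:80"
      links:
        - redis
    result:
      image: result-app
      ports:
        - "5001:80"
      links:
        - db
    worker:
      image: worker
      links:
        - redis
        - db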
Bringing the stack up is really simple: run the docker-compose up command to bring up the entire application stack. When we looked at the sample voting application, we assumed that all five components' images were already built. Two of them, the Redis and Postgres images, we know are available on Docker Hub; there are official Redis and Postgres images. But the remaining three are our own applications; they are not necessarily already built and available in a Docker registry. If we want to instruct Docker Compose to run a docker build instead of trying to pull an image, we can replace the image line with a build line and specify the location of a directory that contains the application code and a Dockerfile with instructions to build the image. In this example, for the voting app, I have all of the application code in a folder named vote, which contains the application code and a Dockerfile.
This time, when you run the docker-compose up command, it will first build the images, give them a temporary name, and then use those images to run containers using the options you specified earlier. Similarly, use the build option to build the other two services from their respective folders.
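For instance, the three custom services could switch from image to build like this, assuming the code and Dockerfiles live in folders named vote, result and worker; this is just the changed fragment of the file:

    vote:
      build: ./vote
    result:
      build: ./result
    worker:
      build: ./worker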
In fact, it is the original version of the Docker Compose file known as version 1. It had a number of limitations, for example, if you wanted to deploy containers on a different network than the default bridge network, there was no way to specify that in this version of the file. also say that it has a dependency or a starting order of some kind, for example. your database container should appear first and only then, and if the voting application is started, there was no way you could specify that in the ocean one of Docker's compose file support for these came in version 2 with version 2 and higher in the The file format also changed a bit, it no longer specifies its stack information directly as it did before.
It's all encapsulated in a Services section, so create a property called services at the root of the file and then move all the services below that you'll still use. the same Docker Compose Up command to open the application stack, but how does Docker Compose know which version of the file you are using? You are free to use version 1 or version 2 depending on your needs, so how does Docker Compose know what? format you are using for version 2 and above, you must specify the version of the Docker Compose file you want to use by specifying the version at the top of the file, in this case version: 2, another difference is with networking in version 1 from Docker Compose. all containers you run on the default bridged network and then use bindings to allow communication between containers like we did before with version 2 dr.
Campos automatically creates a dedicated bridge network for this application and then connects all containers to that new network. All containers can communicate with each other using each other's service name, so there is basically no need to use bindings in version 2 of Docker Compose. you can just get rid of all the bindings you mentioned in version 1 when you convert a file from version one to version two and finally version 2 also introduces the depends feature if you want to specify a boot order, for example let's say the irrigation web application. It depends on the Redis service, so you need to ensure that the Redis container is started first and only then the voting web application should be started.
We could add a depends_on property to the voting application and indicate that it depends on redis. Then comes version 3, which is the latest as of today. Version 3 is similar to version 2 in structure, meaning it has a version specification at the top and a services section under which you place all your services, just like in version 2. Make sure to specify the version number as 3 at the top. Version 3 comes with support for Docker Swarm, which we will see later on. There are some options that were removed and added; to see the details on those, check the documentation section using the link on the reference page following this lecture.
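As a quick illustration of the newer format with depends_on, here is a trimmed, hypothetical snippet showing only two services:

    version: "3"
    services:
      redis:
        image: redis
      vote:
        image: voting-app
        ports:
          - "5000:80"
        depends_on:
          - redis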
We will look at version 3 in much more detail later when we discuss Docker stacks. Let's talk about networks in Docker Compose now. Getting back to our application: so far we have just been deploying all containers on the default bridge network. Let's say we modify the architecture a little to contain the traffic from the different sources. For example, we would like to separate the user-generated traffic from the application's internal traffic, so we create a front-end network dedicated to traffic from users and a back-end network dedicated to traffic within the application. We then connect the user-facing applications, which are the voting app and the result app, to the front-end network, and all of the components to an internal back-end network. So back in our Docker Compose file, note that I have actually stripped out the ports section for simplicity; the ports are still there, they are just not shown here. The first thing we need to do if we are using networks is define the networks we are going to use. In our case we have two networks, front-end and back-end, so create a new property called networks at the root level, adjacent to the services in the Docker Compose file, and add a map of the networks we plan to use. Then, under each service, create a networks property and provide a list of the networks that service must be attached to. In the case of redis and db, that is just the back-end network.
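So far, that part of the file might look like this; a partial, illustrative sketch, with the remaining services attached next:

    version: "2"
    services:
      redis:
        image: redis
        networks:
          - back-end
      db:
        image: postgres
        networks:
          - back-end
    networks:
      front-end:
      back-end: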
The front-end applications, the voting app and the result app, must be attached to both the front-end and the back-end network. You must also add a section for the worker container to attach it to the back-end network; I have just left that out of this slide due to space constraints. Now that you have seen Docker Compose files, head over to the coding exercises and practice developing some Docker Compose files. That's it for this lecture, and I will see you in the next one.
What are the clouds? That's where the Docker images are stored. a central repository of all Docker images, let's look at a simple nginx container, we run the Docker Run Engine X command to run an instance of the nginx image, let's take a closer look at the name of that image. Now the name is nginx, but what is this image? Where is this image of this name taken from? Follow the Dockers nginx image naming convention here is the image or repository name when you say nginx it's actually nginx forward slash nginx the first part represents the username or account so if you don't provide an account or a repository name, it is assumed to be the same as the given name, which in this case is nginx, usernames are usually the name of your Docker Hub account or, if you are an organization, then it is the name of the organization if you were to create your own account and create your own repositories or images under it then you would use a similar pattern now where are these images stored and pulled?
Since we haven't specified the location where these images will be pulled from, it is assumed to be in Dockers by default. Docker Hub registry whose DNS name is more obscure mark the registry is where all the images are stored whenever you create a new image or update an existing image you push it to the registry and every time someone deploys this application it is pulled from that registry There are also many other popular logs, for example the Google log is on GCR, where I provide many Kubernetes related images that are stored as those used for end-to-end testing on the cluster.
These are all publicly accessible images that anyone can download and access. When you have applications built in-house that should not be made available to the public, hosting an internal private registry may be a good solution. Many cloud service providers, such as AWS, Azure or GCP, provide a private registry by default when you open an account with them. On any of these solutions, be it Docker Hub, Google's registry or your internal private registry, you may choose to make a repository private, so that it can only be accessed using a set of credentials. From Docker's perspective, to run a container using an image from a private registry, you first log into your private registry using the command
docker login. Enter your credentials; once successful, run the application using the private registry as part of the image name, like this. Now, if you did not log into the private registry, it will come back saying that the image cannot be found, so remember to always log in before pushing to or pulling from a private registry. We said that cloud providers like AWS or GCP provide a private registry when you create an account with them, but what if you are running your application on premises and don't have a private registry? How do you deploy your own private registry within your organization?
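Before we get to that, here is a quick sketch of the login-and-run flow just described; the registry address and image path are placeholders, not values from the lecture:

    docker login private-registry.io
    docker run private-registry.io/apps/internal-app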
The Docker registry is itself just another application and is, of course, available as a Docker image. The name of the image is registry, and it exposes its API on port 5000; you run it as a container, for example with docker run -d -p 5000:5000 registry:2. Now that you have your custom registry running at port 5000 on this Docker host, how do you push your own image to it? Use the docker image tag command to tag the image with the private registry URL in it. In this case, since it is running on the same Docker host, I can use localhost:5000 followed by the image name. I can then push my image to my local private registry using the docker push command and the new image name that includes the registry information. From there, I can pull my image from anywhere within this network, using either localhost if I am on the same host, or the IP or domain name of my Docker host if I am accessing it from another host in my environment. Well, that's it for this lecture; head over to the practice test and practice working with private Docker registries. Welcome to this lecture on Docker Engine. In this lecture we take a deeper look at Docker's architecture, how it actually runs applications in isolated containers, and how it works under the hood. Docker Engine, as we have learned
before, simply refers to a host with Docker installed on it. When you install Docker on a Linux host, you are actually installing three different components: the Docker daemon, the Docker REST API server and the Docker CLI. The Docker daemon is a background process that manages Docker objects such as images, containers, volumes and networks. The Docker REST API server is the API interface that programs can use to talk to the daemon and provide instructions; you could create your own tools using this REST API. And the Docker CLI is nothing but the command line interface that we have been using until now to perform actions such as running a container, stopping containers, destroying images, etc. It uses the REST API to interact with the Docker daemon.
Something to note here is that the Docker CLI need not necessarily be on the same host; it could be on another system, like a laptop, and still work with a remote Docker engine. Simply use the -H option on the docker command and specify the address of the remote Docker engine and a port, for example docker -H=<remote-docker-host>:2375 run nginx to run an nginx-based container on the remote host. Now let's try to understand how exactly applications are containerized in Docker. How does it work under the hood? Docker uses namespaces to isolate workspaces: process IDs, network, interprocess communication, mounts and Unix timesharing systems are created in their own namespaces, thereby providing isolation between containers. Let's take a look at one of the namespace isolation techniques, process ID namespaces. Whenever a Linux system boots up, it starts with just one process, with a process ID of one. This is the root process, and it kicks off all the other processes on the system. By the time the system boots up completely, we have a handful of processes running; this can be seen by running the ps command to list all the running processes. Process IDs are unique, and no two processes can have the same process ID. Now,
if we were to create a container, which is basically like a child system within the current system, the child system needs to think that it is an independent system of its own, with its own set of processes originating from a root process with a process ID of one. But we know that there is no hard isolation between the containers and the underlying host, so the processes running inside the container are in fact processes running on the underlying host, and two processes cannot have the same process ID. This is where namespaces come into play. With process ID namespaces, each process can have multiple process IDs associated with it. For example, when the processes start in the container, they are really just another set of processes on the base Linux system, and they get the next available process IDs, in this case 5 and 6. However, they also get another set of process IDs, starting with PID 1, in the container namespace, which is only visible inside the container. So the container thinks it has its own root process tree, and that it is an independent system. So how does that relate to an actual system? How does it look on a host? Let's say you were to run an nginx server as a container. We know that the nginx container runs an nginx service. If we list all the services inside the Docker container, we see the nginx service running with a process ID of 1; this is the process ID of the service inside the container namespace. If we list the services on the Docker host, we see the same service, but with a different process ID, which indicates that all processes are in fact running on the same host, but separated into their own containers using namespaces. So we learned that the underlying Docker host and the containers share the same system resources, such as CPU and memory. How much of the resources are dedicated to the host and the containers, and how does Docker manage and share the resources between containers? By default, there is no restriction on how many resources a container can use, so a container may end up utilizing all of the resources on the underlying host. But there is a way to restrict the amount of CPU or memory a container can use. Docker uses cgroups, or control groups, to restrict the amount of hardware resources allocated to each container. This can be done by providing the --cpus option to the docker run command: a value of .5 ensures that the container does not take up more than 50 percent of the host CPU at any given time, for example docker run --cpus=.5 ubuntu. The same goes for memory: setting a value of 100m on the --memory option limits the amount of memory the container can use to 100 megabytes, for example docker run --memory=100m ubuntu. If you are interested in reading more on this topic, refer to the links I have posted on the reference page. That's it for now on Docker Engine.
Earlier in this course we learned that containers share the underlying OS kernel, and as a result we cannot have a Windows container running on a Linux host, or vice versa. We need to keep this in mind while going through this lecture, as it is a very important concept and one most beginners tend to have an issue with. So what are the options available for Docker on Windows? There are two options: the first is Docker on Windows using Docker Toolbox, and the second is the Docker Desktop for Windows option. We will look at each of these now.
Let's take a look at the first option, Docker Toolbox. This was the original support for Docker on Windows. Imagine that you have a Windows laptop and no access to any Linux system whatsoever, not in a lab and not in the cloud, but you would like to try Docker. What would you do? What I did was install virtualization software on my Windows system, such as Oracle VirtualBox or VMware Workstation, deploy a Linux virtual machine on it, such as Ubuntu or Debian, then install Docker on the Linux VM and play around with it.
This is the first option, and it really has nothing much to do with Windows. You cannot build Windows-based Docker images or run Windows-based Docker containers, and you obviously cannot run Linux containers directly on Windows either; you are simply working with Docker on a Linux virtual machine on a Windows host. Docker, however, provides a set of tools to make this easy, called the Docker Toolbox. The Docker Toolbox contains tools like Oracle VirtualBox, Docker Engine, Docker Machine, Docker Compose and a user interface called Kitematic. This helps you get started by simply downloading and running the Docker Toolbox executable.
It will install VirtualBox and deploy a lightweight VM called boot2docker, which already has Docker running in it, so that you are all set to start with Docker easily and within a short period of time. What about the requirements? You must make sure that your operating system is 64-bit Windows 7 or higher and that virtualization is enabled on the system. Now remember, Docker Toolbox is a legacy solution for older Windows systems that do not meet the requirements of the newer Docker Desktop for Windows option. The second option is that newer option, called Docker Desktop for Windows. In the previous option we had Oracle VirtualBox installed on Windows, then a Linux VM, and then Docker on that Linux VM. With Docker Desktop for Windows, we take out Oracle VirtualBox and use the native virtualization technology available on Windows, called Microsoft Hyper-V. During the installation process, Docker Desktop for Windows will still automatically create a Linux system underneath, but this time it is created on Microsoft Hyper-V instead of Oracle VirtualBox, and it has Docker running on that system. Because of this dependency on Hyper-V, this option is only supported on Windows 10 Enterprise or Professional edition and on Windows Server 2016, because both of these operating systems come with Hyper-V support by default. Now here is the most important point: so far, everything we have discussed about Docker support on Windows is strictly for Linux containers, Linux applications packaged into Linux Docker images. We are not talking about Windows applications, Windows images or Windows containers.
Both options we just discussed will help you run a Linux container on a Windows host. With Windows Server 2016, Microsoft announced support for Windows containers for the first time. You can now package Windows applications into Windows Docker containers and run them on a Windows Docker host using Docker Desktop for Windows. When you install Docker Desktop for Windows, the default option is to work with Linux containers, but if you would like to run Windows containers, you must explicitly configure Docker to switch to using Windows containers. In early 2016 Microsoft announced Windows containers; you can now create Windows-based images, run Windows containers on a Windows server just like you would run Linux containers on a Linux system, containerize your Windows applications as Windows images, and share them through the Docker store.
Unlike in Linux, there are two types of containers in Windows. The first is the Windows Server container, which works exactly like containers in Linux, where the OS kernel is shared with the underlying operating system. To allow a better security boundary between containers, and to allow kernels with different versions and configurations to coexist, the second option was introduced, known as Hyper-V isolation. With Hyper-V isolation, each container runs inside a highly optimized virtual machine, guaranteeing complete kernel isolation between the containers and the underlying host. Now, whereas in the Linux world you had a number of base images for a Linux system, such as Ubuntu, Debian, Fedora, Alpine, etc. (if you remember, that is what you specify at the beginning of the Dockerfile), in the Windows world we have two options: Windows Server Core and Nano Server.
Nano Server is a headless deployment option for Windows Server that runs at a fraction of the size of the full operating system; you can think of it like the Alpine image in Linux. Windows Server Core, though, is not as lightweight as you might expect. Finally, Windows containers are supported on Windows Server 2016, Nano Server, and Windows 10 Professional and Enterprise edition. Windows 10 Professional and Enterprise edition only supports Hyper-V isolated containers, meaning, as we just discussed, every container deployed runs inside a highly optimized virtual machine. Well, that is all about Docker on Windows.
Before I finish, I want to point out one important fact. We saw two ways of running Docker on Windows, using VirtualBox or Hyper-V, but remember that VirtualBox and Hyper-V cannot coexist on the same Windows host. So if you started off with Docker Toolbox and VirtualBox and you plan to migrate to Hyper-V, remember that you cannot have both solutions at the same time. There is a migration guide available on the Docker documentation page on how to migrate from VirtualBox to Hyper-V. That is all for now; thank you, and see you in the next lecture. We will now look at Docker on Mac.
Docker on Mac is similar to Docker on Windows: there are two options to get started, using Docker Toolbox or the Docker Desktop for Mac option. Let's look at the first option, Docker Toolbox. This was the original support for Docker on Mac; it is Docker on a Linux VM created using VirtualBox on the Mac. As with Windows, it has nothing to do with Mac applications, Mac-based images or Mac containers; it purely runs Linux containers on a macOS machine. The Docker Toolbox contains a set of tools like Oracle VirtualBox, Docker Engine, Docker Machine, Docker Compose and a user interface called Kitematic. When you download and install the Docker Toolbox executable, it installs VirtualBox and deploys a lightweight VM called boot2docker, which already has Docker running in it.
This requires macOS 10.8 or newer. The second option is the newer option, called Docker Desktop for Mac. With Docker Desktop for Mac, we take out Oracle VirtualBox and use the HyperKit virtualization technology. During the installation process, Docker Desktop for Mac will still automatically create a Linux system underneath, but this time it is created on HyperKit instead of Oracle VirtualBox, and it has Docker running on that system. This requires macOS Sierra 10.12 or newer, and the Mac hardware must be a 2010 or newer model. Finally, remember that all of this is to be able to run Linux containers on a Mac; as of this recording, there are no Mac-based images or containers. Well, that is all on Docker on Mac for now. Next we will try to understand what container orchestration is.
So far in this course we have seen that with Docker you can run a single instance of an application with a simple docker run command, in this case a Node.js-based application with the docker run nodejs command. But that is just one instance of your application on one Docker host. What happens when the number of users increases and that instance can no longer handle the load? You deploy additional instances of your application by running the docker run command multiple times, and that is something you have to do yourself: you have to keep a close watch on the load and performance of your application and deploy additional instances yourself. And not just that; you have to keep a close watch on the health of these applications, and if a container were to fail, you should be able to detect that and run the docker run command again to deploy another instance of the application.
What about the health of the Docker host itself? What if the host crashes and becomes inaccessible? The containers hosted on that host become inaccessible too. So what do you do to solve these issues? You would need a dedicated engineer who can sit and monitor the state, performance and health of the containers and take the necessary actions to remediate the situation. But when you have large applications deployed with tens of thousands of containers, that is not a practical approach. You could build your own scripts, and that would help you tackle these issues to some extent. Container orchestration is just a solution for that: it is a solution consisting of a set of tools and scripts that can help host containers in a production environment.
Typically, a container orchestration solution consists of multiple Docker hosts that can host containers; that way, even if one fails, the application is still accessible through the others. A container orchestration solution easily allows you to deploy hundreds or thousands of instances of your application with a single command. This is a command used for Docker Swarm; we will look at the command itself in a bit. Some orchestration solutions can help you automatically scale up the number of instances when users increase and scale down when demand decreases. Some solutions can even help you automatically add additional hosts to support the user load. And it is not just clustering and scaling: container orchestration solutions also provide support for advanced networking between the containers across different hosts, as well as load balancing user requests across the different hosts. They also provide support for sharing storage between the hosts, as well as support for configuration management and security within the cluster.
There are multiple container orchestration solutions available today: Docker has Docker Swarm, Google has Kubernetes and Apache has Mesos. While Docker Swarm is really easy to set up and get started with, it lacks some of the advanced autoscaling features required for complex production-grade applications. Mesos, on the other hand, is quite difficult to set up and get started with, but supports many advanced features. Kubernetes, arguably the most popular of them all, is a bit difficult to set up and get started with, but provides a lot of options to customize deployments and is supported by many different vendors. Kubernetes is now supported on all public cloud service providers like GCP, Azure and AWS, and the Kubernetes project is one of the top-ranked projects on GitHub.
In the upcoming lectures we will take a quick look at Docker Swarm and Kubernetes. We will now get a quick introduction to Docker Swarm. Docker Swarm has a lot of concepts to cover and would require its own course, but we will try to take a quick look at some of the basic details so you can get a brief idea of what it is. With Docker Swarm you can combine multiple Docker machines together into a single cluster. Docker Swarm will take care of distributing your services, or instances of your application, across separate hosts for high availability and for load balancing across different systems and hardware. To set up a Docker Swarm, you must first have multiple hosts with Docker installed on them. Then you must designate one host to be the manager, or the master, or the swarm manager as it is called, and the others as slaves or workers. Once you are done with that, run the docker swarm init command on the swarm manager; that initializes the swarm manager. The output also provides the command to be run on the workers (a docker swarm join command with a token), so copy that command and run it on the worker nodes to join the manager. After joining the swarm, the workers are also referred to as nodes, and you are now ready to create services and deploy them on the swarm cluster. So let's get into some more details. As you already know, to run an instance of my web server, I run the docker run command and specify the name of the image I wish to run. This creates a new container instance of my application and serves my web server. Now that we have learned how to create a swarm cluster, how do I utilize my cluster to run multiple instances of my web server?
Now, one way to do this would be to run the docker run command on each worker node, but that is not ideal: I might have to log into each node and run this command, and there could be hundreds of nodes. I would have to set up load balancing myself, monitor the state of each instance myself, and if instances were to fail, restart them myself, so it is going to be an impossible task. That is where Docker Swarm orchestration comes in; the Docker Swarm orchestrator does all of this for us. So far we have only set up the swarm cluster, but we have not seen orchestration in action.
The key component of swarm orchestration is the Docker service. Docker services are one or more instances of a single application or service that run across the nodes in the swarm cluster. For example, in this case I could create a Docker service to run multiple instances of my web server application across the worker nodes in my swarm cluster. For this, I run the docker service create command on the manager node and specify my image name there, which is my-web-server in this case, and use the --replicas option to specify the number of instances of my web server I would like to run across the entire cluster. Since I specified 3 replicas, I get 3 instances of my web server spread across the different worker nodes.
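A sketch of that command, run on the manager node; my-web-server here is just the assumed image name from this example:

    docker service create --replicas=3 my-web-server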
Remember, the docker service create command must be run on the manager node and not on the worker nodes. The docker service create command is similar to the docker run command in terms of the options passed, such as the -e option for environment variables, the -p option for publishing ports, the network option for attaching the container to a network, etc. Well, that is a high-level introduction to Docker Swarm. There is a lot more to know, such as configuring multiple managers, overlay networking and so on; as I mentioned, it requires its own separate course. That is it for now; in the next lecture we will look at Kubernetes at a high level.
We will now get a brief introduction to basic Kubernetes concepts. Again, Kubernetes requires its own course, well, a few courses, at least five, but we will try to get a brief introduction to it here. With Docker you were able to run a single instance of an application using the Docker CLI by running the docker run command, which is great: running an application has never been easier. With Kubernetes, using the Kubernetes CLI, known as kubectl, you can run a thousand instances of the same application with a single command. Kubernetes can scale it up to two thousand with another command. Kubernetes can even be configured to do this automatically, so that the instances, and the infrastructure itself, can scale up or down based on user load.
Kubernetes can upgrade these two thousand instances of the application in a rolling fashion, one at a time, with a single command; if something goes wrong, it can help you roll back these images with a single command. Kubernetes can help you test new features of your application by upgrading only a percentage of the instances through A/B testing methods. Kubernetes' open architecture provides support for many different networking and storage vendors: any networking or storage brand you can think of has a plugin for Kubernetes. Kubernetes supports a variety of authentication and authorization mechanisms, and all major cloud service providers have native support for Kubernetes. So what is the relationship between Docker and Kubernetes?
Well, Kubernetes uses Docker hosts to host applications in the form of Docker containers. Kubernetes does not have to use Docker all the time, though; it supports alternatives to Docker, such as Rkt or CRI-O. Let's take a quick look at the Kubernetes architecture. A Kubernetes cluster consists of a set of nodes. Let's start with the nodes: a node is a machine, physical or virtual, on which the Kubernetes software, a set of tools, is installed. A node is a worker machine, and that is where Kubernetes launches the containers. But what happens if the node your application is running on fails?
Obviously our application goes down, so you need to have more than one node. A cluster is a set of nodes grouped together; this way, even if one node fails, your application is still accessible from the other nodes. Now we have a cluster, but who is responsible for managing it? Where is the information about the members of the cluster stored? How are the nodes monitored? When a node fails, how do you move the workload from the failed node to another worker node? That is where the master comes in. The master is a node with the Kubernetes control plane components installed. The master watches over the nodes in the cluster and is responsible for the actual orchestration of containers on the worker nodes. When you install Kubernetes on a system, you are actually installing the following components: an API server, an etcd server, a kubelet service, a container runtime engine like Docker, a set of controllers and the scheduler. The API server acts as the front end for Kubernetes: the users, management devices and command line interfaces all talk to the API server to interact with the Kubernetes cluster.
Next is etcd, a key-value store. etcd is a reliable, distributed key-value store used by Kubernetes to store all the data used to manage the cluster. Think of it this way: when you have multiple nodes and multiple masters in your cluster, etcd stores all that information across the nodes in the cluster in a distributed manner, and etcd is responsible for implementing locks within the cluster to ensure there are no conflicts between the masters. The scheduler is responsible for distributing work, or containers, across multiple nodes: it looks for newly created containers and assigns them to nodes. The controllers are the brain behind orchestration: they are responsible for noticing and responding when nodes, containers or endpoints go down, and they make decisions to bring up new containers in such cases. The container runtime is the underlying software used to run containers; in our case it happens to be Docker. And finally, the kubelet is the agent that runs on each node in the cluster; the agent is responsible for making sure the containers are running on the nodes as expected. Finally, we also need to learn a little bit about one of the command line utilities, known as the kube command line tool, or kubectl, or kube control as it is also called.
The kubectl tool is the Kubernetes CLI, used to deploy and manage applications on a Kubernetes cluster, to get cluster-related information, to get the status of the nodes in the cluster, and many other things. The kubectl run command is used to deploy an application on the cluster, the kubectl cluster-info command is used to view information about the cluster, and the kubectl get nodes command is used to list all the nodes that are part of the cluster. So to run hundreds of instances of your application across hundreds of nodes, all I need is a single Kubernetes command like this. Well, that is all we have for now, a quick introduction to Kubernetes and its architecture.
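As a recap, here is a hedged sketch of the commands just mentioned; the image name is a placeholder, and note that the --replicas flag shown in the lecture has since been removed from kubectl run in newer Kubernetes releases, where you would instead create a Deployment and use kubectl scale:

    # deploy an application on the cluster (older kubectl syntax as used in the lecture)
    kubectl run my-web-server --image=my-web-server --replicas=1000
    # view information about the cluster
    kubectl cluster-info
    # list all nodes that are part of the cluster
    kubectl get nodes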
We currently have three Kubernetes courses on Code Cloud that will take you from an absolute beginner to a certified expert, so have a look at them when you get a chance. So we are at the end of this Docker for beginners course. I hope you had a great learning experience; if so, please leave a comment below. If you like my way of teaching, you will love the other courses hosted on my site at Code Cloud. We have courses for Docker Swarm and Kubernetes, advanced courses on Kubernetes certifications as well as OpenShift, and courses on automation tools like Ansible, Chef and Puppet, with many more on the way at Code Cloud.
