Docker is a very popular topic of conversation nowadays. We’re going to break down what the tool is, what it isn’t, and why it’s useful to you. Along the way we'll answer a very common question: "What is Docker, and how does it actually work?"
It’s not feasible to have a separate computer or server for every application we might want to run or interface with. That would waste resources: not all applications run at full capacity all the time, and most applications don’t need everything a modern server offers. The approach that has long been used to deal with this is called virtualization.
Virtualization vs Containerization
To understand what Docker is, you need to know the difference between virtualization and containerization. In virtualization you have some hardware, an operating system on top of it, a hypervisor, and then, on top of the hypervisor, multiple VMs (Virtual Machines).
Containers use a similar structure but differ in an important way. You still have a server and an operating system, but instead of a hypervisor you use facilities built into the Linux kernel, specifically namespaces, cgroups, and chroot, to break the system up into chunks called «containers». Comparing a VM and a container, you’ll see that they aren’t all that different, but the one difference they do have is significant.
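As a rough sketch of what these kernel facilities do, you can create a namespace yourself with the unshare tool from util-linux, no Docker involved. This assumes a Linux host; the cgroup step assumes the cgroup v2 layout, and both commands need root:

```shell
# Give a shell its own PID namespace: inside it, the shell believes
# it is PID 1, isolated from the host's process table.
sudo unshare --pid --mount --fork /bin/sh -c 'echo "my PID in here: $$"'

# Limit memory for a group of processes with a cgroup (v2 layout):
sudo mkdir /sys/fs/cgroup/demo
echo 100M | sudo tee /sys/fs/cgroup/demo/memory.max
```

Docker drives these same mechanisms for you, along with chroot-style filesystem isolation, which is why a container is just a set of ordinary processes from the kernel's point of view.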
A VM is made up of an operating system, the libraries and dependencies the application needs, and the application you want to run inside that virtual machine.
Containers consist of the libraries and dependencies you need to run the app, plus your application logic. The big difference here is that no operating system is required inside a container.
As I mentioned before, containers are created using cgroups, namespaces, and chroot. So what is Docker? It is a system that interacts with the operating system on your behalf to create containers; it is not the thing that actually creates containers, and it did not invent containers either.
So what is Docker and how does it work?
Now let’s take a look at the tools Docker gives you for interacting with the Linux operating system to create containers. You start with a computer running Linux and install the Docker daemon on that server. You log in so you can interact with Docker Hub or another registry. Then you run commands from your Docker client, which can live on the same server as the daemon or on your own machine, connecting to the daemon over the network.
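Here is a minimal sketch of this client/daemon split, assuming Docker is already installed (the remote address below is a placeholder, not a real host):

```shell
# The docker CLI is only a client; the daemon (dockerd) does the work.
# "docker version" reports the two sides separately.
docker version

# By default the client talks to the local daemon over a Unix socket.
# Pointing DOCKER_HOST at another machine makes the same CLI drive
# a remote daemon instead (address is hypothetical):
export DOCKER_HOST=tcp://192.168.1.50:2375
docker info
```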
When you run «docker run apache» you first talk to the Docker daemon. It checks whether it already has an Apache image it can run. If it does not, it talks to Docker Hub. If the hub finds the Apache image, it returns the contents of that image. The daemon then caches the image and talks to the Linux OS, manipulating the kernel facilities it needs in order to start the container.
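You can try this flow directly, assuming Docker is installed and you have network access. Note that the official Apache image on Docker Hub is actually named httpd:

```shell
# First run: the daemon finds no local image, pulls it from Docker Hub,
# then asks the kernel to start the container.
docker run -d --name web1 httpd

# The pulled image is now cached locally:
docker images httpd

# Second run: cache hit, no pull, the container starts immediately.
docker run -d --name web2 httpd
```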
You can repeat this process with different applications. If you run «docker run rails», you talk to the Docker daemon again, which talks to Docker Hub if you don't have a Rails image locally, and Docker then interacts with the operating system on your behalf to start a container that contains Rails. By the way, CI/CD pipelines often use Docker; you can read more about that in our recent article.
I hope this article has shown that Docker didn't invent container technology, but it does create containers for you. Docker makes it much easier to use containers to run your applications. It also removes the need for a hypervisor and virtualization, which both reduces cost and improves the utilization of server resources. The container standard Docker provides is also incredibly useful because it makes it easier to work with containers created by others. The flexibility and the range of solutions you can find on Docker Hub are pretty amazing.