Docker is a very popular topic of conversation nowadays, so let’s break down what it is, what it isn’t, and why it’s useful to you.
It’s not feasible to have separate computers and servers for every application that we might want to run or interface with. This would be a big waste of resources, since not all applications run at full capacity all the time, and most applications don’t need all the resources a modern server provides. The approach that has long been used to deal with this is called Virtualization.
In Virtualization you have some hardware, you have an operating system, you have a hypervisor, and then on top of that you have multiple VMs.
Containers use a similar structure but are different at the same time. You have a server and an operating system, but instead of using a hypervisor you use tools provided by the Linux kernel, specifically namespaces, cgroups, and chroot, to break the system up into chunks called "containers". Looking at a VM next to a container, you’ll see that there isn’t a whole lot that’s different, but the one difference they do have is significant. A VM is made up of an operating system, the libraries and dependencies that are necessary, and then the application that you want to run inside that virtual machine. A container consists only of the libraries and dependencies you need in order to run the app, plus your application logic. The big difference is that no operating system is required inside a container.
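You can poke at these kernel primitives directly, without Docker. Below is a minimal, hypothetical sketch using unshare(1); it assumes a Linux system where unprivileged user namespaces are enabled, and the name "isolated-demo" is just an example:

```shell
# Give a shell its own user and UTS (hostname) namespaces, the same
# kernel primitives Docker builds on. Inside the namespace we can
# change the hostname without affecting the real host.
unshare --user --map-root-user --uts sh -c '
  hostname isolated-demo   # takes effect only inside this UTS namespace
  hostname                 # prints: isolated-demo
'
hostname                   # back outside: the host name is unchanged
```

cgroups, the third primitive, handle resource limits rather than isolation; you can see which cgroup a process belongs to with `cat /proc/self/cgroup`.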
As I mentioned before, containers are created using cgroups, namespaces, and chroot. Docker is not on that list. Docker is a system that interacts with the operating system on your behalf to create containers, but it is not actually the thing that creates containers, and Docker did not invent containers either.
Now let’s take a look at the tools that Docker provides for interacting with the Linux operating system in order to create containers. When first working with Docker, you start with a computer that’s running Linux. Then you install the Docker daemon onto that server. You log in to Docker Hub, the public image registry, and then you run commands from your Docker client. This client can be on the same server that is running the Docker daemon, or it can be on your own machine, connected to the daemon over the network. When you run "docker run httpd" (httpd is the official Apache image), you first talk to the Docker daemon. It checks whether it has an Apache image it can run locally. If it does not, it talks to Docker Hub. If Docker Hub can find the Apache image, it returns the contents of that image. The Docker daemon then caches the image and talks to the Linux OS, driving the various kernel tools it needs in order to start the container.
You can repeat this process with different applications. So if you run "docker run rails", the client talks to the Docker daemon once again, the daemon talks to Docker Hub if it doesn’t already have a Rails image, and Docker then interacts with the operating system on your behalf to start a container that contains Rails.
As I hope you’ve learned from this article, Docker didn’t create container technology, but it does create containers for you. Docker makes it much easier to use containers to run your applications, and it also removes the need for a hypervisor and virtualization, which both reduces cost and improves the utilization of server resources. The container standard Docker provides is also incredibly useful because it makes it easier to work with containers created by others. The flexibility and variety of solutions that you can find on Docker Hub are pretty amazing.