Containerization with Docker has become extremely popular, letting developers build lightweight, Dockerized infrastructures and deploy code quickly.
By design, Docker assumes that its containers can connect to the outside network. At the same time, with the default configuration, the containers cannot be reached from the outside.
Restricting outside access to Docker containers is a good security measure, but it becomes a problem when you do need external access - for example, to test an application or website.
Also, you can take a closer look at some useful Docker commands in this article.
Port Binding in Docker
Let's assume that we want to run NGINX in a Docker container. You can install NGINX and run a container, but you can't access it from the outside.
By default, Docker containers use an internal network, and each container gets its own IP address that is reachable from the Docker host. These internal IPs, however, can't be used to reach containers from outside, while the Docker host's primary IP is accessible from the external network.
Obviously, you'll want to let users reach the web server from the internet. A common approach is to bind the container's port 80 to port 8080 on the host machine. Users can then access the web server (listening on port 80 inside the container) through port 8080 on the host.
Are you tired of managing your Docker infrastructure? Our DevOps Engineers can take care of your Docker infrastructure and keep it running like a Swiss watch.
Exposing Docker Ports
To expose and bind Docker ports while starting a container with docker run, use the -p option:

```shell
docker run -d -p 8080:80 -t nginx
```

This will create an NGINX container and bind its internal port 80 to port 8080 on the Docker host.
Run the command and check its output:

```shell
docker run -d -p 8080:80 -t nginx

Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
Digest: sha256:029d4461bd98f124e531380505ceea2072418fdf28752aa73b7b273ba3048903
Status: Downloaded newer image for nginx:latest
f5d742a79f1bec1f95445eb896a6b49bb493a5930b3b2ac7377efd8f2bfa620f
```

Use the docker inspect command to view the ports:
```yaml
"Ports": {
    "80/tcp": [
        {
            "HostIp": "0.0.0.0",
            "HostPort": "8080"
        },
        {
            "HostIp": "::",
            "HostPort": "8080"
        }
    ]
},
```

After that, the container's internal port 80 is accessible on port 8080 via the Docker host's IP.
Expose multiple Docker ports
Now we know how to bind one container port to a host port, which is quite useful. But in some cases (for example, in a microservices architecture), you need to set up multiple Docker containers with many dependencies and connections.
In such cases, you can map a range of ports on the Docker host to a single container port:
```shell
docker run -d -p 7000-8000:4000 web-app
```

This will bind the container's port 4000 to a random available port in the 7000-8000 range on the host.
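You can also simply repeat -p to publish several ports at once (for example, `docker run -d -p 8080:80 -p 8443:443 web-app`, where the image name is hypothetical). For the range form, Docker picks one free host port from the range for you. Purely as an illustration of that idea - not Docker's actual allocation algorithm - the selection can be sketched in plain shell using bash's $RANDOM:

```shell
# Illustrative only: pick one port from the 7000-8000 range,
# the way Docker picks a free host port for -p 7000-8000:4000.
low=7000; high=8000
port=$(( low + RANDOM % (high - low + 1) ))
echo "chosen host port: $port"
```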
Include "EXPOSE" parameter in Dockerfile if you want to update Docker container listening ports: EXPOSE <CONTAINER_PORT>
Exposing Docker port to a single host interface
There are also some specific cases when you need to expose a Docker port to a single host interface, say localhost.
You can do this by mapping the container's port to a host port on that particular interface:
```shell
docker run -p 127.0.0.1:$HOSTPORT:$CONTAINERPORT -t image
```

You can use the docker port [id_of_the_container] command to verify a particular container's port mapping:
```shell
docker port epic_pasteur

80/tcp -> 0.0.0.0:8080
```

Random Docker port exposure
Want to map all of a container's exposed ports to random ports on the Docker host?
Use the -P option with docker run:

```shell
docker run -d -P webapp
```

To verify the port mapping for a particular container, use the docker ps command after creating the container:
```shell
docker ps

CONTAINER ID   IMAGE   COMMAND                  CREATED         STATUS         PORTS                                     NAMES
f5d742a79f1b   nginx   "/docker-entrypoint.…"   8 minutes ago   Up 8 minutes   0.0.0.0:8080->80/tcp, [::]:8080->80/tcp   epic_pasteur
```

In addition
Security is the #1 priority in any online project, so don't forget about special security measures. A good solution is to create firewall rules and configure them to block unauthorized access to your containers.
If you're having problems with Docker, you can always hire our experts to manage your containerized infrastructure - we'd be happy to collaborate with you!
