This is a long overdue post on iximiuz Labs' internal kitchen. It'll cover why I decided to build my own learning-by-doing platform for DevOps, SRE, and Platform engineers, how I designed it, what technology stack I chose, and how various components of the platform were implemented. It'll also touch on some of the trade-offs that I had to make along the way and highlight the most interesting parts of the platform's architecture. In the end, I'll, of course, share my thoughts on what's next on the roadmap. Sounds interesting? Then brace for a long read!
The only "official" way to publish a port in Docker is the
-p|--publish flag of the
docker run (or
docker create) command. And it's probably for good that Docker doesn't allow you to expose ports on the fly easily. Published ports are part of the container's configuration, and the modern infrastructure is supposed to be fully declarative and reproducible. Thus, if Docker encouraged (any) modification of the container's configuration at runtime, it'd definitely worsen the general reproducibility of container setups.
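For the record, here is what that officially supported flow looks like - a minimal sketch with an arbitrary image and arbitrary port numbers:

```bash
# Ports can be published only when the container is created:
docker run -d -p 8080:80 nginx            # short form
docker create --publish 8080:80 nginx     # long form; docker create accepts it too
```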
But what if I really need to publish that port?
For instance, I periodically get into the following trouble: there is a containerized Java monster web service that takes (tens of) minutes to start up, and I'm supposed to develop/debug it. I launch a container and go grab some coffee. But when I'm back from the coffee break, I realize that I forgot to expose port 80 (or 443, or whatever) to my host system. And the browser is on the host... So, what are the options?
- Restart the container, this time exposing the port, potentially committing its modified filesystem in between. This is probably "the right way," but it sounds too slow and boring to me.
- Modify the container's config file manually and restart the whole Docker daemon for the changes to be picked up. This solution likely causes the container to restart too, so it's also too slow for me. Besides, I doubt it's future-proof, even though it keeps being suggested 9 years later.
- Access the port using the container's IP address, like `curl 172.17.0.3:80`. This is a reasonable suggestion, but it works only when the container's IP is routable from the place where you have your debugging tools. Docker Desktop (or Docker Engine running inside a Vagrant VM) makes it virtually useless.
- Add a DNAT iptables rule mapping the container's socket to the host's. That's what Docker Engine itself would have done had you asked it to publish the port in the first place. But are you an iptables expert? Because I'm not. And it has the same issue as the previous piece of advice - the container's IP address has to be routable from the host system.
- Start another "proxy" container in the same network and publish its port instead - finally, a solution that sounds good to me ❤️🔥 Let's explore it.
If you're dealing with containers regularly, you've probably published ports many, many times already. A typical need for publishing arises like this: you're developing a web app, locally but in a container, and you want to test it using your laptop's browser. The next thing you do is `docker run -p 8080:80 app` and then open `localhost:8080` in the browser. Easy-peasy!
But have you ever wondered what actually happens when you ask Docker to publish a port?
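Before diving into the details, you can already peek at some of this machinery yourself. A quick sketch, assuming a Linux host running Docker Engine with default settings (in particular, with the userland proxy enabled):

```bash
docker run -d --name web -p 8080:80 nginx

# The NAT rules Docker added for the mapping:
sudo iptables -t nat -S DOCKER

# The userland proxy process listening on the host's port 8080:
ps aux | grep docker-proxy

# Docker's own view of the published ports:
docker port web
```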
In this article, I'll try to connect the dots between port publishing, a term apparently coined by Docker, and a more traditional networking technique called port forwarding. I'll also take a look under the hood of different "single-host" container runtimes (Docker Engine, Docker Desktop, containerd, nerdctl, and Lima) to compare their port publishing implementations and capabilities.
As always, the ultimate goal is to gain a deeper understanding of the technology and get closer to becoming a power user of containers. Let the diving begin!
Slim containers are faster (less stuff to move around) and more secure (fewer places for vulnerabilities to sneak in). However, these benefits come at a price - such containers lack the (at times much-needed) exploration and debugging tools. It might be quite challenging to tap into a container that was built from a distroless or slim base image, or that was minified with DockerSlim or a similar tool. Over the years, I've learned a few tricks for troubleshooting slim containers, and it's time for me to share them.
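As a teaser, here is the general flavor of such tricks - a sketch that assumes the slim container is already running under the (hypothetical) name `target`:

```bash
# Start a throwaway "debugger" container that brings its own shell and
# tools (busybox here, but any toolbox image would do) while joining
# the slim container's PID and network namespaces instead of its own:
docker run -it --rm \
  --pid=container:target \
  --network=container:target \
  busybox sh

# From that shell, ps and netstat now show the target's processes and
# sockets, even though the target image ships no tooling at all.
```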
A container image is a combination of layers where every layer represents some intermediary state of the final filesystem. Such a layered composition makes the building, storage, and distribution of images more efficient. But from a mere developer's standpoint, images are just the root filesystems of our future containers. And we often want to explore their content accordingly - with familiar tools like `file`. Let's try to see if we can achieve this goal using nothing but the means provided by Docker itself.
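One approach that needs nothing but the `docker` CLI is sketched below (the image name is just an example): create a container without starting it, then export its merged root filesystem as a tarball:

```bash
# Create (but don't start) a container from the image of interest:
docker create --name explorer nginx

# Dump the would-be root filesystem into a tar archive:
docker export explorer -o rootfs.tar
docker rm explorer

# Unpack and explore it with the usual suspects:
mkdir rootfs && tar -xf rootfs.tar -C rootfs
file rootfs/usr/sbin/nginx
```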