Learning Containers From The Bottom Up

When I started using containers back in 2015, my initial understanding was that they were just lightweight virtual machines with a subsecond startup time. With such a rough idea in my head, it was easy to follow tutorials from the Internet on how to put a Python or a Node.js application into a container. But pretty quickly, I realized that thinking of containers as VMs is a risky oversimplification that doesn't allow me to judge:

  • What's doable with containers and what's not
  • What's an idiomatic use of containers and what's not
  • What's safe to run in containers and what's not.

Since the "container is a VM" abstraction turned out to be quite leaky, I had to start looking into the technology's internals to understand what containers really are, and Docker was the most obvious starting point. But Docker is a behemoth doing a wide variety of things, and the apparent simplicity of docker run nginx can be deceptive. There were plenty of materials on Docker, but most of them were:

  • Either shallow introductory tutorials
  • Or hard reads indigestible for a newbie.

So, it took me a while to pave my way through the containerverse.

I tried tackling the domain from different angles, and over the years, I managed to come up with a learning path that finally worked out for me. Some time ago, I shared this path on Twitter, and evidently, it resonated with a lot of people.

This article is not an attempt to explain containers in one go. Instead, it's a front page for my multi-year study of the domain. It outlines that learning path and then walks you through it, pointing to more in-depth write-ups on this same blog.

Mastering containers is no simple task, so take your time, and don't skip the hands-on parts!

Read more

Disposable Local Development Environments with Vagrant, Docker, and Arkade

I use a (rather oldish) MacBook for my day-to-day development tasks. But I prefer keeping my host operating system free of development stuff. This strategy has the following benefits:

  • Increasing reproducibility of my code - it often happened to me in the past that some code worked on my machine but didn't work on others; usually, it was due to missing dependencies. Developing multiple projects on the same machine makes it harder to track what libraries and packages are required for what project. So, now I always try to have an isolated environment per project.
  • Testing code on the target platform - most of my projects have something to do with server-side and infra stuff; hence, the actual target platform is Linux. Since I use a MacBook, I spend a lot of time inside virtual machines running the same operating system as my servers do. So, I'd need to duplicate my macOS development tools on every Linux system I happen to use anyway.
  • Keeping the host operating system clean and slim - even if I work on something platform-agnostic like a command-line tool, I prefer not to pollute my workstation with the dev tools and packages anyway. Projects and domains change often, and installing all the required stuff right into the host operating system would make it messy real quick.
  • Decreasing time to recover in case of machine loss - a single multi-purpose machine quickly becomes a snowflake host. Coming up with the full list of things to reinstall after a sudden machine loss would hardly be feasible.

Since I usually work on several projects at the same time, I need not one but many isolated development environments. And every environment should be project-tailored, easy to spin up, suspend, and, eventually, dispose of. I figured out a way to achieve that using only a few tools installed on my host operating system, and I'm going to share it here.
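To give a rough feel for the workflow (a hedged sketch of the general idea, not the exact setup from the write-up below), the day-to-day loop with Vagrant looks something like this, with the project-specific bits living in a Vagrantfile kept alongside the project:

    # from the project directory that holds its Vagrantfile
    vagrant up          # create and provision the project-tailored VM
    vagrant ssh         # jump inside the isolated environment to work
    vagrant suspend     # pause the environment when switching projects
    vagrant destroy -f  # throw it away for good once the project is done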

The approach may be helpful for folks using macOS or Linux:

  • to work on server-side and full-stack projects
  • to do Linux systems programming
  • to play with the Cloud Native stack
  • to build some cool command-line tools.

Read more

DevOps, SRE, and Platform Engineering

I originally compiled this as a thread on Twitter, and all of a sudden, it got quite some attention. So here, I'll try to elaborate on the topic a bit more. Maybe it will be helpful for someone trying to make a career decision or just to improve their general understanding of the most hyped titles in the industry.

Read more

My Choice of Programming Languages

When I was a kid, I used to spend days tinkering with woodworking tools. I was lucky enough to have a wide set of tools at my disposal. However, there was no one around to give me a hint about which tool to use when. So, I quickly came up with a heuristic: if my fingers and the tool survived an exercise, I had used the right tool; if either the fingers or the tool got damaged, I'd try other tools for the same task until I found the right one. And it worked quite well for me! Since then, I've been an apologist for the idea that every tool is good only for a certain set of tasks.

A programming language is yet another kind of tool. When I became a software developer, I adapted my heuristic to the new reality: if, while solving a task in a certain language, I suffer too much (finger damage) or need to hack around things more often than not (tool damage), it's the wrong choice of language.

Since a language is just a tool, my programming toolbox is defined by the tasks I work on most often. Since 2010, I've worked in many domains, ranging from web UI development to writing code for infrastructure components. I find pleasure in being a generalist (jack of all trades), but there is always the pitfall of spreading yourself too thin (master of none). So, for the past few years, I've been trying to limit my sphere of expertise to the server side, distributed systems, and infrastructure. Hence the following choice of languages.

Language logos

Read more

KiND - How I Wasted a Day Loading Local Docker Images

From time to time I use kind as a local Kubernetes playground. It's super-handy, real quick, and 100% disposable.

Up until recently, all the scenarios I'd tested with kind used public container images. However, a few days ago, I found myself in a situation where I needed to run a pod using an image that I had just built on my laptop.

One way of doing it would be to push the image to a local or remote registry accessible from inside the kind Kubernetes cluster. However, kind still doesn't spin up a local registry out of the box (you can vote for the GitHub issue here), and I'm not a fan of sending stuff over the Internet without a very good reason.
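For context, kind ships a subcommand that copies a locally built image straight onto the cluster nodes, bypassing any registry. A minimal sketch (the image name my-app:dev is just a placeholder):

    docker build -t my-app:dev .       # build the image locally on the laptop
    kind load docker-image my-app:dev  # copy it onto the kind cluster's nodes

How smoothly that goes in practice is exactly what the write-up below is about.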

Read more