- Docker: How To Debug Distroless And Slim Containers
- Kubernetes Ephemeral Containers and kubectl debug Command
- Containers 101: attach vs. exec - what's the difference?
- Why and How to Use containerd From Command Line
- Docker: How To Extract Image Filesystem Without Running Any Containers
- KiND - How I Wasted a Day Loading Local Docker Images
Don't miss new posts in the series! Subscribe to the blog updates and get deep technical write-ups on Cloud Native topics delivered directly to your inbox.
A container image is a combination of layers where every layer represents some intermediary state of the final filesystem. Such a layered composition makes the building, storage, and distribution of images more efficient. But from a mere developer's standpoint, images are just the root filesystems of our future containers. And we often want to explore their content accordingly - with familiar tools like `cat`, `ls`, or `file`. Let's see if we can achieve this goal using nothing but the means provided by Docker itself.

The `docker save` command is not what you think it is

The `docker help` output has just a few entries that look relevant to our task. The first one is the `docker save` command:

```
$ docker save --help

Usage: docker save [OPTIONS] IMAGE [IMAGE...]

Save one or more images to a tar archive (streamed to STDOUT by default)
```

Trying it out quickly shows that it's not something we need:
The `docker save` command, also known as `docker image save`, dumps the content of an image in its canonical layered representation, while we're interested in the final state of the filesystem that the image would produce when mounted.
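The layered-vs-flattened distinction can be mimicked with nothing but `tar` (no Docker involved): an image is, roughly, a stack of layer tarballs, and the flattened filesystem is what you get by unpacking them in order. A toy sketch, with made-up file names:

```shell
# Toy illustration (no Docker required): an image is, roughly, a stack of
# layer tarballs, and the container's root filesystem is the result of
# unpacking them in order. File and directory names are made up for the demo.
set -e
work=$(mktemp -d)

# "Layer 1": the base state of the filesystem
mkdir "$work/layer1"
echo 'v1' > "$work/layer1/app.conf"
tar -cf "$work/layer1.tar" -C "$work/layer1" .

# "Layer 2": a later build step that overwrites app.conf
mkdir "$work/layer2"
echo 'v2' > "$work/layer2/app.conf"
tar -cf "$work/layer2.tar" -C "$work/layer2" .

# The flattened filesystem is the result of applying the layers
# one on top of another:
mkdir "$work/rootfs"
tar -xf "$work/layer1.tar" -C "$work/rootfs"
tar -xf "$work/layer2.tar" -C "$work/rootfs"
cat "$work/rootfs/app.conf"   # prints: v2
```

Here `docker save` would hand you something like the two layer tarballs (plus a manifest), while `docker export` corresponds to the merged `rootfs` directory.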
`docker export` is what you need (but with a trick)

The second command that looks relevant is `docker export`. Let's try our luck with it:

```
$ docker export --help

Usage: docker export [OPTIONS] CONTAINER

Export a container's filesystem as a tar archive
```

The problem with this command is that it expects a container and not an image name:

```
$ docker export nginx -o nginx.tar.gz
Error response from daemon: No such container: nginx
```

An obvious solution would be to run the container and repeat the export attempt:
```
$ docker pull nginx

$ CONT_ID=$(docker run -d nginx)

$ docker export ${CONT_ID} -o nginx.tar.gz
```
What's inside?
```
$ mkdir rootfs

$ tar -xf nginx.tar.gz -C rootfs

$ ls -l rootfs
total 84
drwxr-xr-x  2 vagrant vagrant 4096 Aug 22 00:00 bin
drwxr-xr-x  2 vagrant vagrant 4096 Jun 30 21:35 boot
drwxr-xr-x  4 vagrant vagrant 4096 Sep 12 14:07 dev
drwxr-xr-x  2 vagrant vagrant 4096 Aug 23 03:59 docker-entrypoint.d
...
drwxr-xr-x  2 vagrant vagrant 4096 Aug 23 03:59 tmp
drwxr-xr-x 11 vagrant vagrant 4096 Aug 22 00:00 usr
drwxr-xr-x 11 vagrant vagrant 4096 Aug 22 00:00 var
```
💡 Pro Tip: If accurate file ownership information is required, you can use the `--same-owner` flag while extracting the tar archive. However, you'll have to be root for that. Example: `sudo tar --same-owner -xf nginx.tar.gz -C rootfs`
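Relatedly, you can check what ownership a tar archive actually recorded without extracting it (and without root): `tar -tv` prints the stored owner for every entry, and `--numeric-owner` shows the raw IDs. A small self-contained sketch (archive and file names are made up):

```shell
# Inspect the ownership stored in a tar archive without extracting it.
# As a non-root user, plain extraction silently makes every file yours;
# listing the archive shows what `--same-owner` (as root) would restore.
set -e
work=$(mktemp -d)
mkdir "$work/src"
echo 'data' > "$work/src/file.txt"
tar -cf "$work/archive.tar" -C "$work/src" .

# -tv lists entries with their owner/group; --numeric-owner prints raw uid/gid
tar --numeric-owner -tvf "$work/archive.tar"
```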
Well, that does look like what we need - just a regular folder with a bunch of files inside that we can explore. However, running a container just to see its image content has significant downsides:
- The technique is slow (a container has to start) and potentially insecure (the image's code actually runs).
- Some files can be modified upon startup, spoiling the export results.
- Sometimes, running a container is simply impossible (it can be broken).
But can the `docker export` command be used without running a container?
Containers are stateful creatures - they are as much about files as processes. In particular, it means that when a containerized process dies, its execution environment, including the filesystem, is preserved on disk (unless you run it with `--rm`, of course). I realized this quite some time ago, so using `docker export` for stopped containers has been a no-brainer for me. However, this approach suffers from pretty much the same set of drawbacks as exporting the filesystem of a running container...
Call me stupid, but it just occurred to me that `docker export` can be used with a container that was created but hasn't been started yet.
The well-known `docker run` command is a shortcut for two less frequently used commands - `docker create <IMAGE>` and `docker start <CONTAINER>`. And since containers aren't (only) processes, the `docker create` command prepares the root filesystem for the future container. So, here is the trick:
```
$ docker pull nginx

$ CONT_ID=$(docker create nginx)

$ docker export ${CONT_ID} -o nginx.tar.gz
```
And a handy one-liner (assuming the image has already been pulled and the target folder created):

```
$ docker export $(docker create nginx) | tar -xC <dest>
```
P.S. Don't forget to `docker rm` the temporary container 😉
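The create/export/rm dance can be wrapped in a small helper so the temporary container is always cleaned up. This is just a sketch - `extract_rootfs` is a made-up name, and it assumes `docker` and `tar` are on the PATH:

```shell
# Sketch of a helper that flattens an image's filesystem into a directory
# without ever starting a container. `extract_rootfs` is a made-up name.
extract_rootfs() {
    image="$1"
    dest="$2"
    mkdir -p "$dest" || return 1

    # `docker create` prepares the container's root filesystem
    # but doesn't run anything.
    cid="$(docker create "$image")" || return 1

    # Stream the flattened filesystem straight into tar.
    docker export "$cid" | tar -x -C "$dest"
    rc=$?

    # Always remove the temporary container, even if the export failed.
    docker rm "$cid" > /dev/null
    return "$rc"
}
```

Usage would be something like `extract_rootfs nginx ./rootfs`. Note that without `set -o pipefail`, a failing `docker export` can go unnoticed - the pipeline's exit status is `tar`'s.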
`docker build` can also be helpful

Turns out that it's possible to specify a custom location for the `docker build` results using the `--output` flag (added in Docker 19.03, released mid-2019). Generally, building an image triggers (temporary) container runs, but a Dockerfile without any `RUN` instructions should do the trick!
The `--output` flag works only if BuildKit is used, so you'll either need to go with `docker buildx build` or set the `DOCKER_BUILDKIT=1` variable:
```
$ echo 'FROM nginx' > Dockerfile

$ DOCKER_BUILDKIT=1 docker build -o rootfs .

$ ls -l rootfs
total 84
drwxr-xr-x  2 vagrant vagrant 4096 Aug 22 00:00 bin
drwxr-xr-x  2 vagrant vagrant 4096 Jun 30 21:35 boot
drwxr-xr-x  4 vagrant vagrant 4096 Sep 12 14:07 dev
drwxr-xr-x  2 vagrant vagrant 4096 Aug 23 03:59 docker-entrypoint.d
...
drwxr-xr-x  2 vagrant vagrant 4096 Aug 23 03:59 tmp
drwxr-xr-x 11 vagrant vagrant 4096 Aug 22 00:00 usr
drwxr-xr-x 11 vagrant vagrant 4096 Aug 22 00:00 var
```
Thanks to Chris Guest for pointing me to this amazing feature!
⚠️ Caveat: I couldn't find a way to preserve the file ownership information with `docker build -o`.
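A nice side effect of this approach, assuming BuildKit's local exporter behaves as documented: with a multi-stage Dockerfile you can extract just a subset of the image's filesystem by copying the interesting paths into an empty `scratch` stage (the paths below are examples):

```dockerfile
# Extract only selected paths instead of the whole filesystem:
# the final (scratch) stage contains nothing but what we COPY into it,
# and `docker build -o` writes exactly that to the output directory.
FROM nginx AS source

FROM scratch
COPY --from=source /etc/nginx /etc/nginx
COPY --from=source /docker-entrypoint.d /docker-entrypoint.d
```

Then `DOCKER_BUILDKIT=1 docker build -o nginx-files .` should produce a folder containing just those two trees.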
Bonus: Mount container images as host folders
As you probably know, Docker delegates more and more of its container management tasks to another, lower-level daemon called containerd. It means that if you have a `dockerd` daemon running on a machine, most likely there is a `containerd` daemon somewhere nearby as well. And containerd often comes with its own command-line client, `ctr`, that can be used, in particular, to inspect images.
The cool part about containerd is that it provides much more fine-grained control over the typical container management tasks than Docker does:
```
$ ctr image pull docker.io/library/nginx:latest

$ mkdir rootfs

$ ctr image mount docker.io/library/nginx:latest rootfs
```

When you're done exploring, unmount it with `ctr image unmount rootfs`.
The above trick is what I used before this recent realization of how to use `docker export` with a just-created container.
It would be great if it were possible to use `ctr` to inspect images owned by `dockerd`, but a quick check (`ctr --namespace moby image ls`) showed that it's not the case yet. However, this might change soon, thanks to the ongoing attempt to offload more and more lower-level tasks from Docker to containerd.
Instead of Conclusion
Keep playing with containers, folks. It's fun!
Further reading
- How to Run a Container Without an Image
- Cracking the Docker CLI: How to Grasp Container Management Commands
- Learning Docker with Docker - Toying With DinD For Fun And Profit
- Why and How to Use containerd From Command Line
- 🎬 containerd - a secret hero of the Cloud Native world
- Containers 101: attach vs. exec - what's the difference?