Docker is no longer the only option

Docker is not the only containerization tool, and there may be better choices...
In the early days of containers (really just four or so years ago), Docker was the only player in the container game. That is no longer the case: Docker is now just one container engine among several. Docker lets us build, run, pull, push, and inspect container images, but for each of these tasks there are alternative tools, and some of them may even do the job better. So, let's explore the landscape, and then, maybe, uninstall and forget about Docker altogether...


So, why stop using Docker?



If you have been using Docker for a long time, it will probably take some evidence to convince you to even consider other tools.
First of all, Docker is a monolithic tool. It tries to cover every function, which is usually not the best approach. In most cases it is better to choose a specialized tool that does just one thing, and does it very well.
If you are afraid of switching to a different tool set because you would have to learn a different CLI, API, or different concepts, that will not be a problem. Every tool shown in this article can be a completely seamless replacement, because they (including Docker) follow the same OCI (Open Container Initiative) specifications. These cover the container runtime, container distribution, and container images, which together span everything needed to work with containers.
Thanks to the OCI, you can choose the set of tools that best meets your needs while still enjoying the same APIs and CLI commands you know from Docker.
So, if you are willing to try new tools, let's compare the advantages, disadvantages, and features of Docker and its competitors, and see whether it is worth considering ditching Docker for some shiny new tool.


Container engine



When comparing Docker with other tools, we need to break it down into its components. First, let's discuss container engines. A container engine is a tool that provides a user interface for working with images and containers so that you don't have to deal with things like SECCOMP rules or SELinux policies. Its job also includes pulling images from remote registries and unpacking them onto disk. It may also appear to run containers, but in fact its job is to create a container manifest and a directory of image layers, which it then passes to a container runtime such as runC or crun (we will discuss these later).
There are many container engines out there, but Docker's most prominent competitor is Podman, developed by Red Hat. Unlike Docker, Podman needs neither a daemon nor root privileges to run, both of which have been long-standing concerns with Docker. As its name suggests, Podman can run not only containers but also pods. If you are not familiar with the concept, a pod is the smallest compute unit in Kubernetes. It consists of one or more containers (a main container plus sidecars that perform supporting tasks), which makes it easier for Podman users to migrate their workloads to Kubernetes later. As a simple demonstration, this is how to run two containers in one pod:
 
 

~ $ podman pod create --name mypod
~ $ podman pod list

POD ID         NAME    STATUS    CREATED         # OF CONTAINERS   INFRA ID
211eaecd307b   mypod   Running   2 minutes ago   1                 a901868616a5

~ $ podman run -d --pod mypod nginx  # First container
~ $ podman run -d --pod mypod nginx  # Second container
~ $ podman ps -a --pod

CONTAINER ID  IMAGE                           COMMAND               CREATED        STATUS            PORTS  NAMES               POD           POD NAME
3b27d9eaa35c  docker.io/library/nginx:latest  nginx -g daemon o...  2 seconds ago  Up 1 second ago          brave_ritchie       211eaecd307b  mypod
d638ac011412  docker.io/library/nginx:latest  nginx -g daemon o...  5 minutes ago  Up 5 minutes ago         cool_albattani      211eaecd307b  mypod
a901868616a5  k8s.gcr.io/pause:3.2                                  6 minutes ago  Up 5 minutes ago         211eaecd307b-infra  211eaecd307b  mypod

Finally, Podman provides exactly the same CLI commands as Docker, so you can just run alias docker=podman and pretend that nothing has changed.
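To see how seamless the switch is, here is a typical Docker workflow running unchanged under the alias (a minimal sketch; the nginx image, container name, and port mapping are just examples):

~ $ alias docker=podman
~ $ docker pull nginx                          # Pull from the default registry
~ $ docker run -d --name web -p 8080:80 nginx  # Run detached, publish a port
~ $ docker ps                                  # List running containers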
Besides Docker and Podman there are other container engines, but I personally consider them all to be dead-end technologies, or tools unsuitable for local development and use. For the sake of completeness, though, here is at least a quick look at them:

  • LXD: a container manager (daemon) for LXC (Linux Containers). This tool offers the ability to run system containers, which provide a container environment closer to a VM. It sits in a very narrow niche with few users, so unless you have a very specific use case, you are better off using Docker or Podman.

  • CRI-O: when you Google what CRI-O is, you may find it described as a container engine. In reality, though, it is a container runtime: it is neither an engine nor suitable for "normal" use. It was built specifically as a runtime for the Kubernetes Container Runtime Interface (CRI), not for end users.

  • rkt ("Rocket"): a container engine developed by CoreOS. The project is mentioned here only for completeness, because it has ended and development has stopped, so there is no reason to use it anymore.


Building images



For container engines, there was really only one serious alternative to Docker. When it comes to building images, though, we have many more options to choose from.
First, let's introduce Buildah. Buildah is another tool developed by Red Hat, and it pairs very well with Podman. If you have already installed Podman, you may have noticed the podman build subcommand, which is really just Buildah in disguise, since Buildah's binary is included in Podman.
As for its features, it follows the same route as Podman: daemonless and rootless, and it produces OCI-compliant images, so its images are guaranteed to be equivalent to those built by Docker. It can also build images from a Dockerfile or the (more appropriately named) Containerfile, which is the same thing under a different name. Beyond that, Buildah provides finer control over image layers, allowing many changes to be committed in a single layer. One difference from Docker, which I find unexpected (but a good one), is that images built by Buildah are user-specific, so each user can only list the images they built themselves.
So, considering that Buildah is already included in the Podman CLI, you might ask why use the separate buildah CLI at all? Well, the buildah CLI is a superset of the commands wrapped by podman build, so you will mostly not need to touch it, but by using it directly you may also discover some extra useful features (for details about the differences between podman build and buildah, please refer to this article [1]).
Now, let's take a look at a demo:
 
 

~ $ buildah bud -f Dockerfile .

~ $ buildah from alpine:latest  # Create starting container - equivalent to "FROM alpine:latest"
Getting image source signatures
Copying blob df20fa9351a1 done
Copying config a24bb40132 done
Writing manifest to image destination
Storing signatures
alpine-working-container  # Name of the temporary container
~ $ buildah run alpine-working-container -- apk add --update --no-cache python3  # equivalent to "RUN apk add --update --no-cache python3"
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/community/x86_64/APKINDEX.tar.gz
...

~ $ buildah commit alpine-working-container my-final-image  # Create final image
Getting image source signatures
Copying blob 50644c29ef5a skipped: already exists
Copying blob 362b9ae56246 done
Copying config 1ff90ec2e2 done
Writing manifest to image destination
Storing signatures
1ff90ec2e26e7c0a6b45b2c62901956d0eda138fa6093d8cbb29a88f6b95124c

~ $ buildah images
REPOSITORY                TAG     IMAGE ID      CREATED         SIZE
localhost/my-final-image  latest  1ff90ec2e26e  22 seconds ago  51.4 MB

As the script above shows, you can simply build an image with buildah bud, where bud stands for build-using-Dockerfile, or you can take the more scripted approach with Buildah's from, run, and copy subcommands, which are the equivalents of the Dockerfile instructions (FROM image, RUN ..., COPY ...).
The next one is Kaniko from Google. Kaniko also builds container images from a Dockerfile and, similar to Buildah, does not need a daemon. The main difference from Buildah is that Kaniko focuses on building images inside Kubernetes.
Kaniko is meant to be run as an image (gcr.io/kaniko-project/executor). That works for Kubernetes, but it is not very convenient for local builds and somewhat defeats the purpose, since you would first have to run the Kaniko image with Docker in order to build your image. That said, if you are choosing a tool for building images in a Kubernetes cluster (for example, in a CI/CD pipeline), Kaniko may be a good choice because it is daemonless and (probably) more secure.
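For illustration, a local Kaniko build looks roughly like this (a hedged sketch, assuming a Dockerfile in the current directory; --no-push builds the image without uploading it anywhere):

~ $ docker run \
      -v "$(pwd)":/workspace \
      gcr.io/kaniko-project/executor:latest \
      --dockerfile=/workspace/Dockerfile \
      --context=dir:///workspace \
      --no-push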
From my personal experience: I have used both Kaniko and Buildah to build images in Kubernetes/OpenShift clusters, and I think both can do the job very well, but with Kaniko I saw occasional random build crashes, as well as failures when pushing images to the registry.
The third contender is buildkit, which could also be called the next-generation docker build. It is part of the Moby project and can be enabled in Docker as an experimental feature with DOCKER_BUILDKIT=1 docker build ... So, what does it bring to the table? It introduces many improvements and cool features, including parallel build steps, skipping unused stages, better incremental builds, and rootless builds. On the other hand, it still requires a daemon (buildkitd) to run. So, if you don't want to get rid of Docker but want some new features and nice improvements, using buildkit might be the way to go.
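Trying it with an existing Docker installation is a one-liner (assuming a Dockerfile in the current directory; the image tag is just an example):

~ $ DOCKER_BUILDKIT=1 docker build -t my-app .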
As before, there are also a few "niche" tools here that serve very specific use cases, even if they would not be my first choice:

  • Source-To-Image (S2I) is a toolkit for building images directly from source code, without a Dockerfile. This tool works well in simple, expected scenarios and workflows, but it quickly becomes annoying and clumsy if you need too much customization or if your project does not have the expected layout. You might consider S2I if you are not yet very confident with Docker, or if you build images on an OpenShift cluster, where builds with S2I are a built-in feature.

  • Jib is another tool from Google, specifically for building Java images. It includes Maven and Gradle plugins, which make it easy to build images without meddling with Dockerfiles.

  • Last but not least is Bazel, yet another tool from Google. It is not just for building container images; it is a complete build system. If you only want to build an image, digging into Bazel may be overkill, but it is definitely a good learning experience, so if you are up for it, rules_docker is a good starting point.


Container runtime



The last big piece is the container runtime, which is responsible for, well, running containers. The container runtime is one part of the whole container lifecycle/stack, and unless you have some very specific requirements around speed, security, and so on, you generally don't need to touch it. So, if you are tired by now, feel free to skip this part. Otherwise, here are the container runtime options:
runC is the most popular container runtime, built around the OCI container runtime specification. It is used by Docker (through containerd), Podman, and CRI-O, so pretty much everything except LXD (which uses LXC). It is the default for almost all tools, so even if you abandon Docker after reading this article, you will most likely still be using runC.
A similarly named (and easily confused) alternative to runC is crun. It is a tool developed by Red Hat and written entirely in C (runC is written in Go), which makes it faster and more memory-efficient than runC. Since it is also an OCI-compliant runtime, it is easy to switch to if you want to give it a try. Although it is not very popular yet, it appears as an alternative OCI runtime in the RHEL 8.3 technology preview, and considering that it is a Red Hat product, we may eventually see it become the default for Podman or CRI-O.
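If you want to experiment, Podman lets you pick the runtime per invocation (a minimal sketch, assuming crun is installed at /usr/bin/crun):

~ $ podman --runtime /usr/bin/crun run --rm alpine echo "hello from crun"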
Speaking of CRI-O: as mentioned earlier, CRI-O is not actually a container engine but a container runtime. This is because CRI-O lacks features such as pushing images, which you would expect from a container engine. As a runtime, CRI-O uses runC internally to run containers. You normally wouldn't use this tool on your own machine, since it was built to serve as a runtime on Kubernetes nodes, where it is described as "all the runtime Kubernetes needs and nothing more." So, unless you are setting up a Kubernetes cluster (or an OpenShift cluster, where CRI-O is already the default), you are unlikely to touch it.
The last runtime in this section is containerd, a CNCF graduated project. It is a daemon that acts as an API facade for various container runtimes and operating systems. In the background it relies on runC, and it is the default runtime of the Docker engine. It is also used by Google Kubernetes Engine (GKE) and IBM Kubernetes Service (IKS). It is an implementation of the Kubernetes Container Runtime Interface (CRI), same as CRI-O, which makes it a good choice as the runtime for a Kubernetes cluster.


Image inspection and distribution



The last part of the container stack is image inspection and distribution. This effectively replaces docker inspect and (optionally) adds the ability to copy images between remote registries.
The only tool worth mentioning here that can accomplish these tasks is Skopeo. It is developed by Red Hat and is a companion tool for Buildah, Podman, and CRI-O. Besides the basic skopeo inspect, which we all know from Docker, Skopeo can also copy images with skopeo copy, which lets you mirror images between remote registries without first pulling them to a local machine. This feature can also act as pull/push if you work with a local registry.
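For a quick taste (the registry and image names here are just examples):

~ $ skopeo inspect docker://docker.io/library/nginx:latest
~ $ skopeo copy docker://docker.io/library/nginx:latest docker://registry.example.com/mirror/nginx:latest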
Additionally, I would like to mention Dive, a tool for inspecting, exploring, and analyzing images. It is user-friendly, provides a more readable output, can dig deeper into your images, and analyzes and measures their efficiency. It is also suitable for use in CI pipelines, where it can measure whether your image is "efficient enough", or in other words, whether it wastes too much space.
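Both of Dive's modes are a single command away (a minimal sketch; the image name is just an example, and in CI mode Dive exits with a non-zero status when the image falls below the configured efficiency threshold):

~ $ dive nginx:latest          # interactive, layer-by-layer exploration
~ $ CI=true dive nginx:latest  # non-interactive pass/fail mode for pipelines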


Conclusion



The purpose of this article is not to persuade you to abandon Docker completely, but to show you the whole landscape and all the options for building, running, managing, and distributing containers and their images. Each tool, including Docker, has its advantages and disadvantages; what matters most is evaluating which set of tools best suits your workflow and use cases. I sincerely hope this article helps you with that.
Related Links:
  1. https://podman.io/blogs/2018/10/31/podman-buildah-relationship.html


Original link: https://towardsdatascience.com/its-time-to-say-goodbye-to-docker-5cfec8eff833

