Apart from Docker, what other options do we have?

Source | CSDN
Translation | Crescent Moon    Editor | Zhang Wen
Header image | CSDN, via Visual China

[Editor's note] As the Chinese saying goes, the river flows east for thirty years and west for the next thirty. Docker, once the undisputed ruler of the container field, is past its glory days. Sentiment aside, we have to admit that Docker is being overtaken by the waves behind it...

About four years ago, Docker was the only choice in the container field.

Now, however, the situation is very different. Docker is no longer the only option; it is just another container engine. We can use Docker to build, run, pull, push, or inspect container images, but every one of these tasks can be done with other tools as well, and some of them do it better than Docker.

So, let us explore the space, and then you can uninstall Docker and forget all about it.


Why not use Docker?

If you have been a Docker user for a long time, it may take some convincing before you will even consider switching to other tools.

First of all, Docker is a monolithic tool that tries to do everything, which is generally not the best approach. In most cases it is better to choose a specialized tool that does just one thing, and does it very well.

Maybe you are reluctant to use other tools because you worry about having to learn a different CLI, a different API, or different concepts. Don't worry: any of the tools described in this article can replace Docker fairly seamlessly, because they (Docker included) follow the same OCI (Open Container Initiative) specifications. The OCI covers specifications for the container runtime, container distribution, and container images, which together span all the functionality needed to work with containers.

Because of OCI, you are free to choose the tool that suits your needs. At the same time, you can continue to use the same API and CLI commands as Docker.

So, if you are open to trying new tools, let's compare the advantages, disadvantages, and features of Docker and its competitors, and see whether it is worth considering ditching Docker for something fresh.

Container engine

 

When comparing Docker with other tools, we need to discuss its components separately. The first thing to discuss is the container engine.

A container engine is a tool that provides a user interface for working with images and containers, so that you don't have to wrestle with SECCOMP rules or SELinux policies. A container engine can also pull images from remote registries and unpack them to local disk. It appears to run containers too, but in fact its job is to create a container manifest and a directory of unpacked image layers; it then hands these to a container runtime such as runc or crun.

There are many container engines to choose from, but Docker's main competitor is Podman, developed by Red Hat. Unlike Docker, Podman needs neither a daemon nor root privileges to run, both of which have long been points of criticism for Docker. As the name suggests, Podman can run not only containers but also pods.

If you are not familiar with pods: a pod is the smallest compute unit in Kubernetes, consisting of one or more containers (a main container plus sidecar containers that support it). Podman users can therefore easily migrate their workloads to Kubernetes later on.

Below, we use a simple demonstration to illustrate how to run two containers in a Pod:

~ $ podman pod create --name mypod
~ $ podman pod list
POD ID        NAME    STATUS    CREATED         # OF CONTAINERS   INFRA ID
211eaecd307b  mypod   Running   2 minutes ago   1                 a901868616a5
~ $ podman run -d --pod mypod nginx  # First container
~ $ podman run -d --pod mypod nginx  # Second container
~ $ podman ps -a --pod
CONTAINER ID  IMAGE                           COMMAND               CREATED        STATUS            PORTS  NAMES               POD           POD NAME
3b27d9eaa35c  docker.io/library/nginx:latest  nginx -g daemon o...  2 seconds ago  Up 1 second ago          brave_ritchie       211eaecd307b  mypod
d638ac011412  docker.io/library/nginx:latest  nginx -g daemon o...  5 minutes ago  Up 5 minutes ago         cool_albattani      211eaecd307b  mypod
a901868616a5  k8s.gcr.io/pause:3.2                                  6 minutes ago  Up 5 minutes ago         211eaecd307b-infra  211eaecd307b  mypod
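Because pods are the same unit Kubernetes uses, Podman can also export a pod as a Kubernetes manifest and recreate it from that YAML. A short sketch, assuming the mypod pod from the demo above still exists:

```
~ $ podman generate kube mypod > mypod.yaml   # Export the pod as Kubernetes-compatible YAML
~ $ podman play kube mypod.yaml               # Recreate the pod from that YAML
```

The generated YAML can be applied to a real cluster with kubectl, which is what makes the Kubernetes migration path so smooth.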

Finally, the CLI commands Podman provides are identical to Docker's, so all you need to do is run

alias docker=podman

and carry on as if nothing had happened.

Besides Docker and Podman, there are other container engines, but I consider all of them to be either dead ends or unsuitable for local development.

Still, for a more complete picture of the container-engine landscape, let me mention a few:

  • LXD: LXD is a container manager (daemon) for LXC (Linux Containers). It runs system containers, which provide a container environment closer to a virtual machine. The tool sits in a narrow niche and does not have many users, so unless you have a very specific use case, you are better off with Docker or Podman.

  • CRI-O: If you search the internet for what CRI-O is, you may find it described as a container engine. In reality, though, it is a container runtime. It is neither an engine nor suited to "regular" use: it was purpose-built as a runtime for Kubernetes (implementing the CRI), not for end users.

  • rkt: rkt (pronounced "rocket") is a container engine developed by CoreOS. It is listed here only for completeness: the project has ended and development has stopped, so you should not use it.

Building images

 

For container engines, Docker really has only one alternative (Podman). For building images, however, we have far more options to choose from.

First, let's take a look at Buildah. This is another tool developed by Red Hat, and it plays very well with Podman. If you have installed Podman, you may already have noticed the podman build subcommand: that command is really just wrapped Buildah, since Buildah's functionality is embedded in Podman.

Feature-wise, Buildah follows the same policy as Podman: no daemon, no root privileges required, and it produces OCI-compliant images, so your images run exactly the same way as images built with Docker. It can also build images from a Dockerfile or a Containerfile, which are the same thing under different names. Beyond that, Buildah offers finer control over image layers and lets you commit many changes into a single layer. One unexpected (but, in my view, welcome) difference from Docker is that images built with Buildah are user-specific, so you will only see the images you built yourself when you list them.

You may ask: if the Podman CLI already includes Buildah, why use the separate Buildah CLI at all? In fact, the Buildah CLI is a superset of the commands wrapped by podman build, so you may rarely need it directly, but by using it you may discover some extra features.

Below, we look at an example:

~ $ buildah bud -f Dockerfile .
~ $ buildah from alpine:latest  # Create starting container - equivalent to "FROM alpine:latest"
Getting image source signatures
Copying blob df20fa9351a1 done
Copying config a24bb40132 done
Writing manifest to image destination
Storing signatures
alpine-working-container  # Name of the temporary container
~ $ buildah run alpine-working-container -- apk add --update --no-cache python3  # equivalent to "RUN apk add --update --no-cache python3"
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/community/x86_64/APKINDEX.tar.gz
...
~ $ buildah commit alpine-working-container my-final-image  # Create final image
Getting image source signatures
Copying blob 50644c29ef5a skipped: already exists
Copying blob 362b9ae56246 done
Copying config 1ff90ec2e2 done
Writing manifest to image destination
Storing signatures
1ff90ec2e26e7c0a6b45b2c62901956d0eda138fa6093d8cbb29a88f6b95124c
~ $ buildah images
REPOSITORY                TAG     IMAGE ID      CREATED         SIZE
localhost/my-final-image  latest  1ff90ec2e26e  22 seconds ago  51.4 MB

As the session above shows, you can build an image directly with buildah bud, where bud stands for build using Dockerfile. Alternatively, you can script a build with Buildah's from, run, and copy subcommands, which correspond to the FROM, RUN, and COPY instructions in a Dockerfile.

Next up is Kaniko, from Google. Kaniko also builds container images from a Dockerfile and, like Buildah, needs no daemon. The main difference from Buildah is that Kaniko focuses on image builds inside Kubernetes.

Kaniko itself is meant to run as an image (gcr.io/kaniko-project/executor), which makes sense on Kubernetes but is inconvenient for local builds and somewhat defeats the purpose, because you end up using Docker to run the Kaniko image that builds your image. That said, if you need a tool for building images in a Kubernetes cluster (for example, inside a CI/CD pipeline), Kaniko may be a good choice, since it is daemonless and arguably more secure.
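For reference, a local Kaniko build looks roughly like this: a sketch that, somewhat ironically, uses Docker to run the executor image, and passes --no-push so the result stays local instead of going to a registry:

```
~ $ docker run -v "$PWD":/workspace gcr.io/kaniko-project/executor:latest \
      --context=dir:///workspace --dockerfile=/workspace/Dockerfile --no-push
```

On Kubernetes, the same executor image would run as a pod or job, with the build context supplied from a volume, a git URL, or an object store.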

From personal experience, both do the job well, but with Kaniko I ran into some random build failures, as well as failures when pushing images to a registry.

The third tool to introduce is buildkit, which might be called docker build 2.0. It is part of the Moby project (as is Docker) and can be enabled as an experimental Docker feature by setting DOCKER_BUILDKIT=1 before running docker build. So what does it bring? Quite a few improvements and cool features: parallel build steps, skipping of unused stages, better incremental builds, and rootless builds. On the other hand, it still requires a daemon (buildkitd) to run. So, if you don't want to get rid of Docker but would like some new features and nice improvements, buildkit may be the way to go.
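As a small sketch of what BuildKit enables, the Dockerfile below uses a cache mount, a BuildKit-only feature that persists a package cache across builds. The project layout (a Python app with a requirements.txt) and the image tag are illustrative assumptions:

```shell
# Write an illustrative Dockerfile that uses a BuildKit-only cache mount
cat > Dockerfile.buildkit <<'EOF'
# syntax=docker/dockerfile:1
FROM python:3.9-slim
COPY requirements.txt .
# BuildKit keeps the pip cache between builds instead of re-downloading
RUN --mount=type=cache,target=/root/.cache/pip pip install -r requirements.txt
EOF

# Opt in to BuildKit for a single build (shown as a comment; requires Docker):
#   DOCKER_BUILDKIT=1 docker build -f Dockerfile.buildkit -t myapp .
echo "Dockerfile.buildkit written"
```

A plain docker build rejects the --mount flag; with DOCKER_BUILDKIT=1 the same Dockerfile builds fine, which makes the opt-in switch an easy way to compare the two builders.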

Here are a few more tools, each with a specific niche, though none of them would be my first choice:

  • Source-To-Image (S2I): This is a toolkit for building images directly from source code, without a Dockerfile. It works well in simple, anticipated scenarios and workflows, but it quickly becomes annoying and clumsy if you need more customization or if your project's layout does not match its expectations. Consider S2I if you are not satisfied with Docker, or if you build images on an OpenShift cluster, where S2I builds are a built-in feature.

  • Jib: This is a Google tool for building Java images. It provides Maven and Gradle plugins that make building images easy, without having to worry about a Dockerfile.

  • Bazel: This is another Google tool. It is not just for building container images; it is a complete build system. Using Bazel only to build an image may be overkill, but it is definitely a good learning experience. If you are interested, rules_docker is a good place to start.
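To give a feel for the first two of these, here is roughly what invoking S2I and Jib looks like; the builder image, app names, and target tags are illustrative assumptions:

```
~ $ s2i build . registry.access.redhat.com/ubi8/python-39 my-python-app   # source dir + builder image -> app image
~ $ mvn compile jib:dockerBuild -Dimage=my-java-app                       # Jib via its Maven plugin, no Dockerfile
```

In both cases there is no Dockerfile anywhere in the project; the tool decides how source becomes layers.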

Container runtime

 

Finally, let's talk about the container runtime, the component responsible for actually running containers. The container runtime is one part of the container life cycle that you should not mess with, unless you have very specific requirements around speed, security, and so on.

If you are bored by this point, feel free to skip this section. Otherwise, here is an overview of the container-runtime options:

runc is a popular container runtime that implements the OCI container runtime specification. It is used by Docker (through containerd), by Podman, and by CRI-O, so there is little left to say. It is the default for nearly every container engine, which means you will probably still be using runc even if you abandon Docker after reading this article.
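If you are curious what using runc directly looks like, it operates on an OCI bundle: a rootfs directory plus a config.json. A minimal sketch (in practice you would first populate rootfs, for example from an exported image):

```
~ $ mkdir -p mycontainer/rootfs && cd mycontainer
~ $ runc spec                 # generates a default config.json for the bundle
~ $ runc run mycontainer-id   # runs a container from the bundle in the current directory
```

This is exactly the layer that container engines hide from you: they prepare the bundle and then invoke the runtime.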

Another alternative to runc is crun. It is developed by Red Hat and written entirely in C (runc is written in Go), which makes it faster and more memory-efficient than runc. Since it is also an OCI-compliant runtime, switching to it should be straightforward if you want to give it a try. Although it is not widespread yet, it will appear as a technology-preview alternative OCI runtime in RHEL 8.3, and given that it is a Red Hat product, it may well end up as the default for Podman or CRI-O.
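Trying crun amounts to pointing your engine at a different OCI runtime; for example with Podman, assuming crun is installed at the path shown:

```
~ $ podman --runtime /usr/bin/crun run --rm alpine echo hello
```

Because both runtimes consume the same OCI bundle format, the container itself behaves identically either way.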

Speaking of CRI-O: earlier I said that CRI-O is not a container engine but a container runtime. That is because CRI-O lacks features such as pushing images, which are exactly what you would expect from an engine. Internally, CRI-O uses runc to run containers. You should not try to use this runtime on your own machine; it was designed as the runtime for Kubernetes nodes, billed as "all the runtime Kubernetes needs". So unless you are setting up a Kubernetes cluster, CRI-O should not be on your list.

The last one is containerd, a Cloud Native Computing Foundation project. It is a daemon that acts as an API facade for various container runtimes and operating systems. In the background it relies on runc, and it is the default runtime of the Docker engine. It is also used by Google Kubernetes Engine (GKE) and IBM Kubernetes Service (IKS). Like CRI-O, it implements the Kubernetes Container Runtime Interface (CRI), which makes it a solid runtime choice for Kubernetes clusters.


Image inspection and distribution

 

The last piece of the puzzle is image inspection and distribution: mainly, replacing docker inspect and, optionally, gaining the ability to copy images between remote registries.

The only tool I will mention here for these tasks is Skopeo. It is made by Red Hat and is a companion tool to Buildah, Podman, and CRI-O. Besides the basic skopeo inspect, which you know from docker inspect, Skopeo can also copy images with skopeo copy, which lets you move images between remote registries without first pulling them to a local machine. If you use a local registry, this feature can also act as pull/push.
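A quick sketch of both uses; registry.example.com is a placeholder for your own registry:

```
~ $ skopeo inspect docker://docker.io/library/nginx:latest             # like docker inspect, but nothing is pulled
~ $ skopeo copy docker://docker.io/library/nginx:latest \
      docker://registry.example.com/nginx:latest                      # registry-to-registry copy
```

The docker:// transport prefix is what tells Skopeo to talk to a registry; it also supports other transports such as local container storage and archives.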

Separately, I would also like to mention Dive, a tool for inspecting, exploring, and analyzing images. It is user-friendly, provides readable output, and can dig fairly deep into your image, analyzing and measuring its efficiency. It is also handy in CI pipelines, where it can check whether your image is "efficient enough", in other words, whether it wastes too much space.
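Dive is run against an image reference; for CI there is a non-interactive mode that fails the build when the configured efficiency thresholds are not met:

```
~ $ dive nginx:latest            # interactive, layer-by-layer exploration
~ $ CI=true dive nginx:latest    # non-interactive mode, suitable for CI pipelines
```

In CI mode the pass/fail thresholds (such as lowest allowed efficiency and most allowed wasted bytes) can be tuned in Dive's configuration file.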

Summary

 

The goal of this article was not to persuade you to abandon Docker entirely, but to show you the whole landscape of building, running, managing, and distributing containers and their images, together with the alternative tools for each step. Every tool, Docker included, has its advantages and disadvantages. What matters most is to evaluate which set of tools best fits your workflow and your situation. I hope this article helps you with that.

Reference link: https://martinheinz.dev/blog/35


Origin blog.csdn.net/FL63Zv9Zou86950w/article/details/112855410