Building multi-architecture container images in practice

I recently ran into the following scenario in a localization project: the nodes in a single Kubernetes cluster have mixed architectures, that is, some nodes have x86 CPUs while others have ARM CPUs. To make our image run in such an environment, the simplest approach is to label the nodes by type and build a separate image for each architecture, such as demo:v1-amd64 and demo:v1-arm64, and then write two sets of YAML: one uses the demo:v1-amd64 image and selects x86 nodes via nodeSelector, the other uses the demo:v1-arm64 image and selects ARM nodes via nodeSelector. Obviously, this approach is not only cumbersome to set up but also cumbersome to maintain, and if nodes of yet other architectures join the cluster, the maintenance cost grows multiplicatively.
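For illustration, here is a minimal sketch of what one of those two Deployments might look like (the name demo-amd64 is made up for this example; kubernetes.io/arch is the well-known label that the kubelet sets automatically on modern clusters):

$ cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-amd64              # a second, nearly identical Deployment is needed for ARM
spec:
  replicas: 1
  selector:
    matchLabels: { app: demo, arch: amd64 }
  template:
    metadata:
      labels: { app: demo, arch: amd64 }
    spec:
      nodeSelector:
        kubernetes.io/arch: amd64   # pin the Pod to x86 nodes
      containers:
      - name: demo
        image: demo:v1-amd64        # the architecture-specific tag
EOF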

Overview

As you may know, every Docker image is described by a manifest. The manifest contains the basic information about the image, including its mediaType, size, digest, and the information for each of its layers. You can use docker manifest inspect to view the manifest of an image:

$ docker manifest inspect aneasystone/hello-actuator:v1
{
    "schemaVersion": 2,
    "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
    "config": {
        "mediaType": "application/vnd.docker.container.image.v1+json",
        "size": 3061,
        "digest": "sha256:d6d5f18d524ce43346098c5d5775de4572773146ce9c0c65485d60b8755c0014"
    },
    "layers": [
        {
            "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
            "size": 2811478,
            "digest": "sha256:5843afab387455b37944e709ee8c78d7520df80f8d01cf7f861aae63beeddb6b"
        },
        {
            "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
            "size": 928436,
            "digest": "sha256:53c9466125e464fed5626bde7b7a0f91aab09905f0a07e9ad4e930ae72e0fc63"
        },
        {
            "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
            "size": 186798299,
            "digest": "sha256:d8d715783b80cab158f5bf9726bcada5265c1624b64ca2bb46f42f94998d4662"
        },
        {
            "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
            "size": 19609795,
            "digest": "sha256:112ce4ba7a4e8c2b5bcf3f898ae40a61b416101eba468397bb426186ee435281"
        }
    ]
}

You can add --verbose to view more detailed information, including the image tag and architecture information referenced by the manifest:

$ docker manifest inspect --verbose aneasystone/hello-actuator:v1
{
    "Ref": "docker.io/aneasystone/hello-actuator:v1",
    "Descriptor": {
        "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
        "digest": "sha256:f16a1fcd331a6d196574a0c0721688360bf53906ce0569bda529ba09335316a2",
        "size": 1163,
        "platform": {
            "architecture": "amd64",
            "os": "linux"
        }
    },
    "SchemaV2Manifest": {
        ...
    }
}

We generally do not use a manifest directly, but associate it with a tag to make it easier to use. As can be seen from the above output, this manifest is associated with the image tag docker.io/aneasystone/hello-actuator:v1, the platform it supports is linux/amd64, and the image has four layers. Also note the mediaType field: its value is application/vnd.docker.distribution.manifest.v2+json, which means this is a Docker-format image (if it were application/vnd.oci.image.manifest.v1+json, it would be an OCI-format image).

As you can see, this image tag is associated with only one manifest, and one manifest corresponds to only one architecture. If the same image tag could be associated with multiple manifests, each corresponding to a different architecture, then when we start a container from that tag, the container engine could automatically pick the manifest matching the current system architecture and download the corresponding image. This is in fact the basic principle behind multi-arch images: the collection of manifests is called a manifest list (called an image index in the OCI specification), and an image tag can be associated not only with a single manifest but also with a manifest list.

You can use docker manifest inspect to view the manifest list information of a multi-architecture image:

$ docker manifest inspect alpine:3.17
{
   "schemaVersion": 2,
   "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
   "manifests": [
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 528,
         "digest": "sha256:c0d488a800e4127c334ad20d61d7bc21b4097540327217dfab52262adc02380c",
         "platform": {
            "architecture": "amd64",
            "os": "linux"
         }
      },
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 528,
         "digest": "sha256:ecc4c9eff5b0c4de6be6b4b90b5ab2c2c1558374852c2f5854d66f76514231bf",
         "platform": {
            "architecture": "arm",
            "os": "linux",
            "variant": "v6"
         }
      },
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 528,
         "digest": "sha256:4c679bd1e6b6516faf8466986fc2a9f52496e61cada7c29ec746621a954a80ac",
         "platform": {
            "architecture": "arm",
            "os": "linux",
            "variant": "v7"
         }
      },
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 528,
         "digest": "sha256:af06af3514c44a964d3b905b498cf6493db8f1cde7c10e078213a89c87308ba0",
         "platform": {
            "architecture": "arm64",
            "os": "linux",
            "variant": "v8"
         }
      },
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 528,
         "digest": "sha256:af6a986619d570c975f9a85b463f4aa866da44c70427e1ead1fd1efdf6150d38",
         "platform": {
            "architecture": "386",
            "os": "linux"
         }
      },
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 528,
         "digest": "sha256:a7a53c2331d0c5fedeaaba8d716eb2b06f7a9c8d780407d487fd0fbc1244f7e6",
         "platform": {
            "architecture": "ppc64le",
            "os": "linux"
         }
      },
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 528,
         "digest": "sha256:07afab708df2326e8503aff2f860584f2bfe7a95aee839c8806897e808508e12",
         "platform": {
            "architecture": "s390x",
            "os": "linux"
         }
      }
   ]
}

The alpine:3.17 here is a multi-architecture image. From the output we can see that the mediaType is application/vnd.docker.distribution.manifest.list.v2+json, indicating that this image tag is associated with a manifest list, which contains multiple manifests and supports architectures such as amd64, arm/v6, arm/v7, arm64, i386, ppc64le, and s390x. We can also see this information directly on Docker Hub:
[Screenshot: the supported architectures of alpine:3.17 as shown on Docker Hub]
Obviously, such an image can run directly in our mixed-architecture Kubernetes cluster. If we build our own application into a multi-architecture image like this, we can run it freely in that cluster as well, which is far more elegant than the approach above of building one image per architecture.
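To see the manifest-list resolution in action, you can ask Docker to pull a specific platform from the same tag (a quick check; note that this overwrites what the local alpine:3.17 tag points at):

$ docker pull --platform=linux/arm64 alpine:3.17
$ docker image inspect alpine:3.17 --format '{{.Os}}/{{.Architecture}}'
linux/arm64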

So, how do we build such a multi-architecture image? Generally speaking, if you use Docker as your build tool, there are two common methods: docker manifest and docker buildx.

Create multi-architecture images using docker manifest

docker build is the most commonly used image building command. First, we create a Dockerfile file with the following content:

FROM alpine:3.17
CMD ["echo", "Hello"]

Then use docker build to build the image:

$ docker build -f Dockerfile -t aneasystone/demo:v1 .

That's it, a simple image is built; use docker run to test it:

$ docker run --rm -it aneasystone/demo:v1
Hello

Very smooth, the image runs normally. However, there is a problem with an image built this way: Docker Engine automatically pulls the base image matching the current system. My system is x86, so the alpine:3.17 image it pulled has the linux/amd64 architecture:

$ docker image inspect alpine:3.17 | grep Architecture
        "Architecture": "amd64",

If you want to build images for other architectures, there are three ways. The first is the most primitive: Docker officially maintains a separate account for each architecture. For example, here are some commonly used ones:
● ARMv6 32-bit (arm32v6): https://hub.docker.com/u/arm32v6/
● ARMv7 32-bit (arm32v7): https://hub.docker.com/u/arm32v7/
● ARMv8 64-bit (arm64v8): https://hub.docker.com/u/arm64v8/
● Linux x86-64 (amd64): https://hub.docker.com/u/amd64/
● Windows x86-64 (windows-amd64): https://hub.docker.com/u/winamd64/
So we can use amd64/alpine and arm64v8/alpine to pull the image for the corresponding architecture. Let's modify the Dockerfile slightly and save it as Dockerfile-arg:

ARG ARCH=amd64
FROM ${ARCH}/alpine:3.17
CMD ["echo", "Hello"]

Then use the --build-arg parameter to build images for the different architectures:

docker build --build-arg ARCH=amd64 -f Dockerfile-arg -t aneasystone/demo:v1-amd64 .
docker build --build-arg ARCH=arm64v8 -f Dockerfile-arg -t aneasystone/demo:v1-arm64 .

However, since September 2017 a single image can support multiple architectures, and this method is gradually falling out of use. The second method is to use the base image alpine:3.17 directly and let Docker Engine pull the image for a specific architecture via the --platform parameter of the FROM instruction. We create two new files, Dockerfile-amd64 and Dockerfile-arm64. The content of Dockerfile-amd64 is as follows:

FROM --platform=linux/amd64 alpine:3.17
CMD ["echo", "Hello"]

The contents of the Dockerfile-arm64 file are as follows:

FROM --platform=linux/arm64 alpine:3.17
CMD ["echo", "Hello"]

Then use docker build to build the image again:

$ docker build --pull -f Dockerfile-amd64 -t aneasystone/demo:v1-amd64 .
$ docker build --pull -f Dockerfile-arm64 -t aneasystone/demo:v1-arm64 .

Note the --pull parameter here, which forces Docker Engine to pull the base image; otherwise the second build would reuse the base image cached by the first build, and the architecture would be wrong.

The third method requires no changes to the Dockerfile at all, because docker build itself also supports the --platform parameter; we only need to build the images as follows:

$ docker build --pull --platform=linux/amd64 -f Dockerfile -t aneasystone/demo:v1-amd64 .
$ docker build --pull --platform=linux/arm64 -f Dockerfile -t aneasystone/demo:v1-arm64 .
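Whichever of the three ways you use, you can verify afterwards that each tag really carries the intended architecture (a quick sanity check with docker image inspect):

$ docker image inspect aneasystone/demo:v1-amd64 --format '{{.Architecture}}'
amd64
$ docker image inspect aneasystone/demo:v1-arm64 --format '{{.Architecture}}'
arm64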

When executing the docker build command, you may encounter the following error message:

$ docker build -f Dockerfile-arm64 -t aneasystone/demo:v1-arm64 .
[+] Building 1.2s (3/3) FINISHED
 => [internal] load build definition from Dockerfile-arm64                    0.0s
 => => transferring dockerfile: 37B                                           0.0s
 => [internal] load .dockerignore                                             0.0s
 => => transferring context: 2B                                               0.0s
 => ERROR [internal] load metadata for docker.io/library/alpine:3.17          1.1s
------
 > [internal] load metadata for docker.io/library/alpine:3.17:
------
failed to solve with frontend dockerfile.v0: failed to create LLB definition: unexpected status code [manifests 3.17]: 403 Forbidden

Based on this message, one workaround is to modify the Docker daemon configuration file and set buildkit to false:

"features": {
    
    
  "buildkit": false
},

After building the images for the different architectures, we can use the docker manifest command to create a manifest list and produce our own multi-architecture image. Since creating a manifest list currently requires the referenced images to exist in a remote registry, we first need to push the two images we just built:

$ docker push aneasystone/demo:v1-amd64
$ docker push aneasystone/demo:v1-arm64

Then use docker manifest create to create a manifest list, including our two images:

$ docker manifest create aneasystone/demo:v1 \
    --amend aneasystone/demo:v1-amd64 \
    --amend aneasystone/demo:v1-arm64
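If the platform information detected for an entry is missing or wrong, docker manifest annotate lets you set it explicitly before pushing (an optional step; the values here simply mirror the arm64 image above):

$ docker manifest annotate aneasystone/demo:v1 aneasystone/demo:v1-arm64 \
    --os linux --arch arm64 --variant v8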

Finally, push the manifest list to the registry and you're done:

$ docker manifest push aneasystone/demo:v1

You can use docker manifest inspect to view the manifest list information of this image:

$ docker manifest inspect aneasystone/demo:v1
{
   "schemaVersion": 2,
   "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
   "manifests": [
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 528,
         "digest": "sha256:170c4a5295f928a248dc58ce500fdb5a51e46f17866369fdcf4cbab9f7e4a1ab",
         "platform": {
            "architecture": "amd64",
            "os": "linux"
         }
      },
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 528,
         "digest": "sha256:3bb9c02263447e63c193c1196d92a25a1a7171fdacf6a29156f01c56989cf88b",
         "platform": {
            "architecture": "arm64",
            "os": "linux",
            "variant": "v8"
         }
      }
   ]
}

You can also see the architecture information of this image on Docker Hub:
[Screenshot: the architecture information of aneasystone/demo:v1 on Docker Hub]

Create multi-architecture images using docker buildx

As can be seen from the previous section, building a multi-architecture image with docker manifest takes roughly four steps:

1. use docker build to build the image for each architecture in turn;
2. use docker push to push the images to the registry;
3. use docker manifest create to create a manifest list that includes each of the images above;
4. use docker manifest push to push the manifest list to the registry.

Going through all of these steps for every multi-architecture build is quite tedious. This section introduces a more convenient approach: creating multi-architecture images with docker buildx.

buildx is a Docker CLI plugin that greatly extends the build capabilities of Moby BuildKit while keeping the user experience the same as docker build, so users can get started quickly. If your system is Windows or macOS, buildx is already built into Docker Desktop and requires no extra installation; if your system is Linux, you can install it from a DEB or RPM package, or install it manually. Please refer to the official documentation for the specific installation steps.

Using docker buildx to create a multi-architecture image only requires one simple line of command:

docker buildx build --platform=linux/amd64,linux/arm64 -t aneasystone/demo:v2 .

However, when executing this command for the first time, the following error will be reported:

ERROR: multiple platforms feature is currently not supported for docker driver. Please switch to a different driver (eg. "docker buildx create --use")

This is because the builder used by buildx defaults to the docker driver, which does not support building images for multiple platforms at the same time. We can use docker buildx create to create a builder with a different driver (refer to the buildx documentation for its four drivers and the features they support):

$ docker buildx create --use
nice_cartwright

The builder created this way uses the docker-container driver, and it has not been started yet:

$ docker buildx ls
NAME/NODE           DRIVER/ENDPOINT                 STATUS    BUILDKIT  PLATFORMS
nice_cartwright *   docker-container
  nice_cartwright0  npipe:////./pipe/docker_engine  inactive
default             docker
  default           default                         running   20.10.17  linux/amd64, linux/arm64, ...
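Incidentally, if you prefer a predictable builder name over an auto-generated one like nice_cartwright, docker buildx create accepts a --name parameter (a small optional variation; mybuilder is just an example name):

$ docker buildx create --name mybuilder --driver docker-container --use
mybuilder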

The builder is automatically started when docker buildx build is executed:

$ docker buildx build --platform=linux/amd64,linux/arm64 -t aneasystone/demo:v2 .
[+] Building 14.1s (7/7) FINISHED
 => [internal] booting buildkit                                                                                                            1.2s 
 => => starting container buildx_buildkit_nice_cartwright0                                                                                 1.2s 
 => [internal] load build definition from Dockerfile                                                                                       0.1s 
 => => transferring dockerfile: 78B                                                                                                        0.0s 
 => [internal] load .dockerignore                                                                                                          0.0s 
 => => transferring context: 2B                                                                                                            0.0s 
 => [linux/amd64 internal] load metadata for docker.io/library/alpine:3.17                                                                12.3s 
 => [linux/arm64 internal] load metadata for docker.io/library/alpine:3.17                                                                12.2s 
 => [linux/arm64 1/1] FROM docker.io/library/alpine:3.17@sha256:f271e74b17ced29b915d351685fd4644785c6d1559dd1f2d4189a5e851ef753a           0.2s 
 => => resolve docker.io/library/alpine:3.17@sha256:f271e74b17ced29b915d351685fd4644785c6d1559dd1f2d4189a5e851ef753a                       0.1s 
 => [linux/amd64 1/1] FROM docker.io/library/alpine:3.17@sha256:f271e74b17ced29b915d351685fd4644785c6d1559dd1f2d4189a5e851ef753a           0.2s 
 => => resolve docker.io/library/alpine:3.17@sha256:f271e74b17ced29b915d351685fd4644785c6d1559dd1f2d4189a5e851ef753a                       0.1s 
WARNING: No output specified with docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load

Running docker ps shows the started builder, which is actually a buildkitd service; docker buildx build automatically pulls the moby/buildkit:buildx-stable-1 image for us and runs it:

$ docker ps
CONTAINER ID   IMAGE                           COMMAND       CREATED         STATUS         PORTS     NAMES
e776505153c0   moby/buildkit:buildx-stable-1   "buildkitd"   7 minutes ago   Up 7 minutes             buildx_buildkit_nice_cartwright0

There is a WARNING line in the build output above, which says that we did not specify an output parameter, so the build result only exists in the build cache. If you want to push the built image to the registry, add the --push parameter:

$ docker buildx build --push --platform=linux/amd64,linux/arm64 -t aneasystone/demo:v2 .
[+] Building 14.4s (10/10) FINISHED
 => [internal] load build definition from Dockerfile                                                                                       0.1s 
 => => transferring dockerfile: 78B                                                                                                        0.0s 
 => [internal] load .dockerignore                                                                                                          0.0s 
 => => transferring context: 2B                                                                                                            0.0s 
 => [linux/arm64 internal] load metadata for docker.io/library/alpine:3.17                                                                 9.1s 
 => [linux/amd64 internal] load metadata for docker.io/library/alpine:3.17                                                                 9.0s 
 => [auth] library/alpine:pull token for registry-1.docker.io                                                                              0.0s 
 => [linux/arm64 1/1] FROM docker.io/library/alpine:3.17@sha256:f271e74b17ced29b915d351685fd4644785c6d1559dd1f2d4189a5e851ef753a           0.1s 
 => => resolve docker.io/library/alpine:3.17@sha256:f271e74b17ced29b915d351685fd4644785c6d1559dd1f2d4189a5e851ef753a                       0.1s 
 => [linux/amd64 1/1] FROM docker.io/library/alpine:3.17@sha256:f271e74b17ced29b915d351685fd4644785c6d1559dd1f2d4189a5e851ef753a           0.1s 
 => => resolve docker.io/library/alpine:3.17@sha256:f271e74b17ced29b915d351685fd4644785c6d1559dd1f2d4189a5e851ef753a                       0.1s 
 => exporting to image                                                                                                                     5.1s 
 => => exporting layers                                                                                                                    0.0s 
 => => exporting manifest sha256:4463076cf4b016381c6722f6cce481e015487b35318ccc6dc933cf407c212b11                                          0.0s 
 => => exporting config sha256:6057d58c0c6df1fbc55d89e1429ede402558ad4f9a243b06d81e26a40d31eb0d                                            0.0s 
 => => exporting manifest sha256:05276d99512d2cdc401ac388891b0735bee28ff3fc8e08be207a0ef585842cef                                          0.0s 
 => => exporting config sha256:86506d4d3917a7bb85cd3d147e651150b83943ee89199777ba214dd359d30b2e                                            0.0s 
 => => exporting manifest list sha256:a26956bd9bd966b50312b4a7868d8461d596fe9380652272db612faef5ce9798                                     0.0s 
 => => pushing layers                                                                                                                      3.0s 
 => => pushing manifest for docker.io/aneasystone/demo:v2@sha256:a26956bd9bd966b50312b4a7868d8461d596fe9380652272db612faef5ce9798          2.0s 
 => [auth] aneasystone/demo:pull,push token for registry-1.docker.io                                                                       0.0s 
 => [auth] aneasystone/demo:pull,push library/alpine:pull token for registry-1.docker.io
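As the earlier WARNING also hinted, if you only want the result in the local Docker engine rather than a registry, you can use --load instead; note that with the docker-container driver, --load handles a single platform at a time:

$ docker buildx build --load --platform=linux/amd64 -t aneasystone/demo:v2 .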

Visiting Docker Hub, you can see that our image has been successfully pushed to the registry:
[Screenshot: aneasystone/demo:v2 with both architectures on Docker Hub]

More

Use QEMU to run programs of different architectures

After building images of multiple architectures, we can use docker run to test:

$ docker run --rm -it aneasystone/demo:v1-amd64
Hello
 
$ docker run --rm -it aneasystone/demo:v1-arm64
WARNING: The requested image's platform (linux/arm64/v8) does not match the detected host platform (linux/amd64) and no specific platform was requested
Hello

A rather strange phenomenon appears here: our system is clearly not arm64, so why does the arm64 image run normally? Apart from one line of WARNING output, nothing seems unusual, and we can even use sh to enter the container and operate normally:

> docker run --rm -it aneasystone/demo:v1-arm64 sh
WARNING: The requested image's platform (linux/arm64/v8) does not match the detected host platform (linux/amd64) and no specific platform was requested
/ # ls
bin    dev    etc    home   lib    media  mnt    opt    proc   root   run    sbin   srv    sys    tmp    usr    var
/ #

But when we executed the ps command, we found some clues:

/ # ps aux
PID   USER     TIME  COMMAND
    1 root      0:00 {sh} /usr/bin/qemu-aarch64 /bin/sh sh
    8 root      0:00 ps aux

As you can see, the sh command we executed was actually translated by /usr/bin/qemu-aarch64. QEMU is a powerful emulator that can emulate ARM instructions on x86 machines. For more on how QEMU transparently executes cross-architecture programs, see this article: https://blog.lyle.ac.cn/2020/04/14/transparently-running-binaries-from-any-architecture-in-linux-with-qemu-and-binfmt-misc/
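On a Linux host where these binfmt_misc handlers are not registered yet, a commonly used way to install them is the tonistiigi/binfmt helper image (the same helper the buildx documentation points to; it needs a privileged container to register the handlers):

$ docker run --privileged --rm tonistiigi/binfmt --install all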

View the manifest information of the image

In addition to the docker manifest command, there are many other tools for viewing the manifest information of an image, for example:
● crane manifest
● manifest-tool
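For example, assuming crane (from google/go-containerregistry) is installed, viewing a manifest list looks like this:

$ crane manifest alpine:3.17    # prints the raw manifest list JSON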

Several output types supported by buildx

Above, we used the --push parameter to push the image to the registry:

$ docker buildx build --push --platform=linux/amd64,linux/arm64 -t aneasystone/demo:v2 .

This command is actually equivalent to:

$ docker buildx build --output=type=image,name=aneasystone/demo:v2,push=true --platform=linux/amd64,linux/arm64 .

Also equivalent to:

$ docker buildx build --output=type=registry,name=aneasystone/demo:v2 --platform=linux/amd64,linux/arm64 .

We specify the output type of the build through the --output parameter; an output type is also called an exporter. buildx supports the following exporters:
● image - export the build result as an image
● registry - export the build result as an image and push it to the registry
● local - export the built filesystem into a local directory
● tar - pack the built filesystem into a tar archive
● oci - build an image in OCI image format
● docker - build an image in Docker image format
● cacheonly - leave the build result in the build cache

The image and registry exporters were used above; they are generally used to push images to a remote registry. If we only want to build a local image without pushing it, we can use the oci or docker exporter. For example, the following command uses the docker exporter to export the build result as a local image:

$ docker buildx build --output=type=docker,name=aneasystone/demo:v2-amd64 --platform=linux/amd64 .

You can also use the docker exporter to export the build results into a tar file:

$ docker buildx build --output=type=docker,dest=./demo-v2-docker.tar --platform=linux/amd64 .

This tar file can be loaded via docker load:

$ docker load -i ./demo-v2-docker.tar

Because the Docker service I am running locally does not support the OCI image format, an error is reported when specifying type=oci without a destination file:

$ docker buildx build --output=type=oci,name=aneasystone/demo:v2-amd64 --platform=linux/amd64 .
ERROR: output file is required for oci exporter. refusing to write to console

However, we can export the OCI image into a tar package:

$ docker buildx build --output=type=oci,dest=./demo-v2-oci.tar --platform=linux/amd64 .

Unpacking the tar file shows what the standard image layout looks like:

$ mkdir demo-v2-docker && tar -C demo-v2-docker -xf demo-v2-docker.tar
$ tree demo-v2-docker
demo-v2-docker
├── blobs
│   └── sha256
│       ├── 4463076cf4b016381c6722f6cce481e015487b35318ccc6dc933cf407c212b11
│       ├── 6057d58c0c6df1fbc55d89e1429ede402558ad4f9a243b06d81e26a40d31eb0d
│       └── 8921db27df2831fa6eaa85321205a2470c669b855f3ec95d5a3c2b46de0442c9
├── index.json
├── manifest.json
└── oci-layout
 
2 directories, 6 files
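If you are curious, the entry point of this layout is index.json, which references the manifest blob by digest (jq is assumed to be installed here, purely for pretty-printing):

$ jq . demo-v2-docker/index.json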

Somewhat strangely, the tar file exported in OCI image format and the one exported in Docker image format turn out to be exactly the same; I don't know why.

If we don't care about the build result itself and just want to inspect the filesystem of the built image, for example to look at its directory structure or check whether a file we need is present, we can use the local or tar exporter. The local exporter exports the filesystem into a local directory:

$ docker buildx build --output=type=local,dest=./demo-v2 --platform=linux/amd64 .

The tar exporter exports a file system into a tar file:

$ docker buildx build --output=type=tar,dest=./demo-v2.tar --platform=linux/amd64 .

It is worth noting that this tar file is not in a standard image format, so we cannot load it with docker load; we can, however, load it with docker import. An image loaded this way contains only the filesystem, so instructions such as CMD or ENTRYPOINT from the Dockerfile will not take effect when you run it:

$ mkdir demo-v2 && tar -C demo-v2 -xf demo-v2.tar
$ ls demo-v2
bin    dev    etc    home   lib    media  mnt    opt    proc   root   run    sbin   srv    sys    tmp    usr    var
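A sketch of the docker import step mentioned above; --change is a standard docker import option that can re-attach the CMD the rootfs tar no longer carries, and the v2-rootfs tag is made up for this example:

$ docker import --change 'CMD ["echo", "Hello"]' ./demo-v2.tar aneasystone/demo:v2-rootfs
$ docker run --rm aneasystone/demo:v2-rootfs
Hello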

Insecure image registries

Above, we built a multi-architecture image in two ways and pushed it to the official Docker Hub registry. If you need to push to your own registry instead (for how to set one up, you can refer to my earlier blog post), you may run into some problems, because that registry may be insecure.

The first approach is to push with docker push directly. Before pushing, we need to modify the Docker configuration file /etc/docker/daemon.json and add the registry address to the insecure-registries configuration item:

{
  "insecure-registries" : ["192.168.1.39:5000"]
}

Then restart Docker.
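Concretely, the restart and push could look like this (assuming a systemd host and the 192.168.1.39:5000 registry from the configuration above; the retagged name is for illustration):

$ sudo systemctl restart docker
$ docker tag aneasystone/demo:v1-amd64 192.168.1.39:5000/demo:v1-amd64
$ docker push 192.168.1.39:5000/demo:v1-amd64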

The second approach is to push with docker buildx's image or registry exporter. This push is actually performed by buildkitd, so we need to tell buildkitd to tolerate this insecure registry. We first create a configuration file buildkitd.toml:

[registry."192.168.1.39:5000"]
  http = true
  insecure = true

For the detailed configuration options of buildkitd, please refer to the buildkitd documentation. Then use docker buildx create to recreate the builder:

$ docker buildx create --config=buildkitd.toml --use

With this, docker buildx can push images to the insecure registry.
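Putting it together, a multi-architecture build pushed to that private registry might look like this (same assumed address as above):

$ docker buildx build --push --platform=linux/amd64,linux/arm64 -t 192.168.1.39:5000/demo:v2 .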


Origin: blog.csdn.net/qq_50573146/article/details/135001952