How to build a Docker image in a GitLab CI pipeline

A common use case for CI pipelines is building Docker images to deploy applications. GitLab CI is a great option because it offers an integrated dependency proxy service, which means faster pipelines, and a built-in registry for storing built images.

In this guide, we'll show you how to set up a Docker build that uses both of the above features. The steps you need to take will vary slightly depending on the type of GitLab Runner executor you will be using for your pipeline. We'll cover the Shell and Docker executors below.

Build with Shell Executor

If you are using the shell executor, make sure Docker is installed on the machine that hosts the runner. The shell executor works by running regular shell commands, which use the docker binary on the Runner host.

Go to the Git repository of the project you want to build the image for. Create a .gitlab-ci.yml file at the root of the repository. This file defines the GitLab CI pipeline that will run when changes are pushed to the project.

Add the following to the file:

stages:
  - build

docker_build:
  stage: build
  script:
    - docker build -t example.com/example-image:latest .
    - docker push example.com/example-image:latest

This simple configuration is enough to demonstrate the basics of pipeline-driven image building. GitLab automatically clones your Git repository into the build environment, so running docker build will use your project's Dockerfile and make the repository's contents available as the build context.
Once the build is complete, you can docker push the image to your registry. Otherwise, it will only be available to the local Docker installation that ran the build. If you are using a private registry, run docker login first to supply the correct authentication details:

script:
  - docker login -u $DOCKER_REGISTRY_USER -p $DOCKER_REGISTRY_PASSWORD
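For reference, a complete shell-executor job for a private registry could look like the following sketch. The registry host registry.example.com is a placeholder; substitute your own registry and image names.

docker_build:
  stage: build
  script:
    # Authenticate against the (hypothetical) private registry before pushing
    - docker login -u $DOCKER_REGISTRY_USER -p $DOCKER_REGISTRY_PASSWORD registry.example.com
    - docker build -t registry.example.com/example-image:latest .
    - docker push registry.example.com/example-image:latest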

Define the values for the two credential variables by going to Settings > CI/CD > Variables in the GitLab web UI. Click the blue Add Variable button to create a new variable and assign a value. GitLab will make these variables available in the shell environment used to run the job.

Build with Docker Executor

GitLab Runner's Docker executor is typically used to provide a completely clean environment for each job. The job executes in an isolated container, so the docker binary on the Runner host will not be accessible.

The Docker executor gives you two possible strategies for building images: using Docker-in-Docker, or binding the host's Docker socket into the Runner's build environment. In either case, you use the official Docker container image as your job image so that docker commands are available in your CI scripts.

Docker-in-Docker

Using Docker-in-Docker (DinD) to build your images provides a completely isolated environment for each of your jobs. The Docker process that executes the build will be a child of the container that GitLab Runner creates on the host to run the CI job.

You need to register your GitLab Runner's Docker executor with privileged mode enabled to use DinD. Add the --docker-privileged flag when registering the runner:

sudo gitlab-runner register -n \
  --url https://example.com \
  --registration-token $GITLAB_REGISTRATION_TOKEN \
  --executor docker \
  --description "Docker Runner" \
  --docker-image "docker:20.10" \
  --docker-volumes "/certs/client" \
  --docker-privileged

In the CI pipeline, add the docker:dind image as a service. This makes Docker available as a separate container linked to the job's container. You will then be able to use the docker command in your job to build images with the Docker instance running in the docker:dind container.

services:
  - docker:dind

docker_build:
  stage: build
  image: docker:latest
  script:
    - docker build -t example-image:latest .
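Depending on your Docker and Runner versions, you may also need to configure TLS between the job container and the DinD service; this is what the --docker-volumes "/certs/client" flag used during registration supports. A commonly used set of variables for this, as described in GitLab's documentation, is shown below, though exact requirements vary by version:

variables:
  # Talk to the DinD service over its TLS port
  DOCKER_HOST: tcp://docker:2376
  # Tell the DinD service where to generate its certificates
  DOCKER_TLS_CERTDIR: "/certs"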

Using DinD gives you completely isolated builds that can't affect each other or your host. The main disadvantage is more complicated caching behavior: each job gets a brand-new environment, so previously built layers won't be available. You can partially work around this by pulling a previous version of the image before building, then using the --cache-from build flag to make the pulled image's layers available as a cache source:

docker_build:
  stage: build
  image: docker:latest
  script:
    - docker pull $CI_REGISTRY_IMAGE:latest || true
    - docker build --cache-from $CI_REGISTRY_IMAGE:latest -t $CI_REGISTRY_IMAGE:latest .
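For the pulled image to be useful as a cache source in later pipelines, the freshly built image also has to be pushed back to a registry the next job can reach. A sketch using the project registry variables covered later in this guide:

docker_build:
  stage: build
  image: docker:latest
  script:
    # Authenticate so the image can be pulled from and pushed to the project registry
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker pull $CI_REGISTRY_IMAGE:latest || true
    # Newer BuildKit-based builders may also need --build-arg BUILDKIT_INLINE_CACHE=1
    - docker build --cache-from $CI_REGISTRY_IMAGE:latest -t $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:latest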

Socket binding

Mounting the host's Docker socket into the job environment is an alternative when using the Docker executor. This gives you seamless caching and removes the need to add the docker:dind service to your CI configuration.

To set this up, register your Runner with a --docker-volumes flag that binds the host's Docker socket to /var/run/docker.sock inside the job container:

sudo gitlab-runner register -n \
  --url https://example.com \
  --registration-token $GITLAB_REGISTRATION_TOKEN \
  --executor docker \
  --description "Docker Runner" \
  --docker-image "docker:20.10" \
  --docker-volumes /var/run/docker.sock:/var/run/docker.sock

Now jobs that run using the docker image will be able to use the docker binary as normal. Operations will actually occur on your host: containers started by your builds become siblings of the job container rather than children of it.

This is effectively similar to using the shell executor with the host's Docker installation. Images reside on the host machine, so regular docker build layer caching works seamlessly.
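With the socket bound, the job definition looks the same as the DinD variant minus the services block. A minimal sketch:

docker_build:
  stage: build
  image: docker:latest
  script:
    # Uses the host's Docker daemon via the mounted socket
    - docker build -t example-image:latest .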

While this approach can mean better performance and less configuration, without the limitations of DinD, it comes with its own unique set of problems. The most prominent of these are the security implications: jobs can execute arbitrary Docker commands on your Runner host, so a malicious project in your GitLab instance could run something like docker run -it malicious-image:latest or docker rm -f $(docker ps -aq) with devastating consequences.

GitLab also warns that socket binding can cause problems when jobs run concurrently. This happens if a job relies on containers being created with a specific name: if two instances of the job run in parallel, the second will fail because the container name already exists on your host.
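One way to reduce the risk of name collisions, if your scripts do create named containers, is to include the job's unique ID in the name. A hypothetical example using the predefined $CI_JOB_ID variable (the container and image names are placeholders):

script:
  # "test-container" and "example-image" are illustrative names only
  - docker run -d --name test-container-$CI_JOB_ID example-image:latest
  - docker rm -f test-container-$CI_JOB_ID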

If you expect either of these issues to be troublesome, you should consider DinD instead. While DinD is generally no longer recommended, it can make more sense for public-facing GitLab instances that run concurrent CI jobs.

Push images to GitLab's registry

GitLab projects can optionally integrate a registry, which you can use to store images. You can view the contents of the registry by navigating to Packages & Registries > Container Registry in the project sidebar. If you don't see this link, enable the registry by going to Settings > General > Visibility, Projects, Features & Permissions and activating the Container Registry toggle.
GitLab automatically sets environment variables in your CI jobs, allowing you to reference your project's container registry. Adjust the script section to log into the registry and push your image:

script:
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
  - docker build -t $CI_REGISTRY_IMAGE:latest .
  - docker push $CI_REGISTRY_IMAGE:latest

GitLab generates a secure set of credentials for each of your CI jobs. The $CI_JOB_TOKEN environment variable contains an access token the job can use to connect to the registry as the gitlab-ci-token user. The registry server URL is available as $CI_REGISTRY.

The final variable, $CI_REGISTRY_IMAGE, provides the full path to your project's container registry. This is a suitable base for your image tags. You can extend this variable to create sub-repositories, such as $CI_REGISTRY_IMAGE/production/api:latest.
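It is also common to tag each build with something unique, in addition to latest, so that older images remain addressable. A sketch using the predefined $CI_COMMIT_SHORT_SHA variable; adapt the tagging scheme to your needs:

script:
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
  # Tag the same build with both "latest" and the short commit SHA
  - docker build -t $CI_REGISTRY_IMAGE:latest -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .
  - docker push $CI_REGISTRY_IMAGE:latest
  - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA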

Other Docker clients can pull images from the registry by authenticating with an access token. You can generate these on your project's Settings > Access Tokens screen. Add the read_registry scope, then use the displayed credentials to docker login to your project's registry.
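From another machine, logging in with such a token and pulling an image looks roughly like this; the registry host and repository path are placeholders that depend on your GitLab instance and project:

docker login registry.example.com -u <token-name> -p <access-token>
docker pull registry.example.com/example-group/example-project:latest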

Using GitLab's dependency proxy

GitLab's Dependency Proxy provides a caching layer for the upstream images you pull from Docker Hub. It only fetches image content when the image actually changes, which helps you stay within Docker Hub's rate limits and also improves build performance.
Activate the Dependency Proxy at the GitLab group level by going to Settings > Packages & Registries > Dependency Proxy. Once it's enabled, prefix image references in your .gitlab-ci.yml file with $CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX to pull them through the proxy:

docker_build:
  stage: build
  image: $CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX/docker:latest
  services:
    - name: $CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX/docker:dind
      alias: docker

That's all there is to it! GitLab Runner automatically logs in to the dependency proxy registry, so there is no need to manually supply your credentials.

GitLab will now cache your images, improving performance as well as resilience to network outages. Note that the services definition has to be adjusted too: environment variables don't work with the inline form used earlier, so the full image must be specified in the name field, and an alias added so the service can still be referenced as docker in your script commands.

While this sets up the proxy for images used directly by job stages, more work is needed to support the base images referenced in your Dockerfile. A regular instruction like this won't go through the proxy:

FROM ubuntu:latest

To add this final piece, use Docker build arguments to make the dependency proxy URL available while Docker steps through the Dockerfile:

ARG GITLAB_DEPENDENCY_PROXY
FROM ${GITLAB_DEPENDENCY_PROXY}/ubuntu:latest

Then modify your docker build command to define the value of the variable:

script:
  - >
    docker build
    --build-arg GITLAB_DEPENDENCY_PROXY=${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}
    -t example-image:latest .

Now your base image will also be pulled through the dependency proxy.
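Putting the pieces together, a complete job that pulls both its job image and its base images through the dependency proxy, then pushes the result to the project registry, might look roughly like this sketch (TLS variables and caching flags, discussed earlier, are omitted and depend on your setup):

stages:
  - build

docker_build:
  stage: build
  image: $CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX/docker:latest
  services:
    - name: $CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX/docker:dind
      alias: docker
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - >
      docker build
      --build-arg GITLAB_DEPENDENCY_PROXY=${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}
      -t $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:latest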

Summary

Docker image builds are easy to integrate into your GitLab CI pipeline. After the initial Runner configuration, docker build and docker push commands in your job's script section are all you need to create an image from the Dockerfile in your repository. GitLab's built-in container registry gives you private storage for your project's images.
Beyond basic builds, it's worth integrating GitLab's dependency proxy to improve performance and avoid hitting Docker Hub rate limits. You should also check the security of your installation by evaluating whether the method you choose allows untrusted projects to run commands on your Runner host. While it has its own problems, Docker-in-Docker is the safest approach when your GitLab instance is publicly accessible or accessed by a large user base.
