Welcome to my GitHub
https://github.com/zq2599/blog_demos
Content: a categorized index of all my original articles, with companion source code, covering Java, Docker, Kubernetes, DevOps, and more.
About this article
This article shows how to set up a distributed cache for GitLab Runner in a Kubernetes environment and how to use that cache in pipeline scripts. Before reading on, it helps to have some familiarity with GitLab CI; ideally you have read, or even written, pipeline scripts yourself;
About GitLab Runner
As shown in the figure below, after a developer pushes code to GitLab, a CI script can be triggered to run on GitLab Runner. By writing CI scripts we can automate many tasks: compiling, building, producing Docker images, pushing to a private registry, and so on:
Gitlab Runner's distributed cache
- The official documentation covers distributed runner caching in detail: https://docs.gitlab.com/runner/configuration/autoscale.html#distributed-runners-caching
- The following is an official distributed cache example (config.toml file):
[[runners]]
  limit = 10
  executor = "docker+machine"
  [runners.cache]
    Type = "s3"
    Path = "path/to/prefix"
    Shared = false
    [runners.cache.s3]
      ServerAddress = "s3.example.com"
      AccessKey = "access-key"
      SecretKey = "secret-key"
      BucketName = "runner"
      Insecure = false
- Next, let's work through the distributed cache configuration hands-on;
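For reference, here is a sketch of what a config.toml cache section could look like once pointed at the minio service deployed later in this article. The address, keys, and bucket name come from the steps below; Insecure is set to true because that minio instance serves plain HTTP. Treat this as illustrative only: with the helm deployment used here, the file is generated from values.yaml rather than edited by hand.

```toml
[[runners]]
  executor = "kubernetes"
  [runners.cache]
    Type = "s3"
    Path = "gitlab_runner"
    Shared = true
    [runners.cache.s3]
      # Address, keys, and bucket match the minio instance deployed below
      ServerAddress = "192.168.50.43:9000"
      AccessKey = "access"
      SecretKey = "secret123456"
      BucketName = "runner"
      # true = talk to minio over plain HTTP (no TLS in this setup)
      Insecure = true
```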
Environment and version information
This walkthrough involves several services; their versions are listed below for reference:
- GitLab: Community Edition 13.0.6
- GitLab Runner: 13.1.0
- Kubernetes: 1.15.3
- Harbor: 1.1.3
- Minio: 2020-06-18T02:23:35Z
- Helm: 2.16.1
Deploy distributed cache
- Minio is S3-compatible object storage and is the cache backend recommended in the official documentation, as shown below:
- Minio is deployed as an independent service; here I use Docker to deploy it on the server 192.168.50.43:
- Prepare two directories on the server, one for minio's data and one for its configuration, by executing the following commands:
mkdir -p /var/services/homes/zq2599/minio/gitlab_runner \
&& chmod -R 777 /var/services/homes/zq2599/minio/gitlab_runner \
&& mkdir -p /var/services/homes/zq2599/minio/config \
&& chmod -R 777 /var/services/homes/zq2599/minio/config
- Run the following docker command to create the minio service, exposing port 9000 and specifying the access key (at least 3 characters) and secret key (at least 8 characters):
sudo docker run -p 9000:9000 --name minio \
-d --restart=always \
-e "MINIO_ACCESS_KEY=access" \
-e "MINIO_SECRET_KEY=secret123456" \
-v /var/services/homes/zq2599/minio/gitlab_runner:/gitlab_runner \
-v /var/services/homes/zq2599/minio/config:/root/.minio \
minio/minio server /gitlab_runner
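Before moving on, it can be handy to confirm the container is actually serving requests. MinIO exposes an unauthenticated liveness endpoint at /minio/health/live; below is a small stdlib-only Python check (the helper name is mine, and it assumes 192.168.50.43:9000 is reachable from your machine):

```python
import urllib.request
import urllib.error


def minio_alive(base_url: str, timeout: float = 3.0) -> bool:
    """Return True if MinIO's liveness endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(f"{base_url}/minio/health/live", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, timeout, etc. all mean "not alive"
        return False

# Usage against the server started above:
#   minio_alive("http://192.168.50.43:9000")  # True once minio is up
```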
- Open the service in a browser, enter the access key and secret key, and log in:
- As shown in the figure below, click the icon in the red box to create a bucket named runner:
- At this point minio is ready; next, configure GitLab Runner to use it;
Configure cache on GitLab Runner
- My GitLab Runner is deployed with helm, so I modify helm's values configuration. If you don't use helm, you can still follow the steps below and edit the config.toml file directly;
- After downloading and unpacking the GitLab Runner chart with helm, the configuration files are visible as follows:
- Open values.yaml and locate the cache configuration. As shown in the figure below, the cache entry is currently just an empty pair of braces, and all the remaining settings are commented out:
- The modified cache configuration is shown in the figure below: the original braces (red box 1) are removed, the comment markers in red box 2 are removed with the content left unchanged, red box 3 is filled in with minio's access address, and the comment marker in red box 4 is removed with its content left unchanged:
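Since those edits are only shown in screenshots, here is a sketch of what the resulting cache section of values.yaml could look like. The key names follow the gitlab-runner chart of that generation, and the server address and bucket name come from the minio deployment above; verify the exact keys against your chart version, and set s3CacheInsecure according to whether your minio serves plain HTTP:

```yaml
runners:
  cache:
    ## General settings
    cacheType: s3
    cachePath: "gitlab_runner"
    cacheShared: true
    ## S3 settings: the minio service deployed earlier
    s3ServerAddress: 192.168.50.43:9000
    s3BucketName: runner
    s3CacheInsecure: true
    ## Name of a Kubernetes secret holding the keys `accesskey` and `secretkey`
    secretName: s3access
```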
- The s3CacheInsecure parameter in red box 4 controls the protocol used to reach minio: true means plain HTTP, false means HTTPS. It turns out, however, that this setting does not take effect in the current version of the chart: at run time the cache is still accessed over HTTPS. The fix is to modify the _cache.tpl file in the templates directory: open the file and find the content in the red box below:
- Replace the content in the red box above with that in the red box below; that is, delete the original if judgment and its matching end line, and assign CACHE_S3_INSECURE directly:
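In rough terms, the edit turns a conditional block into an unconditional assignment. A sketch of the before/after (the exact template text may differ slightly between chart versions):

```
# Before: CACHE_S3_INSECURE is only rendered when the condition holds
{{ if .Values.runners.cache.s3CacheInsecure }}
- name: CACHE_S3_INSECURE
  value: "true"
{{ end }}

# After: always render it, taking the value straight from values.yaml
- name: CACHE_S3_INSECURE
  value: {{ .Values.runners.cache.s3CacheInsecure | quote }}
```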
- The above covers only the cache-related configuration; please adjust the other helm settings for deploying GitLab Runner yourself. Once everything is set, return to the directory containing values.yaml and run the following command to create GitLab Runner:
helm install \
--name-template gitlab-runner \
-f values.yaml . \
--namespace gitlab-runner
- Once the configuration is complete and GitLab Runner has started successfully, let's verify it;
Verification
- In the GitLab repository, add a file named .gitlab-ci.yml with the following content:
# image used to execute the jobs
image: busybox:latest
# the pipeline has two stages
stages:
  - build
  - test
# define a global cache: the key is derived from the branch, and the vendor directory is cached
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - vendor/
before_script:
  - echo "Before script section"
after_script:
  - echo "After script section"
build1:
  stage: build
  tags:
    - k8s
  script:
    - echo "writing content into the cache"
    - mkdir -p vendor
    - echo "build" > vendor/hello.txt
test1:
  stage: test
  tags:
    - k8s
  script:
    - echo "reading content from the cache"
    - cat vendor/hello.txt
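The cache key above comes from CI_COMMIT_REF_SLUG. Per GitLab's documentation, the slug is the ref name lowercased, shortened to 63 bytes, with everything except 0-9 and a-z replaced by '-', and with leading and trailing dashes removed. A small Python approximation (the helper name is mine):

```python
import re


def ref_slug(ref: str) -> str:
    """Approximate GitLab's CI_COMMIT_REF_SLUG: lowercase the ref,
    replace every character outside [a-z0-9] with '-', truncate to
    63 characters, and strip leading/trailing dashes."""
    slug = re.sub(r"[^a-z0-9]", "-", ref.lower())
    return slug[:63].strip("-")


print(ref_slug("feature/Add-Cache"))  # -> feature-add-cache
```

So pushes to the same branch reuse one cache entry, while different branches get separate caches.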
- Submit the script above to GitLab. As shown in the figure below, the pipeline is triggered; its status is pending because it is waiting for the runner to create the executor pod:
- It then executes successfully; click through to see the result:
- Click the build1 icon to see that job's output:
- Click the test1 icon to see the corresponding console output: the data written by the previous job is read successfully:
- This shows that the distributed cache has taken effect, and the cache feature of the pipeline syntax can be used even when jobs run on different machines;