Table of contents
local: import local configuration
file: import configuration from another project
template: import an official template
remote: import a remote configuration
extends: inherit job configuration
needs: parallel stages
needs lets jobs execute out of order: instead of waiting for the whole previous stage to finish, a job can start as soon as the jobs it depends on are done, so jobs from multiple stages can run concurrently.
stages:
  - build
  - test
  - deploy

module-a-build:
  stage: build
  script:
    - echo "hello3a"
    - sleep 10

module-b-build:
  stage: build
  script:
    - echo "hello3b"
    - sleep 10

module-a-test:
  stage: test
  script:
    - echo "hello3a"
    - sleep 10
  needs: ["module-a-build"]

module-b-test:
  stage: test
  script:
    - echo "hello3b"
    - sleep 10
  needs: ["module-b-build"]
If needs: points to a job that does not exist, or to a job that was not created in the pipeline because of an only/except rule, a YAML error occurs when the pipeline is created.
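For illustration (hypothetical job names, not from the original example), a configuration like the following fails to create a branch pipeline, because the needed job only exists in tag pipelines:

build-docs:
  stage: build
  script: echo "build docs"
  only:
    - tags                     # job is only created for tag pipelines

test-docs:
  stage: test
  script: echo "test docs"
  needs: ["build-docs"]        # on a branch pipeline this job was never created, so pipeline creation fails with a YAML error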
The number of jobs a single job may list in needs: is temporarily limited by the ci_dag_limit_needs feature flag: at most 10 needs per job when the flag is enabled (the default), and at most 50 when it is disabled.
Feature::disable(:ci_dag_limit_needs) # 50
Feature::enable(:ci_dag_limit_needs) # 10
Artifact downloads
When using needs, artifact download can be controlled with artifacts: true or artifacts: false. If not specified, it defaults to true.
module-a-test:
  stage: test
  script:
    - echo "hello3a"
    - sleep 10
  needs:
    - job: "module-a-build"
      artifacts: true
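Conversely, a job can declare the dependency but skip the artifact download. A small sketch based on the module-b jobs above:

module-b-test:
  stage: test
  script:
    - echo "hello3b"
  needs:
    - job: "module-b-build"
      artifacts: false         # wait for module-b-build, but do not download its artifacts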
Cross-pipeline artifact downloads within the same project: needs can also download artifacts from a different pipeline of the current project by setting the project: keyword to the path of the current project and specifying a ref. In the example below, build_job downloads the artifacts of the latest successful build-1 job on the other-ref ref:
build_job:
  stage: build
  script:
    - ls -lhR
  needs:
    - project: group/same-project-name
      job: build-1
      ref: other-ref
      artifacts: true
Downloading artifacts from jobs run with parallel: is not supported.
include
include allows importing external YAML files with a .yml or .yaml extension. Merging lets you customize and override the included CI/CD configuration with local definitions: jobs with the same name are merged, and parameter values in the main .gitlab-ci.yml take precedence over those in the included file.
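A minimal sketch of the merge behavior (hypothetical file name): the included file defines deploy-demo, and the main .gitlab-ci.yml overrides its script while keeping the rest of the job.

included.yml (hypothetical):

deploy-demo:
  stage: deploy
  script:
    - echo "deploy from included file"

.gitlab-ci.yml:

include:
  local: 'included.yml'

deploy-demo:
  # same job name: merged with the included definition, this script wins
  script:
    - echo "deploy from main file"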
local: import local configuration
Imports a file from the same repository, referenced with a full path relative to the repository root. The included file must be on the same branch as the configuration file that references it.
ci/localci.yml defines a deploy job:
stages:
  - deploy

deployjob:
  stage: deploy
  script:
    - echo 'deploy'
.gitlab-ci.yml imports the local CI file 'ci/localci.yml'.
include:
  local: 'ci/localci.yml'

stages:
  - build
  - test
  - deploy

buildjob:
  stage: build
  script: ls

testjob:
  stage: test
  script: ls
file: import configuration from another project
Include a file from another project:
include:
  - project: demo/demo-java-service
    ref: master
    file: '.gitlab-ci.yml'
template: import an official template
Only templates officially provided by GitLab can be used; they live under lib/gitlab/ci/templates on the master branch of the GitLab.org/GitLab project.
include:
  - template: Auto-DevOps.gitlab-ci.yml
remote: import a remote configuration
Includes a file from another location over HTTP/HTTPS, referenced by its full URL. The remote file must be publicly accessible through a plain GET request, because authentication schemes in remote URLs are not supported.
include:
  - remote: 'https://gitlab.com/awesome-project/raw/master/.gitlab-ci-template.yml'
extends: inherit job configuration
A job can inherit the configuration of a template job:
stages:
  - test

variables:
  RSPEC: 'test'

.tests:
  script: echo "mvn test"
  stage: test
  only:
    refs:
      - branches

testjob:
  extends: .tests
  script: echo "mvn clean test"
  only:
    variables:
      - $RSPEC
After the merge:
testjob:
  stage: test
  script: mvn clean test
  only:
    variables:
      - $RSPEC
    refs:
      - branches
extends & include
extends can also be combined with include: a job in .gitlab-ci.yml can extend a hidden template job defined in an included file. In the example below, newbuildjob extends the .template job defined in the included file and inherits its stage, script, and only configuration.
Define aa.yml in the repository root:
deployjob:
  stage: deploy
  script:
    - echo 'deploy'
  only:
    - dev

.template:
  stage: build
  script:
    - echo "build"
  only:
    - master
include:
  local: 'aa.yml'
stages:
  - test
  - build
  - deploy

variables:
  RSPEC: 'test'

.tests:
  script: echo "mvn test"
  stage: test
  only:
    refs:
      - branches

testjob:
  extends: .tests
  script: echo "mvn clean test"
  only:
    variables:
      - $RSPEC

newbuildjob:
  script:
    - echo "123"
  extends: .template
trigger: trigger a pipeline
When GitLab starts a job defined with trigger, a downstream pipeline is created. trigger can be used to create multi-project pipelines and child pipelines. Using trigger together with when: manual causes an error.
Multi-project pipelines: pipelines are set up across multiple projects so that a pipeline in one project can trigger a pipeline in another project (typical for microservice architectures).
Parent-child pipelines: a pipeline in the same project triggers a set of child pipelines that run concurrently. The child pipelines still execute their own jobs in stage order, but they do not have to wait for unrelated jobs in the parent pipeline to complete before continuing.
Multi-project pipeline
When the previous stage completes, trigger the master pipeline of the demo/demo-java-service project. The user who creates the upstream pipeline must be able to create pipelines in the downstream project; if they do not have that permission, the staging job is marked as failed.
staging:
  variables:
    ENVIRONMENT: staging
  stage: deploy
  trigger:
    project: demo/demo-java-service
    branch: master
    strategy: depend
The project keyword specifies the full path of the downstream project, and the branch keyword specifies which branch of that project to trigger. Use the variables keyword to pass variables to the downstream pipeline; global variables are also passed down. If a variable with the same name is defined in both the upstream and the downstream project, the value defined in the upstream project takes precedence. By default, the trigger job is marked with the success status as soon as the downstream pipeline is created; with strategy: depend, the trigger job instead mirrors the status of the triggered pipeline (see the sketch after the next paragraph).
You can view the triggered pipeline's information in the downstream project. In the staging example above, because strategy: depend is set, the job is marked as successful only after the downstream pipeline completes successfully.
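As a small sketch (hypothetical downstream job, not from the original article), a job in the demo/demo-java-service pipeline can read the variable passed down from the upstream trigger job like any other CI/CD variable:

deploy-service:
  stage: deploy
  script:
    - echo "Deploying to $ENVIRONMENT"   # ENVIRONMENT was set in the upstream staging job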
Parent-child pipeline
Create the child pipeline file ci/child01.yml:
stages:
  - build

child-a-build:
  stage: build
  script:
    - echo "hello3a"
    - sleep 10
Trigger the child pipeline in the parent pipeline:
staging2:
  variables:
    ENVIRONMENT: staging
  stage: deploy
  trigger:
    include: ci/child01.yml
    strategy: depend
image
Prepare the environment
For this part of the syntax we need a runner registered with the docker executor. You can register one with a command like the following (adjust to your environment):
sudo gitlab-runner register \
--non-interactive \
--url "http://192.168.170.133/" \
--registration-token "GR1348941sUxNyye1qD4HcTSW-TMw" \
--executor "docker" \
--docker-image alpine:latest \
--description "docker-runner" \
--maintenance-note "Free-form maintainer notes about this runner" \
--tag-list "docker,aws" \
--run-untagged="true" \
--locked="false" \
--access-level="not_protected"
Modify the docker-runner pull strategy:
[root@run01 /home/gitlab-runner]# vim /etc/gitlab-runner/config.toml
...
[[runners]]
  name = "docker-runner"
  url = "http://192.168.170.133/"
  id = 3
  token = "F9JvXdw6AW4zoNsxRGaD"
  token_obtained_at = 2023-07-19T07:06:45Z
  token_expires_at = 0001-01-01T00:00:00Z
  executor = "docker"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    pull_policy = "if-not-present"   # just add this line
    tls_verify = false
    image = "alpine:latest"
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
Finally, install Docker on the GitLab Runner machine (see the CentOS 7 Docker installation tutorial from the earlier Docker basics article).
image syntax: by default a base image is specified when the runner is registered. Keep in mind that with a docker-type executor, every job runs inside a container. If image is specified globally, all jobs create containers from that image and run in them. If it is not specified globally, GitLab checks whether the job specifies one; if it does, the job's container is created from that image, otherwise the default image given when the runner was registered is used.
#image: maven:3.6.3-jdk-8

before_script:
  - ls

build:
  image: maven:3.6.3-jdk-8
  stage: build
  tags:
    - docker
  script:
    - ls
    - sleep 2
    - echo "mvn clean "
    - sleep 10

deploy:
  stage: deploy
  tags:
    - docker
  script:
    - echo "deploy"
services
Specifies an additional Docker image to run during the job, linked to the Docker image defined by the image keyword, so the service can be accessed during the build. A service image can run any application, but the most common use case is a database container such as mysql: reusing an existing mysql image and running it as an extra container is easier and faster than installing MySQL every time the project is built.
services:
  - name: mysql:latest
    alias: mysql-1
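A minimal sketch of a job that actually talks to the service (the variables and the connection check are illustrative assumptions, not from the original article); the service container is reachable from the job container under its alias:

test-with-db:
  image: mysql:latest                 # reuse the mysql client shipped in the image
  services:
    - name: mysql:latest
      alias: mysql-1
  variables:
    MYSQL_ROOT_PASSWORD: "root123"    # required by the official mysql image to initialize
  script:
    - mysql -h mysql-1 -uroot -proot123 -e "SELECT VERSION();"   # connect via the service alias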
environment
Declares the name and URL of the environment a job deploys to; the deployment can then be viewed directly on GitLab's Environments page, which is very convenient.
deploy to production:
  stage: deploy
  script: git push production HEAD:master
  environment:
    name: production
    url: https://prod.example.com
inherit
Controls whether a job uses globally defined variables (variables) and defaults (default). Set each to true or false; the default is true.
inherit:
  default: false
  variables: false
Alternatively, inherit only some of the defaults or variables by listing them:
inherit:
  default:
    - parameter1
    - parameter2
  variables:
    - VARIABLE1
    - VARIABLE2
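A minimal sketch of how inherit interacts with global configuration (hypothetical job and variable names): the first job ignores the global defaults and variables, while the second keeps them.

default:
  image: alpine:latest

variables:
  APP_ENV: "staging"

job-without-globals:
  inherit:
    default: false                  # do not use the global image; the runner's default image applies
    variables: false                # APP_ENV is not available here
  script:
    - echo "runs without global defaults or variables"

job-with-globals:
  script:
    - echo "APP_ENV is $APP_ENV"    # inherits the global image and variables by default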