Detailed explanation of Jenkins pipeline

Source: u.kubeinfo.cn/ozoxBB



What is a pipeline

Jenkins has two types of pipelines: declarative pipelines (声明式流水线) and scripted pipelines (脚本化流水线). Scripted pipelines use the script syntax from older versions of Jenkins; newer versions of Jenkins recommend declarative pipelines. This document mainly describes declarative pipelines.

Declarative pipeline

In declarative pipeline syntax, the pipeline process is defined inside a pipeline {} block, which contains all the work performed in the entire pipeline, for example:

Parameter description:
  • agent any: Execute the pipeline or any of its stages on any available agent; that is, it specifies where the pipeline runs. A specific node can also be targeted.

  • stage: Defines one phase of the pipeline (equivalent to a stage), such as the Build, Test, and Deploy stages shown below. Stage names are chosen to fit the actual situation and are not fixed.

  • steps: The concrete steps executed within a given stage.

//Jenkinsfile (Declarative Pipeline)
pipeline {
  agent any
  stages {
    stage('Build') {
      steps {
        echo 'Build'
      }
    }
    stage('Test') {
      steps {
        echo 'Test'
      }
    }
    stage('Deploy') {
      steps {
        echo 'Deploy'
      }
    }
  }
}
Scripted pipeline

In scripted pipeline syntax, one or more node blocks perform the core work throughout the pipeline.

Parameter Description:
  • node: Execute the pipeline or any of its stages on any available agent; a specific node can also be targeted.

  • stage: Has the same meaning as in declarative pipelines: it defines a stage of the pipeline. stage blocks are optional in scripted syntax, but using them displays each stage's subset of tasks clearly in the Jenkins UI.

//Jenkinsfile (Scripted Pipeline)
node {
  stage('Build') {
    echo 'Build'
  }
  stage('Test') {
    echo 'Test'
  }
  stage('Deploy') {
    echo 'Deploy'
  }
}

Declarative pipeline

Declarative pipelines must be contained in a pipeline block, for example:

pipeline {
  /* insert Declarative Pipeline here */
}

Basic statements and expressions in a declarative pipeline follow Groovy syntax, with the following exceptions:

  • The top level of the pipeline must be a block, i.e. pipeline {}

  • Statements do not need semicolons as delimiters, but each statement must be on its own line

  • Blocks can only consist of Sections, Directives, Steps, or assignment statements

  • Property reference statements are treated as parameterless method calls; for example, input is treated as input()
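To illustrate these rules, here is a minimal sketch of a syntactically valid declarative pipeline (the stage content is purely illustrative):

```groovy
pipeline {                // the top level must be a pipeline { } block
  agent any
  stages {
    stage('Demo') {
      steps {
        echo 'one'        // one statement per line, no semicolons
        echo 'two'
      }
    }
  }
}
```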

Sections

Sections in a declarative pipeline are not keywords or instructions, but blocks of code containing one or more of Agent, Stages, Post, Directives, and Steps.

1.Agent

The agent section specifies where the steps and commands of the entire pipeline, or of a specific stage, are executed. It must be defined at the top level of the pipeline block; it can also be defined again per stage, where it is optional.

any

Executes the pipeline on any available agent. Configuration syntax:

pipeline {
  agent any
}
none

Indicates that the pipeline has no global agent; when the top-level agent is none, each stage section must define its own agent. Configuration syntax:

pipeline {
  agent none
  stages {
    stage('Stage For Build'){
      agent any
    }
  }
}
label

Selects a specific node by label to execute the pipeline, for example agent { label 'my-defined-label' }. Labels must be configured on the nodes in advance.

pipeline {
  agent none
  stages {
    stage('Stage For Build'){
      agent { label 'role-master' }
      steps {
        echo "role-master"
      }
    }
  }
}
node

Similar to the label configuration, but allows additional settings, such as customWorkspace (sets the default working directory):

pipeline {
  agent none
  stages {
    stage('Stage For Build'){
      agent {
        node {
          label 'role-master'
          customWorkspace "/tmp/zhangzhuo/data"
        }
      }
      steps {
        sh "echo role-master > 1.txt"
      }
    }
  }
}
dockerfile

Executes the pipeline or stage in a container built from a Dockerfile included in the source code. The corresponding agent is written as follows:

agent {
   dockerfile {
     filename 'Dockerfile.build'  // Dockerfile file name
     dir 'build'                  // working directory for the image build
     label 'role-master'          // node to run on, selected by label
     additionalBuildArgs '--build-arg version=1.0.2' // build arguments
   }
}
docker

Similar to dockerfile, but the docker field specifies an external image directly, which saves build time. For example, a Maven image can be used for packaging, and args can be specified at the same time:

agent{
  docker{
    image '192.168.10.15/kubernetes/alpine:latest'   // image address
    label 'role-master'      // node to run on, selected by label
    args '-v /tmp:/tmp'      // arguments for starting the container
  }
}
kubernetes

The Kubernetes-related plug-in needs to be deployed. Official documentation:

https://github.com/jenkinsci/kubernetes-plugin/

Jenkins also supports using Kubernetes to create slaves, often called dynamic slaves. A configuration example follows:

  • cloud: The name configured under Configure Clouds; assigns the agent to one of the Kubernetes clusters.

  • slaveConnectTimeout: Connection timeout.

  • yaml: The Pod definition file. The configuration of the jnlp container must be left unchanged; the remaining containers are specified according to your own needs.

  • workspaceVolume: The persistent working directory of Jenkins.

  • persistentVolumeClaimWorkspaceVolume: Mounts an existing PVC.
workspaceVolume persistentVolumeClaimWorkspaceVolume(claimName: "jenkins-agent", mountPath: "/", readOnly: "false")
  • nfsWorkspaceVolume: Mounts an NFS server directory.
workspaceVolume nfsWorkspaceVolume(serverAddress: "192.168.10.254", serverPath: "/nfs", readOnly: "false")
  • dynamicPVC: Dynamically requests a PVC and deletes it after the task finishes.
workspaceVolume dynamicPVC(storageClassName: "nfs-client", requestsSize: "1Gi", accessModes: "ReadWriteMany")
  • emptyDirWorkspaceVolume: A temporary directory, deleted along with the Pod after the task finishes. Its main purpose is to share the Jenkins working directory among a task's containers.
workspaceVolume emptyDirWorkspaceVolume()
  • hostPathWorkspaceVolume: Mounts a local directory of the node. Pay attention to permissions: create the directory with 777 permissions first, because directories created by kubelet default to 755, which leaves other users without write permission and makes the pipeline fail.
workspaceVolume hostPathWorkspaceVolume(hostPath: "/opt/workspace", readOnly: false)
Example
agent {
  kubernetes {
      cloud 'kubernetes'
      slaveConnectTimeout 1200
      workspaceVolume emptyDirWorkspaceVolume()
      yaml '''
kind: Pod
metadata:
  name: jenkins-agent
spec:
  containers:
  - args: [\'$(JENKINS_SECRET)\', \'$(JENKINS_NAME)\']
    image: '192.168.10.15/kubernetes/jnlp:alpine'
    name: jnlp
    imagePullPolicy: IfNotPresent
  - command:
      - "cat"
    image: "192.168.10.15/kubernetes/alpine:latest"
    imagePullPolicy: "IfNotPresent"
    name: "date"
    tty: true
  restartPolicy: Never
'''
  }
}
2.Agent configuration example

Kubernetes example

pipeline {
  agent {
    kubernetes {
      cloud 'kubernetes'
      slaveConnectTimeout 1200
      workspaceVolume emptyDirWorkspaceVolume()
      yaml '''
kind: Pod
metadata:
  name: jenkins-agent
spec:
  containers:
  - args: [\'$(JENKINS_SECRET)\', \'$(JENKINS_NAME)\']
    image: '192.168.10.15/kubernetes/jnlp:alpine'
    name: jnlp
    imagePullPolicy: IfNotPresent
  - command:
      - "cat"
    image: "192.168.10.15/kubernetes/alpine:latest"
    imagePullPolicy: "IfNotPresent"
    name: "date"
    tty: true
  - command:
      - "cat"
    image: "192.168.10.15/kubernetes/kubectl:apline"
    imagePullPolicy: "IfNotPresent"
    name: "kubectl"
    tty: true
  restartPolicy: Never
'''
    }
  }
  environment {
    MY_KUBECONFIG = credentials('kubernetes-cluster')
  }
  stages {
    stage('Data') {
      steps {
        container(name: 'date') {
          sh """
            date
          """
        }
      }
    }
    stage('echo') {
      steps {
        container(name: 'date') {
          sh """
            echo 'k8s is pod'
          """
        }
      }
    }
    stage('kubectl') {
      steps {
        container(name: 'kubectl') {
          sh """
            kubectl get pod -A  --kubeconfig $MY_KUBECONFIG
          """
        }
      }
    }
  }
}

Docker example

pipeline {
  agent none
  stages {
    stage('Example Build') {
      agent { docker 'maven:3-alpine' }
      steps {
        echo 'Hello, Maven'
        sh 'mvn --version'
      }
    }
    stage('Example Test') {
      agent { docker 'openjdk:8-jre' }
      steps {
        echo 'Hello, JDK'
        sh 'java -version'
      }
    }
  }
}
3.Post

Post is generally used for further processing after the pipeline ends, such as error notifications. Post can handle the pipeline's different results differently, much like error handling in a program, such as try/except in Python.

Post can be defined in the pipeline or in a stage, and currently supports the following conditions:

  • always: Run the steps defined in post regardless of the completion status of the pipeline or stage.

  • changed: Run the post steps only if the completion status of the current pipeline or stage differs from its previous run.

  • fixed: Run the post steps when this pipeline or stage succeeds and the previous build failed or was unstable.

  • regression: Run the post steps when the status of this pipeline or stage is failure, unstable, or aborted, and the previous build succeeded.

  • failure: Run the post steps only if the completion status of the current pipeline or stage is failure; usually shown in red in the web interface.

  • success: Run the post steps when the current status is success; usually shown in blue or green in the web interface.

  • unstable: Run the post steps when the current status is unstable, usually caused by test failures or code violations; shown in yellow in the web interface.

  • aborted: Run the post steps when the current status is aborted, usually triggered by manually terminating the pipeline; shown in gray in the web interface.

  • unsuccessful: Run the post steps when the current status is not success.

  • cleanup: Run the steps defined in this post regardless of the completion status of the pipeline or stage. The difference from always is that cleanup runs after all other post conditions have been processed.
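Several of these conditions can be combined in one post block. A minimal sketch (not from the original), where cleanup runs last regardless of the outcome:

```groovy
pipeline {
  agent any
  stages {
    stage('Build') {
      steps {
        echo 'Building'
      }
    }
  }
  post {
    success {
      echo 'Build succeeded'      // runs only when the build succeeds
    }
    failure {
      echo 'Build failed'         // runs only when the build fails
    }
    cleanup {
      echo 'Cleaning workspace'   // always runs, after the other post conditions
    }
  }
}
```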

Example

Generally, the post section is placed at the bottom of the pipeline. In this example, regardless of the stages' completion status, the message "I will always say Hello again!" is always output.

//Jenkinsfile (Declarative Pipeline)
pipeline {
  agent any
  stages {
    stage('Example1') {
      steps {
        echo 'Hello World1'
      }
    }
    stage('Example2') {
      steps {
        echo 'Hello World2'
      }
    }
  }
  post {
    always {
      echo 'I will always say Hello again!'
    }
  }
}

post can also be written inside a stage. In the following example, the post runs only when Example1 fails.

//Jenkinsfile (Declarative Pipeline)
pipeline {
  agent any
  stages {
    stage('Example1') {
      steps {
        sh 'ip a'
      }
      post {
        failure {
          echo 'I will always say Hello again!'
        }
      }
    }
  }
}
4.Steps

The steps section contains one or more steps executed within a given stage directive, such as a shell command defined in steps:

//Jenkinsfile (Declarative Pipeline)
pipeline {
  agent any
  stages {
    stage('Example') {
      steps {
        echo 'Hello World'
      }
    }
  }
}

Or use the sh step to execute multiple commands:

//Jenkinsfile (Declarative Pipeline)
pipeline {
  agent any
  stages {
    stage('Example') {
      steps {
        sh """
           echo 'Hello World1'
           echo 'Hello World2'
        """
      }
    }
  }
}
Directives

Directives can be used to make conditional judgments or preprocess data when a stage executes. Like Sections, Directives are not a keyword or instruction; they include configurations such as environment, options, parameters, triggers, stage, tools, input, and when.

1.Environment

environment is mainly used to configure environment variables in the pipeline. The scope of a variable depends on where it is configured: defined in the pipeline it acts as a global variable, while defined in a stage it applies only to that stage. This directive supports a special method, credentials(), which accesses predefined Jenkins credentials by their identifier. For credentials of type Secret Text, credentials() assigns the secret's text content to the environment variable. For standard username-and-password credentials, the specified variable is set to username:password, and two additional variables are defined, MYVARNAME_USR and MYVARNAME_PSW.

Basic variable usage

//Example
pipeline {
  agent any
  environment {   // global variable, takes effect in all stages
    NAME= 'zhangzhuo'
  }
  stages {
    stage('env1') {
      environment { // a variable defined in a stage only takes effect in that stage
        HARBOR = 'https://192.168.10.15'
      }
      steps {
        sh "env"
      }
    }
    stage('env2') {
      steps {
        sh "env"
      }
    }
  }
}

Referencing a Secret Text credential through a variable

//Example using a Kubernetes kubeconfig file
pipeline {
  agent any
  environment {
    KUBECONFIG = credentials('kubernetes-cluster')
  }
  stages {
    stage('env') {
      steps {
        sh "env"  // by default the variable's content is masked in the output
      }
    }
  }
}
Referencing a standard username-and-password credential through a variable.

The HARBOR variable is used for demonstration. By default, a username-and-password credential automatically creates 3 variables:

  • HARBOR_USR: Assigned the username value of the credential.

  • HARBOR_PSW: Assigned the password value of the credential.

  • HARBOR: Assigned username:password by default.

//Example using a Harbor account credential
pipeline {
  agent any
  environment {
    HARBOR = credentials('harbor-account')
  }
  stages {
    stage('env') {
      steps {
        sh "env"
      }
    }
  }
}
2.Options

Jenkins pipelines support many built-in options. For example, retry repeats a failed run n times; different options achieve different effects. The more commonly used options are:

  • buildDiscarder: How many pipeline build records to retain.

  • disableConcurrentBuilds: Disables parallel execution of the pipeline, preventing parallel runs from accessing shared resources at the same time and causing failures.

  • disableResume: Disables automatic resumption of the pipeline if the controller restarts.

  • newContainerPerStage: When the agent is docker or dockerfile, each stage runs in a new container on the same node, instead of all stages sharing one container.

  • quietPeriod: The quiet period of the pipeline, i.e. a wait time between triggering the pipeline and executing it.

  • retry: The number of retries after the pipeline fails.

  • timeout: Sets the timeout of the pipeline; the job terminates automatically when it is exceeded. If the unit parameter is omitted, the default unit is minutes.

  • timestamps: Prepends timestamps to console output.

Defined in pipeline

pipeline {
  agent any
  options {
    timeout(time: 1, unit: 'HOURS')  // 1-hour timeout; without the unit parameter the default is minutes
    timestamps()                     // every output line is printed with a timestamp
    buildDiscarder(logRotator(numToKeepStr: '3')) // keep three historical builds
    quietPeriod(10)  // note: does not apply to manually triggered builds
    retry(3)    // number of retries after the pipeline fails
  }
  stages {
    stage('env1') {
      steps {
        sh "env"
        sleep 2
      }
    }
    stage('env2') {
      steps {
        sh "env"
      }
    }
  }
}

Defined in stage

Besides the top level of the pipeline, options can also be written in a stage. However, stage-level options only support retry, timeout, timestamps, or declarative options relevant to a stage, such as skipDefaultCheckout. Stage-level options are written as follows:

pipeline {
  agent any
  stages {
    stage('env1') {
      options {   // defined here, these options apply only to this stage
        timeout(time: 2, unit: 'SECONDS') // 2-second timeout
        timestamps()                     // every output line is printed with a timestamp
        retry(3)    // number of retries after failure
      }
      }
      steps {
        sh "env && sleep 2"
      }
    }
    stage('env2') {
      steps {
        sh "env"
      }
    }
  }
}
3.Parameters

parameters provides a list of parameters that the user should supply when triggering the pipeline. The values of these user-specified parameters are made available to pipeline steps through the params object. It can only be defined at the top level of the pipeline.

Currently supported parameter types:
  • string: A string parameter.

  • text: A text parameter, generally used for multi-line text content.

  • booleanParam: A boolean parameter.

  • choice: A choice parameter, generally used to offer several candidate values and select one of them.

  • password: A password parameter, generally used for sensitive variables; shown as * in the Jenkins console.

Plug-in parameters:
  • imageTag: An image tag; requires the Image Tag Parameter plug-in.

  • gitParameter: Obtains the branches of a Git repository; requires the Git Parameter plug-in.

Example
pipeline {
  agent any
  parameters {
    string(name: 'DEPLOY_ENV', defaultValue:  'staging', description: '1')   // a string parameter entered manually at build time, then assigned to the variable
    text(name:  'DEPLOY_TEXT', defaultValue: 'One\nTwo\nThree\n', description: '2')  // a text parameter supplied at build time, then assigned to the variable
    booleanParam(name: 'DEBUG_BUILD',  defaultValue: true, description: '3')   // a boolean parameter
    choice(name: 'CHOICES', choices: ['one', 'two', 'three'], description: '4')  // a choice-list parameter
    password(name: 'PASSWORD', defaultValue: 'SECRET', description: 'A  secret password')  // a password parameter, stored encrypted
    imageTag(name: 'DOCKER_IMAGE', description: '', image: 'kubernetes/kubectl', filter: '.*', defaultTag: '', registry: 'https://192.168.10.15', credentialId: 'harbor-account', tagOrder: 'NATURAL')   // obtains the image name and tag
    gitParameter(branch: '', branchFilter: 'origin/(.*)', defaultValue: '', description: 'Branch for build and deploy', name: 'BRANCH', quickFilterEnabled: false, selectedValue: 'NONE', sortMode: 'NONE',  tagFilter: '*', type: 'PT_BRANCH')
  }  // obtains the branch list of the Git repository; a git reference is required
  stages {
    stage('env1') {
      steps {
        sh "env"
      }
    }
    stage('git') {
      steps {
        git branch: "$BRANCH", credentialsId: 'gitlab-key', url: '[email protected]:root/env.git'   // required when using gitParameter
      }
    }
  }
}
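The params object mentioned above can be referenced directly in steps. A minimal sketch (parameter names follow the example above):

```groovy
pipeline {
  agent any
  parameters {
    string(name: 'DEPLOY_ENV', defaultValue: 'staging', description: 'Target environment')
    booleanParam(name: 'DEBUG_BUILD', defaultValue: true, description: 'Enable debug')
  }
  stages {
    stage('Use params') {
      steps {
        // values the user supplied at trigger time are read via the params object
        echo "Deploying to ${params.DEPLOY_ENV}"
        script {
          if (params.DEBUG_BUILD) {
            echo 'Debug build enabled'
          }
        }
      }
    }
  }
}
```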
4.Triggers

In a pipeline, triggers can automatically trigger pipeline execution. A pipeline can be triggered via webhook, cron, pollSCM, and upstream.

Cron

Scheduled builds: if a pipeline takes a long time to build, or needs to run regularly during certain periods, the cron trigger can be used — for example, running every four hours from Monday to Friday.

Note: H does not mean HOURS; it is short for Hash. Its main purpose is to spread out start times and avoid the system load caused by multiple pipelines running at the same moment.

pipeline {
  agent any
  triggers {
    cron('H */4 * * 1-5')   // every four hours, Monday to Friday
    cron('H/12 * * * *')   // every 12 minutes
    cron('H * * * *')   // once an hour
  }
  stages {
    stage('Example') {
      steps {
        echo 'Hello World'
      }
    }
  }
}
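The pollSCM trigger mentioned above is configured the same way. A minimal sketch (not from the original):

```groovy
pipeline {
  agent any
  triggers {
    // poll the SCM every five minutes (H spreads out the exact start time);
    // the pipeline only runs when new commits are detected
    pollSCM('H/5 * * * *')
  }
  stages {
    stage('Checkout') {
      steps {
        echo 'Triggered by an SCM change'
      }
    }
  }
}
```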
Upstream

upstream decides whether to trigger the pipeline based on the result of an upstream job — for example, triggering the pipeline when job1 or job2 completes successfully.

The currently supported states are SUCCESS, UNSTABLE, FAILURE, NOT_BUILT, ABORTED, etc.

pipeline {
  agent any
  triggers {
    upstream(upstreamProjects: 'env', threshold: hudson.model.Result.SUCCESS)  // build this pipeline when the env job builds successfully
  }
  stages {
    stage('Example') {
      steps {
        echo 'Hello World'
      }
    }
  }
}
5.Input

The input directive enables interactive operations in the pipeline, such as choosing the environment to deploy to, or deciding whether to continue a given stage.

input supports the following options:
  • message: Required; the prompt shown to the user, e.g. "Do you want to publish to the production environment?".

  • id: Optional; the identifier of the input, defaulting to the stage name.

  • ok: Optional; the label of the confirmation button, e.g. "OK" or "Allow".

  • submitter: Optional; the users or groups allowed to submit the input. If empty, any logged-in user can submit.

  • parameters: A list of parameters to prompt for.

Suppose we need an input with the prompt "Do you want to continue?", a confirmation button labeled "Continue", a PERSON parameter, and submission restricted to the logged-in users alice and bob:

pipeline {
  agent any
  stages {
    stage('Example') {
      input {
        message "Do you want to continue?"
        ok "Continue"
        submitter "alice,bob"
        parameters {
          string(name: 'PERSON', defaultValue: 'Mr Jenkins', description: 'Who should I say hello to?')
        }
      }
      steps {
        echo "Hello, ${PERSON}, nice to meet you."
      }
    }
  }
}
6.when

The when directive lets the pipeline decide, based on given conditions, whether a stage should execute. when must contain at least one condition; if it contains several, all of them must return true for the stage to execute.

When can also be combined with not, allOf, and anyOf syntax to achieve more flexible condition matching.

The most commonly used built-in conditions are:
  • branch: Execute this stage when the branch being built matches the given branch. Note that branch only applies to multi-branch pipelines.

  • changelog: Decide whether to build by matching the commit changelog, for example: when { changelog '.*^\\[DEPENDENCY\\] .+$' }

  • environment: Execute this stage when the specified environment variable matches the given value, for example: when { environment name: 'DEPLOY_TO', value: 'production' }

  • equals: Execute this stage when the expected value equals the actual value, for example: when { equals expected: 2, actual: currentBuild.number }

  • expression: Execute this stage when the specified Groovy expression evaluates to true, for example: when { expression { return params.DEBUG_BUILD } }

  • tag: Execute this stage when the value of TAG_NAME matches the given condition, for example: when { tag "release-" }

  • not: Execute this stage when the nested condition is false; must contain one condition, for example: when { not { branch 'master' } }

  • allOf: Execute this stage when all nested conditions are true; must contain at least one condition, for example: when { allOf { branch 'master'; environment name: 'DEPLOY_TO', value: 'production' } }

  • anyOf: Execute this stage when at least one nested condition is true, for example: when { anyOf { branch 'master'; branch 'staging' } }

Example: execute the Example Deploy stage when the branch is main

pipeline {
  agent any
  stages {
    stage('Example Build') {
      steps {
        echo 'Hello World'
      }
    }
    stage('Example Deploy') {
      when {
        branch 'main' // multi-branch pipeline; executes only when the branch is main
      }
      steps {
        echo 'Deploying'
      }
    }
  }
}

Multiple conditions can also be configured at the same time. For example, Example Deploy executes only when the branch is main and the value of the DEPLOY_TO variable is main.

pipeline {
  agent any
  environment {
    DEPLOY_TO = "main"
  }
  stages {
    stage('Example Deploy') {
      when {
        branch 'main'
        environment name: 'DEPLOY_TO', value: 'main'
      }
      steps {
        echo 'Deploying'
      }
    }
  }
}

anyOf can be used to match any one of the conditions — for example, executing Deploy when the branch is main, or DEPLOY_TO is main or master.

pipeline {
  agent any
  stages {
    stage('Example Deploy') {
      when {
        anyOf {
          branch 'main'
          environment name: 'DEPLOY_TO', value: 'main'
          environment name: 'DEPLOY_TO', value: 'master'
        }
      }
      steps {
        echo 'Deploying'
      }
    }
  }
}

expression can also be used for regular-expression matching. For example, Example Deploy executes when BRANCH_NAME is main or master, and DEPLOY_TO is main or master.

pipeline {
  agent any
  stages {
    stage('Example Deploy') {
      when {
        expression { BRANCH_NAME ==~ /(main|master)/ }
        anyOf {
          environment name: 'DEPLOY_TO', value: 'main'
          environment name: 'DEPLOY_TO', value: 'master'
        }
      }
      steps {
        echo 'Deploying'
      }
    }
  }
}

By default, if a stage defines its own agent, the stage's when condition is not evaluated until after that agent has been entered. This behavior can be changed with options: for example, beforeAgent evaluates when before entering the stage's agent, and the stage only executes when the condition is true.

The currently supported preconditions are:
  • beforeAgent: If true, the when condition is evaluated first; the stage's agent is only entered when the condition is true.

  • beforeInput: If true, the when condition is evaluated first; the input step only runs when the condition is true.

  • beforeOptions: If true, the when condition is evaluated first; the options are only processed when the condition is true.

beforeOptions takes precedence over beforeInput, which takes precedence over beforeAgent.

Example

pipeline {
  agent none
  stages {
    stage('Example Build') {
      steps {
        echo 'Hello World'
      }
    }
    stage('Example Deploy') {
      when {
        beforeAgent true
        branch 'main'
      }
      steps {
        echo 'Deploying'
      }
    }
  }
}
Parallel

The parallel field in a declarative pipeline makes concurrent builds easy to implement, such as processing branches A, B, and C in parallel.

pipeline {
  agent any
  stages {
    stage('Non-Parallel Stage') {
      steps {
        echo 'This stage will be executed first.'
      }
    }
    stage('Parallel Stage') {
      failFast true         // if any parallel branch fails, exit immediately without waiting for the other branches
      parallel {
        stage('Branch A') {
          steps {
            echo "On Branch A"
          }
        }
        stage('Branch B') {
          steps {
            echo "On Branch B"
          }
        }
        stage('Branch C') {
          stages {
            stage('Nested 1') {
              steps {
                echo "In stage Nested 1 within Branch C"
              }
            }
            stage('Nested 2') {
              steps {
               echo "In stage Nested 2 within Branch C"
              }
            }
          }
        }
      }
    }
  }
}


Jenkinsfile usage

As mentioned above, pipelines support two syntaxes, declarative and scripted. Both can be used to build continuous delivery pipelines, and both can be defined in the web UI or in a Jenkinsfile. The Jenkinsfile is usually placed in the code repository (though it can also be managed in a separate repository).

Creating a Jenkinsfile and committing it to the code repository has the following benefits:

  • Facilitates review/iteration of code on the pipeline

  • Audit trail for pipelines

  • The real source code of the pipeline can be viewed and edited by multiple members of the project

Environment variables
1. Static variables

Jenkins has many built-in variables that can be used directly in the Jenkinsfile; the full list is available at ${YOUR_JENKINS_URL}/pipeline/syntax/globals#env. The more commonly used environment variables are:

  • BUILD_ID: The ID of the current build; identical to BUILD_NUMBER in Jenkins 1.597+.

  • BUILD_NUMBER: The number of the current build, consistent with BUILD_ID.

  • BUILD_TAG: Identifies the build, in the format jenkins-${JOB_NAME}-${BUILD_NUMBER}. It can be used to name artifacts, such as a produced jar package or an image tag.

  • BUILD_URL: The complete URL of this build, e.g. http://buildserver/jenkins/job/MyJobName/17/

  • JOB_NAME: The name of the project being built.

  • NODE_NAME: The name of the current build node.

  • JENKINS_URL: The complete Jenkins URL; needs to be set in System Configuration.

  • WORKSPACE: The working directory in which the build executes.

Example: for a pipeline named print_env on its second build, the variables have the following values.

BUILD_ID:2
BUILD_NUMBER:2
BUILD_TAG:jenkins-print_env-2
BUILD_URL:http://192.168.10.16:8080/job/print_env/2/
JOB_NAME:print_env
NODE_NAME:built-in
JENKINS_URL:http://192.168.10.16:8080/
WORKSPACE:/bitnami/jenkins/home/workspace/print_env

The above variables are saved in a map, so a built-in variable can be referenced as env.BUILD_ID or env.JENKINS_URL:

pipeline {
  agent any
  stages {
    stage('print env') {
      parallel {
        stage('BUILD_ID') {
          steps {
            echo "$env.BUILD_ID"
          }
        }
        stage('BUILD_NUMBER') {
          steps {
            echo "$env.BUILD_NUMBER"
          }
        }
        stage('BUILD_TAG') {
          steps {
            echo "$env.BUILD_TAG"
          }
        }
      }
    }
  }
}
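As noted above, BUILD_TAG is handy for naming artifacts such as image tags. A minimal sketch (not from the original; the image name myapp is an assumption):

```groovy
pipeline {
  agent any
  stages {
    stage('Build image') {
      steps {
        // use the build tag as a unique image tag, e.g. myapp:jenkins-print_env-2
        sh "docker build -t myapp:${env.BUILD_TAG} ."
      }
    }
  }
}
```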
2. Dynamic variables

Dynamic variables are assigned from the result of a command; the variable's value varies with the command's outcome. As follows:

  • returnStdout: Assigns the command's standard output to a variable. For example, the command below returns clang, so the value of CC is "clang".

  • returnStatus: Assigns the command's exit status to a variable. For example, the command below exits with status 1, so the value of EXIT_STATUS is 1.

//Jenkinsfile (Declarative Pipeline)
pipeline {
  agent any
  environment {
    // using returnStdout
    CC = """${sh(
         returnStdout: true,
         script: 'echo -n "clang"'   // when assigning via shell echo, add -n to suppress the trailing newline
         )}"""
    // using returnStatus
    EXIT_STATUS = """${sh(
         returnStatus: true,
         script: 'exit 1'
         )}"""
  }
  stages {
    stage('Example') {
      environment {
        DEBUG_FLAGS = '-g'
      }
      steps {
        sh 'printenv'
      }
    }
  }
}
Credential management

Jenkins' declarative pipeline syntax provides the credentials() function, which supports secret text, username-and-password, and secret file credentials. Below are some common ways of handling credentials.

1. Secret text

This example demonstrates assigning two Secret Text credentials to separate environment variables in order to access Amazon Web Services. The two credentials need to be created in advance (demonstrated in the practical chapter). The Jenkinsfile is as follows:

//Jenkinsfile (Declarative Pipeline)
pipeline {
  agent any
  environment {
    AWS_ACCESS_KEY_ID = credentials('txt1')
    AWS_SECRET_ACCESS_KEY = credentials('txt2')
  }
  stages {
    stage('Example stage 1') {
      steps {
        echo "$AWS_ACCESS_KEY_ID"
      }
    }
    stage('Example stage 2') {
      steps {
        echo "$AWS_SECRET_ACCESS_KEY"
      }
    }
  }
}
2. Username and password

This example demonstrates username-and-password credentials, such as a shared account for accessing Bitbucket, GitLab, or Harbor. Assume a username-and-password credential has been configured with the credential ID harbor-account.

//Jenkinsfile (Declarative Pipeline)
pipeline {
  agent any
  environment {
    BITBUCKET_COMMON_CREDS = credentials('harbor-account')
  }
  stages {
    stage('printenv') {
      steps {
        sh "env"
      }
    }
  }
}

The above configuration will automatically generate 3 environment variables

  • BITBUCKET_COMMON_CREDS: Contains a colon-separated username and password in the format username:password

  • BITBUCKET_COMMON_CREDS_USR: Additional variable containing only username

  • BITBUCKET_COMMON_CREDS_PSW: Additional variable containing only the password.
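A minimal sketch (not from the original) of using the split variables — here logging in to a private registry, whose address is an assumption:

```groovy
pipeline {
  agent any
  environment {
    BITBUCKET_COMMON_CREDS = credentials('harbor-account')
  }
  stages {
    stage('login') {
      steps {
        // use the automatically generated _USR/_PSW variables;
        // single quotes let the shell expand them so the values stay masked in the log
        sh 'docker login -u $BITBUCKET_COMMON_CREDS_USR -p $BITBUCKET_COMMON_CREDS_PSW 192.168.10.15'
      }
    }
  }
}
```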

3. Secret files

Credentials can also be used for files that need to be stored encrypted, such as the kubeconfig file for connecting to a Kubernetes cluster.

Assuming a kubeconfig file has been configured as a credential, it can be referenced in the pipeline:

//Jenkinsfile (Declarative Pipeline)
pipeline {
  agent {
    kubernetes {
      cloud 'kubernetes'
      slaveConnectTimeout 1200
      workspaceVolume emptyDirWorkspaceVolume()
      yaml '''
kind: Pod
metadata:
  name: jenkins-agent
spec:
  containers:
  - args: [\'$(JENKINS_SECRET)\', \'$(JENKINS_NAME)\']
    image: '192.168.10.15/kubernetes/jnlp:alpine'
    name: jnlp
    imagePullPolicy: IfNotPresent
  - command:
      - "cat"
    image: "192.168.10.15/kubernetes/kubectl:apline"
    imagePullPolicy: "IfNotPresent"
    name: "kubectl"
    tty: true
  restartPolicy: Never
'''
    }
  }
  environment {
    MY_KUBECONFIG = credentials('kubernetes-cluster')
  }
  stages {
    stage('kubectl') {
      steps {
        container(name: 'kubectl') {
          sh """
            kubectl get pod -A  --kubeconfig $MY_KUBECONFIG
          """
        }
      }
    }
  }
}

Origin: blog.csdn.net/asd54090/article/details/132472037