Table of Contents
0 Introduction
First, consider the question: why do we need a Jenkins slave at all? In a production environment with a single master, you can get away without slaves only if the machine is unusually powerful and the number of builds is small. But once builds run hundreds of times a day and the Jenkins master itself runs on Kubernetes, it is strongly recommended to use slaves and take advantage of Kubernetes' elasticity: the master automatically creates a slave Pod, the task is pushed to that slave Pod, and when the task completes the slave Pod is automatically recycled/destroyed.
Flow chart of creating a slave:
1, Jenkins deployment
Delivering Jenkins into Kubernetes
1. Prepare the image file
$ docker pull jenkins/jenkins:2.204.1
$ docker tag 6097aa0af96e harbor.od.com/public/jenkins:v2.204.1
$ docker push harbor.od.com/public/jenkins:v2.204.1
2. Resource manifests
- rbac
$ vi /data/k8s-yaml/jenkins_slave/rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: infra
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins
rules:
- apiGroups: ["extensions", "apps"]
  resources: ["deployments"]
  verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
- apiGroups: [""]
  resources: ["services"]
  verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get","list","watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: jenkins
  namespace: infra
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jenkins
subjects:
- kind: ServiceAccount
  name: jenkins
  namespace: infra
Jenkins will automatically create slave pods, so it is necessary to bind these permissions to the jenkins ServiceAccount.
- deployment
$ vi /data/k8s-yaml/jenkins_slave/dp.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: jenkins
  namespace: infra
  labels:
    name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      name: jenkins
  template:
    metadata:
      labels:
        app: jenkins
        name: jenkins
    spec:
      serviceAccount: jenkins
      volumes:
      - name: data
        nfs:
          server: hdss7-200.host.com
          path: /data/nfs-volume/jenkins_home
      containers:
      - name: jenkins
        image: harbor.od.com/public/jenkins:v2.204.1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
          name: web
          protocol: TCP
        - containerPort: 50000
          name: agent
          protocol: TCP
        env:
        - name: JAVA_OPTS
          value: "-Xms512m -Xmx512m -XX:PermSize=512m -XX:MaxPermSize=1024m -Duser.timezone=Asia/Shanghai"
        - name: TRY_UPGRADE_IF_NO_MARKER
          value: "true"
        volumeMounts:
        - name: data
          mountPath: /var/jenkins_home
      imagePullSecrets:
      - name: harbor
      securityContext:
        runAsUser: 0
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600
The web port is exposed, and the agent port is kept as well; the agent port is mainly used for communication between the Jenkins master and its slaves.
- service
$ vi /data/k8s-yaml/jenkins_slave/svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: jenkins
  namespace: infra
spec:
  ports:
  - name: web
    port: 80
    targetPort: 8080
    protocol: TCP
  - name: agent
    port: 50000
    targetPort: 50000
    protocol: TCP
  selector:
    app: jenkins
- ingress
$ vi /data/k8s-yaml/jenkins_slave/ingress.yaml
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: jenkins
  namespace: infra
spec:
  rules:
  - host: jenkins.od.com
    http:
      paths:
      - path: /
        backend:
          serviceName: jenkins
          servicePort: 80
3. Apply the resource manifests
$ kubectl apply -f http://k8s-yaml.od.com/jenkins_slave/rbac.yaml
$ kubectl apply -f http://k8s-yaml.od.com/jenkins_slave/dp.yaml
$ kubectl apply -f http://k8s-yaml.od.com/jenkins_slave/svc.yaml
$ kubectl apply -f http://k8s-yaml.od.com/jenkins_slave/ingress.yaml
2, configure Jenkins dynamic slaves
After initializing Jenkins, the kubernetes plug-in needs to be installed.
1. After the plug-in installation completes, click Manage Jenkins -> Configure System -> (drag to the very bottom) Add a new cloud -> select Kubernetes, then fill in the Kubernetes and Jenkins configuration information
Fill in the in-cluster access address of the Kubernetes cluster: https://kubernetes.default.svc.cluster.local, then click Test Connection; if the message Connection test successful appears, Jenkins can communicate normally with the Kubernetes cluster. Then fill in the Jenkins URL below it: http://jenkins.infra.svc.cluster.local:8080
2. Create a pipeline to test dynamic slave builds
def label = "jenkins-slave-${UUID.randomUUID().toString()}"
podTemplate(label: label, cloud: 'kubernetes') {
    node(label) {
        stage('Run shell') {
            sh 'sleep 10s'
            sh 'echo hello world.'
        }
    }
}
3. Click Build to run the pipeline
In the infra namespace you can see that Jenkins automatically creates a corresponding agent pod, which is the equivalent of a node. When the Jenkins task completes, this pod automatically exits. By default the pod pulls a jenkins/jnlp-slave:x.xx-xx-alpine image.
[root@hdss7-21 ~]# kubectl get pods -n infra
NAME READY STATUS RESTARTS AGE
jenkins-77b9c47874-qjgfd 1/1 Running 1 13h
jenkins-slave-c07daa7b-31ef-41ea-825e-05c9c721edad-sb7h6-lpgwv 1/1 Running 0 18s
3, building the dubbo service
When we build dubbo services, we need to compile the dubbo project, package it into an image, and push it to harbor. This requires the maven and docker commands, so we must build our own base images and then call them in the pipeline.
Note: this section does not cover delivering the dubbo service to k8s; it only demonstrates Jenkins dynamically creating slaves to build the dubbo service.
3.1, making the dubbo base image
The base image must be generic: all dubbo microservice images will be built on top of it. It mainly provides common features such as JMX monitoring (jmx_javaagent), the startup script for the dubbo microservices, and the JDK environment they depend on.
1. Prepare the image (JDK environment)
$ docker pull stanleyws/jre8:8u112
$ docker tag stanleyws/jre8:8u112 harbor.od.com/public/jre:8u112
$ docker push harbor.od.com/public/jre:8u112
2. Custom Dockerfile
- /data/dockerfile/jre8/Dockerfile
FROM harbor.od.com/public/jre:8u112
RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime &&\
echo 'Asia/Shanghai' >/etc/timezone
ADD config.yml /opt/prom/config.yml
ADD jmx_javaagent-0.3.1.jar /opt/prom/
WORKDIR /opt/project_dir
ADD entrypoint.sh /entrypoint.sh
CMD ["/entrypoint.sh"]
- config.yml (the configuration file read by jmx_javaagent)
---
rules:
- pattern: '.*'
- jmx_javaagent-0.3.1.jar
# JAR that collects JVM monitoring metrics; download it under the name the Dockerfile expects
$ wget -O jmx_javaagent-0.3.1.jar https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.3.1/jmx_prometheus_javaagent-0.3.1.jar
- entrypoint.sh (do not forget to give the file execute permission)
#!/bin/sh
M_OPTS="-Duser.timezone=Asia/Shanghai -javaagent:/opt/prom/jmx_javaagent-0.3.1.jar=$(hostname -i):${M_PORT:-"12346"}:/opt/prom/config.yml"
C_OPTS=${C_OPTS}
JAR_BALL=${JAR_BALL}
exec java -jar ${M_OPTS} ${C_OPTS} ${JAR_BALL}
Explanation of the java startup options used above:
-javaagent:<jarpath>=<options> // load a Java agent (here, the jmx exporter)
-Duser.timezone=<timezone> // set the time zone
C_OPTS=${C_OPTS} // extra startup arguments, empty by default; extra parameters can be added in the k8s resource manifests
JAR_BALL=${JAR_BALL} // name of the jar to launch, specified in the k8s resource manifests
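To see how these variables combine into the final java command line, here is a minimal sketch of the expansion done by entrypoint.sh, with echo substituted for exec java; the POD_IP, C_OPTS and JAR_BALL values are made up for illustration, and a fixed IP stands in for $(hostname -i):

```shell
#!/bin/sh
# Simulate the variable expansion in entrypoint.sh, printing the command
# line instead of exec-ing java. All values below are illustrative only.
POD_IP="10.4.21.5"                                    # stands in for $(hostname -i)
M_PORT="12346"                                        # jmx exporter port (entrypoint default)
C_OPTS="-Denv=dev -Dapollo.meta=http://config.od.com" # hypothetical extra startup args
JAR_BALL="dubbo-server.jar"                           # hypothetical jar name

# Same structure as M_OPTS in entrypoint.sh
M_OPTS="-Duser.timezone=Asia/Shanghai -javaagent:/opt/prom/jmx_javaagent-0.3.1.jar=${POD_IP}:${M_PORT}:/opt/prom/config.yml"
echo java -jar ${M_OPTS} ${C_OPTS} ${JAR_BALL}
```

Note that `java -jar -X... app.jar` is valid: java treats everything starting with `-` as an option, and the first non-option argument after `-jar` is taken as the jar file.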
3. Build the dubbo docker base image
$ ls -l
total 372
-rw-r--r-- 1 root root 405 Jan 16 15:26 Dockerfile
-rw-r--r-- 1 root root 41 Jan 16 15:28 config.yml
-rwxr-xr-x 1 root root 234 Jan 16 15:37 entrypoint.sh
-rw-r--r-- 1 root root 367417 May 10 2018 jmx_javaagent-0.3.1.jar
$ docker build . -t harbor.od.com/base/jre8:8u112
$ docker push harbor.od.com/base/jre8:8u112
3.2, making the slave base images
We now have a dubbo project; we need maven to build it, then Docker to package it into an image and push it to Harbor. So two images are needed: a Maven image and a Docker image.
3.2.1, Maven mirror
This image is mainly used to build java applications. The version selected here: maven:v3.3.9-jdk8
Prepare the image (push it to the local repository)
$ docker pull maven:3.3.9-jdk-8-alpine
$ docker tag dd9d4e1cd9db harbor.od.com/public/maven:v3.3.9-jdk8
$ docker push harbor.od.com/public/maven:v3.3.9-jdk8
3.2.2, Docker image
This image is used to package the dubbo project into an image and push it to the harbor. It needs customization: the image must contain the configuration file generated by docker login to the harbor registry, whose path is /root/.docker/config.json. Packaging this file together with the original Docker image produces a new image, which is then pushed to the local repository.
1. Prepare the image file
$ docker pull docker:19.03
$ docker tag e036013d6d10 harbor.od.com/public/docker:v19.03
$ docker push harbor.od.com/public/docker:v19.03
2.Dockerfile
- vim /data/dockerfile/docker/Dockerfile
FROM harbor.od.com/public/docker:v19.03
USER root
ADD config.json /root/.docker/config.json
3. Copy /root/.docker/config.json into the directory containing the Dockerfile
{
  "auths": {
    "harbor.od.com": {
      "auth": "YWRtaW46SGFyYm9yMTIzNDU="
    }
  },
  "HttpHeaders": {
    "User-Agent": "Docker-Client/19.03.6 (linux)"
  }
}
This file is what authorizes access to the harbor registry; docker login generates it, and the auth field is the base64-encoded username:password.
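The auth value can be reproduced directly, since it is just base64 of "username:password" (the value above decodes to admin:Harbor12345):

```shell
# The "auth" field in config.json is base64(username:password).
# Decoding YWRtaW46SGFyYm9yMTIzNDU= gives admin:Harbor12345, so:
printf 'admin:Harbor12345' | base64
```

You can verify the reverse direction with `echo YWRtaW46SGFyYm9yMTIzNDU= | base64 -d`.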
4. Build and push the image (to the local repository)
$ docker build ./ -t harbor.od.com/public/docker:v19.03-1
$ docker push harbor.od.com/public/docker:v19.03-1
3.3, add git key
Here we use Jenkins' Git plug-in to pull the code, so we need to create a key pair, add the public key to the git repository, and then add the private key to the Jenkins credentials, as follows:
3.4, create dubbo pipeline
1. Add build parameters
String Parameter: app_name
Describe: project name, for example dubbo-service
String Parameter: image_name
Describe: docker image name, format: <repository>/<image>, for example: app/dubbo-demo-service
String Parameter: git_repo
Describe: address of the central git repository where the project lives, e.g. https://gitee.com/jasonminghao/dubbo-demo-service.git
String Parameter: git_ver
Describe: branch or commit ID of the project in the central git repository, for example the master branch: */master, or a commit ID: 903b4e6
String Parameter: image_ver
Describe: image version; keep it consistent with git_ver, but without any special symbols
String Parameter: add_tag
Describe: date-stamp part of the docker image tag, for example: 20200121_1734
String Parameter: mvn_dir
Default: ./
Describe: directory in which to compile the project, defaulting to the project root
String Parameter: target_dir
Describe: directory containing the jar after compilation, e.g. ./target (referenced as params.target_dir in the pipeline)
String Parameter: mvn_cmd
Default: mvn clean package -Dmaven.test.skip=true
Describe: mvn command used to compile
Choice Parameter: base_image
Choice Value: base/jre8:8u112
Describe: docker base image used by the project
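To make the parameter interplay concrete, here is a sketch of how the pipeline composes the final image reference from image_name, git_ver and add_tag; the sample values are hypothetical:

```shell
# Hypothetical parameter values, as they would be entered in Jenkins
image_name="app/dubbo-demo-service"
git_ver="master"
add_tag="20200121_1734"

# The pipeline builds and pushes: harbor.od.com/<image_name>:<git_ver>_<add_tag>
echo "harbor.od.com/${image_name}:${git_ver}_${add_tag}"
```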
2. Pipeline
podTemplate(containers: [
    containerTemplate(
        name: 'maven',
        image: 'harbor.od.com/public/maven:v3.3.9-jdk8',
        ttyEnabled: true,
        command: 'cat'),
    containerTemplate(
        name: 'docker',
        ttyEnabled: true,
        image: 'harbor.od.com/public/docker:v19.03-1')
],
volumes: [
    nfsVolume(mountPath: '/root/.m2', readOnly: false, serverAddress: 'hdss7-200.host.com', serverPath: '/data/nfs-volume/maven_repo/'),
    hostPathVolume(hostPath: '/run/docker.sock', mountPath: '/run/docker.sock')
])
{
    node(POD_LABEL) {
        stage('Get a Maven project') {
            checkout([
                $class: 'GitSCM',
                branches: [[name: "${params.git_ver}"]],
                doGenerateSubmoduleConfigurations: false,
                extensions: [
                    [$class: 'AuthorInChangelog'],
                    [$class: 'CloneOption', depth: 0, honorRefspec: true, noTags: true, reference: '', shallow: false]
                ],
                submoduleCfg: [],
                userRemoteConfigs: [[
                    credentialsId: 'git',
                    url: "${params.git_repo}"]
                ]])
            container('maven') {
                stage('Build a Maven project') {
                    sh "ls -lha"
                    sh "${params.mvn_cmd}"
                }
            }
        }
        stage('Docker build') {
            container('docker') {
                stage('create dir') {
                    // Create a temporary working directory under /tmp for the image build, and move the jar into it
                    sh "mkdir /tmp/${params.app_name}"
                    sh "cd ${params.target_dir} && mkdir /tmp/${params.app_name}/project_dir && mv *.jar /tmp/${params.app_name}/project_dir"
                    sh "ls -lha /tmp/${params.app_name}"
                }
                stage('docker build image') {
                    // Generate the Dockerfile dynamically, build the image, and push it to harbor
                    sh "cd /tmp/${params.app_name}/ && ls -lha"
                    sh """
                    echo "FROM harbor.od.com/${params.base_image}" >/tmp/${params.app_name}/Dockerfile
                    echo "ADD ./project_dir /opt/project_dir" >>/tmp/${params.app_name}/Dockerfile
                    """
                    // writeFile did not take effect in this pipeline script, so it is not used
                    // writeFile file: "/tmp/${params.app_name}/Dockerfile", text: """FROM harbor.od.com/${params.base_image} ADD ./project_dir /opt/project_dir"""
                    sh "ls -lha /tmp/${params.app_name}"
                    sh "cat /tmp/${params.app_name}/Dockerfile"
                    sh "cd /tmp/${params.app_name}/ && pwd && docker build ./ -t harbor.od.com/${params.image_name}:${params.git_ver}_${params.add_tag} && docker push harbor.od.com/${params.image_name}:${params.git_ver}_${params.add_tag}"
                }
            }
        }
    }
}
Note that podTemplate only applies to pipeline jobs.
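The echo-based Dockerfile generation in the 'docker build image' stage can be reproduced locally. A minimal sketch with hypothetical app_name and base_image values, writing under /tmp just as the pipeline does:

```shell
# Hypothetical values standing in for the pipeline parameters
app_name="dubbo-demo-service"
base_image="base/jre8:8u112"

# Generate the two-line Dockerfile exactly as the pipeline stage does
mkdir -p /tmp/${app_name}
echo "FROM harbor.od.com/${base_image}" >/tmp/${app_name}/Dockerfile
echo "ADD ./project_dir /opt/project_dir" >>/tmp/${app_name}/Dockerfile
cat /tmp/${app_name}/Dockerfile
```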
3.5, run the pipeline build
1. Fill in the corresponding parameters
2. Check the pods in the infra namespace
[root@hdss7-200 harbor]# kubectl get pods -n infra
NAME READY STATUS RESTARTS AGE
apollo-portal-57bc86966d-4tr6w 1/1 Running 8 37h
dubbo-demo-slave-16-trktm-8x2d7-bw5dr 3/3 Running 0 53s
dubbo-monitor-555c94f4b7-85plg 1/1 Running 32 7d14h
jenkins-75fbb46546-f5ltc 1/1 Running 6 18h
You can see that Jenkins created a slave pod named after the job (dubbo-demo-slave-...).
3. Build results