kubernetes(21) Microservice link monitoring & automatic release

Microservice full link monitoring

What is full link monitoring

With the popularity of microservice architecture, services are split along different dimensions, and a single request often involves multiple services. These services may be written in different programming languages, developed by different teams, and deployed as many replicas. Tools are therefore needed to help understand system behavior and analyze performance problems, so that when a failure occurs it can be located and resolved quickly. Full-link monitoring components were created against this background.
Full-link performance monitoring presents metrics from the overall view down to individual calls, and centrally displays performance information for call chains that span applications. This makes it easy to measure overall and local performance, identify the source of a failure, and greatly reduce troubleshooting time in production.

What problem does full link monitoring solve

  • Request link tracing: analyze service call relationships, draw the runtime topology, and visualize it
  • Call metrics: performance analysis of each call link, such as throughput, response time, and error count
  • Capacity planning reference: scaling out/in, service degradation, flow control
  • Operating status feedback: alerting, and quickly locating errors through the call chain and business logs

Selection basis for full link monitoring system

There are many full-link monitoring systems; choose based on these aspects:

  • Probe performance overhead
    The overhead the APM probe adds to the monitored service should be small, data collection and analysis should be fast, and the performance impact should be minimal.
  • Code intrusiveness
    As an operations component, it should require as little change to business systems as possible (ideally none), remain transparent to users, and reduce the burden on developers.
  • Monitoring dimensions
    The more dimensions of analysis it supports, the better.
  • Scalability
    An excellent call tracing system must support distributed deployment and have good scalability; the more components it can instrument, the better.
  • Mainstream systems: Zipkin, SkyWalking, Pinpoint

pinpoint introduction

Pinpoint is an APM (Application Performance Management) tool for large distributed systems written in Java/PHP.

Features:

  • Server Map: understand the system topology through a visualization of the distributed system's modules and their interconnections. Clicking a node shows the module's details, such as its current status and request count.
  • Realtime Active Thread Chart: monitor the active threads inside the application in real time.
  • Request/Response Scatter Chart: visualize request counts and response patterns over time to locate potential problems; drag a selection on the chart to inspect the selected requests in detail.
  • CallStack: generate a code-level view for each call in the distributed environment, locating bottlenecks and failure points in a single view.
  • Inspector: view additional application details, such as CPU usage, memory/garbage collection, TPS, and JVM parameters.

pinpoint deployment


Docker deployment:

First download the pinpoint images, then import them with docker load.

Link: https://pan.baidu.com/s/1-h8g7dxB9v6YiXMYVNv36Q Password: u6qb

If the GitHub download is slow, you can clone the open-source repository to your own Gitee mirror first and download from there, which is faster.

$ tar xf pinpoint-image.tar.gz && cd pinpoint-image
$ for i in ./*; do docker load < "$i"; done    # load the images
$ git clone https://github.com/naver/pinpoint-docker.git
$ cd pinpoint-docker && git checkout -b 1.8.5 origin/1.8.5    # switch to the 1.8.5 branch before proceeding
$ docker images
pinpointdocker/pinpoint-quickstart   latest              09fd7f38d8e4        8 months ago        480MB
pinpointdocker/pinpoint-agent        1.8.5               ac7366387f2c        8 months ago        27.3MB
pinpointdocker/pinpoint-collector    1.8.5               034a20159cd7        8 months ago        534MB
pinpointdocker/pinpoint-web          1.8.5               3c58ee67076f        8 months ago        598MB
zookeeper                            3.4                 a27dff262890        8 months ago        256MB
pinpointdocker/pinpoint-mysql        1.8.5               99053614856e        8 months ago        455MB
pinpointdocker/pinpoint-hbase        1.8.5               1d3499afa5e9        8 months ago        992MB
flink                                1.3.1               c08ccd5bb7a6        3 years ago         480MB
$ docker-compose pull && docker-compose up -d

Wait about 10 minutes, then visit the web UI.


PinPoint Agent deployment

  • Return to the code directory (simple-microservice-dev4)
  • Add the pinpoint agent and modify the configuration files (introduced by pinpoint)
eureka-service/pinpoint/pinpoint.config
gateway-service/pinpoint/pinpoint.config
order-service/order-service-biz/pinpoint/pinpoint.config
portal-service/pinpoint/pinpoint.config
product-service/product-service-biz/pinpoint/pinpoint.config
stock-service/stock-service-biz/pinpoint/pinpoint.config
# Modify each of the configuration files above as follows (my Pinpoint collector is deployed at 192.168.56.14):
profiler.collector.ip=192.168.56.14
  • Modify each project's Dockerfile
# eureka-service
$ vim eureka-service/Dockerfile
FROM java:8-jdk-alpine
LABEL maintainer [email protected]
RUN  sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g' /etc/apk/repositories &&\
     apk add -U tzdata && \
     rm -rf /var/cache/apk/* && \
     ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
COPY ./target/eureka-service.jar ./
COPY pinpoint /pinpoint
EXPOSE 8888
CMD java -jar -javaagent:/pinpoint/pinpoint-bootstrap-1.8.3.jar -Dpinpoint.agentId=${HOSTNAME} -Dpinpoint.applicationName=ms-eureka -Deureka.instance.hostname=${MY_POD_NAME}.eureka.ms /eureka-service.jar

# gateway-service
$ vim gateway-service/Dockerfile
FROM java:8-jdk-alpine
LABEL maintainer [email protected]
RUN  sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g' /etc/apk/repositories &&\
     apk add -U tzdata && \
     rm -rf /var/cache/apk/* && \
     ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
COPY ./target/gateway-service.jar ./
COPY pinpoint /pinpoint
EXPOSE 9999
CMD java -jar -javaagent:/pinpoint/pinpoint-bootstrap-1.8.3.jar -Dpinpoint.agentId=$(echo $HOSTNAME | awk -F- '{print "gateway-"$NF}') -Dpinpoint.applicationName=ms-gateway /gateway-service.jar

# order-service-biz/Dockerfile
$ vim order-service/order-service-biz/Dockerfile
FROM java:8-jdk-alpine
LABEL maintainer [email protected]
RUN  sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g' /etc/apk/repositories &&\
     apk add -U tzdata && \
     rm -rf /var/cache/apk/* && \
     ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
COPY ./target/order-service-biz.jar ./
COPY pinpoint /pinpoint
EXPOSE 8020
CMD java -jar -javaagent:/pinpoint/pinpoint-bootstrap-1.8.3.jar -Dpinpoint.agentId=$(echo $HOSTNAME | awk -F- '{print "order-"$NF}') -Dpinpoint.applicationName=ms-order /order-service-biz.jar

#  portal-service/Dockerfile 
$ vim  portal-service/Dockerfile
FROM java:8-jdk-alpine
LABEL maintainer [email protected]
RUN  sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g' /etc/apk/repositories &&\
     apk add -U tzdata && \
     rm -rf /var/cache/apk/* && \
     ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
COPY ./target/portal-service.jar ./
COPY pinpoint /pinpoint
EXPOSE 8080
CMD java -jar -javaagent:/pinpoint/pinpoint-bootstrap-1.8.3.jar -Dpinpoint.agentId=$(echo $HOSTNAME | awk -F- '{print "portal-"$NF}') -Dpinpoint.applicationName=ms-portal /portal-service.jar

# product-service/product-service-biz/Dockerfile 
FROM java:8-jdk-alpine
LABEL maintainer [email protected]
RUN  sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g' /etc/apk/repositories &&\
     apk add -U tzdata && \
     rm -rf /var/cache/apk/* && \
     ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
COPY ./target/product-service-biz.jar ./
COPY pinpoint /pinpoint
EXPOSE 8010
CMD java -jar -javaagent:/pinpoint/pinpoint-bootstrap-1.8.3.jar -Dpinpoint.agentId=$(echo $HOSTNAME | awk -F- '{print "product-"$NF}') -Dpinpoint.applicationName=ms-product /product-service-biz.jar

# stock-service/stock-service-biz/Dockerfile
FROM java:8-jdk-alpine
LABEL maintainer [email protected]
RUN  sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g' /etc/apk/repositories &&\
     apk add -U tzdata && \
     rm -rf /var/cache/apk/* && \
     ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
COPY ./target/stock-service-biz.jar ./
COPY pinpoint /pinpoint
EXPOSE 8030
CMD java -jar -javaagent:/pinpoint/pinpoint-bootstrap-1.8.3.jar -Dpinpoint.agentId=$(echo $HOSTNAME | awk -F- '{print "stock-"$NF}') -Dpinpoint.applicationName=ms-stock /stock-service-biz.jar
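
The `-Dpinpoint.agentId` in each CMD above is derived from the pod hostname with a small awk expression, keeping the id short and unique per pod. A minimal sketch of that derivation, using a made-up pod hostname:

```shell
#!/bin/sh
# Hypothetical pod hostname, in the form Kubernetes gives a
# Deployment pod: <deployment>-<replicaset-hash>-<pod-suffix>.
HOSTNAME="gateway-7cfc4d4b74-t4hk7"

# Same expression as the Dockerfile CMD: awk splits on '-' and
# keeps only the last field (the pod's random suffix), then a
# service prefix is added so the agentId stays short and unique.
agent_id=$(echo "$HOSTNAME" | awk -F- '{print "gateway-"$NF}')
echo "$agent_id"    # gateway-t4hk7
```

The eureka Dockerfile uses `${HOSTNAME}` directly instead, because a StatefulSet pod name such as eureka-0 is already short and stable.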
  • Project database modification
Mainly in each service's src/main/resources/application-fat.yml configuration file.

1. mysql: order-service-biz, product-service-biz, stock-service-biz (**be sure to configure the MySQL Service in the default namespace**)
    url: jdbc:mysql://java-demo-db-mysql.default:3306/tb_order?characterEncoding=utf-8
    username: root
    password: RRGynGS53N

2. eureka
   defaultZone: http://eureka-0.eureka.ms:8888/eureka,http://eureka-1.eureka.ms:8888/eureka,http://eureka-2.eureka.ms:8888/eureka
  • Scripted build and release
$ cd microservic-code/simple-microservice-dev4/k8s
$ ls
docker_build.sh  eureka.yaml  gateway.yaml  order.yaml  portal.yaml  product.yaml  stock.yaml
$ vim docker_build.sh     # the automated build script
#!/bin/bash

docker_registry=hub.cropy.cn
kubectl create secret docker-registry registry-pull-secret --docker-server=$docker_registry --docker-username=admin --docker-password=Harbor12345 [email protected] -n ms

service_list="eureka-service gateway-service order-service product-service stock-service portal-service"
service_list=${1:-${service_list}}
work_dir=$(dirname $PWD)
current_dir=$PWD

cd $work_dir
mvn clean package -Dmaven.test.skip=true

for service in $service_list; do
   cd $work_dir/$service
   if ls |grep biz &>/dev/null; then
      cd ${service}-biz
   fi
   service=${service%-*}
   image_name=$docker_registry/microservice/${service}:$(date +%F-%H-%M-%S)
   docker build -t ${image_name} .
   docker push ${image_name}
   sed -i -r "s#(image: )(.*)#\1$image_name#" ${current_dir}/${service}.yaml
   kubectl apply -f ${current_dir}/${service}.yaml
done
$ rm -fr *.yaml    # remove the old YAML files
$ cp ../../simple-microservice-dev3/k8s/*.yaml ./    # copy in the YAML files fixed earlier
$ ./docker_build.sh     # build and push the images, then deploy the services
$ kubectl get pod -n ms   # check that the new pods are healthy
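
Two details of docker_build.sh are worth noting: `${service%-*}` strips the trailing `-service` so the image path and the YAML file name use the short name, and the timestamp tag makes every build push a distinct image. A quick sketch of both (registry host as above; the timestamp is fixed here only for illustration):

```shell
#!/bin/sh
docker_registry=hub.cropy.cn
service="order-service"

# ${service%-*} removes the shortest trailing '-*' match,
# leaving the short name used in the image path and order.yaml.
service=${service%-*}
echo "$service"    # order

# The real script uses $(date +%F-%H-%M-%S); a fixed value here
# just shows the resulting image name format.
ts="2020-09-19-15-08-17"
image_name=$docker_registry/microservice/${service}:${ts}
echo "$image_name"    # hub.cropy.cn/microservice/order:2020-09-19-15-08-17
```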

Browser view status

Resource consumption is relatively high, so only these services are shown.


Metrics to watch in the Pinpoint UI

  1. Number of requests/number of calls
  2. Heap memory (JVM information)
  3. Call information (stack trace)
  4. Response time
  5. Error rate
  6. Microservice call link topology

Automatic release

Release process design


Basic environment preparation

K8s (Ingress Controller, CoreDNS, automatic PV provisioning)

Helm v3

1. Install the push plugin

https://github.com/chartmuseum/helm-push

helm plugin install https://github.com/chartmuseum/helm-push

If you can't download it online, you can also unpack the package shipped with the courseware directly:

# tar zxvf helm-push_0.7.1_linux_amd64.tar.gz
# mkdir -p /root/.local/share/helm/plugins/helm-push
# mv bin plugin.yaml /root/.local/share/helm/plugins/helm-push
# chmod +x /root/.local/share/helm/plugins/helm-push/bin/*

2. Configure Docker on the Jenkins host to trust the registry; if Harbor uses HTTPS, copy the certificate

All nodes in the k8s cluster need this configuration

$ cat /etc/docker/daemon.json 
{
  "graph": "/data/docker",
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "insecure-registries": ["hub.cropy.cn"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}

$ ls /etc/docker/certs.d/hub.cropy.cn/hub.pem    # this is the Harbor certificate, placed under the docker certs directory

3. Add repo

$ helm repo add  --username admin --password Harbor12345 myrepo https://hub.cropy.cn/chartrepo/library  --ca-file /etc/docker/certs.d/hub.cropy.cn/hub.pem    # the self-signed CA must be specified or this fails; for k8s continuous integration it is simpler to disable HTTPS on Harbor, otherwise the jenkins-slave deployment hits x509 certificate errors
$ helm repo add azure http://mirror.azure.cn/kubernetes/charts

4. Push and install Chart (example)

$ helm pull azure/mysql --untar
$ helm  package mysql
Successfully packaged chart and saved it to: /root/mysql-1.6.6.tgz
$ helm push mysql-1.6.6.tgz --username=admin --password=Harbor12345 https://hub.cropy.cn/chartrepo/library --ca-file /etc/docker/certs.d/hub.cropy.cn/hub.pem  # the self-signed CA must be specified, otherwise it errors

Gitlab

Deployed on node 192.168.56.17; Docker must already be installed.

mkdir gitlab
cd gitlab
docker run -d \
  --name gitlab \
  -p 8443:443 \
  -p 9999:80 \
  -p 9998:22 \
  -v $PWD/config:/etc/gitlab \
  -v $PWD/logs:/var/log/gitlab \
  -v $PWD/data:/var/opt/gitlab \
  -v /etc/localtime:/etc/localtime \
  --restart=always \
  lizhenliang/gitlab-ce-zh:latest

Visit address: http://IP:9999

On the first visit you will be asked to set the administrator password, then log in. The default administrator username is root, with the password you just set.

Create a project and submit test code


Use the code from the dev3 branch of the previous lesson (which has been changed)

# this code was originally placed on the k8s master node
git config --global user.name "root"         
git config --global user.email "[email protected]" 
cd microservic-code/simple-microservice-dev3
find ./ -name target | xargs rm -fr       # remove previous build artifacts
git init
git remote add origin http://192.168.56.17:9999/root/microservice.git
git add .
git commit -m 'all'
git push origin master

Harbor, with the Chart storage function enabled

It is recommended to disable HTTPS; otherwise the self-signed certificate will cause problems with helm pull.

$ tar zxvf harbor-offline-installer-v2.0.0.tgz
$ cd harbor
$ cp harbor.yml.tmpl harbor.yml
$ vi harbor.yml     # if you need certificates, generate them yourself with cfssl or openssl
hostname: hub.cropy.cn
http:
  port: 80
#https:
#  port: 443
#  certificate: /etc/harbor/ssl/hub.pem
#  private_key: /etc/harbor/ssl/hub-key.pem
harbor_admin_password: Harbor12345
$ ./prepare
$ ./install.sh --with-chartmuseum
$ docker-compose ps

MySQL (microservice database)

Create it as follows; you also need to import the corresponding databases.

$ helm install java-demo-db --set persistence.storageClass="managed-nfs-storage" azure/mysql 
$ kubectl get secret --namespace default java-demo-db-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo
RRGynGS53N
mysql -h java-demo-db-mysql -pRRGynGS53N    # how to connect
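
The jsonpath query returns the Secret value base64-encoded, which is why the command pipes through `base64 --decode`. The same decoding step offline (the encoded string below is just the example password above, re-encoded):

```shell
#!/bin/sh
# Kubernetes stores Secret data base64-encoded; kubectl's jsonpath
# output returns it still encoded, so decode it to get the password.
encoded="UlJHeW5HUzUzTg=="
password=$(echo "$encoded" | base64 --decode)
echo "$password"    # RRGynGS53N
```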

Eureka (Registration Center)

$ cd microservic-code/simple-microservice-dev3/k8s
$ kubectl apply  -f eureka.yaml 
$ kubectl get pod -n ms    # watch the eureka pods come up

Deploy Jenkins in Kubernetes

Reference: https://github.com/jenkinsci/kubernetes-plugin/tree/fc40c869edfd9e3904a9a56b0f80c5a25e988fa1/src/main/kubernetes


$ unzip jenkins.zip && cd jenkins && ls ./   # archive contents
deployment.yml  ingress.yml  rbac.yml  service-account.yml  service.yml
$ kubectl apply -f .
$ kubectl get pod
$ kubectl get svc
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                        AGE
java-demo-db-mysql   ClusterIP   10.0.0.142   <none>        3306/TCP                       6d20h
jenkins              NodePort    10.0.0.146   <none>        80:30006/TCP,50000:32669/TCP   16m
kubernetes           ClusterIP   10.0.0.1     <none>        443/TCP                        27d

Visit http://192.168.56.11:30006/ to configure

Initial password and plugin mirror update (very important)

  • Find the NFS server (192.168.56.13) configured with pv automatic provisioning, and enter the shared directory
$ cd /ifs/kubernetes/default-jenkins-home-pvc-fdc745cc-6fa9-4940-ae6d-82be4242d2c5/
$ cat secrets/initialAdminPassword   # this is the default password; enter it at the http://192.168.56.11:30006 login page
edf37aff6dbc46728a3aa37a7f3d3a5a
  • Skip the plugin installation for now (choose no plugins); they will be installed later from a faster mirror

  • Configure new account


  • Modify the plugin source
# Plugins download from an overseas source by default, which is slow; switch to a domestic mirror:
$ cd /ifs/kubernetes/default-jenkins-home-pvc-fdc745cc-6fa9-4940-ae6d-82be4242d2c5/updates/
sed -i 's/http:\/\/updates.jenkins-ci.org\/download/https:\/\/mirrors.tuna.tsinghua.edu.cn\/jenkins/g' default.json && \
sed -i 's/http:\/\/www.google.com/https:\/\/www.baidu.com/g' default.json
  • Rebuild pod
$ kubectl delete pod jenkins-754b6fb4b9-dxssj
  • Plug-in installation

In Manage Jenkins → Manage Plugins, search for Git Parameter, Extended Choice Parameter, Git, Pipeline, Kubernetes, and Config File Provider, select each one, and click install.

  • Git Parameter: Git parameterized builds
  • Extended Choice Parameter: parameterized builds with multi-select checkboxes
  • Git: pull the code
  • Pipeline: pipelines
  • Kubernetes: connect to Kubernetes to dynamically create slave agents
  • Config File Provider: stores the kubeconfig file used by kubectl to connect to the k8s cluster

Jenkins pipeline and parameterized builds

Jenkins Pipeline is a suite of plugins that supports implementing continuous integration and continuous delivery pipelines in Jenkins;

  • The pipeline models simple-to-complex delivery pipelines through a specific syntax;

    • Declarative: a structured syntax built on Groovy, starting with pipeline {}
    • Scripted: supports most of Groovy's features and is a very expressive and flexible tool; starts with node {}
  • The definition of a Jenkins Pipeline is written into a text file called a Jenkinsfile.


In practice there are often many projects, especially with a microservice architecture. Creating a Jenkins item for every service would greatly increase the operations workload, so you can use Jenkins parameterized builds plus manual interaction to confirm the release environment configuration, desired state, and so on.


Custom build jenkins-slave image


Jenkins dynamically creates agents in Kubernetes


Configure the Kubernetes plugin in Jenkins

Kubernetes plugin: Jenkins runs dynamic agents in a Kubernetes cluster

Plug-in introduction: https://github.com/jenkinsci/kubernetes-plugin


  • Cloud configuration


Build jenkins-slave image

See jenkins-slave.zip for details

$ unzip jenkins-slave.zip && cd jenkins-slave && ls ./
Dockerfile  helm  jenkins-slave  kubectl  settings.xml  slave.jar
$ vim Dockerfile
FROM centos:7
LABEL maintainer [email protected]

RUN yum install -y java-1.8.0-openjdk maven curl git libtool-ltdl-devel && \
    yum clean all && \
    rm -rf /var/cache/yum/* && \
    mkdir -p /usr/share/jenkins

COPY slave.jar /usr/share/jenkins/slave.jar
COPY jenkins-slave /usr/bin/jenkins-slave
COPY settings.xml /etc/maven/settings.xml
RUN chmod +x /usr/bin/jenkins-slave
COPY helm kubectl /usr/bin/
ENTRYPOINT ["jenkins-slave"]

$ docker build -t hub.cropy.cn/library/jenkins-slave-jdk:1.8 .
$ docker push hub.cropy.cn/library/jenkins-slave-jdk:1.8

jenkins-slave build example

  1. Configure pipeline-demo: involves three choice parameters: NS (namespace), SVC (microservice to release), RS (replica count)


  2. Configure the pipeline


// Uses Declarative syntax to run commands inside a container.
pipeline {
    agent {
        kubernetes {
            label "jenkins-slave"
            yaml '''
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-slave
spec:
  containers:
  - name: jnlp
    image: hub.cropy.cn/library/jenkins-slave-jdk:1.8
'''
        }
    }
    stages {
        stage('Step 1: Pull code') {
            steps {
                echo "Code Clone ${SVC}"
            }
        }
        stage('Step 2: Compile code') {
            steps {
                echo "code build ${SVC}"
            }
        }
        stage('Step 3: Build image') {
            steps {
                echo "build image ${SVC}"
            }
        }
        stage('Step 4: Deploy to K8s') {
            steps {
                echo "deploy ${NS},replica: ${RS}"
            }
        }
    }
}
  3. Run a test build

Build a Jenkins CI system based on Kubernetes

Pipeline syntax reference


Groovy syntax is generated automatically from the selected options


Create credentials

  • GitLab credentials

  • Harbor credentials

  • Kubernetes kubeconfig authentication
# 1. For a cluster deployed with ansible, the original ansible-install-k8s directory on the master holds the CA files that need to be copied
$ mkdir ~/kubeconfig_file && cd ~/kubeconfig_file
$ vim admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}

$ cp ~/ansible-install-k8s/ssl/k8s/ca* ./
$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

# 2. Create the kubeconfig file
# set cluster parameters
kubectl config set-cluster kubernetes \
  --server=https://192.168.56.11:6443 \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --kubeconfig=config

# set client authentication parameters
kubectl config set-credentials cluster-admin \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --client-key=admin-key.pem \
  --client-certificate=admin.pem \
  --kubeconfig=config

# set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=cluster-admin \
  --kubeconfig=config

# set the default context
kubectl config use-context default --kubeconfig=config

Create a custom config file (Config File Provider)

Paste the generated kubeconfig into the file content

Pipeline configuration

#!/usr/bin/env groovy
// Required plugins: Git Parameter/Git/Pipeline/Config File Provider/kubernetes/Extended Choice Parameter
// common
def registry = "hub.cropy.cn"
// project
def project = "microservice"
def git_url = "http://192.168.56.17:9999/root/microservice.git"
def gateway_domain_name = "gateway.ctnrs.com"
def portal_domain_name = "portal.ctnrs.com"
// credentials
def image_pull_secret = "registry-pull-secret"
def harbor_registry_auth = "05a90138-03df-4ec7-bfc6-f335566c263a"
def git_auth = "1e6ac63f-3646-4385-b4a3-da8114013945"
// ConfigFileProvider ID
def k8s_auth = "16904ea6-9724-4364-bffb-394f7af1d881"

pipeline {
  agent {
    kubernetes {
        label "jenkins-slave"
        yaml """
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-slave
spec:
  containers:
  - name: jnlp
    image: "${registry}/library/jenkins-slave-jdk:1.8"
    imagePullPolicy: Always
    volumeMounts:
      - name: docker-cmd
        mountPath: /usr/bin/docker
      - name: docker-sock
        mountPath: /var/run/docker.sock
      - name: maven-cache
        mountPath: /root/.m2
  volumes:
    - name: docker-cmd
      hostPath:
        path: /usr/bin/docker
    - name: docker-sock
      hostPath:
        path: /var/run/docker.sock
    - name: maven-cache
      hostPath:
        path: /tmp/m2
"""
        }

      }
    parameters {
        gitParameter branch: '', branchFilter: '.*', defaultValue: '', description: 'Branch to release', name: 'Branch', quickFilterEnabled: false, selectedValue: 'NONE', sortMode: 'NONE', tagFilter: '*', type: 'PT_BRANCH'
        extendedChoice defaultValue: 'none', description: 'Microservices to release', \
          multiSelectDelimiter: ',', name: 'Service', type: 'PT_CHECKBOX', \
          value: 'gateway-service:9999,portal-service:8080,product-service:8010,order-service:8020,stock-service:8030'
        choice (choices: ['ms', 'demo'], description: 'Deployment template', name: 'Template')
        choice (choices: ['1', '3', '5', '7'], description: 'Replica count', name: 'ReplicaCount')
        choice (choices: ['ms'], description: 'Namespace', name: 'Namespace')
    }
    stages {
        stage('Pull code'){
            steps {
                checkout([$class: 'GitSCM', 
                branches: [[name: "${params.Branch}"]], 
                doGenerateSubmoduleConfigurations: false, 
                extensions: [], submoduleCfg: [], 
                userRemoteConfigs: [[credentialsId: "${git_auth}", url: "${git_url}"]]
                ])
            }
        }
        stage('Compile code') {
            // compile the selected services
            steps {
                sh """
                  mvn clean package -Dmaven.test.skip=true
                """
            }
        }
        stage('Build images') {
          steps {
              withCredentials([usernamePassword(credentialsId: "${harbor_registry_auth}", passwordVariable: 'password', usernameVariable: 'username')]) {
                sh """
                 docker login -u ${username} -p '${password}' ${registry}
                 for service in \$(echo ${Service} |sed 's/,/ /g'); do
                    service_name=\${service%:*}
                    image_name=${registry}/${project}/\${service_name}:${BUILD_NUMBER}
                    cd \${service_name}
                    if ls |grep biz &>/dev/null; then
                        cd \${service_name}-biz
                    fi
                    docker build -t \${image_name} .
                    docker push \${image_name}
                    cd ${WORKSPACE}
                  done
                """
                configFileProvider([configFile(fileId: "${k8s_auth}", targetLocation: "admin.kubeconfig")]){
                    sh """
                    # create the image pull secret
                    kubectl create secret docker-registry ${image_pull_secret} --docker-username=${username} --docker-password=${password} --docker-server=${registry} -n ${Namespace} --kubeconfig admin.kubeconfig || true
                    # add the private chart repo
                    helm repo add  --username ${username} --password ${password} myrepo http://${registry}/chartrepo/${project}
                    """
                }
              }
          }
        }
        stage('Deploy to K8s with Helm') {
          steps {
              sh """
              echo "deploy"
              """
          }
        }
    }
}
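
Each entry of the `Service` checkbox parameter is a name:port pair; the sh steps split it with the `%:*` and `#*:` parameter expansions. A minimal sketch of that parsing loop over two selected values:

```shell
#!/bin/sh
# The Extended Choice Parameter delivers selections comma-separated,
# e.g. "gateway-service:9999,order-service:8020".
Service="gateway-service:9999,order-service:8020"

for service in $(echo "$Service" | sed 's/,/ /g'); do
  service_name=${service%:*}   # drop ':port' -> name
  service_port=${service#*:}   # drop 'name:' -> port
  echo "$service_name -> $service_port"
done
# Output:
# gateway-service -> 9999
# order-service -> 8020
```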

Pipeline integrates helm to release microservice projects

Upload chart to harbor

helm push ms-0.1.0.tgz --username=admin --password=Harbor12345 http://hub.cropy.cn/chartrepo/microservice 

Configure helm automatic release

The pipeline is as follows:

#!/usr/bin/env groovy
// Required plugins: Git Parameter/Git/Pipeline/Config File Provider/kubernetes/Extended Choice Parameter
// common
def registry = "hub.cropy.cn"
// project
def project = "microservice"
def git_url = "http://192.168.56.17:9999/root/microservice.git"
def gateway_domain_name = "gateway.ctnrs.com"
def portal_domain_name = "portal.ctnrs.com"
// credentials
def image_pull_secret = "registry-pull-secret"
def harbor_registry_auth = "05a90138-03df-4ec7-bfc6-f335566c263a"
def git_auth = "1e6ac63f-3646-4385-b4a3-da8114013945"
// ConfigFileProvider ID
def k8s_auth = "16904ea6-9724-4364-bffb-394f7af1d881"

pipeline {
  agent {
    kubernetes {
        label "jenkins-slave"
        yaml """
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-slave
spec:
  containers:
  - name: jnlp
    image: "${registry}/library/jenkins-slave-jdk:1.8"
    imagePullPolicy: Always
    volumeMounts:
      - name: docker-cmd
        mountPath: /usr/bin/docker
      - name: docker-sock
        mountPath: /var/run/docker.sock
      - name: maven-cache
        mountPath: /root/.m2
  volumes:
    - name: docker-cmd
      hostPath:
        path: /usr/bin/docker
    - name: docker-sock
      hostPath:
        path: /var/run/docker.sock
    - name: maven-cache
      hostPath:
        path: /tmp/m2
"""
        }

      }
    parameters {
        gitParameter branch: '', branchFilter: '.*', defaultValue: '', description: 'Branch to release', name: 'Branch', quickFilterEnabled: false, selectedValue: 'NONE', sortMode: 'NONE', tagFilter: '*', type: 'PT_BRANCH'
        extendedChoice defaultValue: 'none', description: 'Microservices to release', \
          multiSelectDelimiter: ',', name: 'Service', type: 'PT_CHECKBOX', \
          value: 'gateway-service:9999,portal-service:8080,product-service:8010,order-service:8020,stock-service:8030'
        choice (choices: ['ms', 'demo'], description: 'Deployment template', name: 'Template')
        choice (choices: ['1', '3', '5', '7'], description: 'Replica count', name: 'ReplicaCount')
        choice (choices: ['ms'], description: 'Namespace', name: 'Namespace')
    }
    stages {
        stage('Pull code'){
            steps {
                checkout([$class: 'GitSCM', 
                branches: [[name: "${params.Branch}"]], 
                doGenerateSubmoduleConfigurations: false, 
                extensions: [], submoduleCfg: [], 
                userRemoteConfigs: [[credentialsId: "${git_auth}", url: "${git_url}"]]
                ])
            }
        }
        stage('Compile code') {
            // compile the selected services
            steps {
                sh """
                  mvn clean package -Dmaven.test.skip=true
                """
            }
        }
        stage('Build images') {
          steps {
              withCredentials([usernamePassword(credentialsId: "${harbor_registry_auth}", passwordVariable: 'password', usernameVariable: 'username')]) {
                sh """
                 docker login -u ${username} -p '${password}' ${registry}
                 for service in \$(echo ${Service} |sed 's/,/ /g'); do
                    service_name=\${service%:*}
                    image_name=${registry}/${project}/\${service_name}:${BUILD_NUMBER}
                    cd \${service_name}
                    if ls |grep biz &>/dev/null; then
                        cd \${service_name}-biz
                    fi
                    docker build -t \${image_name} .
                    docker push \${image_name}
                    cd ${WORKSPACE}
                  done
                """
                configFileProvider([configFile(fileId: "${k8s_auth}", targetLocation: "admin.kubeconfig")]){
                    sh """
                    # create the image pull secret
                    kubectl create secret docker-registry ${image_pull_secret} --docker-username=${username} --docker-password=${password} --docker-server=${registry} -n ${Namespace} --kubeconfig admin.kubeconfig || true
                    # add the private chart repo
                    helm repo add  --username ${username} --password ${password} myrepo http://${registry}/chartrepo/${project}
                    """
                }
              }
          }
        }
        stage('Deploy to K8s with Helm') {
          steps {
              sh """
              common_args="-n ${Namespace} --kubeconfig admin.kubeconfig"

              for service in  \$(echo ${Service} |sed 's/,/ /g'); do
                service_name=\${service%:*}
                service_port=\${service#*:}
                image=${registry}/${project}/\${service_name}
                tag=${BUILD_NUMBER}
                helm_args="\${service_name} --set image.repository=\${image} --set image.tag=\${tag} --set replicaCount=${ReplicaCount} --set imagePullSecrets[0].name=${image_pull_secret} --set service.targetPort=\${service_port} myrepo/${Template}"

                # check whether this is a first-time deployment
                if helm history \${service_name} \${common_args} &>/dev/null;then
                  action=upgrade
                else
                  action=install
                fi

                # enable ingress for the edge services
                if [ \${service_name} == "gateway-service" ]; then
                  helm \${action} \${helm_args} \
                  --set ingress.enabled=true \
                  --set ingress.host=${gateway_domain_name} \
                   \${common_args}
                elif [ \${service_name} == "portal-service" ]; then
                  helm \${action} \${helm_args} \
                  --set ingress.enabled=true \
                  --set ingress.host=${portal_domain_name} \
                   \${common_args}
                else
                  helm \${action} \${helm_args} \${common_args}
                fi
              done
              # check pod status
              sleep 10
              kubectl get pods \${common_args}
              """
          }
        }
    }
}
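
The deploy stage above chooses between `helm install` and `helm upgrade` by probing `helm history` for an existing release. A stubbed sketch of just that branch; the `helm` shell function below fakes the probe so the logic can run without a cluster:

```shell
#!/bin/sh
# Stub standing in for the real helm binary: pretend only
# gateway-service already has a release history.
helm() {
  [ "$1" = "history" ] && [ "$2" = "gateway-service" ]
}

# Same decision as the pipeline: an existing history means an
# upgrade, otherwise a first-time install.
choose_action() {
  if helm history "$1" >/dev/null 2>&1; then
    echo upgrade
  else
    echo install
  fi
}

echo "$(choose_action gateway-service)"   # upgrade
echo "$(choose_action order-service)"     # install
```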

Run the release as follows:

The verification is as follows (k8s cluster view):


$ kubectl get pod -n ms
NAME                                  READY   STATUS    RESTARTS   AGE
eureka-0                              1/1     Running   3          2d23h
eureka-1                              1/1     Running   3          2d23h
eureka-2                              1/1     Running   3          2d23h
ms-gateway-service-659c8596b6-t4hk7   0/1     Running   0          48s
ms-order-service-7cfc4d4b74-bxbsn     0/1     Running   0          46s
ms-portal-service-655c968c4-9pstl     0/1     Running   0          47s
ms-product-service-68674c7f44-8mghj   0/1     Running   0          47s
ms-stock-service-85676485ff-7s5rz     0/1     Running   0          45s


Store the Jenkinsfile in GitLab

Principle: Jenkins reads the pipeline file from GitLab, so the pipeline itself is version-controlled and the release process stays automated.

Configure the Jenkins pipeline from Git

Origin: blog.51cto.com/13812615/2536254