Delivery practice of a cloud-native privatized PaaS platform

Author: Niu Yufu, expert engineer at a well-known Internet company. He loves open source, is keen on sharing, and has done in-depth research on K8s and Go gateways.

This article explains how we use cloud-native technology to solve the problems of privatized delivery and build a PaaS platform that improves the reusability of our business platform. Before getting into the topic, two key terms need to be clarified:

  • PaaS platform: multiple core business services are packaged as a whole and provided to customers in the form of a platform.
  • Privatized delivery: the platform must be deployed in a private cloud environment and must keep working without Internet access.

Traditional delivery pain points

As shown above, a private cloud has clear security requirements:

  1. Private cloud services cannot connect to the external network; data can only be ferried to the private cloud over the internal network through a one-way gatekeeper.
  2. Source code can only be stored in the company's own machine room; only compiled artifacts are deployed on the private cloud.
  3. Services iterate from time to time, so independent business monitoring must be built to guarantee service stability.

Based on these requirements, the main challenges are:

  1. Poor architecture portability: inter-service configuration is complex, configuration files have to be modified for multiple heterogeneous languages, and there is no fixed service DNS.
  2. High deployment and operation cost: the environments that services depend on must support offline installation, and service updates have to be completed manually by on-site operations staff; in complex scenarios a full deployment can take several person-months.
  3. High monitoring and maintenance cost: monitoring needs to cover the system, service, and business levels, and notifications need to support SMS, webhook, and other channels.

Architecture scheme

Our principle is to embrace cloud native and reuse existing capabilities, adopting mature, industry-proven solutions wherever possible. We use KubeSphere + K8s for service orchestration; for security and simplicity we did secondary development on Syncd to provide complete DevOps capability; and the monitoring system uses Nightingale + Prometheus.

The architecture is shown above:

  1. The blue box is our underlying PaaS cluster. We unified the orchestration and upgrades of business services and common services there, which solves the problem of poor architecture portability.
  2. In the red box, the monitoring system runs as an orchestrated service, and all monitoring items are configured before delivery, which addresses the high operation and maintenance cost of monitoring.
  3. In the purple box, service containers can be pulled and deployed automatically across network segments, which addresses the high cost of service deployment.

Below we will introduce these three parts.

Service Orchestration: KubeSphere

KubeSphere's vision is to build a cloud-native distributed operating system with K8s at its core, providing unified distribution and O&M management of cloud-native applications across multiple clouds and clusters. It also has an active community.

The reasons for choosing KubeSphere are as follows:

Customizing our own privatized delivery solution based on the product

Privatized image file packaging

Create an artifact manifest:

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: sample
spec:
  arches:
  - amd64
...
  - type: kubernetes
    version: v1.21.5
  components:
    helm:
      version: v3.6.3
    cni:
      version: v0.9.1
    etcd:
      version: v3.4.13
    containerRuntimes:
    - type: docker
      version: 20.10.8
    crictl:
      version: v1.22.0
    harbor:
      version: v2.4.1
    docker-compose:
      version: v2.2.2
  images:
  - dockerhub.kubekey.local/kubesphere/kube-apiserver:v1.22.1
...

Then export the artifact with the following command:

$ ./kk artifact export -m manifest-sample.yaml -o kubesphere.tar.gz

Private deployment

Create a cluster configuration file:

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: kubesphere01.ys, address: 10.89.3.12, internalAddress: 10.89.3.12, user: kubesphere, password: "Kubesphere123"}
  - {name: kubesphere02.ys, address: 10.74.3.25, internalAddress: 10.74.3.25, user: kubesphere, password: "Kubesphere123"}
  - {name: kubesphere03.ys, address: 10.86.3.66, internalAddress: 10.86.3.66, user: kubesphere, password: "Kubesphere123"}
  - {name: kubesphere04.ys, address: 10.86.3.67, internalAddress: 10.86.3.67, user: kubesphere, password: "Kubesphere123"}
  - {name: kubesphere05.ys, address: 10.86.3.11, internalAddress: 10.86.3.11, user: kubesphere, password: "Kubesphere123"}
  roleGroups:
    etcd:
    - kubesphere01.ys
    - kubesphere02.ys
    - kubesphere03.ys
    control-plane:
    - kubesphere01.ys
    - kubesphere02.ys
    - kubesphere03.ys
    worker:
    - kubesphere05.ys
    registry:
    - kubesphere04.ys
  controlPlaneEndpoint:
    internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.21.5
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    multusCNI:
      enabled: false
  registry:
    type: harbor
    auths:
      "dockerhub.kubekey.local":
        username: admin
        password: Kubesphere123
...

Execute the installation and deployment:

$ ./kk create cluster -f config-sample.yaml -a kubesphere.tar.gz --with-packages --with-kubesphere --skip-push-images

With this approach, the complex K8s deployment, the high-availability setup, the private Harbor image registry, and so on are all installed automatically, which greatly simplifies deploying K8s components in privatized delivery scenarios.
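As a quick sanity check after installation (standard kubectl commands; by default the KubeSphere console is exposed on NodePort 30880):

$ kubectl get nodes -o wide
$ kubectl get pods -A | grep -E 'kube-system|kubesphere'
# KubeSphere web console: http://<any-node-ip>:30880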

The visual interface greatly simplifies the operation process

  • Create a deployment: wizard-style creation of the deployment, storage, and service access for a container service.

  • Resource limits: restrict resource usage per container and per tenant (project); see the ResourceQuota sketch after this list.

  • Remote login: log in to running containers remotely from the console.
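Under the hood, tenant-level limits map to standard K8s objects such as ResourceQuota. A minimal sketch (the namespace and the quota values are assumptions, not taken from the original setup):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: project-quota
  namespace: project
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi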

Business deployment experience based on KubeSphere

In the privatization scenario, service instances are deployed in a highly available way so that the failure of a single instance does not affect overall availability. The following points must be ensured.

1. Since services need a fixed network identity and storage, we create a StatefulSet ("stateful replica set") workload.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: project
  name: ${env_project_name}
  labels:
    app: ${env_project_name}
spec:
  serviceName: ${env_project_name}
  replicas: 1
  selector:
    matchLabels:
      app: ${env_project_name}
  template:
    metadata:
      labels:
        app: ${env_project_name}
    spec:
      containers:
        - name: ${env_project_name}
          image: ${env_image_path}
          imagePullPolicy: IfNotPresent

2. The StatefulSet uses host-level pod anti-affinity to ensure that replicas are spread across different hosts.

....
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - ${env_project_name}
          topologyKey: kubernetes.io/hostname
....

3. Calls between services are configured using the built-in DNS of K8s, as shown in the sketch below.
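For example, a caller can reference other services by their cluster DNS names of the form <service>.<namespace>.svc.cluster.local instead of hard-coded IPs. A sketch (the environment-variable names are assumptions; the service and namespace names follow the examples in this article):

...
      containers:
        - name: abc-service
          ...
          env:
            # resolved by the cluster DNS (CoreDNS)
            - name: GATEKEEPER_ADDR
              value: "gatekeeper.project.svc.cluster.local:8000"
            - name: REDIS_ADDR
              value: "redis-cluster.project.svc.cluster.local:6379"
...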

4. When the cluster depends on external resources, define them as a Service (backed by manually specified Endpoints) so that they can be consumed inside the cluster:

kind: Endpoints
apiVersion: v1
metadata:
  name: redis-cluster
  namespace: project
subsets:
  - addresses:
      - ip: 10.86.67.11
    ports:
      - port: 6379
---
kind: Service
apiVersion: v1
metadata:
  name: redis-cluster
  namespace: project
spec:
  ports:
    - protocol: TCP
      port: 6379
      targetPort: 6379

5. Use nip.io domain names for dynamic DNS resolution while debugging. nip.io automatically resolves any hostname that embeds an IP address to that IP, so no extra DNS records are required.

$ nslookup abc-service.project.10.86.67.11.nip.io
Server:         169.254.25.10
Address:        169.254.25.10:53

Non-authoritative answer:
Name:   abc-service.project.10.86.67.11.nip.io
Address: 10.86.67.11

So we can use this domain name directly when building the Ingress:

---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: gatekeeper
  namespace: project
spec:
  rules:
    - host: gatekeeper.project.10.86.67.11.nip.io
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: gatekeeper
                port:
                  number: 8000

6. Mount a host directory into the container. Sometimes a container needs direct access to a directory on the host; the configuration is as follows.

...
spec:
    spec:
...
          volumeMounts:
            - name: vol-data
              mountPath: /home/user/data1
      volumes:
        - name: vol-data
          hostPath:
            path: /data0

7. A complete stateful workload mainly involves a StatefulSet (with volumeClaimTemplates), a Service, and an Ingress. A full example follows:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: project
  name: gatekeeper
  labels:
    app: gatekeeper
spec:
  serviceName: gatekeeper
  replicas: 1
  selector:
    matchLabels:
      app: gatekeeper
  template:
    metadata:
      labels:
        app: gatekeeper
    spec:
      containers:
        - name: gatekeeper
          image: dockerhub.kubekey.local/project/gatekeeper:v362
          imagePullPolicy: IfNotPresent
          ports:
            - name: http-8000
              containerPort: 8000
              protocol: TCP
            - name: http-8080
              containerPort: 8080
              protocol: TCP
          resources:
            limits:
              cpu: '2'
              memory: 4Gi
          volumeMounts:
            - name: vol-data
              mountPath: /home/user/data1
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - gatekeeper
                topologyKey: kubernetes.io/hostname
  volumeClaimTemplates:
    - metadata:
        name: vol-data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 10Gi
---
apiVersion: v1
kind: Service
metadata:
  name: gatekeeper
  namespace: project
  labels:
    app: gatekeeper
spec:
  ports:
    - name: "http-8000"
      protocol: TCP
      port: 8000
      targetPort: 8000
    - name: "http-8080"
      protocol: TCP
      port: 8080
      targetPort: 8080
  selector:
    app: gatekeeper
  type: NodePort
---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: gatekeeper
  namespace: project
spec:
  rules:
    - host: gatekeeper.project.10.86.67.11.nip.io
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: gatekeeper
                port:
                  number: 8000
    - host: gatekeeper.project.10.86.68.66.nip.io
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: gatekeeper
                port:
                  number: 8080

DevOps: Building automated service delivery based on Syncd

There are many options for DevOps. We did not choose Jenkins, GitLab Runner, and the like; instead we did secondary development on Syncd, which our team is already familiar with. There are two reasons:

  1. Security: our source code cannot be stored in the delivery environment, so GitLab-based build-and-package solutions are of little use to us there and would waste resources.
  2. Functional simplicity: although Syncd has not been actively maintained for more than two years, its core CI/CD functions are complete and its front end and back end are highly extensible, so we can easily add the features we need.

The core idea of Syncd:

  1. Build and package the image with a local toolchain; here docker push plays the role that git push plays in an ordinary workflow.
  2. Syncd then pulls the image package to complete packaging, release, and deployment; a version number is set at packaging time so the service can be rolled back easily. A short illustration follows.
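As a concrete illustration (the tags and the kubectl rollback step are assumptions, not Syncd's built-in commands), versioned images make a rollback a matter of pointing the workload back at an earlier tag:

# push a versioned image -- the "git push" of this workflow
$ docker tag project1/abc-service:latest dockerhub.kubekey.local/project1/abc-service:v1.2.0
$ docker push dockerhub.kubekey.local/project1/abc-service:v1.2.0

# roll back by redeploying the previous version
$ kubectl -n project set image statefulset/abc-service abc-service=dockerhub.kubekey.local/project1/abc-service:v1.1.0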

Build a local toolchain

1. Create a directory under the project

# create the directory
cd /Users/niuyufu/goproject/abc-service
mkdir -p devops
cd devops

2. Import a Dockerfile; you can create one yourself based on your own business.
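Purely for reference, a minimal Dockerfile sketch (the base image, paths, and entry point are assumptions, not from the original article; docker_build in tool.sh below copies the compiled ../output directory into the build context first):

# base image and paths are assumptions -- adjust to your own runtime
FROM centos:7
WORKDIR /home/user
# build.sh puts compiled artifacts into ./output; docker_build copies them into the build context
COPY output/ /home/user/
EXPOSE 8032
# replace with your actual service binary
CMD ["/home/user/abc-service"]

3. Create the tool.sh file: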

cat > tool.sh << 'EOF'
#!/bin/bash

########### configuration ##############

# module name, change as needed
module=abc-service
# project name
project=project1
# container name
container_name=${project}"_"${module}
# image name
image_name=${project}"/"${module}
# port mapping: host_port:container_port, comma-separated for multiple mappings
port_mapping=8032:8032
# image registry (hub) address
image_hub=dockerhub.kubekey.local
# image tag
image_tag=latest

########### configuration ##############

# build tool
action=$1
case $action in
"docker_push")
  image_path=${image_hub}/${image_name}:${image_tag}
  docker tag ${image_name}:${image_tag} ${image_path}
  docker push ${image_path}
  echo "Image pushed, image_path: "${image_path}
  ;;
"docker_login")
  container_id=$(docker ps -a | grep ${container_name} | awk '{print $1}')
  docker exec -it ${container_id} /bin/sh
  ;;
"docker_stop")
  docker ps -a | grep ${container_name} | awk '{print $1}' | xargs docker stop
  container_id=$(docker ps -a | grep ${container_name} | awk '{print $1}' | xargs docker rm)
  if [ "$container_id" != "" ];then
    echo "Container stopped, container_id: "${container_id}
  fi

  # remove the local image as well, if present
  images_id=$(docker images -q ${image_name})
  if [ "$images_id" != "" ];then
    docker rmi ${images_id}
  fi

  ;;
"docker_run")
  docker ps -a | grep ${container_name} | awk '{print $1}' | xargs docker stop
  docker ps -a | grep ${container_name} | awk '{print $1}' | xargs docker rm
  port_mapping_array=(${port_mapping//,/ })
  # shellcheck disable=SC2068
  for var in ${port_mapping_array[@]}; do
    port_mapping_str=${port_mapping_str}" -p "${var}
  done
  container_id=$(docker run -d ${port_mapping_str} --name=${container_name} ${image_name})
  echo "Container started, container_id: "${container_id}
  ;;
"docker_build")
  if [ ! -d "../output" ]; then
    echo "../output directory does not exist, please run ../build.sh first"
    exit 1
  fi
  cp -rf ../output ./
  docker build -f Dockerfile -t ${image_name} .
  rm -rf ./output
  echo "Image built successfully, image_name: "${image_name}
  ;;
*)
  echo "Available commands:
docker_build    build the image; depends on the ../output directory
docker_run      start the container; depends on docker_build
docker_login    log in to the container; depends on docker_run
docker_push     push the image; depends on docker_build"
  exit 1
  ;;
esac
EOF

4. Build and package the project; make sure the build output lands in ./output:

$ cd ~/goproject/abc-service/
$ sh build.sh
abc-service build ok
make output ok
build done

5. Use tool.sh for service debugging

The typical execution order of tool.sh is: produce ./output → docker_build → docker_run → docker_login → docker_push.

$ cd devops
$ chmod +x tool.sh
# list the available commands
$ sh tool.sh
Available commands:
docker_build    build the image; depends on the ../output directory
docker_run      start the container; depends on docker_build
docker_login    log in to the container; depends on docker_run
docker_push     push the image; depends on docker_build
 
 
# docker_build example:
$ sh tool.sh docker_build
[+] Building 1.9s (10/10) FINISHED
 => [internal] load build definition from Dockerfile                                                                                      0.1s
 => => transferring dockerfile: 37B                                                                                                       0.0s
 => [internal] load .dockerignore                                                                                                         0.0s
 => => transferring context: 2B
...                                                                   0.0s
 => exporting to image                                                                                                                    0.0s
 => => exporting layers                                                                                                                   0.0s
 => => writing image sha256:0a1fba79684a1a74fa200b71efb1669116c8dc388053143775aa7514391cdabf                                              0.0s
 => => naming to docker.io/project/abc-service                                                                                         0.0s
 
Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
Image built successfully, image_name: project/abc-service
 
 
# docker_run example:
$ sh tool.sh docker_run
6720454ce9b6
6720454ce9b6
Container started, container_id: e5d7c87fa4de9c091e184d98e98f0a21fd9265c73953af06025282fcef6968a5
 
 
# use docker_login to log in to the container for code debugging:
$ sh tool.sh docker_login
sh-4.2# sudo -i
root@e5d7c87fa4de:~$
 
 
# docker_push example:
$ sh tool.sh docker_push
The push refers to repository [dockerhub.kubekey.local/citybrain/gatekeeper]
4f3c543c4f39: Pushed
54c83eb651e3: Pushed
e4df065798ff: Pushed
26f8c87cc369: Pushed
1fcdf9b8f632: Pushed
c02b40d00d6f: Pushed
8d07545b8ecc: Pushed
ccccb24a63f4: Pushed
30fe9c138e8b: Pushed
6ceb20e477f1: Pushed
76fbea184065: Pushed
471cc0093e14: Pushed
616b2700922d: Pushed
c4af1604d3f2: Pushed
latest: digest: sha256:775e7fbabffd5c8a4f6a7c256ab984519ba2f90b1e7ba924a12b704fc07ea7eb size: 3251
Image pushed, image_path: dockerhub.kubekey.local/citybrain/gatekeeper:latest

# finally, log in to Harbor to verify that the image was uploaded:
https://dockerhub.kubekey.local/harbor/projects/52/repositories/gatekeeper

Service packaging and building based on Syncd

1. Project configuration

Add a project.

Set the image address generated by tool.sh.

Set up the build script.

Fill in the build script with reference to the stateful workload shown earlier (see the sketch after these steps).

2. Create a release (go-live) order

3. Build the deployment package and execute the deployment

4. Switch to KubeSphere to view the deployment effect.
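The article does not show the build script itself. As a hedged sketch (envsubst, the template file name, and the VERSION variable are assumptions; the placeholders mirror the StatefulSet template shown earlier), the deployment step run by Syncd could render the template and apply it:

#!/bin/bash
# hypothetical deploy step executed by Syncd after the image has been pushed
export env_project_name=abc-service
export env_image_path=dockerhub.kubekey.local/project1/abc-service:${VERSION:-latest}
# statefulset.tpl.yaml is the stateful-workload template containing ${env_project_name}/${env_image_path}
envsubst < statefulset.tpl.yaml | kubectl -n project apply -f -
kubectl -n project rollout status statefulset/${env_project_name}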

At this point, DevOps and KubeSphere are fully connected.

Service Monitoring: Building Enterprise-Level Monitoring Based on Nightingale

Reasons for selection

  1. Visualization engine: built-in templates, ready to use out of the box.

  2. Alerting and analysis engine: flexible management, alert self-healing, out of the box.

  3. Helm Chart support for one-click deployment of the application and its services; in the privatized scenario we only need to take care of localizing the container images:

$ git clone https://github.com/flashcatcloud/n9e-helm.git
$ helm install nightingale ./n9e-helm -n n9e --create-namespace

Actual rule configuration demo

  1. Configure alert rules; PromQL is supported natively, so all kinds of rules can be written flexibly (see the sketch after this list).

  2. Configure an alert receiving group.

  3. Receive the actual alert and recovery messages.
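For reference only (these expressions are illustrative, not the exact rules used in this deployment), typical service-level PromQL alert expressions might look like:

# container memory usage above 90% of its limit
container_memory_working_set_bytes / container_spec_memory_limit_bytes > 0.9

# a container restarted within the last 10 minutes
increase(kube_pod_container_status_restarts_total[10m]) > 0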

Summary

In privatized delivery, the choice of cloud-native applications varies with the business scenario; this article only covers our own. If anything is wrong, corrections are welcome, and I am happy to discuss cloud-native applications in other scenarios at any time.

This article is published by OpenWrite, a multi-channel blog publishing platform.

Origin: my.oschina.net/u/4197945/blog/5556886