Application practice of KubeSphere in the Internet medical industry

Author: Yuxuan Cibai, an O&M R&D engineer currently focusing on cloud native, Kubernetes, containers, Linux, and operations automation.

Preface

In 2020, China's Internet healthcare companies ushered in their "breakout year". During home isolation, more and more residents who could not visit hospitals in person turned to online diagnosis and treatment. Yet the rapid growth of Internet healthcare also exposed shortcomings. As a development trend in the medical industry, Internet healthcare has much to offer in easing the tension between China's unevenly distributed medical resources and people's growing health needs. Whether residents' access to care can be effectively improved, and whether enterprises can develop sustainably, are questions of great concern to both the state and the industry. Our company has been on this path for many years, committed to Internet medical services, with its own complete medical product platform and technology stack.

Project Description

The goal of building

The third-party customer's business environment is a self-built IDC data center providing virtualized server resources; the plan is to introduce Kubernetes to meet the needs of the Internet healthcare business.

Current state of technology

The customer's existing architecture can no longer keep up with the growing business volume and lacks a complete, flexible technical architecture.

Platform architecture diagram

Reference logical architecture diagram of the online platform

(Figure: logical architecture of the online platform)

The diagram above shows the architecture of our production environment, logically divided into four major sections.

DevOps CI/CD Platform

Most readers already know the common open-source CI/CD automation tools. The ones I am personally familiar with include Jenkins, GitLab, Spug, and KubeSphere. KubeSphere, which I will introduce next, can also handle enterprise-grade CI/CD continuous delivery.

Kubernetes cluster

Due to business needs, the test and production environments are separated to avoid mutual interference. As shown in the figure above, there are three Master nodes and five Worker nodes. The Master nodes are tainted so that ordinary Pods cannot be scheduled onto them, avoiding excessive load on the control plane. The test environment cluster is smaller: the same number of Master nodes but only two Worker nodes, which is sufficient since it is only for testing.
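The unschedulable-master behavior comes from a node taint. A minimal sketch of how it is set and how it appears in the node spec (the exact taint key may differ between Kubernetes versions and distributions):

```yaml
# Equivalent to: kubectl taint nodes <master-node> node-role.kubernetes.io/master=:NoSchedule
# As it appears in the node spec:
spec:
  taints:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
```

Pods without a matching toleration will then never be scheduled onto the Master nodes.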

Underlying storage environment

We did not deploy the underlying storage in containers but in the traditional way, for the sake of performance: in Internet business, storage services must meet certain performance requirements to cope with high-concurrency scenarios, so deploying them on bare-metal servers is the best choice. MySQL, Redis, and NFS are all deployed in high-availability setups to avoid single points of failure. NFS serves as the backing store for the KubeSphere StorageClass. There are many StorageClass options, such as Ceph and OpenEBS; all are open-source storage solutions that KubeSphere can use, and Ceph in particular is favored by many large Internet companies. You may ask why I chose NFS instead of Ceph. In tool selection there is only the most suitable, not the best: choose whatever fits your business type, not whichever tool others say is more popular.
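As a sketch of what an NFS-backed StorageClass might look like (the provisioner name and parameters are assumptions that depend on which NFS provisioner is deployed, e.g. nfs-subdir-external-provisioner):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
  annotations:
    # make this the default class so PVCs without an explicit class use NFS
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: cluster.local/nfs-subdir-external-provisioner  # assumed provisioner name
parameters:
  archiveOnDelete: "false"
reclaimPolicy: Delete
```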

Distributed monitoring platform

A complete Internet application platform naturally needs monitoring and alerting. The familiar Nagios, Zabbix, and Cacti were once well-established monitoring systems, but they are gradually exiting the stage. Today Prometheus stands out and is favored by major Internet companies; combined with Grafana it is genuinely excellent, so I chose it for this architecture without hesitation.

Background introduction

The customer's existing platform lacks a complete technical architecture, making business version updates and iteration difficult. Both the business and the technology platform face serious bottlenecks and can no longer support the existing business system, so to avoid losing users a complete architecture needs to be redesigned. Internet technology keeps iterating: as Kubernetes grows more popular, KubeSphere has emerged alongside it. The rise of one technology drives the development of its whole ecosystem, and I believe KubeSphere can bring far more value and convenience than you might imagine.

Selection instructions

After the Kubernetes cluster is built, we face the question of how internal R&D staff will manage and maintain it: how developers release and launch their business code as new requirements drive version iteration, and how to better analyze, locate, and handle problems when they arise. Would we really ask developers to log in to servers and type commands at a shell? To solve these problems, we need to introduce a Dashboard-style management platform.

Reasons for choosing KubeSphere

KubeSphere provides enterprise users with high-performance, scalable container application management services, helping enterprises complete digital transformation driven by a new generation of Internet technology, accelerate application iteration and business delivery, and meet growing business needs. The four advantages of KubeSphere I value most are as follows:

1. Unified management of multiple clusters

With the growing popularity of container applications, enterprises deploy multiple clusters across clouds or on-premises, and cluster management grows more complex. To meet users' need for unified management of multiple heterogeneous clusters, KubeSphere provides a multi-cluster management function that helps users manage, monitor, import, and operate clusters across regions and clouds, comprehensively improving the user experience.

Multi-cluster functionality can be enabled before or after installing KubeSphere. Specifically, this function has two major features:

  • Unified management: Users can use direct or indirect connections to import Kubernetes clusters. With simple configuration, the entire process can be completed in minutes on KubeSphere's interactive web console. After the cluster is imported, users can monitor the cluster status and operate and maintain cluster resources through the unified central control plane.
  • High availability: In a multi-cluster architecture, one cluster can run the primary service and the other cluster acts as a backup cluster. Once the primary cluster goes down, the backup cluster can quickly take over related services. In addition, when clusters are deployed across regions, in order to minimize latency, requests can be sent to the nearest cluster, thereby achieving high availability across regions and clusters.
2. Powerful observability capabilities

KubeSphere's observability function was fully upgraded in v3.0, further optimizing important components including monitoring, logging, audit events, and alert notifications. Users can view various types of data across the platform through KubeSphere's monitoring system, whose main advantages include:

  • Custom configuration: Users can customize the monitoring panel for the application, with a variety of templates and chart modes to choose from. Users can add the indicators they want to monitor as needed and even choose the color the indicators display on the chart. In addition, alarm policies and rules can also be customized, including alarm intervals, times, and thresholds.
  • Full-dimensional data monitoring and query: KubeSphere provides full-dimensional resource monitoring data, completely liberating the operation and maintenance team from complex data recording work. It is also equipped with an efficient notification system and supports multiple notification channels. Based on KubeSphere's multi-tenant management system, different tenants can query corresponding monitoring logs and audit events on the console, supporting keyword filtering, fuzzy matching and exact matching.
  • Graphical interactive interface design: KubeSphere provides users with a graphical web console to facilitate monitoring of various resources from different dimensions. Resource monitoring data will be displayed on interactive charts, recording resource usage in the cluster in detail. Resources at different levels can be sorted according to usage, making it easier for users to compare and analyze data.
  • High-precision second-level monitoring: The entire monitoring system provides second-level monitoring data to help users quickly locate component abnormalities. In addition, all audit events will be accurately recorded in KubeSphere to facilitate subsequent data analysis.
3. Automated DevOps CI/CD process mechanism

Automation is an important part of implementing DevOps. Automatic and streamlined pipelines provide good conditions for users to deliver applications through the CI/CD process.

  • Integrating Jenkins: The KubeSphere DevOps system has built-in Jenkins as an engine and supports a variety of third-party plug-ins. In addition, Jenkins provides a good environment for extended development. The entire workflow of the DevOps team can be seamlessly connected on a unified platform, including development testing, build deployment, monitoring logs and notifications, etc. KubeSphere accounts can be used to log in to the built-in Jenkins to meet the enterprise's needs for CI/CD pipelines and unified authentication multi-tenant isolation.
  • Convenient built-in tools: Users can quickly adopt the automation tools, including Binary-to-Image and Source-to-Image, without deep knowledge of how Docker or Kubernetes works underneath. Simply set the image registry address and upload the artifact (such as a JAR/WAR/binary), and the corresponding service is automatically published to Kubernetes without writing a Dockerfile.
4. Fine-grained permission control

KubeSphere provides users with different levels of permission control, including clusters, enterprise spaces, and projects. Users with specific roles can operate corresponding resources.

  • Custom roles: In addition to the system's built-in roles, KubeSphere supports custom roles. Users can assign different permissions to roles so that each tenant is responsible only for its defined scope and is not affected by unrelated resources, meeting the enterprise's requirements for distributing work among tenants. For security, tenants at different levels are completely isolated from each other: they share some resources without interfering, and tenant networks are also fully isolated to guarantee data security.
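KubeSphere's multi-level roles build on Kubernetes RBAC underneath. As a minimal illustration of that underlying primitive (the names here are hypothetical):

```yaml
# A namespace-scoped role that can only view Pods in the middleware namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: middleware
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
```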

Practice process

Infrastructure construction and planning

After the underlying cluster environment was ready, we had to address continuous integration and delivery (CI/CD). To ensure that production services are successfully containerized onto Kubernetes and remain stable and controllable later, I adopted the following plan:

  • Step 1: Deploy the test and production environments on the IDC virtualization platform in parallel, and deploy a Kubernetes cluster in binary mode on each of the two sets of server resources.
  • Step 2: On top of each Kubernetes cluster, deploy the KubeSphere cloud native management platform in minimal mode, so that both Kubernetes clusters are managed by KubeSphere.
  • Step 3: Build the DevOps CI/CD pipeline mechanism: deploy Jenkins, Harbor, and the Git platform as an integrated pipeline platform inside KubeSphere in Deployment mode.
  • Step 4: Configure the Pipeline scripts and integrate Jenkins with both Kubernetes clusters so that business updates and iterations can be released and launched normally.
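For step 2, KubeSphere's documented minimal installation on an existing cluster is two kubectl applies (version pinned here to the 3.3.2 mentioned later; verify the URLs against the current KubeSphere documentation):

```shell
# Install KubeSphere (minimal mode) on an existing Kubernetes cluster
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/cluster-configuration.yaml

# Watch the installer logs until the console address is printed
kubectl logs -n kubesphere-system \
  "$(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}')" -f
```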

DevOps CI/CD process analysis:

(Figure: DevOps CI/CD process)

  • Stage 1: Checkout SCM: check out the source code from the Git repository.
  • Stage 2: Unit test: the pipeline proceeds to the next stage only after the tests pass.
  • Stage 3: SonarQube analysis: SonarQube code quality analysis (optional).
  • Stage 4: Build and push the snapshot image: build the image from the branch selected in the strategy settings, tag it SNAPSHOT-$BRANCH_NAME-$BUILD_NUMBER, and push it to Docker Hub, where $BUILD_NUMBER is the run's serial number in the pipeline activity list.
  • Stage 5: Push the latest image: tag the SonarQube branch image as latest and push it to the Harbor registry.
  • Stage 6: Deploy to the development environment: deploy the SonarQube branch to the development environment; this stage requires review.
  • Stage 7: Push with tag: generate a tag and publish it to Git; the tag is pushed to the Harbor registry.
  • Stage 8: Deploy to production: deploy the released tag to the production environment.

Online DevOps pipeline reference

(Figure: online DevOps pipeline)

The stateless services in KubeSphere are shown in the figure below, including the application layer's front-end and back-end services. In addition, MinIO is deployed as a container in Deployment mode.

(Figure: stateless services)

Stateful services are mainly infrastructure services, as shown in the figure below. For MySQL, Redis, and the like, I still chose virtual machine deployment; RocketMQ is a special case and is deployed as a StatefulSet.

(Figure: stateful services)
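A trimmed sketch of what a StatefulSet for stateful middleware such as RocketMQ looks like (image, ports, and storage values are placeholders, not our production configuration):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rocketmq-namesrv
  namespace: middleware
spec:
  serviceName: rocketmq-namesrv   # headless Service gives each replica a stable DNS name
  replicas: 2
  selector:
    matchLabels:
      app: rocketmq-namesrv
  template:
    metadata:
      labels:
        app: rocketmq-namesrv
    spec:
      containers:
        - name: namesrv
          image: apache/rocketmq:4.9.4   # placeholder tag
          ports:
            - containerPort: 9876
          volumeMounts:
            - name: data
              mountPath: /home/rocketmq/store
  volumeClaimTemplates:               # each replica gets its own PVC from the StorageClass
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: nfs-client  # placeholder
        resources:
          requests:
            storage: 10Gi
```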

Corporate practical cases

Define Deployment resource yaml file

This resource needs to be defined in Git. When the KubeSphere DevOps pipeline runs the deployment stage, it calls this YAML resource to update and iterate the service.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: boot-preject
  name: boot-preject
  namespace: middleware   # define the target namespace
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  selector:
    matchLabels:
      app: boot-preject
  strategy:
    rollingUpdate:
      maxSurge: 50%
      maxUnavailable: 50%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: boot-preject
    spec:
      imagePullSecrets:
        - name: harbor
      containers:
        - image: $REGISTRY/$HARBOR_NAMESPACE/boot-preject:SNAPSHOT-$BUILD_NUMBER # image registry address plus KubeSphere build variables
          imagePullPolicy: Always
          name: app
          ports:
            - containerPort: 8080
              protocol: TCP
          resources:
            limits:
              cpu: 300m
              memory: 600Mi
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      terminationGracePeriodSeconds: 30

Define pipeline credentials:

(Figure: pipeline credentials)

1. Create a new DevOps project

(Figure: creating a DevOps project)

2. Create pipeline wizard

(Figures: create pipeline wizard)

3. Customized pipeline

KubeSphere 3.3.2 provides ready-made templates, but we can also define our own pipeline. The graphical editing panel has two areas: the canvas on the left and the configuration on the right. It automatically generates a Jenkinsfile from the stages and steps you configure, giving developers a more user-friendly experience.

(Figure: graphical pipeline editor)

The first stage

This stage pulls the code from Git. Name the stage Pulling Code and specify the maven container: on the graphical editing panel, select node from the Type drop-down list and maven from the Label drop-down list:

(Figure: Pulling Code stage)

The second stage

Click the + sign to define the code compilation stage: name it Build compilation and add the steps.

(Figure: Build compilation stage)

The third stage

This stage packages the build into an image via the Dockerfile. Specify the container first, then add nested steps, and use a shell command to define the Dockerfile build:

(Figure: image build stage)
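The article does not reproduce the Dockerfile itself. Based on the artifacts the pipeline copies (the application JAR, start.sh, and the containerPort 8080 in the Deployment), it plausibly looks something like this hypothetical sketch:

```dockerfile
# Hypothetical Dockerfile for the boot-preject service; base image is an assumption
FROM openjdk:8-jre-alpine
WORKDIR /app
# the pipeline copies the built JAR and start script into the build context
COPY *.jar /app/app.jar
COPY start.sh /app/start.sh
RUN chmod +x /app/start.sh
EXPOSE 8080
ENTRYPOINT ["/app/start.sh"]
```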

The fourth stage

This stage tags the image built from the Dockerfile and uploads it to the image registry:

docker tag boot-preject:latest $REGISTRY/$HARBOR_NAMESPACE/boot-preject:SNAPSHOT-$BUILD_NUMBER

(Figure: push image stage)

The fifth stage

This stage handles deployment: once the image has been uploaded to the Harbor registry, deployment begins.

Here we need to define the Deployment resource in advance and apply it with kubectl apply -f against the defined file.

The process is as follows: click +, name the stage Deploying to k8s, then choose "Add step" -> "Add credentials" -> "Add nested step" -> "Specify container" -> "Add nested step" -> "shell".

The following command applies the YAML file predefined in Git:

envsubst < deploy/deploy.yml | kubectl apply -f -

(Figures: Deploying to k8s stage configuration)

That completes a full pipeline. Next, we run it to perform the build.

(Figure: pipeline run)

Workload display:

(Figure: workloads)

Attached production Jenkinsfile script

Here is a production Pipeline of my own that can be applied directly in an enterprise production environment.

pipeline {
  agent {
    node {
      label 'maven'
    }

  }
  stages {
    stage('Pulling Code') {
      agent none
      steps {
        container('maven') {
        // specify the Git repository address
          git(url: 'https://gitee.com/xxx/test-boot-projext.git', credentialsId: 'gitee', branch: 'master', changelog: true, poll: false)
          sh 'ls'
        }

      }
    }

    stage('Build compilation') {
      agent none
      steps {
        container('maven') {
          sh 'ls'
          sh 'mvn clean package -Dmaven.test.skip=true'
        }

      }
    }

    stage('Build image') {
      agent none
      steps {
        container('maven') {
          sh 'mkdir -p repo/$APP_NAME'
          sh 'cp target/**.jar repo/${APP_NAME}'
          sh 'cp ./start.sh  repo/${APP_NAME}'
          sh 'cp ./Dockerfile  repo/${APP_NAME}'
          sh 'ls repo/${APP_NAME}'
          // each sh step runs in a fresh shell, so cd must be combined with the build
          sh 'cd repo/${APP_NAME} && docker build -t boot-preject:latest .'
        }

      }
    }

    stage('Pack and upload') {
      agent none
      steps {
        container('maven') {
          withCredentials([usernamePassword(credentialsId : 'harbor' ,passwordVariable : 'DOCKER_PWD_VAR' ,usernameVariable : 'DOCKER_USER_VAR' ,)]) {
            sh 'echo "$DOCKER_PWD_VAR" | docker login $REGISTRY -u "$DOCKER_USER_VAR" --password-stdin'
            sh 'docker tag boot-preject:latest $REGISTRY/$HARBOR_NAMESPACE/boot-preject:SNAPSHOT-$BUILD_NUMBER'
            sh 'docker push  $REGISTRY/$HARBOR_NAMESPACE/boot-preject:SNAPSHOT-$BUILD_NUMBER'
          }

        }

      }
    }

    stage('Deploying to K8s') {
      agent none
      steps {
        withCredentials([kubeconfigFile(credentialsId : 'demo-kubeconfig' ,variable : 'KUBECONFIG' )]) {
          container('maven') {
            sh 'envsubst < deploy/deploy.yml | kubectl apply -f -'
          }

        }

      }
    }

  }
  environment {
    DOCKER_CREDENTIAL_ID = 'dockerhub-id'        // Docker image registry credential
    GITHUB_CREDENTIAL_ID = 'github-id'           // Git repository credential
    KUBECONFIG_CREDENTIAL_ID = 'demo-kubeconfig' // kubeconfig credential for kubectl API access
    REGISTRY = 'harbor.xxx.com'                  // image registry address
    HARBOR_NAMESPACE = 'ks-devopos'
    APP_NAME = 'boot-preject'
  }
  parameters {
    string(name: 'TAG_NAME', defaultValue: '', description: '')
  }
}

Storage and Networking

For business storage we chose MySQL and Redis; MySQL is combined with Xenon to provide high availability.

Effect

The introduction of KubeSphere has greatly reduced the burden of continuous integration and deployment on the company's R&D team and markedly improved the delivery efficiency of the whole team in production. Developers only need to implement features and fix bugs locally and commit the code to Git; then, with KubeSphere-based DevOps, a single click runs the release to the test or production environment. At that point the entire CI/CD workflow is complete, and the remaining joint debugging is handed back to R&D.

Implementing DevOps based on KubeSphere has brought us the greatest efficiency highlights as follows:

  • Integrated platform management: For service iteration, you only need to log in to the KubeSphere platform and click the pipeline of the project you are responsible for, which greatly reduces deployment workload. A standalone Jenkins combined with KubeSphere could also deliver projects, but the process is cumbersome: you must attend both to the Jenkins platform and to the delivery results in KubeSphere, which caused considerable inconvenience and deviated from our original intention.
  • Significant improvement in resource utilization: The combination of KubeSphere and Kubernetes further optimizes system resource utilization, reduces usage costs, and maximizes DevOps resource utilization.

Future planning (improvement)

So far, introducing the KubeSphere cloud native platform into this production project has indeed solved the problems of microservice deployment and management and brought great convenience: load balancing, application routing, auto scaling, DevOps, and more all strengthen our use of Kubernetes. Going forward, we will continue to dig into cloud native and Kubernetes containerization, keep migrating existing businesses into containers, and embrace the cloud native ecosystem to safeguard our business.


Source: blog.csdn.net/zpf17671624050/article/details/132860564