Practice of KubeSphere-based application containerization in the field of intelligent connected vehicles

Company Profile

A national-level intelligent connected vehicle research center was established in 2018. It is a national innovation platform responsible for industry development consulting, common technology research and development, and the transformation of innovative achievements, aiming to improve the position of China's intelligent connected vehicle and related industries in the global value chain.

At present, the center is building a smart car cloud operation control center platform based on big data and cloud computing. In the course of this construction, the platform's integration, deployment, and operations solution has gone through three generations of iteration.

The first-generation solution was to manually deploy the platform's front-end and back-end modules on its own physical machines, hosted in an ICT machine room.

The second-generation solution virtualized the physical machine cluster with VMware ESXi and deployed the platform's front-end and back-end modules on virtual machines, which improved resource utilization and reduced resource costs.

The third-generation solution, currently in use, deploys the platform in containers on a KubeSphere cluster in the public cloud. We purchased public cloud server resources, installed the KubeSphere cluster with KubeKey, and used DevOps pipelines to release application-level services to the cluster in containers with one click, truly realizing continuous integration and continuous delivery. R&D engineers only need to implement features or fix bugs locally, commit the code to GitLab, and then release it to the test or production environment with one click through KubeSphere's DevOps pipeline. Deploying services in containers with KubeSphere reduces engineers' release workload and frees up R&D resources.

Current team composition: 1 architect responsible for overall work such as architecture design and project management, 4 R&D engineers responsible for development, and 1 DevOps engineer responsible for DevOps construction and operations. Such a small team can efficiently and smoothly complete the construction of a large system.

Background

Cloud computing has gradually matured, and the big data and artificial intelligence industries built on it have matured as well. The integration of the automotive field with cloud computing, big data, and artificial intelligence is unstoppable, and autonomous driving projects have been landing one after another around the world. Based on China's national conditions and the development trend of its automobile industry, Chinese automotive scientists have proposed a Chinese solution for self-driving cars, namely vehicle-road coordination, to make up for the shortcomings of the single-vehicle intelligence solutions pursued internationally.

In this industry context, building an autonomous driving cloud operation control center for vehicle-road coordination is a common key technology that the industry urgently needs to break through.

In building the autonomous driving cloud operation control center, we faced many practical difficulties: tight software and hardware resources, very few R&D personnel, particularly heavy construction tasks, and the platform's dependence on vehicle-side and road-side physical infrastructure. Weighing these factors, in order to improve the utilization of limited storage, computing, and network resources, reduce the workload of limited R&D personnel, and complete the platform construction with high quality and efficiency, the team's integration and deployment solution went through physical machine deployment, then virtual machine deployment, and finally the current KubeSphere-based containerized deployment.

Selection Rationale

When researching cloud migration, we considered directly purchasing Alibaba Cloud's managed K8s cluster, but since the company already had some physical servers available, we continued our research and finally chose KubeSphere as the containerization solution.

We chose KubeSphere for the following reasons:

  • Thanks to the installation tool KubeKey, installation is convenient, much simpler and easier than installing K8s by hand before (see the install sketch after this list).
  • KubeSphere essentially puts a graphical interface on K8s. You can view cluster status from the web console, which makes operating and maintaining the cluster much simpler and clearer than typing commands on the command line.
  • KubeSphere has a built-in pipeline function, so continuous release works without installing additional software. Combining continuous release with K8s eliminates many tedious manual operations.
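As a minimal sketch of that install flow (the versions shown are illustrative, not necessarily the ones we used):

```bash
# Download KubeKey (version is illustrative)
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -

# Generate a cluster config template for Kubernetes plus KubeSphere
./kk create config --with-kubernetes v1.23.10 --with-kubesphere v3.3.2

# Edit config-sample.yaml (nodes and roles), then create the cluster
./kk create cluster -f config-sample.yaml
```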

Practice Process

Since this was the first time the project team had used KubeSphere and K8s to deploy applications in containers, everyone from the senior architect to the R&D and operations staff was using them for the first time after only basic research and study, so our road to application containerization was a process of learning, using, and improving as we went.

To ensure that service containerization in the final production environment would be stable and controllable, we adopted a two-step strategy:

  • Step one: deploy and run the private cloud test environment to accumulate experience. We first set up Harbor, KubeSphere, K8s, and Docker in the test environment, built a release pipeline for it, and deployed all services there in containers, so that more than 60 front-end and back-end services ran stably in containers in the test environment. Once the containerized test environment was relatively stable and the various pitfalls had been overcome, we began containerizing the production environment.
  • Step two: deploy services in the private cloud production environment. First, all services were kept running on physical machines so that the operation control center platform remained stable and leaders could check the online platform at any time. In parallel, a second copy of the platform was deployed in containers on KubeSphere, K8s, and Docker. With the production environment running in duplicate this way, once the containerized platform was stable, the platform's external domain name was switched to the containerized deployment and the physical machine deployment was gradually retired; the kind of checks this implies before cutover are sketched after this list.
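A hedged sketch of such pre-cutover checks (the namespace and deployment name are hypothetical):

```bash
# Confirm all nodes are Ready and the workloads in the production
# namespace are running (namespace name is hypothetical)
kubectl get nodes
kubectl -n prod get deployments,pods

# Wait for a specific module's rollout to finish before switching
# the domain name over (deployment name is hypothetical)
kubectl -n prod rollout status deployment/ops-center-backend --timeout=300s
```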

Infrastructure and Deployment Architecture

The KubeSphere deployment architecture of the test environment and the production environment is basically the same.

Cluster planning:

Node IP          Node role     Components
192.168.16.70    kp-master01   kube-apiserver, kube-scheduler, kube-controller-manager, etcd
192.168.16.80    kp-master02   kube-apiserver, kube-scheduler, kube-controller-manager, etcd
192.168.16.100   kp-node01     kubelet, kube-proxy, Docker
192.168.16.110   kp-node02     kubelet, kube-proxy, Docker
192.168.16.120   kp-node03     kubelet, kube-proxy, Docker
192.168.16.140   kp-node05     kubelet, kube-proxy, Docker

The specific deployment architecture diagram is shown in the following figure:

For reference, in the online environment:

  • Stateful services are mainly infrastructure services such as MySQL, Redis, and ClickHouse; these are still deployed on virtual machines.
  • Stateless services running in KubeSphere are shown in the figure below, including the front-end and back-end modules of the application layer, all of which are deployed as containers; a minimal manifest sketch follows this list.
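As a hedged illustration of what one such stateless module boils down to (the module name, namespace, image path, and port are all hypothetical), each containerized back-end service is essentially a Deployment plus a Service:

```bash
# Minimal sketch of one stateless back-end module
# (names, namespace, image, and port are hypothetical)
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ops-center-backend
  namespace: prod
spec:
  replicas: 2
  selector:
    matchLabels: {app: ops-center-backend}
  template:
    metadata:
      labels: {app: ops-center-backend}
    spec:
      containers:
      - name: backend
        image: harbor.example.com/ops-center/backend:1.0.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: ops-center-backend
  namespace: prod
spec:
  selector: {app: ops-center-backend}
  ports:
  - port: 8080
    targetPort: 8080
EOF
```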

Storage and Network

Routine business data of the operation control center platform is stored in MySQL; ClickHouse stores analytical historical data for aggregated OLAP analysis; Hadoop and Flink perform distributed analysis and processing of the data in the data warehouse; and ELK collects service logs from the container cluster.

DevOps Solution

Both the test environment and the production environment are built in the private cloud, and the two environments are basically identical.

Project code is managed uniformly in GitLab, Docker images are stored in Harbor, a DevOps project is created in KubeSphere, and a release pipeline is set up for each module within that DevOps project. Each stage in a pipeline is executed by a shell script on a publisher server.
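A hedged sketch of what one such per-module stage script might look like (the Harbor address, project, namespace, and deployment names are hypothetical):

```bash
#!/usr/bin/env bash
# Sketch of a release script run by one pipeline stage
# (Harbor address, project, namespace, and names are hypothetical)
set -euo pipefail

MODULE=ops-center-backend
TAG=$(git rev-parse --short HEAD)            # Tag images by commit hash
IMAGE=harbor.example.com/ops-center/${MODULE}:${TAG}

docker build -t "${IMAGE}" .                 # Build the module image
docker push "${IMAGE}"                       # Push it to Harbor

# Roll the Deployment to the new image and wait for it to finish
kubectl -n prod set image deployment/${MODULE} backend="${IMAGE}"
kubectl -n prod rollout status deployment/${MODULE} --timeout=300s
```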

Results

By using KubeSphere, engineers' release and deployment workload is significantly reduced, and both productivity and R&D efficiency are improved. R&D engineers only need to implement features or fix bugs locally, commit the code to GitLab, and click run on KubeSphere's DevOps pipeline; deployment to the test or production environment then completes automatically.

By using KubeSphere to deploy services in a containerized manner, the most obvious benefits are as follows:

  • For software deployment, the only thing R&D engineers need to do is log in to KubeSphere and click to run the pipeline, which greatly reduces deployment workload; there is no longer any need to memorize assorted arcane commands.
  • After containerizing application deployment with KubeSphere and K8s, hardware resource utilization is improved and costs are reduced.

Future Plans

Through our practice with KubeSphere, we found that K8s does solve many problems of distributed microservice systems, such as load balancing and automatic scaling, and the DevOps pipeline function is especially practical.
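For instance, the automatic scaling just mentioned can be enabled per module with a Horizontal Pod Autoscaler; a hedged sketch, with a hypothetical deployment name and illustrative thresholds:

```bash
# Scale the hypothetical back-end module between 2 and 6 replicas
# based on average CPU utilization (thresholds are illustrative)
kubectl -n prod autoscale deployment ops-center-backend \
  --min=2 --max=6 --cpu-percent=80

kubectl -n prod get hpa   # Inspect the resulting HorizontalPodAutoscaler
```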

In the future, we plan to further improve the containerization of the operation control center platform, containerize stateful services as much as possible, and add automated testing to the release pipeline.
