Peanut Haoche’s microservice architecture practice based on KubeSphere

Company Profile

Peanut Haoche was established in June 2015 and is committed to becoming the leading brand of automobile mobility solutions in lower-tier markets. Through self-built direct-sales channels targeting these markets, it has built a new automobile retail platform driven by four core businesses: direct leasing, wholesale sales, sale-and-leaseback, and new energy vehicle retail. It currently operates more than 600 stores covering over 400 cities, supported by 25 central warehouses. To date it has provided high-quality automobile services to more than 400,000 users and has established itself in the industry's first tier through its omni-channel advantages and product breadth.

Background introduction

The company's self-built IDC data center uses KVM as the underlying virtualization layer on its physical servers. As the business grew, a number of problems surfaced in this setup, which prompted the current overhaul of the underlying infrastructure. These problems include:

  • Unsaturated utilization: CPU utilization across server types is generally low, idle-time utilization is poor, and load is unevenly spread between busy and idle machines;
  • High energy consumption: a large number of servers is required, while racks, networks, and the servers themselves are underutilized;
  • Complex base resources: the underlying infrastructure is not standardized consistently, so configurations and experience are hard to reuse;
  • Insufficient resource sharing: the siloed ("chimney") construction model keeps resources isolated from each other and drives up fixed investment; meeting business peaks means purchasing large numbers of additional servers;
  • Storage capacity keeps rising and logical storage devices keep multiplying, making management increasingly complex and labor-intensive;
  • The business network lacks an overall development plan, the functional positioning of some systems or platforms is unclear, and cross-department, cross-region, and cross-system process interfaces are blurred;
  • Development and release cycles are long, and later maintenance and troubleshooting costs are high; independently built platforms are mostly siloed, islanded solutions;
  • Business processes, platform structures, and interfaces lack unified specifications and requirements.

Platform selection

As the DevOps and operations team, we need to provide a comprehensive self-service operations platform. After evaluating open-source platforms, the company ultimately chose KubeSphere for the following reasons:

  1. Completely open source and free of charge, with support for secondary development;
  2. Rich in functionality, simple to install, with one-click upgrade and scaling and a complete DevOps toolchain;
  3. Multi-cluster management: Kubernetes clusters can be imported via direct or indirect (agent) connections;
  4. Integrated observability: metrics, alerts, and log queries can be added and monitored as needed;
  5. Custom roles and auditing, which facilitate subsequent data analysis.

Compared with other platforms, KubeSphere does a better job of hiding the complexity of Kubernetes itself and reduces the work of integrating various open-source tools. This lets us focus on operations automation and building the self-service platform, without having to manage the underlying infrastructure and services separately, and it provides full-stack, automated IT operations capabilities that simplify the enterprise's DevOps workflow. KubeSphere therefore became the best choice to meet the company's needs.

Practice process

Infrastructure construction and planning

Kubernetes cluster

Due to business needs, we separate the test and production environments to avoid mutual interference. As shown in the figure above, the production cluster consists of three Master nodes and currently thirteen worker (Node) nodes. The Master nodes are tainted so that ordinary Pods cannot be scheduled onto them, avoiding problems such as excessive load on the control plane.
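
A minimal sketch of such a taint is shown below; the node name is hypothetical, and on newer clusters the taint key may be node-role.kubernetes.io/control-plane instead:

```yaml
# Illustrative only: a taint on a master node that keeps ordinary Pods
# from being scheduled there. The node name is a placeholder.
apiVersion: v1
kind: Node
metadata:
  name: k8s-master-01        # hypothetical node name
spec:
  taints:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule     # Pods without a matching toleration are not scheduled here
```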

The production environment uses the officially recommended Keepalived and HAProxy to build a highly available Kubernetes cluster. A highly available cluster ensures that running applications experience no service interruption, which is also one of the requirements for production.

The figure above is taken from the official documentation; refer to it for a detailed introduction.
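
As a minimal sketch of how this is typically wired up with KubeKey (the domain, virtual IP, and port below are assumptions, not our actual values), the control-plane endpoint is pointed at the Keepalived VIP that HAProxy listens on:

```yaml
# Sketch of the controlPlaneEndpoint section of a KubeKey cluster config.
# The domain, virtual IP, and port are illustrative; HAProxy listens on the
# VIP and forwards API-server traffic to the three master nodes.
spec:
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: 192.168.0.100   # Keepalived virtual IP shared by the HAProxy instances
    port: 6443               # HAProxy frontend port for the Kubernetes API server
```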

Release workflow diagram:

Underlying storage environment

We did not deploy the underlying storage environment in containers but in the traditional way. This is done for efficiency: in Internet-facing business, storage services must meet certain performance requirements to handle high-concurrency scenarios, so deploying them on bare-metal servers is the better choice.

MySQL, Redis, and NFS are all deployed with high availability to avoid single points of failure. Ceph is mounted via CephFS and exposed as a KubeSphere StorageClass. Most of our current applications are stateless; storage will be further optimized as stateful applications are deployed later.
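
A minimal sketch of such a StorageClass, assuming the CephFS CSI driver is used (the cluster ID, filesystem name, and Secret references are placeholders that depend on the actual Ceph deployment):

```yaml
# Illustrative CephFS-backed StorageClass; all parameter values are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cephfs
provisioner: cephfs.csi.ceph.com            # CephFS CSI driver
parameters:
  clusterID: ceph-cluster-id                # placeholder: Ceph cluster fsid
  fsName: cephfs                            # placeholder: CephFS filesystem name
  csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
  csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/node-stage-secret-namespace: kube-system
reclaimPolicy: Delete
allowVolumeExpansion: true
```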

Monitoring platform

To use KubeSphere efficiently in daily operations, we configured its integrated monitoring and alerting, which is currently sufficient for most needs. For the nodes themselves, day-to-day issues can be examined through a separate PMM monitoring deployment.
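
For illustration, an alerting rule of the kind that KubeSphere's alerting ultimately translates into Prometheus rules might look like the following; the metric, threshold, and namespace are assumptions rather than our exact production configuration:

```yaml
# Illustrative alerting rule; the expression, threshold, and labels are examples only.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: node-cpu-usage-high
  namespace: kubesphere-monitoring-system   # assumed monitoring namespace
spec:
  groups:
    - name: node.alerts
      rules:
        - alert: NodeCPUUsageHigh
          # Fires when average CPU usage on a node stays above 85% for 5 minutes
          expr: '(1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m]))) > 0.85'
          for: 5m
          labels:
            severity: warning
          annotations:
            summary: "Node {{ $labels.instance }} CPU usage above 85% for 5 minutes"
```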

Alarm example:

Monitoring example:

Results

The introduction of KubeSphere has greatly reduced the burden of continuous integration and continuous deployment on the company's R&D teams and significantly improved the whole team's delivery efficiency in production. Developers only need to implement features and fix bugs locally, commit the code to Git, and then release the project to the test or production environment through Jenkins. At that point the entire CI/CD delivery workflow is complete, and only the remaining joint debugging work is left to R&D.
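
As an illustration of the delivery step, such a pipeline ultimately boils down to applying (or patching) a Deployment in the target environment with the newly built image tag; the service name, namespace, and image below are placeholders, not our actual manifests:

```yaml
# Illustrative Deployment that a release pipeline updates; names and the image
# tag (substituted by Jenkins at build time) are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-service
  namespace: prod                 # or the test namespace for a test release
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-service
  template:
    metadata:
      labels:
        app: demo-service
    spec:
      containers:
        - name: demo-service
          image: registry.example.com/demo-service:BUILD_TAG   # replaced by the pipeline
          ports:
            - containerPort: 8080
```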

Implementing DevOps based on KubeSphere has brought us the following efficiency highlights:

  • Integrated platform management: for service iterations, we only need to log in to the KubeSphere platform and operate on the project we are responsible for, which greatly reduces deployment work. KubeSphere can also be combined with an external Jenkins for project delivery, but that end-to-end process is relatively cumbersome: it requires maintaining the Jenkins platform while also watching delivery results in KubeSphere, which is inconvenient and deviates from our original intention. In the future we may switch to KubeSphere's built-in custom pipelines for unified management.

  • Significant improvement in resource utilization: The combination of KubeSphere and Kubernetes further optimizes system resource utilization, reduces usage costs, and maximizes DevOps resource utilization.

Future planning and improvements

So far, by introducing the KubeSphere cloud-native platform into this production project, we have found that it does solve the problems of microservice deployment and management. Migrating to a cloud-native architecture on top of the KubeSphere platform has made things much more convenient for us in areas such as load balancing, application routing, autoscaling, and DevOps.

With the platform's help, our R&D and operations efficiency has improved significantly. We believe that KubeSphere's cloud-native capabilities, such as service mesh governance, canary and grayscale releases, and distributed tracing, will lay a solid foundation for the company's next phase of business growth.
