K8S | Core Principle Analysis

Understanding the process and principles as a whole.

1. Background

In a distributed architecture there are many services to manage, both in terms of the number of services and the way the system is partitioned.

From the perspective of service capability, services can be managed and controlled in layers, but quite a few layers change and update so infrequently that the benefit of doing so is not obvious.


Take the system I am currently helping to develop as an example:

Nearly a hundred services are managed by K8S, some of them running in cluster mode. Even for a system of this scale, purely manual operations and maintenance are practically impossible; automated processes are essential.

2. Continuous integration

I previously wrote a complete hands-on case on this topic, focused on the practical use of components such as Jenkins, Docker, and K8S, and summarized the automated pipeline of source compilation, packaging, image building, and deployment.


Jenkins: a highly extensible automation server for all kinds of tasks, including building, testing, and deploying software.

Docker: an open-source application container engine that packages an application together with its dependencies into an image, providing a standardized runtime environment and continuous-delivery capability.

Kubernetes: an open-source container orchestration engine for automating the deployment, scaling, and management of containerized applications.

3. K8S architecture

1. Core components


Control-Plane-Components: control plane components

These make global decisions for the cluster, such as resource scheduling, detecting cluster events, and responding to them; they can run on any node in the cluster.

  • apiserver: exposes the K8S API; the components interact with each other through this API, which serves as the front end of the control plane;

  • controller-manager: runs the controller processes; logically, each controller is a separate process, but they are compiled into a single binary and run as a single process;

  • scheduler: watches for newly created Pods that have no assigned node and selects a node for each of them to run on;

  • etcd: a consistent and highly available key-value store, used as the backing store for K8S cluster data;

Node: node components

These components run on every node, maintaining the running Pods and providing the Kubernetes runtime environment.

  • kubelet: an agent that runs on each node and ensures that the containers are running in a Pod;

  • kube-proxy: a network proxy that runs on each node and maintains the network rules on the node;

Container-Runtime: container runtime

The software responsible for running containers. Kubernetes supports multiple container runtimes such as Docker, containerd, and CRI-O, as well as any implementation of the Kubernetes CRI (Container Runtime Interface).

2. Hierarchical structure

Considering its overall function, a K8S cluster can be divided into three parts: users, the control plane, and the nodes.


User side: whether through the CLI or a UI, the user interacts with the APIserver of the control plane; the APIserver then interacts with the other components and finally executes the corresponding operation.

Control plane: previously also called the Master; its core components are the APIserver, controller-manager, scheduler, and etcd, and it is mainly responsible for scheduling the entire cluster and making global decisions.

Node: the workload is executed by placing containers into Pods running on the nodes; a workload can simply be understood as the various applications. The core components on a node include the Pod, kubelet, Container-Runtime, and kube-proxy.

3. Core capabilities

From a development perspective, K8S provides extremely powerful application-service management capabilities.

3.1 Discovery and load

A Service exposes a network application running on one Pod or a group of Pods as a network service, usually using label selectors to filter the target resource objects.

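For illustration, a minimal Service manifest might look like the sketch below; the names, labels, and ports are hypothetical, not taken from the system described above. The `selector` is the label filter mentioned in the text: traffic to the Service is routed to every Pod carrying the label `app: demo-api`.

```yaml
# Minimal Service sketch (hypothetical names and ports):
# exposes Pods labeled app=demo-api on port 80,
# forwarding traffic to the Pods' port 8080.
apiVersion: v1
kind: Service
metadata:
  name: demo-api
spec:
  selector:
    app: demo-api
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
```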

3.2 Scheduling

The scheduler uses the watch mechanism to discover Pods that are newly created in the cluster and have not yet been scheduled to a node. Since the Pod itself and the containers within it may declare different resource requirements, the scheduler places each Pod on a suitable node.

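The resource requirements the scheduler must satisfy are declared on the Pod itself. A sketch, with illustrative values and a hypothetical node label:

```yaml
# Pod sketch: the scheduler only considers nodes that can satisfy
# the requests below and that carry the label disktype=ssd.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
    - name: app
      image: nginx:1.25
      resources:
        requests:
          cpu: "250m"      # guaranteed minimum, used for scheduling
          memory: "128Mi"
        limits:
          cpu: "500m"      # hard cap at runtime
          memory: "256Mi"
  nodeSelector:
    disktype: ssd          # restricts the set of schedulable nodes
```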

3.3 Automatic scaling

K8S can check a workload's resource demands through metrics such as CPU utilization, response time, and memory utilization to determine whether scaling is required: in the vertical dimension, by allocating more resources to an instance; in the horizontal dimension, by deploying more replicas.
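The horizontal dimension is typically expressed as a HorizontalPodAutoscaler. A sketch targeting a hypothetical Deployment named `demo-api`, using the CPU-utilization metric mentioned above:

```yaml
# HPA sketch (autoscaling/v2): keeps average CPU utilization of the
# demo-api Deployment near 60%, scaling between 2 and 10 replicas.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-api
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60
```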


Besides scaling automatically, K8S can also self-heal: when a node fails or an application service becomes abnormal, the problem is detected and the workload may be migrated or restarted.

4. Application cases

1. Service deployment

In the earlier hands-on case, deployment was completed with CLI commands and script files; the whole process involves cooperation among multiple cluster components, with several rounds of communication and scheduling.

kubectl create -f pod.yaml
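The `pod.yaml` referenced by the command could be as simple as the sketch below; the image and names are illustrative, not from the original case.

```yaml
# Minimal pod.yaml sketch: one Pod running a single nginx container.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  labels:
    app: demo
spec:
  containers:
    - name: demo
      image: nginx:1.25
      ports:
        - containerPort: 80
```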


2. Interaction process


[1] The CLI and UI interact with the cluster's internal components through the APIserver interface, as in the Pod deployment operation above;

[2] After the APIserver receives the request, it writes the object in serialized form to etcd, completing the storage operation;

[3] Through the watch mechanism, the scheduler discovers Pods that are newly created in the cluster and have not yet been scheduled to a node;

[4] It finds all schedulable nodes for the Pod in the cluster, scores these nodes, selects the node with the highest score to run the Pod, and then notifies the APIserver of the scheduling decision;

[5] After the APIserver persists this information, it notifies the kubelet on the corresponding node;

[6] The kubelet works from PodSpecs, ensuring that the containers described in these PodSpecs are running and healthy; each PodSpec is a YAML or JSON object describing a Pod;

[7] A Pod is the smallest deployable unit of computing that can be created and managed in Kubernetes, and it contains one or more containers;
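How the kubelet in step [6] judges a container "healthy" can be steered with probes. A liveness-probe sketch, with a hypothetical health endpoint (nginx does not serve `/healthz` by default):

```yaml
# Liveness probe sketch: the kubelet issues GET /healthz every 10s;
# after 3 consecutive failures it restarts the container.
apiVersion: v1
kind: Pod
metadata:
  name: demo-probe
spec:
  containers:
    - name: app
      image: nginx:1.25
      livenessProbe:
        httpGet:
          path: /healthz   # hypothetical health endpoint
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
        failureThreshold: 3
```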

Origin blog.csdn.net/g6U8W7p06dCO99fQ3/article/details/131297663