Basic concepts of Kubernetes for cloud native

Table of Contents

1. Cloud in the current market
2. Main service models of cloud computing
3. Kubernetes
    1. The benefits of using Kubernetes
    2. Exploring k8s
        1. k8s
        2. Basic concepts of k8s
4. Basic concepts and terminology of Kubernetes
    1. Two management roles of a Kubernetes cluster: Master and Node
        1.1 Master
            1.1.1 The Master concept
            1.1.2 Processes running on the Master node
        1.2 Node
            1.2.1 The Node concept
            1.2.2 Processes running on a Node
    2. How Kubernetes implements an efficient and balanced resource scheduling strategy
    3. Pod
        3.1 Types of containers in a Pod
        3.2 Pod structure
    4. The relationship between Pods and containers
        ① Pod self-healing
        ② Elastic scaling trigger conditions (elastic scaling is defined once and then controlled automatically)
    5. Kubernetes service discovery and load balancing
        5.1 Pod network
        5.2 How a Pod provides access to the outside world
        5.3 Pod load balancing
    6. The process of creating a Pod

1. Cloud in the current market

In China: Alibaba Cloud, Huawei Cloud, Baidu Cloud (private cloud), Microsoft's cloud, and others

Abroad: AWS, Google Cloud

2. Main service models of cloud computing

SaaS (Software as a Service): this layer provides applications as services to customers.

PaaS (Platform as a Service): this layer provides a development platform as a service to users.

IaaS (Infrastructure as a Service): this layer provides virtual machines and other infrastructure resources as services to users.

3. Kubernetes

1. The benefits of using Kubernetes

① Complex systems can be developed in a "lightweight" way

② Using Kubernetes means fully embracing the microservice architecture

③ The system can be "relocated" to the public cloud as a whole at any time

④ The Kubernetes system architecture has excellent horizontal scaling capability

2. Exploring k8s

1. k8s

First of all, k8s performs batch management of containers across a cluster (create, delete, update, query).

2. Basic concepts of k8s

K8s treats everything it manages as a "resource". Even for services that are not native to K8s, if you want to manage them as regular components, K8s provides an API for defining "controllers" (extension plug-ins); K8s is also a large ecosystem.
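As a sketch of this "resources + controllers" extension idea: new resource types can be registered via a CustomResourceDefinition, which a custom controller then watches and reconciles. All names below (`example.com`, `CronTab`) are illustrative placeholders, not from the original text.

```yaml
# A minimal CustomResourceDefinition sketch: it teaches the API server a new
# resource type; a custom controller can then watch and reconcile it.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                cronSpec:
                  type: string
```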

4. Basic concepts and terminology of Kubernetes

Most concepts in Kubernetes, such as Node, Pod, Replication Controller, and Service, can be regarded as "resource objects". Almost all resource objects can be created, deleted, modified, and queried through the kubectl tool provided by Kubernetes (or through API calls), and are saved in etcd for persistent storage. From this point of view, Kubernetes is actually a highly automated resource control system: it achieves advanced features such as automatic control and automatic error correction by tracking and comparing the "expected resource state" stored in etcd with the "actual resource state" in the current environment.

1. Two management roles of a Kubernetes cluster: Master and Node

1.1 Master

1.1.1 The Master concept

The Master in Kubernetes is the cluster control node; every Kubernetes cluster needs a Master node, which is responsible for managing and controlling the entire cluster. Essentially all Kubernetes control commands are sent to the Master, which carries out their actual execution, and all the commands we run later are basically executed on the Master node. The Master usually occupies an independent server (three servers are recommended for a high-availability deployment), mainly because it is so important: if the "head" of the cluster goes down or becomes unavailable, container applications in the cluster can no longer be managed.

1.1.2 Processes running on the Master node

① Kubernetes API Server (kube-apiserver)

Function: the key service process that provides the HTTP REST interface. It is the only entry point for creating, deleting, modifying, and querying any resource in Kubernetes, and also the entry point process for cluster control.

② Kubernetes Controller Manager (kube-controller-manager)

Function: the automatic control center for all resource objects in Kubernetes; it can be understood as the "general manager" of resource objects.

③ Kubernetes Scheduler (kube-scheduler)

Function: responsible for resource scheduling (Pod scheduling); the equivalent of a bus company's "dispatch room".

④ etcd

Function: where the data of all resource objects in Kubernetes is stored.

1.2 Node

1.2.1 The Node concept

In a Kubernetes cluster, the machines other than the Master are called Nodes. Nodes are the workload nodes of the cluster: the Master assigns each Node some workloads (Docker containers), and when a Node goes down, the workloads on it are automatically transferred to other Nodes by the Master.

1.2.2 Processes running on a Node

① kubelet

Function: responsible for creating, starting, and stopping the containers that belong to Pods, while cooperating closely with the Master to implement the basic functions of cluster management.

② kube-proxy

Function: an important component that implements the communication and load-balancing mechanism of Kubernetes Services (layer-4 load balancing).

③ Docker Engine (docker)

Function: the Docker engine, responsible for creating and managing containers on the local machine.

2. How Kubernetes implements an efficient and balanced resource scheduling strategy

A Node can be dynamically added to a k8s cluster at runtime, provided the key processes on it have been correctly installed, configured, and started. By default, the kubelet registers itself with the Master, which is also the node-management method recommended by k8s. Once a Node is under cluster management, its kubelet process periodically reports its status to the Master: the operating system, Docker version, the machine's CPU and memory, and which Pods are currently running. In this way, the Master knows the resource usage of each Node and can implement an efficient and balanced resource scheduling strategy. If a Node fails to report within the specified time, the Master judges it as "disconnected", marks its status as unavailable (Not Ready), and then triggers the automatic "workload transfer" process.
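The self-registration and reporting interval described above are configurable on the kubelet. A hedged sketch of a KubeletConfiguration fragment (field names per the kubelet.config.k8s.io API; the values are illustrative, not defaults from the text):

```yaml
# KubeletConfiguration fragment (illustrative values):
# how the kubelet registers with and reports to the Master.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
registerNode: true                 # kubelet registers itself with the Master (the recommended way)
nodeStatusUpdateFrequency: "10s"   # how often node status is reported to the control plane
```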

3. Pod

A Pod is the smallest deployable computing unit that can be created and managed in Kubernetes (the Pod is the smallest resource unit in k8s).

3.1 Types of containers in a Pod

| Name | Function |
| --- | --- |
| init container | initializes the container environment |
| pause container (root container) | provides the network namespace and storage volume support within the Pod |
| business/application container | runs the business workload |

3.2 Pod structure

A Pod is like a single logical host: it has an independent IP address and its own hostname, and it uses namespaces for resource isolation, so it is equivalent to an independent sandbox environment.

A Pod internally encapsulates one or more containers (usually a group of closely related containers).

Each Pod can contain multiple containers, which fall into the following kinds:

① The containers where the user programs run, i.e., the business containers. There can be any number of them, and they are started in parallel.

② The pause container, a root container that every Pod has. It has two functions:

        1. It can serve as the basis for evaluating the health state of the whole Pod.

        2. The IP address is set on the root container, and the other containers use this IP (the Pod IP) for network communication within the Pod.

③ The init (initialization) containers

        Init containers must run to completion before the application (business) containers start, whereas application containers run in parallel, so init containers provide an easy way to block or delay the startup of application containers.

        Characteristics of init containers:

① An init container always runs until it completes successfully.

② Each init container must complete successfully before the next one starts; they are started serially.

③ Before the pause container and the business containers run, the init containers run first. If a Pod's init container fails, k8s keeps restarting the Pod (so that the init container can run to completion) until the init container succeeds. However, if the Pod's restart policy (restartPolicy) is Never, the Pod is not restarted.

It follows that the Pod, the smallest deployment unit, consists of: the basic container (pause), the initialization containers (init containers), and the business containers.
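A minimal Pod manifest illustrating the structure above: an init container that must complete before the business container starts, plus the restartPolicy that governs init-container retries. The image names and commands are illustrative; the pause root container is added implicitly by the runtime and does not appear in the manifest.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  restartPolicy: Always        # with Never, a failed init container is not retried
  initContainers:              # run serially; each must succeed before the next
    - name: init-env
      image: busybox:1.36
      command: ["sh", "-c", "echo initializing && sleep 2"]
  containers:                  # business containers, started in parallel after init completes
    - name: app
      image: nginx:1.25
      ports:
        - containerPort: 80
```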

4. The relationship between Pods and containers

Containers are encapsulated in Pods.

① Pod self-healing

With a replica count of 3, i.e., three Pods, the master controller manages the replica controllers, and each replica controller manages its own Pod.

If the container of pod_1 goes down, the replica controller reports this to the master controller. On receiving the report, the master controller creates a new replica controller C, which in turn creates a new Pod; the master controller then deletes replica controller A (each replica controller corresponds to its own Pod, so when the Pod goes down, its replica controller is removed along with it). In the end, replica controller C replaces replica controller A and the new Pod replaces pod_1. (To the user this looks like a "restart", but it is not a restart: it is a process of deleting the old Pod and replica controller and creating new ones.)
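The self-healing behavior above is what a replica controller declares. A sketch using the (older) ReplicationController resource with 3 replicas; the names, labels, and image are illustrative:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: demo-rc
spec:
  replicas: 3              # desired state: any Pod that dies is recreated automatically
  selector:
    app: demo
  template:                # Pod template used when (re)creating replicas
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: app
          image: nginx:1.25
```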

Pod elastic scaling

Preconditions:

Condition 1:

CPU usage range: 20% to 70%

20% is the lowest tolerable load rate

70% is the highest tolerable load rate

Condition 2:
Docker cgroup resource limits: cpu 300m (millicores), mem 300M

② Elastic scaling trigger conditions (elastic scaling is defined once and then controlled automatically)

For example:

When pod_3's CPU usage falls below 300m × 20% = 60m, the scale-in condition is triggered: pod_3 is deleted and its work is distributed to the remaining Pods.

When the memory usage of pod_1, pod_2, and pod_3 rises above 300M × 70% = 210M, the scale-out condition is triggered, and a new replica controller and a new Pod, pod_4, are created.
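The "define once, control automatically" scaling described above is what a HorizontalPodAutoscaler expresses. A hedged sketch targeting 70% CPU utilization; the scale target name is illustrative, and the 20% lower bound from the text has no direct HPA field (scale-in happens when usage drops well below the target):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: demo-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-deploy                  # illustrative scale target
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 70   # the upper load bound from the text
```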

5. Kubernetes service discovery and load balancing

5.1 Pod network

① A Pod has its own independent IP address.

② The containers inside a Pod communicate with each other over localhost.
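Because all containers in a Pod share the pause container's network namespace, a two-container Pod can communicate over localhost. An illustrative sketch (image names and the probe loop are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-container-pod
spec:
  containers:
    - name: web
      image: nginx:1.25          # listens on port 80
    - name: sidecar
      image: busybox:1.36
      # The sidecar reaches the web container at localhost:80, since both
      # containers share the Pod's network namespace (via the pause container).
      command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 >/dev/null; sleep 10; done"]
```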

5.2 How a Pod provides access to the outside world

For a Pod to provide services externally, it must be bound to a port on the physical machine (that is, a port is opened on the physical machine and mapped to the Pod's port) so that packets can be forwarded through the physical machine.
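One common way to realize the port mapping just described is a NodePort Service, which opens the same port on every physical machine and forwards traffic to the matching Pods. Names, labels, and ports below are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-svc
spec:
  type: NodePort
  selector:
    app: demo                # forwards to Pods labeled app: demo
  ports:
    - port: 80               # Service port inside the cluster
      targetPort: 80         # the Pod's port
      nodePort: 30080        # port opened on each physical machine (30000-32767 range)
```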

5.3 Pod load balancing

Several methods of Pod load balancing:

1. iptables

2. ipvs (commonly used as the default mode; kube-proxy, a software proxy, performs load balancing at layer 4)

3. userspace

A Pod is a process with a life cycle: once it goes down or its version is updated, a new Pod is created, and both its IP address and hostname change. Nginx is not suitable for load-balancing Pods in this situation: it does not know that the Pods have changed, so it cannot discover the new service endpoints and requests cannot reach them. Therefore Nginx cannot be used for Pod load balancing.
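The proxy mode listed above is chosen in kube-proxy's configuration. A hedged KubeProxyConfiguration fragment selecting ipvs (field names per the kubeproxy.config.k8s.io API):

```yaml
# kube-proxy configuration fragment: select the layer-4 proxying mode.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"    # alternatives from the text: "iptables" and the legacy "userspace" mode
```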

6. The process of creating a Pod

① kubectl converts the request into JSON and submits a Pod-creation request to the api-server.

② After receiving the request, the api-server records the Pod information in etcd.

③ The scheduler, watching the api-server, sees the pending request and asks the api-server for information about the backend nodes.

④ The api-server queries etcd for the node information and returns the result to the scheduler.

⑤ The scheduler performs filtering (pre-selection) and scoring, then submits the result to the api-server.

⑥ The controller-manager, watching the api-server, provides the required controller resources via the api-server.

⑦ The api-server communicates with the kubelet on the chosen Node.

⑧ The kubelet creates the Pod using local resources and reports the status back to the api-server.

⑨ The api-server records the information in etcd.


Origin blog.csdn.net/m0_62948770/article/details/127593312