[K8S Series] Understanding k8s and k8s architecture

1. What is k8s?

Kubernetes, commonly abbreviated as k8s, is a platform that supports cloud-native deployment. Its essential purpose is to simplify the development and deployment of microservices: it is an open-source container orchestration technology that automates the deployment, scaling, and management of containerized applications. Docker also provides an orchestration tool of its own, docker-compose, but docker-compose can only manage containers on a single host, whereas k8s can manage containers across multiple hosts.

2. Changes in application deployment methods

2.1. Traditional deployment (resources are not isolated):

Advantages: Simple, no other technology required

Disadvantages: Resource usage boundaries cannot be defined for individual applications, it is difficult to allocate computing resources reasonably, and applications easily interfere with each other. When one process receives a large number of user requests, server resources end up unevenly distributed and the applications compete for them.

2.2. Virtualized deployment (running multiple virtual machines on one physical machine; the virtualization layer is heavyweight):

Advantages: Program environments do not affect each other, providing a certain degree of security

Disadvantages: Each virtual machine adds its own operating system, which wastes some resources

2.3. Containerized deployment (similar to virtualization, but the containers share the host operating system):

Advantages: Each container has its own file system, CPU, memory, process space, etc., and the resources required to run the application are packaged inside the container, decoupled from the underlying infrastructure.

3. Characteristics of k8s

(1) Self-healing: Once a container crashes, a new container can be started in its place in about one second.
(2) Elastic scaling: The number of running containers can be adjusted automatically as the needs of the cluster change (a minimal scaling sketch follows this list).
(3) Service discovery: A service can automatically find the services it depends on.
(4) Load balancing: If a service runs multiple containers, requests to it are automatically load balanced across them.
(5) Version rollback: If a problem is found in a newly released version of a program, it can be rolled back to the previous version immediately.
(6) Storage orchestration: Storage volumes can be created automatically according to the needs of the containers themselves.
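
As an illustration of elastic scaling, here is a minimal sketch using the official Kubernetes Python client (the kubernetes package). It assumes a kubeconfig is available locally and that a Deployment named web exists in the default namespace; both names are hypothetical.

```python
# Minimal sketch: scale a Deployment through the Kubernetes API.
# Assumes: `pip install kubernetes`, a usable ~/.kube/config, and a
# hypothetical Deployment "web" in the "default" namespace.
from kubernetes import client, config

config.load_kube_config()          # read cluster credentials from ~/.kube/config
apps = client.AppsV1Api()

# Patch the desired replica count to 5; the Deployment controller then
# starts or stops containers until the running count matches it.
apps.patch_namespaced_deployment_scale(
    name="web",
    namespace="default",
    body={"spec": {"replicas": 5}},
)
```

After the patch, the controller keeps reconciling toward five replicas, which is also the mechanism behind self-healing: a crashed container is replaced so the desired count is maintained.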

4. Evolution of the container management architecture

4.1. Borg architecture

4.2. Kubernetes architecture

5. The K8S architecture explained

5.1. Master node: the control plane of the cluster, responsible for cluster decision-making (management)

(1) ApiServer: The single entry point for all resource operations; it receives user commands and provides mechanisms such as authentication, authorization, API registration and discovery (a minimal sketch of talking to it follows this list).

(2) Scheduler: Responsible for cluster resource scheduling; it assigns Pods to the appropriate node according to the configured scheduling policies.

(3) ControllerManager: Responsible for maintaining the state of the cluster, for example program deployment arrangements, fault detection, automatic scaling, and rolling updates.

(4) Etcd: Responsible for storing information about the various resource objects in the cluster.
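
Because the ApiServer is the single entry point, every client, including kubectl, reads and writes cluster state through it. Below is a minimal sketch with the Kubernetes Python client; it only assumes a usable local kubeconfig.

```python
# Minimal sketch: query cluster state through the ApiServer, which is
# the same HTTP API that kubectl uses. Assumes a local kubeconfig.
from kubernetes import client, config

config.load_kube_config()          # authenticate against the ApiServer
v1 = client.CoreV1Api()

# Every read or write of resource objects goes through the ApiServer,
# which persists them in etcd.
for node in v1.list_node().items:
    print(node.metadata.name, node.status.node_info.kubelet_version)

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```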

5.2. Node: the data plane of the cluster, responsible for providing the runtime environment for containers (work)

(1) Kubelet: Responsible for maintaining the life cycle of containers as well as their storage and networking.

(2) KubeProxy: Responsible for service discovery and load balancing within the cluster.

(3) Docker: Responsible for the various container operations on the node, serving as the container runtime.

(4) Pod: Multiple Pods can be deployed on one node, and a Pod can contain multiple containers (see the sketch after this list).
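
To make the node → Pod → container hierarchy concrete, here is a minimal sketch that lists the Pods scheduled to one node and counts the containers in each Pod. It uses the Kubernetes Python client; the node name node-1 and the default namespace are hypothetical, and a local kubeconfig is assumed.

```python
# Minimal sketch: list the Pods placed on one node and the number of
# containers in each Pod. "node-1" and "default" are hypothetical names.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pods = v1.list_namespaced_pod(
    namespace="default",
    field_selector="spec.nodeName=node-1",   # only Pods scheduled to this node
)
for pod in pods.items:
    print(pod.metadata.name, "containers:", len(pod.spec.containers))
```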

Origin blog.csdn.net/m0_73901077/article/details/134902458