[Cloud Native K8s] Kubernetes Principle

Table of contents

Introduction

1. The origin of K8S

1. Description of public cloud types: IaaS, PaaS, SaaS

2. The birth of resource managers

2.1 Mesos

2.2 Docker Swarm

2.3 Kubernetes

2. Why is Kubernetes needed and what can it do?

3. Characteristics of Kubernetes

4. Kubernetes architecture

1. K8S workflow

2. K8S Pod creation process


Introduction

What does Kubernetes mean? Why is it also called K8S?

The name Kubernetes comes from Greek, meaning "helmsman" or "navigator." The abbreviation K8s comes from replacing the eight letters "ubernete" between the "K" and the "s" with "8".

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates declarative configuration and automation. Kubernetes has a large and rapidly growing ecosystem; Kubernetes services, support, and tools are widely available.

1. The origin of K8S

1. Description of public cloud types: IaaS, PaaS, SaaS

In cloud computing, the public cloud is commonly divided into three levels: IaaS (Infrastructure as a Service), PaaS (Platform as a Service), and SaaS (Software as a Service).

  • IaaS: Infrastructure as a Service. IaaS providers supply basic computing infrastructure such as servers, storage, and networking; a well-known example in China is Alibaba Cloud.

  • PaaS: Platform as a Service, sometimes called middleware. PaaS companies provide a variety of solutions for developing and deploying applications online, such as virtual servers and operating systems. Large PaaS providers include Google App Engine, Microsoft Azure, Force.com, Heroku, and Engine Yard; a well-known example in China is Sina Cloud.

  • SaaS: Software as a Service. Examples include Google Apps, Dropbox, Salesforce, Cisco WebEx, Concur, and GoToMeeting; a well-known example is Microsoft Office 365.

2. The birth of resource managers

Once these public clouds existed, their resources needed to be managed, and so resource managers were born: Mesos, Docker Swarm, and Kubernetes.

2.1 Mesos

Mesos is an open-source distributed resource management framework under the Apache Software Foundation. It has been called the kernel of distributed systems and was later widely used at Twitter.

Twitter was also Mesos's largest customer, but around May 2019 Twitter announced that it would stop using Mesos and switch to Kubernetes. Since then, Mesos has gradually been phased out.

2.2 Docker Swarm

Docker Swarm is a very lightweight cluster management tool, only a few dozen MB in size.

Swarm is the cluster management tool officially provided by Docker. Its main function is to abstract several Docker hosts into a single whole and manage the Docker resources on those hosts through one entry point.

Swarm is similar to Kubernetes, but because it is lighter, it offers fewer features than Kubernetes.

Around July 2019, Alibaba Cloud announced that Docker Swarm would be removed from its selection list, which suggests that in the near future Docker Swarm will gradually be phased out, just like Mesos.

2.3 Kubernetes

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates declarative configuration and automation. Kubernetes has a large and rapidly growing ecosystem; Kubernetes services, support, and tools are widely available.

In short, Kubernetes is an open-source system for automatically deploying, scaling, and managing containerized applications. K8S can be understood as a cluster that automates the operation and maintenance of many containerized programs (such as Docker containers): a container orchestration framework with a rich ecosystem.

2. Why is Kubernetes needed and what can it do?

Containers are a great way to package and run applications. In a production environment, you need to manage the containers running your application and ensure no downtime. For example, if one container fails, another container needs to be started. Wouldn't it be easier if the system handled this behavior?

This is exactly the problem Kubernetes solves. Kubernetes provides you with a framework for running distributed systems resiliently, taking care of scaling requirements, failover, deployment patterns, and more.

K8S is Google's open-source container cluster management system. Built on container technologies such as Docker, it provides a complete set of functions for containerized applications, such as deployment and operation, resource scheduling, service discovery, and dynamic scaling, improving the efficiency and convenience of large-scale container cluster management. Its main functions are as follows:

  1. Use container technologies such as Docker to package, instantiate, and run applications.
  2. Run and manage containers across machines in a cluster.
  3. Solve the communication problem between Docker containers across machines.
  4. K8S's self-healing mechanism ensures that the container cluster always runs in the state expected by the user.

3. Characteristics of Kubernetes

  1. Auto-scaling: expand or shrink the number of application instances via command, UI, or automatically based on CPU usage, ensuring high availability at business peaks and reclaiming resources during off-peak hours to run services at minimal cost.
  2. Self-healing: restart failed containers, replace and redeploy Pods when a node fails to maintain the expected number of replicas, and kill containers that fail their health checks, keeping them out of client traffic until they are ready, so that online services are not interrupted.
  3. Service discovery and load balancing: K8S provides a unified entry point (an internal IP address and a DNS name) for a group of containers and load-balances across all associated containers, so users do not need to worry about container IPs.
  4. Automated rollout (rolling release by default) and rollback: K8S updates applications with a rolling strategy, updating Pods one at a time instead of deleting them all at once. If a problem occurs during the update, the change is rolled back so the upgrade does not affect the business.
  5. Centralized configuration and secret management: manage confidential data and application configuration without exposing sensitive data in images, improving the security of sensitive data; commonly used configuration can also be stored in K8S for applications to use.
  6. Storage orchestration: mount external storage systems, whether local storage, public cloud (such as AWS), or network storage (such as NFS, GlusterFS, Ceph), as cluster resources, greatly improving storage flexibility.
  7. Batch and scheduled jobs: provide one-off and scheduled tasks to cover batch data processing and analysis scenarios.
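The self-healing and scaling behaviors above all boil down to a reconciliation loop: a controller compares the desired number of replicas with the observed state and creates or removes instances to close the gap. Below is a minimal sketch of that idea in plain Python; this is not the real controller-manager API, and the `reconcile` function and pod dictionaries are invented for illustration:

```python
def reconcile(desired_replicas, running_pods):
    """Compare desired vs. observed state and return the actions a
    controller would take to converge them (illustrative only)."""
    healthy = [p for p in running_pods if p["healthy"]]
    actions = []
    # Kill containers that fail their health check.
    for pod in running_pods:
        if not pod["healthy"]:
            actions.append(("delete", pod["name"]))
    # Start replacements until the desired replica count is met.
    for i in range(desired_replicas - len(healthy)):
        actions.append(("create", f"pod-replacement-{i}"))
    return actions

pods = [
    {"name": "web-0", "healthy": True},
    {"name": "web-1", "healthy": False},  # failed its health check
]
print(reconcile(3, pods))
# -> [('delete', 'web-1'), ('create', 'pod-replacement-0'), ('create', 'pod-replacement-1')]
```

The real controller-manager runs this kind of loop continuously, which is why the cluster converges back to the desired state after any failure.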

K8S solves several pain points of running Docker on its own:

  1. Docker runs on a single machine and cannot form an effective cluster.
  2. Management costs rise as the number of containers grows.
  3. There is no effective disaster-recovery or self-healing mechanism.
  4. Without preset orchestration templates, rapid, large-scale container scheduling is impossible.
  5. There is no unified configuration-management center.
  6. There are no tools for managing the container lifecycle.
  7. There are no graphical operation and maintenance tools.

4. Kubernetes architecture

K8S follows the master-slave model (Master-Slave architecture): the Master node is responsible for scheduling, management, and operation and maintenance of the cluster, while the Slave nodes are the cluster's computing workload nodes.

In K8S, the main node is generally called the Master node, and the slave node is called the Worker Node. Each Node will be assigned some workload by the Master.

The Master component can run on any computer in the cluster, but it is recommended that the Master node occupies a separate server. Because the Master is the brain of the entire cluster, if the node where the Master is located goes down or becomes unavailable, all control commands will become invalid. In addition to the Master, other machines in the K8S cluster are called Worker Nodes. When a Node goes down, the workload on it will be automatically transferred to other nodes by the Master.

Components and their roles:

Master node:
  • apiserver: the entry point for all services.
  • controller-manager: creates Pods from preset templates and maintains the desired number of replicas of Pods and other resources.
  • scheduler: schedules Pods, selecting the most suitable node through pre-selection (predicate) and scoring (priority) strategies.
  • etcd: a distributed key-value database that stores (persists) the important information of the K8S cluster.

Worker node:
  • kubelet: communicates with the apiserver to report the node's resource usage and status, accepts instructions from the apiserver, and interacts with the container engine to manage the container lifecycle.
  • kube-proxy: implements the Pod network proxy on the node, maintains network rules and layer-4 load-balancing rules, and writes those rules to iptables or ipvs to implement Service access mapping.
  • container runtime (e.g., Docker): runs containers and is responsible for creating and managing local containers.
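The layer-4 load balancing that kube-proxy provides can be made concrete: a Service gives clients one stable entry point and spreads connections across the backing Pods. Here is a toy round-robin balancer in plain Python; the real kube-proxy implements this with iptables/ipvs rules in the kernel, not application code, and `ToyService` and the pod IPs are invented for illustration:

```python
import itertools

class ToyService:
    """Illustrative stand-in for a Service: one stable entry point,
    round-robin forwarding across backing pod endpoints."""
    def __init__(self, endpoints):
        self._cycle = itertools.cycle(endpoints)

    def route(self):
        # Pick the next pod endpoint for an incoming connection.
        return next(self._cycle)

svc = ToyService(["10.244.1.5", "10.244.2.7", "10.244.3.9"])
print([svc.route() for _ in range(4)])
# -> ['10.244.1.5', '10.244.2.7', '10.244.3.9', '10.244.1.5']
```

Clients only ever see the Service's address and DNS name; which Pod actually answers is hidden behind this forwarding layer, which is why the document says users "do not need to consider container IP issues."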

1. K8S workflow

First, an operator uses the kubectl command-line tool to send a request to the API Server. After receiving the request, the API Server writes it into etcd and asks the Controller-manager to create the Pod according to the preset template. The Controller-manager reads the user's default information from etcd through the API Server and then, again through the API Server, asks the Scheduler to select the most suitable worker node for the new Pod. Based on the node metadata and remaining resources stored in etcd, the Scheduler applies its pre-selection and scoring strategies to choose the optimal node.

After the Scheduler determines the node, it passes the result through the API Server to the kubelet on that node, which creates the Pod by calling the container engine; at the same time, Pod monitoring information is stored in etcd through the API Server.


When a user accesses the application, kube-proxy load-balances and forwards the request to the corresponding Pod. In short, the Controller-manager decides which Pods to create, while the kubelet and the container engine carry out the actual work.
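The Scheduler's pre-selection and scoring steps described above can be sketched as a two-phase filter-then-rank pass. This is a toy model in plain Python, not the real kube-scheduler plugin API; field names like `free_cpu` and the "most free CPU wins" scoring rule are invented for illustration:

```python
def schedule(pod, nodes):
    """Two-phase toy scheduler: filter out infeasible nodes
    (pre-selection), then rank the survivors (scoring)."""
    # Pre-selection (predicates): the node must have enough free resources.
    feasible = [n for n in nodes
                if n["free_cpu"] >= pod["cpu"] and n["free_mem"] >= pod["mem"]]
    if not feasible:
        return None  # no suitable node: the pod stays Pending
    # Scoring (priorities): prefer the node with the most free CPU.
    return max(feasible, key=lambda n: n["free_cpu"])["name"]

nodes = [
    {"name": "node-1", "free_cpu": 2.0, "free_mem": 4096},
    {"name": "node-2", "free_cpu": 0.5, "free_mem": 8192},
    {"name": "node-3", "free_cpu": 4.0, "free_mem": 2048},
]
print(schedule({"cpu": 1.0, "mem": 1024}, nodes))
# -> node-3
```

The real scheduler evaluates many more predicates (taints, affinity, port conflicts) and combines multiple weighted scoring functions, but the filter-then-rank shape is the same.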


2. K8S Pod creation process

kubectl creates a Pod (the request is converted to JSON on submission):

  1. The request first passes auth (authentication) and is then handed to the API Server for processing.
  2. The API Server submits the request information to etcd.
  3. The Scheduler and Controller-manager watch (listen to) the API Server for such requests.
  4. When the Scheduler detects the request, it asks the API Server for the worker node information.
  5. The API Server fetches the back-end node information from etcd and returns it to the watching Scheduler, which performs pre-selection and scoring and sends the result back to the API Server.
  6. The Controller-manager, also watching the API Server, creates the Pod's configuration (which controller is required) according to the request and hands the controller resources to the API Server.
  7. The API Server then submits the manifest to the kubelet (agent) on the chosen node.
  8. The kubelet interacts with the container runtime interface (such as containerd). If it is a Docker container, the kubelet talks to the Docker daemon (dockerd) through the dockershim and runc interfaces to create the corresponding containers and thus the Pod.
  9. The kubelet also collects the node's status information via the metrics server and submits it to the API Server, which finally writes the data to etcd for storage (the API Server maintains the data in etcd).
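Steps 3-6 above rely on the watch pattern: components never call each other directly; each one watches the API Server and reacts to changes it sees there. A minimal sketch of that decoupling in plain Python (the `ToyAPIServer` class and callback-based watch are invented stand-ins; the real system uses long-lived HTTP watch connections):

```python
class ToyAPIServer:
    """Toy stand-in: stores objects and notifies watchers on changes."""
    def __init__(self):
        self.store = {}      # plays the role of etcd
        self.watchers = []

    def watch(self, callback):
        # Register a component that wants to hear about changes.
        self.watchers.append(callback)

    def submit(self, key, obj):
        # Persist the object, then fan the event out to every watcher.
        self.store[key] = obj
        for cb in self.watchers:
            cb(key, obj)

log = []
api = ToyAPIServer()
# The scheduler and controller-manager both watch the API server.
api.watch(lambda k, o: log.append(f"scheduler saw {k}"))
api.watch(lambda k, o: log.append(f"controller-manager saw {k}"))
api.submit("pod/web-0", {"phase": "Pending"})
print(log)
# -> ['scheduler saw pod/web-0', 'controller-manager saw pod/web-0']
```

This is why every arrow in the flow above passes through the API Server: it is the single point of persistence and notification, and the other components stay loosely coupled.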

Simple version

1. kubectl converts the request to JSON and submits the Pod-creation request to the api-server.
2. The api-server records the request information in etcd.
3. The scheduler watches the requests processed by the api-server and asks the api-server for back-end node information.
4. The api-server fetches the back-end node information from etcd and gives it to the scheduler.
5. The scheduler performs pre-selection and scoring, then submits the result to the api-server.
6. The controller-manager watches the request information processed by the api-server and hands the required controller resources to the api-server.
7. The api-server contacts the kubelet on the chosen node.
8. The kubelet calls the container runtime to create the Pod and returns the status information to the api-server.
9. The api-server records the information in etcd.

Origin: blog.csdn.net/weixin_71429844/article/details/127603416