[K8S series] In-depth analysis of k8s: getting started guide (2)

Table of contents

Preamble

Recap:

4. K8S architecture

4.1 Declarative system VS imperative system

4.2 k8s-declarative system

 4.2.1 Declaration method - yaml

4.3 Basic concepts of Kubernetes

1. Cluster

2. Node

3. Container

4.Pod

5.Service

6.Deployment

Question:

4.4 K8S Core Components

4.4.1 kube-apiserver

4.4.2  kube-scheduler

4.4.3  kube-controller-manager

4.4.4  etcd

4.4.5 kubelet

4.4.6 kube-proxy

4.4.7 docker/Container Runtime

Conclusion:

 4.5 k8s capability display

4.5.1 Resource Scheduling

4.5.2 Horizontal scaling

 4.5.3 Failure Recovery

 5. Summary


Preamble

Stick with anything for more than six months and you will see a qualitative leap.

Today we cover introductory K8s material. I hope this article helps readers form a preliminary understanding of K8s.


Recap:

This article follows on from the previous installment. Before reading this one, it is recommended to read the previous article first: 

Previous address: http://t.csdn.cn/ayZXg

4. K8S architecture

4.1 Declarative system VS imperative system

Let's first compare declarative systems and imperative systems with an everyday analogy.

(figure: an air conditioner remote control next to a TV remote control)

The air conditioner remote on the left is a declarative system.

For example: we set a target temperature of 25 degrees, and no matter how the unit adjusts along the way, the final result is 25 degrees.

The TV remote on the right is imperative. It has many buttons, each a direct command.

For example: press channel 1, and the TV should switch to channel 1 immediately.

Conclusion:

Declarative systems: focus on the goal, "what to do"

In software engineering, a declarative system is one whose code describes what the system should accomplish rather than how to accomplish it.

The program only states the desired outcome; how to reach that outcome is left to the system.

Imperative systems: focus on the process, "how to do it"

In software engineering, an imperative system spells out the concrete steps to solve a problem, complete a task, or reach a goal.

The program explicitly tells the system which instructions to execute and expects the system to return the desired result.

4.2 k8s-declarative system

All of Kubernetes' management capabilities are built on top of object abstractions.

Core objects include:

Node : the abstraction of a compute node, describing its resources, health status, and so on

Namespace : the basic unit of resource isolation, which can loosely be understood as a directory in a file system

Ingress : an API object that manages external access to services in the cluster, typically over HTTP

Service : how an application is published as a service; essentially a declaration of load balancing and naming

Pod : describes an application instance, including its image address, resource requirements, and so on. It is the core object in Kubernetes and the secret weapon that connects applications to the infrastructure

 4.2.1 Declaration method - yaml

k8s is a declarative system, so how does it achieve that? It realizes declarations through yaml manifests. Let's look at the main top-level fields of such a yaml file.

apiVersion : declares the API version of the object

kind : declares the resource type, including the commonly used kinds (Deployment, Service, Ingress, Job, ConfigMap)

metadata : defines the resource's metadata, such as its name, namespace, and labels

spec : defines the desired attributes of the resource, such as the number of replicas, label selectors, business version, container image and version, container names, startup policy, and hardware resource limits; all of these are set through sub-fields of spec
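To make the four fields concrete, here is a minimal sketch of a Deployment manifest; the names, image, and values are illustrative, not taken from any real system:

```yaml
apiVersion: apps/v1        # apiVersion: which API group/version this object uses
kind: Deployment           # kind: the resource type
metadata:                  # metadata: name, namespace, labels, ...
  name: demo-web           # illustrative name
  labels:
    app: demo-web
spec:                      # spec: the desired state of the resource
  replicas: 2              # number of replicas
  selector:                # label selector matching the Pod template below
    matchLabels:
      app: demo-web
  template:                # Pod template used to create the replicas
    metadata:
      labels:
        app: demo-web
    spec:
      containers:
        - name: web
          image: nginx:1.25      # container image and version (illustrative)
          resources:
            limits:              # hardware resource restrictions
              cpu: "500m"
              memory: 256Mi
```

Submitting this file (for example with `kubectl apply -f`) tells the cluster the target state; k8s itself works out how to reach it.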

4.3 Basic concepts of Kubernetes

1. Cluster

A Kubernetes cluster is a collection of multiple nodes , which can be on the same physical machine or on different physical machines.

A cluster consists of Master nodes and Worker nodes. Master nodes are used to manage the entire cluster, while Worker nodes are used to host application containers.

Each node in the cluster has an agent (kubelet) that communicates with the master node to manage containers.

2. Node

A Kubernetes node is a computer in a cluster, which can be a physical computer or a virtual machine. A node can be a Master node or a Worker node.

Master nodes are responsible for managing the entire cluster, while Worker nodes are responsible for hosting application containers.

3. Container

A Kubernetes container is a lightweight virtualization technology that packages an application and its dependencies into a portable image which can then run anywhere.

Docker is a commonly used container technology, and Kubernetes also supports other container technologies.

4.Pod

A Kubernetes Pod is the smallest deployment unit and consists of one or more tightly coupled containers.

A Pod contains one or more application containers that share the same network and storage resources and can communicate via local interprocess communication (IPC) and a shared file system.

A Pod can also contain one or more init containers, which run before application containers to prepare the Pod's environment for execution.

A simple explanation:

In a real operating system, processes do not run alone in isolation; they are organized together in process groups, as shown in the following figure:

We already know that the essence of a container is a process, and that k8s plays the role of the operating system.

What k8s does is map the concept of the "process group" onto container technology, and the result is the Pod.

So a Pod is a logical concept : a group of containers that share certain resources .

Specifically: all containers in a Pod share the same Network Namespace and can declare that they share the same Volume.

These containers are associated with an Infra container by joining its Network Namespace.

This organizational relationship as a whole is called a Pod.

It can also be understood that Pod plays the role of a "virtual machine" in a traditional deployment environment .

This design is to make the migration of users from the traditional environment (virtual machine environment) to Kubernetes (container environment) smoother.
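A minimal sketch of such a shared Pod, with hypothetical names and images: the two containers declare the same Volume, and because they share one Network Namespace they could also reach each other over localhost.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-pod           # illustrative name
spec:
  volumes:
    - name: shared-data      # Volume declared once, shared by both containers
      emptyDir: {}
  containers:
    - name: writer
      image: busybox:1.36    # illustrative image
      command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: reader
      image: busybox:1.36
      # Shares the writer's Network Namespace; here it simply reads
      # the file the writer put on the shared Volume.
      command: ["sh", "-c", "sleep 5 && cat /data/msg && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
```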

5.Service

A Kubernetes Service is an abstraction that defines a logical collection of Pods that can be accessed as a single unit. A Service provides a stable network endpoint through which other applications can access Pods. Service can also define load balancing rules to distribute traffic among multiple Pods.

Here is a simple example.

Consider the access relationship between a front-end web application and a back-end service. Two such applications are often deliberately not deployed on the same machine, so that even if the machine hosting the web application goes down, the back-end service is unaffected.

We know that a container's IP address and other details are not fixed; the back end's IP changes with every release. So how can the web application find the Pod of the back-end service container?

Kubernetes' answer is to bind a Service to the Pod; the IP address and other details declared by the Service are "fixed for life".

So the main function of a Service is to act as the Pod's proxy entrance, exposing a fixed network address on the Pod's behalf. The web application's Pods then only need to care about the back-end service's Service details.
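A minimal sketch of such a Service, with hypothetical names, labels, and ports:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-svc          # the stable name the web application relies on
spec:
  selector:
    app: backend             # matches the labels on the back-end Pods
  ports:
    - port: 80               # fixed port exposed by the Service
      targetPort: 8080       # port the back-end containers actually listen on
```

The web application can keep calling `backend-svc:80` via cluster DNS; however often the back-end Pods are replaced, the Service's address stays the same.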

6.Deployment

Kubernetes Deployment is a controller that automates container deployment and updates. A Deployment uses a Pod template to define the specification of an application container, and then creates and manages replicas of the Pod. If a Pod fails or is deleted, the Deployment will automatically create a new Pod to replace it.

Question:

With the objects introduced above in mind, let's think about this question: "how does K8S describe a complete business application by combining objects?"

In the k8s project, the recommended way of working is:

First, describe the application you want to manage with an "orchestration object", such as a Pod or Job;

Then, define some "service objects" for it, such as Service, Secret, and Ingress; these objects take care of specific platform-level functions.

This way of working is the so-called " declarative API ".

The "orchestration objects" and "service objects" behind this API are all API Objects in the k8s project.

4.4 K8S Core Components

The architecture of the k8s project consists of two kinds of nodes, Master and Node, corresponding to control nodes and compute nodes respectively.

4.4.1 kube-apiserver

kube-apiserver : kube-apiserver is the front end of the k8s control plane, providing a RESTful API for managing cluster state and configuration. It is one of the core components of k8s; all other components communicate through kube-apiserver. It is also the user-facing interface of k8s and can be accessed with tools such as kubectl.

4.4.2  kube-scheduler

kube-scheduler : kube-scheduler is the scheduler of k8s, responsible for assigning Pods to available nodes. It weighs node resources against Pod requirements to make scheduling decisions, and also supports custom scheduling policies to meet specific business needs.

4.4.3  kube-controller-manager

kube-controller-manager : kube-controller-manager is the controller of k8s, which is responsible for monitoring the status of the cluster and ensuring that the objects in the cluster are in the desired state. kube-controller-manager includes multiple controllers, such as Replication Controller and Endpoint Controller, which are responsible for managing objects such as ReplicaSets and Endpoints.

4.4.4  etcd

etcd : etcd is a highly available key-value store that holds all of the data in a k8s cluster, including the cluster configuration and the state of objects such as Pods and Services. etcd is a distributed system and achieves high availability by running multiple nodes.

4.4.5  kubelet

kubelet : kubelet is an agent on the k8s node, responsible for managing Pods on the node. The kubelet communicates with the kube-apiserver to obtain information about the Pods that need to run on the node and ensure that the Pods are running. The kubelet is also responsible for monitoring the health of Pods and restarting them if necessary.

4.4.6 kube-proxy

kube-proxy : kube-proxy is the network proxy of k8s, which is responsible for load balancing and service discovery within the cluster. kube-proxy can expose Pods as Kubernetes Services, and use mechanisms such as IPVS or iptables for load balancing.

4.4.7 docker/Container Runtime

docker/Container Runtime : the runtime environment on the Worker Node, responsible for image management and for actually running Pods and containers.

Conclusion:

The control node, that is, the Master node, consists of four closely coordinated independent components:

kube-apiserver, responsible for API services; kube-scheduler, responsible for scheduling; kube-controller-manager, responsible for container orchestration; and etcd. The persistent data of the entire cluster is handled by kube-apiserver and stored in etcd.

The core of the compute node is a component called the kubelet.

In the k8s project, the kubelet is mainly responsible for dealing with the container runtime (such as the Docker project). This interaction relies on a remote-call interface called CRI (Container Runtime Interface) , which defines the core operations of a container runtime, for example all the parameters required to start a container.

This is why the Kubernetes project does not care which container runtime you deploy or what technology it uses: as long as your container runtime can run standard container images, it can be plugged into the Kubernetes project by implementing CRI.

 4.5 k8s capability display

4.5.1 Resource Scheduling

In one sentence: put the Pod on an appropriate node . For a node to count as "appropriate", four aspects must be satisfied:

  1. First, the node must meet the Pod's resource requirements
  2. Second, it must meet the Pod's special relationship requirements (for example, affinity with other Pods)
  3. Third, it must meet any restrictions placed on the node itself
  4. Finally, the scheduler should make reasonable use of the cluster's overall resources

Of course, all of these judgments are made by k8s itself; we only need to describe the target Pod declaratively.
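As a sketch of how such requirements and restrictions are declared, here is a hypothetical Pod manifest combining resource requests with a node restriction; the label `disktype: ssd` and all values are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sched-demo           # illustrative name
spec:
  nodeSelector:              # node restriction: only nodes labelled disktype=ssd qualify
    disktype: ssd
  containers:
    - name: app
      image: nginx:1.25      # illustrative image
      resources:
        requests:            # resource requirements the scheduler must satisfy
          cpu: "250m"
          memory: 128Mi
        limits:
          cpu: "500m"
          memory: 256Mi
```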

4.5.2 Horizontal scaling

Kubernetes has the HPA auto-scaling capability. Currently it can trigger auto-scaling when CPU usage or a custom metric (such as TPS or QPS) reaches a set level, and when the request peak passes, the Pods can shrink back to their original level. In the figure below, the load on the white node is detected to be too high, so the service is automatically replicated twice and distributed to other nodes:
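Declaratively, this behaviour is requested with a HorizontalPodAutoscaler object; a minimal sketch with illustrative names and thresholds:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa              # illustrative name
spec:
  scaleTargetRef:            # the Deployment whose replica count HPA adjusts
    apiVersion: apps/v1
    kind: Deployment
    name: demo-web           # illustrative target
  minReplicas: 2             # level to return to once the peak passes
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70%
```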

 4.5.3 Failure Recovery

The Controller-Manager component monitors the cluster at all times, and that includes failure detection. When it detects that a node has become unavailable, it automatically reschedules that node's Pods onto healthy nodes, as in the figure below.

 5. Summary

When using Kubernetes, you need to be familiar with its core concepts and mechanisms , and learn how to operate it through the API and CLI tools.

At the same time, it is also necessary to master container technologies such as Docker in order to package applications into container images and deploy them in Kubernetes clusters.

Kubernetes is a powerful container orchestration platform that helps developers manage containerized applications more easily.

The road of learning is long; let's keep at it!


Origin blog.csdn.net/weixin_36755535/article/details/129612169