Kubernetes - Introduction to roles, components, and running processes

1. Division of roles in Kubernetes

 

K8s is mainly divided into two roles: the Master and the Node. The Master is responsible for controlling and managing the entire cluster, while Nodes are the machines that execute and run the workloads. Through the collaboration of the Master and the Nodes, K8s provides automated deployment, elastic scaling, and fault recovery for distributed applications. Strictly speaking, "Master" and "Node" are not official terms, but they have become our conventional usage.

        A group of worker machines, called nodes, runs the containerized applications. Every cluster has at least one worker node. Worker nodes host Pods, and a Pod is the component that carries the application workload; the Pod is the smallest deployable unit in K8s. The control plane manages the worker nodes and Pods in the cluster. In production environments, the control plane usually runs across multiple machines, and a cluster usually runs multiple nodes, to provide fault tolerance and high availability.

2. Kubernetes architecture

        2.1 Components in Master

        kube-apiserver

        The API server is a component of the Kubernetes control plane (commonly called the Master). It is responsible for exposing the Kubernetes API and for accepting and processing requests. The API server is the front end of the Kubernetes control plane.

        The API server is the sole entry point for operating on K8s cluster resources, and it provides the authentication, authorization, admission control, and API registration and discovery mechanisms.
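The entry-point role described above can be sketched as a request pipeline: authentication, then authorization, then admission, and only then does the request touch storage. This is a toy illustration in Python; all function names and the RBAC tuples are hypothetical, not the real Kubernetes implementation.

```python
# Toy sketch of the kube-apiserver request pipeline. Every request must pass
# authentication -> authorization -> admission before reaching storage.
# All names and rules here are illustrative, not real Kubernetes code.

def authenticate(request):
    # Real Kubernetes checks client certificates, bearer tokens, etc.
    return request.get("user") is not None

def authorize(request):
    # RBAC-style check: is this (user, verb, resource) combination allowed?
    allowed = {("alice", "create", "pods"), ("alice", "get", "pods")}
    return (request["user"], request["verb"], request["resource"]) in allowed

def admit(request):
    # Admission controllers can mutate or reject objects before storage.
    return request["resource"] != "forbidden"

def handle(request, store):
    if not authenticate(request):
        return 401
    if not authorize(request):
        return 403
    if not admit(request):
        return 400
    store[request["resource"]] = request.get("body")  # persist (stands in for etcd)
    return 200

store = {}
print(handle({"user": "alice", "verb": "create", "resource": "pods", "body": {}}, store))  # 200
```

An unauthenticated request would be rejected with 401 before authorization even runs, which mirrors why every other component talks to the cluster only through this front door.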

        etcd

        etcd is a consistent and highly available key-value store. It serves as the backing database for all Kubernetes cluster data (such as the number and status of Pods, namespaces, API objects, and service-discovery details). In production-grade K8s, etcd usually runs as a cluster itself, and for security reasons it is accessed only through the API server.
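What makes a key-value store useful as a cluster backing store is not just `get`/`put` but the ability to watch keys for changes, which is how the control plane reacts to new state. The following is a minimal illustrative sketch of that idea (a plain in-memory dictionary with watch callbacks, not an etcd client):

```python
# Minimal sketch of the role etcd plays for the control plane: a key-value
# store whose writes notify watchers. Illustrative only; real etcd adds
# raft-based consistency, leases, revisions, and network access.
class TinyStore:
    def __init__(self):
        self.data = {}
        self.watchers = []

    def watch(self, callback):
        # Register a callback to be invoked on every write.
        self.watchers.append(callback)

    def put(self, key, value):
        self.data[key] = value
        for cb in self.watchers:
            cb(key, value)

events = []
store = TinyStore()
store.watch(lambda k, v: events.append((k, v)))
store.put("/pods/web-1", {"phase": "Pending"})
print(events)  # [('/pods/web-1', {'phase': 'Pending'})]
```

Components like the scheduler and controllers effectively behave like such watchers, reacting whenever the stored cluster state changes.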

       kube-scheduler

        kube-scheduler watches for newly created Pods that have not yet been assigned a node, and selects a node for each of them to run on.

        For example, if an application requires 1 GB of memory and 2 CPU cores, its Pods will be scheduled onto a node that has at least those resources available. The scheduler runs every time a Pod needs to be scheduled, so it must know the total resources of each node and the resources already allocated to the existing workloads on it.
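The resource-fit check in this example can be sketched as a simple filtering step (the node data and field names below are made up for illustration; the real scheduler also runs many other filters and a scoring phase):

```python
# Sketch of the scheduler's filtering step for the example above:
# keep only nodes whose free resources cover the pod's requests
# (1 GiB of memory, 2 CPU cores). Node data is illustrative.
def fits(node, pod):
    free_mem = node["mem"] - node["used_mem"]
    free_cpu = node["cpu"] - node["used_cpu"]
    return free_mem >= pod["mem"] and free_cpu >= pod["cpu"]

nodes = [
    # node-a has only 512 MiB and 1 core free -> does not fit
    {"name": "node-a", "mem": 4096, "used_mem": 3584, "cpu": 4, "used_cpu": 3},
    # node-b has 6 GiB and 6 cores free -> fits
    {"name": "node-b", "mem": 8192, "used_mem": 2048, "cpu": 8, "used_cpu": 2},
]
pod = {"mem": 1024, "cpu": 2}

candidates = [n["name"] for n in nodes if fits(n, pod)]
print(candidates)  # ['node-b']
```

This is why the scheduler needs both the total capacity of each node and the amounts already allocated: the decision is made on the difference between the two.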

        kube-controller-manager

        K8s runs many different controller processes in the background. When a service's configuration changes (for example, the image a Pod runs is replaced, or a parameter in its YAML configuration file is changed), the controller notices the change and starts working toward the new desired state.

        Logically, each controller is a separate process, but to reduce complexity, they are all compiled into the same executable and run in a single process.

        There are many different types of controllers. Here are some examples:

  • Node controller: responsible for noticing and responding when a node fails.
  • Job controller: watches Job objects that represent one-off tasks, and creates Pods to run those tasks to completion.
  • EndpointSlice controller: populates EndpointSlice objects (which provide the link between a Service and its Pods).
  • ServiceAccount controller: creates a default service account (ServiceAccount) for each new namespace.
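Despite their different responsibilities, these controllers all share the desired-state pattern described above: observe the current state, compare it with the desired state, and act on the difference. A minimal illustrative reconcile function (not real controller code) makes the pattern concrete:

```python
# Sketch of the reconcile pattern shared by controllers: compare observed
# state with desired state and emit corrective actions. Illustrative only;
# a real controller watches the API server and acts continuously.
def reconcile(desired_replicas, running_pods):
    diff = desired_replicas - len(running_pods)
    if diff > 0:
        # Too few pods: create the missing ones.
        return [("create", None)] * diff
    if diff < 0:
        # Too many pods: delete the surplus.
        return [("delete", pod) for pod in running_pods[:-diff]]
    return []  # observed state already matches desired state

print(reconcile(3, ["pod-1"]))  # [('create', None), ('create', None)]
```

Running such a loop repeatedly is what lets the cluster converge back to the desired state after any disturbance, whether a config change or a node failure.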

        cloud-controller-manager

        The cloud controller manager lets you connect your cluster to a cloud provider's API, and it separates the components that interact with the cloud platform from the components that only interact with your own cluster.

        The following controllers all include dependencies on cloud platform drivers:

  • Node controller: checks with the cloud provider, after a node stops responding, to determine whether that node has been deleted in the cloud.
  • Route controller: sets up routes in the underlying cloud infrastructure.
  • Service controller: creates, updates, and deletes cloud-provider load balancers.

        2.2 Components in Node

        Kubelet

        The kubelet is an agent that runs on every node in the cluster. It ensures that containers are running in Pods. The kubelet regularly receives new or modified Pod specifications (PodSpecs), mainly through the kube-apiserver, and ensures that the Pods and their containers are healthy and running in the desired state. It also reports the health of the host it runs on to the kube-apiserver.

        The kubelet will not manage containers not created by Kubernetes.

        kube-proxy

        kube-proxy is a network proxy that runs on each node in the cluster and implements part of the Kubernetes Service concept. It handles subnetting for individual hosts and exposes services to the outside world, forwarding requests to the correct Pod/container across the cluster's various isolated networks.

        kube-proxy maintains network rules on nodes. These network rules allow network communication to the Pod from network sessions inside or outside the cluster.

        If the operating system provides a packet-filtering layer and it is available, kube-proxy uses it to implement these network rules. Otherwise, kube-proxy forwards the traffic itself.

        Container Runtime      

        The container runtime is responsible for creating the container running environment.

        Kubernetes supports multiple container runtimes: containerd, CRI-O, and any runtime that implements the Kubernetes CRI (Container Runtime Interface). Direct support for Docker Engine (via dockershim) was deprecated and has since been removed.

        3. Kubernetes operation process

(Figure: Components of Kubernetes)

  • 1. The kubectl client first converts the CLI command into a RESTful API call and sends it to the kube-apiserver.
  • 2. After validating the API call, the kube-apiserver stores the task's metadata in etcd and then calls the kube-scheduler to select a node for the job.
  • 3. Once the kube-scheduler returns a suitable target node, the kube-apiserver stores the task's node binding in etcd and creates the task.
  • 4. The kubelet on the target node, which is watching the apiserver, sees that a new task has been scheduled to its node, creates the task's container through the local container runtime, and runs the job.
  • 5. The kubelet then reports the task's status and other information back to the apiserver, which stores it in etcd.
  • 6. At this point the task is running, and the controller-manager takes over to ensure that the task always stays in the state we expect.
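The steps above can be walked through with a toy simulation, where a plain dictionary stands in for etcd and ordinary functions stand in for the components (everything here is illustrative; no real Kubernetes API is involved):

```python
# Toy end-to-end walk-through of the flow above. A dict stands in for etcd;
# plain functions stand in for the apiserver, scheduler, and kubelet.
etcd = {}

def apiserver_create(pod_name):
    # Steps 1-2: the validated request is persisted as a pending pod.
    etcd[pod_name] = {"phase": "Pending", "node": None}

def scheduler_bind(pod_name, node):
    # Step 3: the chosen node is recorded in the pod's stored state.
    etcd[pod_name]["node"] = node

def kubelet_run(pod_name):
    # Steps 4-5: the node's kubelet starts the container and reports status.
    if etcd[pod_name]["node"] is not None:
        etcd[pod_name]["phase"] = "Running"

apiserver_create("web-1")
scheduler_bind("web-1", "node-b")
kubelet_run("web-1")
print(etcd["web-1"])  # {'phase': 'Running', 'node': 'node-b'}
```

Notice that the components never talk to each other directly: every hand-off happens by reading and writing the shared state, which is exactly the role the apiserver and etcd play in the real flow.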

         The above is an introduction to K8s roles, components, and running processes.

Origin blog.csdn.net/m0_53891399/article/details/132439066