k8s study notes (2): introduction to k8s components

Introduction to k8s components

Architecture diagram


Role

  • Master: the gateway and central hub of the cluster. Its main functions are to expose the API, track the health of the other servers, schedule workloads optimally, and orchestrate communication between components. A single Master node can perform all of these functions, but because that creates a single point of failure, production environments usually deploy multiple Master nodes to form a cluster. The Master runs all of the control plane components.
  • Node: a worker node of Kubernetes. It receives work instructions from the Master, creates and destroys Pod objects accordingly, and adjusts network rules for routing and traffic forwarding. A production environment can have any number of Nodes. Each Node runs all of the node components.

Control Plane Components

Control plane components make global decisions for the cluster (for example, resource scheduling), as well as detecting and responding to cluster events (for example, starting a new Pod when a Deployment's replicas field is not satisfied).

Control plane components can run on any node in the cluster. However, for simplicity, setup scripts typically start all control plane components on the same machine and do not run user containers on that machine.

kube-apiserver

The API server is a component of the Kubernetes control plane . This component is responsible for exposing the Kubernetes API and handling the work of accepting requests . The API server is the front end of the Kubernetes control plane.

apiserver is the gateway of the entire cluster

It is the only external interface of K8S, exposing an HTTP/HTTPS RESTful API (the Kubernetes API); all requests must go through this interface. It is mainly responsible for receiving, validating, and responding to all REST requests, persisting the resulting state in etcd. It is the single entry point for all resource create, update, and delete operations.

port

[root@k8s-master ~]# netstat -anplut|egrep LISTEN.*apiserver


container

[root@k8s-master ~]# docker ps|grep apiserver
c09166f7c313   ca9843d3b545                                        "kube-apiserver --ad…"   3 minutes ago   Up 3 minutes             k8s_kube-apiserver_kube-apiserver-k8s-master_kube-system_2fc27474246553d4d73cbb5d364b9726_6
8f9429e1d4e6   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                  5 minutes ago   Up 5 minutes             k8s_POD_kube-apiserver-k8s-master_kube-system_2fc27474246553d4d73cbb5d364b9726_5

etcd

A consistent and highly available key-value store used as the backend database for all Kubernetes cluster data

Responsible for saving the k8s cluster's configuration information and the state of its various resources. When data changes, etcd quickly notifies the relevant k8s components. etcd is an independent service component, not itself part of the K8S project. In a production environment, etcd should run as a cluster to ensure service availability.

etcd not only provides key-value storage, it also provides a watch mechanism for monitoring and pushing changes. In a K8S cluster, changes to etcd keys are pushed to the API Server, which in turn streams them to clients through its watch API.
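To make the watch mechanism concrete, here is a minimal sketch (plain Python, not real etcd; all names are invented) of the idea: components register callbacks on key prefixes and are notified whenever a matching value changes, instead of polling the store.

```python
# Minimal illustration of a watch mechanism: callbacks registered on a
# key prefix fire whenever a key under that prefix is written.
from collections import defaultdict

class TinyKV:
    def __init__(self):
        self.data = {}
        self.watchers = defaultdict(list)  # key prefix -> list of callbacks

    def watch(self, prefix, callback):
        self.watchers[prefix].append(callback)

    def put(self, key, value):
        old = self.data.get(key)
        self.data[key] = value
        # Notify every watcher whose prefix matches the written key.
        for prefix, callbacks in self.watchers.items():
            if key.startswith(prefix):
                for cb in callbacks:
                    cb(key, old, value)

events = []
store = TinyKV()
# An "API server" subscribing to Pod state changes.
store.watch("/registry/pods/", lambda k, old, new: events.append((k, old, new)))
store.put("/registry/pods/default/web", "Pending")
store.put("/registry/pods/default/web", "Running")
print(events)
```

Real etcd exposes the same idea over gRPC (`Watch` streams keyed by prefix), which is what lets the API Server react to state changes immediately.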

port

[root@k8s-master ~]# netstat -anplut|egrep LISTEN.*etcd


container

[root@k8s-master ~]# docker ps |grep etcd
c66803416c06   0369cf4303ff                                        "etcd --advertise-cl…"   6 minutes ago   Up 6 minutes             k8s_etcd_etcd-k8s-master_kube-system_6b17573732d23fd599a67ffecd227455_5
f2847eb09da2   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                  6 minutes ago   Up 6 minutes             k8s_POD_etcd-k8s-master_kube-system_6b17573732d23fd599a67ffecd227455_5

kube-scheduler

kube-scheduler is a control plane component responsible for watching newly created Pods that have no assigned node, and selecting a node for each of them to run on.

Factors considered in scheduling decisions include the resource requirements of individual Pods and collections of Pods; hardware, software, and policy constraints; affinity and anti-affinity specifications; data locality; interference between workloads; and deadlines.
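The scheduler works in two phases: it first filters out nodes that cannot fit the Pod, then scores the remaining nodes and picks the best one. A hedged sketch of that two-phase idea (the node fields and scoring rule here are invented for illustration; the real scheduler uses many pluggable filter and score plugins):

```python
# Two-phase scheduling sketch: filter infeasible nodes, then score the rest.
def schedule(pod, nodes):
    # Filtering: keep only nodes with enough free CPU and memory for the Pod.
    feasible = [n for n in nodes
                if n["free_cpu"] >= pod["cpu"] and n["free_mem"] >= pod["mem"]]
    if not feasible:
        return None  # no node fits: the Pod stays Pending
    # Scoring: prefer the node with the most free resources (spreads load).
    return max(feasible, key=lambda n: n["free_cpu"] + n["free_mem"])["name"]

nodes = [
    {"name": "node1", "free_cpu": 2, "free_mem": 4},
    {"name": "node2", "free_cpu": 8, "free_mem": 16},
    {"name": "node3", "free_cpu": 1, "free_mem": 1},
]
print(schedule({"cpu": 2, "mem": 2}, nodes))
```

Affinity rules, taints, and the other factors listed above slot into the same structure as extra filter predicates or score adjustments.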

container

[root@k8s-master ~]# docker ps|grep scheduler
c179d65f1940   3138b6e3d471                                        "kube-scheduler --au…"   7 minutes ago   Up 7 minutes             k8s_kube-scheduler_kube-scheduler-k8s-master_kube-system_0378cf280f805e38b5448a1eceeedfc4_5
2dbe2c73b36b   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                  7 minutes ago   Up 7 minutes             k8s_POD_kube-scheduler-k8s-master_kube-system_0378cf280f805e38b5448a1eceeedfc4_5

port

[root@k8s-master ~]# netstat -anplut|egrep LISTEN.*scheduler


kube-controller-manager

kube-controller-manager is a component of the control plane and is responsible for running the controller process.

Responsible for managing various resources of the cluster and ensuring that the resources are in the expected state.

Logically, each controller is a separate process, but to reduce complexity, they are all compiled into the same executable and run in the same process.

These controllers include:

  • Node controller: responsible for noticing and responding when nodes go down
  • Job controller: watches Job objects that represent one-off tasks, then creates Pods to run those tasks to completion
  • EndpointSlice controller: populates EndpointSlice objects (which provide the link between Services and Pods)
  • ServiceAccount controller: creates a default service account (ServiceAccount) for new namespaces
  • ReplicaSet controller: maintains the specified number of Pod replicas; if some are missing, new ones are created automatically
  • Deployment controller: manages Deployments (application rollouts)
  • CronJob controller: runs scheduled (cron-style) tasks
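All of these controllers share the same reconciliation-loop pattern: observe the actual state, compare it with the desired state, and act to close the gap. A hedged sketch of that pattern in the ReplicaSet style (names invented; a real controller watches the API server rather than being called directly):

```python
# Reconciliation loop sketch: converge actual replica count on desired.
def reconcile(desired, running):
    """Return (count_to_create, pods_to_delete) to reach `desired` replicas."""
    if len(running) < desired:
        return desired - len(running), []   # scale up: create the missing Pods
    return 0, running[desired:]             # scale down: delete surplus Pods

print(reconcile(3, ["web-a"]))              # one Pod exists, two must be created
print(reconcile(1, ["web-a", "web-b"]))     # one Pod too many, delete the surplus
```

Because the loop only compares state and issues corrections, it is naturally self-healing: whether a Pod crashed or a node disappeared, the next iteration recreates whatever is missing.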

port

[root@k8s-master ~]# netstat -anplut|egrep LISTEN.*kube-controlle


container

[root@k8s-master ~]# docker ps |grep kube-controlle
3ea62f3d0381   b9fa1895dcaa                                        "kube-controller-man…"   10 minutes ago   Up 10 minutes             k8s_kube-controller-manager_kube-controller-manager-k8s-master_kube-system_5c575d17517839b576ab4817fd06353f_5
8d65ce545508   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                  10 minutes ago   Up 10 minutes             k8s_POD_kube-controller-manager-k8s-master_kube-system_5c575d17517839b576ab4817fd06353f_5

cloud-controller-manager

A Kubernetes control plane component that embeds cloud platform-specific control logic. The Cloud Controller Manager allows you to connect your cluster to a cloud provider's API and separate the components that interact with the cloud platform from the components that interact with your cluster.

cloud-controller-manager only runs cloud-platform-specific controllers. So if you run Kubernetes on your own premises, or in a learning environment on your local computer, the cluster does not need a Cloud Controller Manager.

Similar to kube-controller-manager, cloud-controller-manager combines several logically independent control loops into a single executable that you run as one process. You can scale it horizontally (run more than one replica) to improve performance or fault tolerance.

The following controllers all have dependencies on the cloud provider:

  • Node controller: checks with the cloud provider to determine whether a node has been deleted in the cloud after it stops responding
  • Route controller: sets up routes in the underlying cloud infrastructure
  • Service controller: creates, updates, and deletes cloud provider load balancers

Node components

Kubelet

kubelet runs on every node in the cluster, as a standalone process on the host rather than in a container. It ensures that containers are running in Pods.

kubelet is the node's agent. When the Scheduler decides to run a Pod on a node, it sends the Pod's concrete configuration (image, volumes, and so on) to that node's kubelet; the kubelet creates and runs the containers based on this information and reports their running status back to the master.
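A hypothetical example of the kind of Pod spec the kubelet receives and turns into running containers: the image to pull, the volumes to mount, and where to mount them.

```yaml
# Illustrative Pod manifest (names are made up): the kubelet on the
# assigned node pulls the image, prepares the volume, and starts the container.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: nginx
    image: nginx:1.25
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  volumes:
  - name: html
    emptyDir: {}
```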

port

[root@k8s-master ~]# netstat -anplut|grep kubelet


process

[root@k8s-master ~]# ps aux|grep kubelet


kube-proxy

kube-proxy runs as a standalone process on the host, not in a container.

kube-proxy is a network proxy running on each node in the cluster and implements part of the Kubernetes service concept.

kube-proxy maintains some network rules on the nodes that allow network communication with Pods from network sessions inside or outside the cluster.

If the operating system provides an available packet filtering layer, kube-proxy will implement network rules through it. Otherwise, kube-proxy only forwards traffic.

Proxy mode (load balancing)

  • ipvs: recommended in production environments
  • iptables: the default; less efficient at scale
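On kubeadm clusters the mode is set in kube-proxy's configuration (stored in the kube-proxy ConfigMap in the kube-system namespace); switching this fragment's `mode` field from the default to `ipvs` enables IPVS:

```yaml
# Fragment of a KubeProxyConfiguration; mode "" or "iptables" is the
# default, "ipvs" enables the IPVS proxy mode.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
```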

port

[root@k8s-master ~]# netstat -anplut|grep LISTEN.*kube-proxy


process

[root@k8s-master ~]# ps aux|grep kube-proxy
root       3485  0.5  1.5 744064 28764 ?        Ssl  12:41   0:04 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=k8s-master
root      14987  0.0  0.0 112824   992 pts/0    S+   12:54   0:00 grep --color=auto kube-proxy

Core plugins

Network communication components

Calico

Provides Pod-to-Pod communication across hosts (an overlay-type network); suitable for large-scale clusters.

Runs on every k8s host.

flannel

Suitable for small-scale clusters

CoreDNS

Schedules and runs Pods that provide DNS service inside the K8S cluster; other Pods in the same cluster use this DNS service to resolve host names. Since version 1.11, K8S has used the CoreDNS project by default to provide dynamic name resolution for service registration and service discovery in the cluster.
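A trimmed sketch of a typical CoreDNS Corefile (the one kubeadm deploys carries more plugins): the `kubernetes` plugin answers names of the form `<service>.<namespace>.svc.cluster.local` for Services in the cluster, and everything else is forwarded to the node's upstream resolvers.

```text
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf
    cache 30
}
```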

Dashboard

A web-based UI for managing the applications running in a K8S cluster, as well as the cluster itself.


Origin blog.csdn.net/qq_57629230/article/details/131344027