CKA Note Compilation (8): K8s Architecture Principles and Installation

 

By default, nerdctl works in containerd's default namespace, but it can be switched to k8s.io, which is the containerd namespace that Kubernetes uses. Although Kubernetes has its own namespaces (the namespace resource), at the containerd level it is essentially using the k8s.io namespace.
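As a quick illustration (a sketch assuming nerdctl and a running cluster are already in place), you can compare the two containerd namespaces:

# containers in containerd's default namespace (usually empty on a k8s node)
nerdctl ps
# containers that Kubernetes has created, in containerd's k8s.io namespace
nerdctl --namespace k8s.io ps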

The workload unit created in Kubernetes is called a pod. Of course, a pod is not the same thing as an ordinary container.

A pod can be thought of as a container wrapped in an extra layer. We can set various policies on the pod, which makes managing the containers easier. A pod may contain multiple containers, though usually only one is used. The pod is the smallest scheduling unit in Kubernetes.
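For example, a minimal sketch of a pod wrapping two containers (all names and images here are illustrative, not from the course environment):

# a pod whose two containers share the same network namespace and lifecycle
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: web
    image: nginx
  - name: sidecar
    image: busybox
    command: ["sleep", "3600"]
EOF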

When we want to create a pod, kubectl or another client tool first connects to the kube-apiserver on the master node (the control plane node). The apiserver passes the request to the kube-scheduler, and the scheduler uses its algorithm to assign the work to a qualified worker based on the resources of each worker node.

The scheduler then reports its decision back to the apiserver, and the apiserver forwards the request to the kubelet on the chosen worker, which drives the runtime to create the pod. Since Kubernetes 1.24, Docker is no longer used as the runtime; containerd is used instead.
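A simple way to observe this flow (names are illustrative): create a pod and check which node the scheduler placed it on.

kubectl run web1 --image=nginx
# the NODE column shows the worker the scheduler selected
kubectl get pod web1 -o wide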

A kubelet needs to be installed on each worker node to drive the runtime and create containers and pods. The kubelet is effectively the runtime's client.

Managing pods requires controllers. Kubernetes has many controllers, and the component that runs all of them is kube-controller-manager.
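A small sketch of a controller at work (deployment name and image are illustrative): a Deployment keeps the requested number of pod replicas running and replaces any pod that disappears.

kubectl create deployment web --image=nginx --replicas=3
# deleting the pods does not get rid of them for long: the controller recreates them
kubectl delete pod -l app=web
kubectl get pod -l app=web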

Everything done in Kubernetes needs to be recorded in a database: etcd. etcd does not strictly belong to the master, but it runs on the master; in practice, etcd needs to be highly available.

External clients should not access pods directly. It is possible to configure that, but a pod's IP address changes whenever the pod dies and is recreated, so in general the outside world does not talk to pods directly. Instead we build something like a load balancer in front of them: a service (svc). The svc is associated with the backend pods; the outside world accesses the svc, and the svc forwards the traffic to the pods. The svc implements forwarding with iptables or ipvs, and which of the two is used is decided by the kube-proxy component; the default is iptables. A kube-proxy therefore needs to run on every worker node.
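A sketch of what this looks like in practice (reusing the illustrative web Deployment from above):

# create a ClusterIP service in front of the pods; it selects them by label
kubectl expose deployment web --port=80 --target-port=80
kubectl get svc web
# check which forwarding mode kube-proxy is configured with (empty means the default, iptables)
kubectl -n kube-system get cm kube-proxy -o yaml | grep 'mode:'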

For pods on different nodes to communicate, technologies such as BGP, VXLAN, and Open vSwitch can be used to build high-speed tunnels between nodes. We do not actually need to learn those technologies for this, because we can use products that package them for us, such as Calico and Flannel. All of these products follow the CNI (Container Network Interface) standard.

Install:

A one-master plus one-worker setup is used for the demonstration:

master: 26.91
worker: 26.92

All nodes:

Configure the hosts file, disable swap, disable the firewall, and configure the yum repositories.
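A possible set of commands for these preparation steps (a sketch assuming a CentOS/RHEL-style system; the hostnames and IPs are placeholders, and the yum repository configuration depends on which mirror you use):

# disable swap now and comment it out of /etc/fstab
swapoff -a
sed -i '/swap/ s/^/#/' /etc/fstab
# stop the firewall and keep it off after reboot
systemctl disable firewalld --now
# make both nodes resolvable by name
cat >> /etc/hosts <<EOF
<master-ip>  <master-hostname>
<worker-ip>  <worker-hostname>
EOF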

Kernel parameter settings:

/etc/modules-load.d/containerd.conf:
overlay
br_netfilter

/etc/sysctl.d/k8s.conf:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

modprobe overlay   

modprobe br_netfilter

sysctl -p /etc/sysctl.d/k8s.conf
All nodes: install runtime

yum install containerd.io cri-tools -y

Configure containerd:

crictl config runtime-endpoint unix:///var/run/containerd/containerd.sock
rm -rf /etc/containerd/config.toml ; wget ftp://ftp.rhce.cc/cka/cka-1.25.2/config.toml -P /etc/containerd/

systemctl enable containerd --now

Install nerdctl

tar zxf nerdctl-1.1.0-linux-amd64.tar.gz -C /usr/bin/ nerdctl ; chmod +x /usr/bin/nerdctl

mkdir -p /opt/cni/bin/ /etc/nerdctl/ /etc/containerd/certs.d/docker.io

tar zxf cni-plugins-linux-amd64-v1.1.1.tgz -C /opt/cni/bin/
rm -rf /etc/nerdctl/nerdctl.toml ; wget ftp://ftp.rhce.cc/cka/cka-1.25.2/nerdctl.toml -P /etc/nerdctl/
Configure the accelerator:
mkdir -p /etc/containerd/certs.d/docker.io
wget ftp://ftp.rhce.cc/cka/cka-1.25.2/hosts.toml -P /etc/containerd/certs.d/docker.io/
restart containerd
systemctl restart containerd
All nodes: install kubernetes
yum install -y kubelet-1.25.2-0 kubeadm-1.25.2-0 kubectl-1.25.2-0 --disableexcludes=kubernetes
systemctl enable kubelet ; systemctl restart kubelet
On the master:
Initialize the cluster:

kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.25.2 --pod-network-cidr=10.244.0.0/16

Then, following the prompt from kubeadm, choose one of the two options to create the kubeconfig directory and set up access as required.
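The two options that kubeadm typically prints look like this; pick one (the first is for a regular user, the second for root):

# option 1: copy the admin kubeconfig into your home directory
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# option 2 (as root): point KUBECONFIG at the admin config directly
export KUBECONFIG=/etc/kubernetes/admin.conf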

Obtain the command for worker nodes to join the cluster:

kubeadm token create --print-join-command
After execution, the system prints a join command, which can then be run on the worker node.
You can use kubectl get nodes to view the cluster nodes.
Install Calico to enable pod-to-pod communication across nodes; here the image provided with the course environment is used.
Import the image on all nodes: nerdctl load -i soft/calico/calico-img-3.23.tar
Then install Calico on the master only, executing the following command to create its pods:
kubectl apply -f soft/calico/calico-v3.23.yaml

If you do not have this yaml, you can download it at the following address: wget https://docs.projectcalico.org/manifests/calico.yaml

The pod network segment and the network interface used by Calico need to be adjusted in the yaml, to avoid the problems caused by hosts with multiple network cards.
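In calico.yaml the relevant settings are usually the CALICO_IPV4POOL_CIDR environment variable (set it to match the --pod-network-cidr passed to kubeadm init) and, on hosts with several NICs, IP_AUTODETECTION_METHOD. A sketch of the edited section (the interface name is illustrative):

            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"
            - name: IP_AUTODETECTION_METHOD
              value: "interface=ens33"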

If you want tab completion for kubectl commands, you can edit /etc/profile and add

source <(kubectl completion bash)

Save the file (:wq) and run source /etc/profile.

Note: All commands to the cluster are executed on the master.

# view the cluster nodes
kubectl get nodes
# view cluster information
kubectl cluster-info
# view the cluster version
kubectl version --short
# view the cluster configuration
kubectl get cm kubeadm-config -n kube-system -o yaml
# list all resource types and their short names, e.g. pod -> po, namespace -> ns
kubectl api-resources

When accessing the apiserver, we need to know its address, which can be viewed through kubectl cluster-info.

At this point, a k8s cluster is installed.

Removing a worker node from the cluster (run from the master):

First mark the node as unschedulable and drain it (maintenance mode); the cluster will then evict the pods running on the node so they run on other nodes: kubectl drain vms92.rhce.cc --ignore-daemonsets --delete-emptydir-data

Then delete the node: kubectl delete nodes vms92.rhce.cc

At this point, kubectl get nodes will show that the node has disappeared from the list.

If you need to add the node to the cluster again, you first need to clear the configuration on the node:

kubeadm  reset

Then, on the master, run kubeadm token create --print-join-command again to obtain the join command, and execute the command it prints on the node. If you still remember the earlier join command, you can skip this step and run it directly after clearing the configuration. However, if the master itself has been reset and the cluster reinitialized, Calico needs to be reinstalled.

If joining the cluster again fails with an error, clear the contents of the two directories /etc/kubernetes/pki and /var/lib/kubelet/, and then join the cluster again. It is also possible that the containerd service is not set to start on boot, or that there is no containerd.sock file under /var/run/containerd/ on the worker. In that case, run crictl config runtime-endpoint unix:///var/run/containerd/containerd.sock again; if no socket file appears after that, restart the containerd service and run the command once more.
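A possible cleanup sequence on a worker that keeps failing to join (a sketch based on the paths above; it wipes the node's Kubernetes state, so use it only on a node you intend to rejoin):

kubeadm reset -f
rm -rf /etc/kubernetes/pki/* /var/lib/kubelet/*
systemctl enable containerd --now
crictl config runtime-endpoint unix:///var/run/containerd/containerd.sock
# then run the join command printed by the master again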

Origin blog.csdn.net/qq_52676760/article/details/129115829