Getting to know the Kubernetes components, and a preliminary k8s deployment

Kubernetes components

Here we first get to know some of the components of k8s and understand what each one does. This will help us deploy and install a simple k8s environment, and in turn better understand the working and design principles of the k8s architecture.

Control plane (master) components

kube-apiserver

The API server is the component of the Kubernetes control plane that exposes the Kubernetes API and is responsible for accepting and handling requests.

kube-controller-manager

Responsible for running the controller processes.

Some of these controllers are:

  • Node controller: responsible for noticing and responding when nodes fail
  • Job controller: watches Job objects that represent one-off tasks, then creates Pods to run those tasks to completion
  • EndpointSlice controller: populates EndpointSlice objects (to provide the link between Services and Pods)
  • ServiceAccount controller: creates a default ServiceAccount for new namespaces

kube-scheduler

Watches for newly created Pods that have no node assigned, and selects a node for each such Pod to run on.

etcd

A consistent and highly available key-value store that serves as the backend database for all Kubernetes cluster data.

Add-ons

(1) DNS

Cluster DNS is a DNS server that works with other DNS servers in the environment to provide DNS records for Kubernetes services.

(2) Web Dashboard

Dashboard is a general-purpose, web-based user interface for Kubernetes clusters. It lets users manage and troubleshoot both the applications running in the cluster and the cluster itself.

(3) Container resource monitoring

Container resource monitoring saves common time-series metrics about containers in a centralized database and provides an interface for browsing that data.

(4) Container logs

The cluster-level logging mechanism is responsible for saving container log data in a centralized log store that provides a search and browse interface.

network plugin

Network plugins are software components that implement the Container Network Interface (CNI) specification. They are responsible for assigning IP addresses to Pods and enabling Pods to communicate with each other within the cluster.

Node components

Kubelet

Receives a set of PodSpecs provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy.
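
One such mechanism is a static Pod manifest: a file dropped into /etc/kubernetes/manifests, which the kubelet itself watches and keeps running. A minimal sketch (the file name and image here are illustrative assumptions):

# /etc/kubernetes/manifests/static-web.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-web
spec:
  containers:
  - name: web
    image: nginx:1.25        # illustrative image
    ports:
    - containerPort: 80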

kube-proxy

The network proxy that runs on each node, implementing part of the Kubernetes Service concept.

docker

Docker is used for actually running containers.

rkt

rkt runs containers, as an alternative to the Docker tool.

supervisord

supervisord is a lightweight process monitor that can keep node daemons such as the kubelet and the container runtime running.

fluentd

fluentd is a daemon that helps provide cluster-level logging.

Containers can be divided into two categories according to how long they run:

Service containers

Keep running and continuously provide a service: for example web servers (Tomcat, Nginx, Apache), database servers (MySQL, Oracle, SQL Server), and monitoring services (Zabbix, Prometheus)

Work containers

One-off tasks, such as batch jobs: the container exits once its command has finished executing
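
In Kubernetes terms a work container maps onto the Job object described earlier. A minimal sketch of a one-off task, modeled on the pi example from the Kubernetes documentation:

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34
        # compute pi to 2000 digits, then exit; the Job is then Complete
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4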

A simplified Kubernetes deployment and installation

Operations on the master

Environment checks

1. Ensure that the MAC address and product_uuid are unique on every node

# Verify the product_uuid
[root@master ~]# cat /sys/class/dmi/id/product_uuid
8BBA4D56-0CC0-790A-81F7-B1E1D4ED40C4

# Get the MAC addresses of the network interfaces
[root@master ~]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:ed:40:c4 brd ff:ff:ff:ff:ff:ff

2. Make sure the required ports are free

Port scan: if a port is open, nc reports succeeded; if it is closed, nc reports Connection refused

[root@master ~]# nc -v -w 5 127.0.0.1 6443
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connection refused.

nc usage

-l  listen mode: nc acts as a server, listening for and accepting connections instead of initiating them
-p <port>  not used here (older versions of nc may require -p before the port number; the test environment below is CentOS 6.6 with nc-1.84, where -p is not needed)
-s  specify the source IP address for outgoing data; useful on multi-homed machines
-u  use UDP instead of the default TCP
-v  print connection and error messages; especially useful when debugging
-w  timeout in seconds, followed by a number
-z  "zero" mode: scan without sending any data
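
For example, all the default control-plane ports can be swept in one pass using nc's -z scan mode (the port list follows the Kubernetes documentation; adjust it if your topology differs):

# 6443 = apiserver, 2379-2380 = etcd, 10250 = kubelet,
# 10257 = controller-manager, 10259 = scheduler
for port in 6443 2379 2380 10250 10257 10259; do
    nc -v -w 2 -z 127.0.0.1 $port
done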

Note:

Docker Engine does not implement CRI, which container runtimes are required to support in order to work with Kubernetes. To use it, an additional service, cri-dockerd, must be installed. cri-dockerd is a project that preserves the traditional built-in Docker Engine support that was removed from the kubelet in version 1.24.

Runtime                              Unix domain socket
containerd                           unix:///var/run/containerd/containerd.sock
CRI-O                                unix:///var/run/crio/crio.sock
Docker Engine (using cri-dockerd)    unix:///var/run/cri-dockerd.sock

3. Disable SELinux

This is required until the kubelet's support for SELinux is improved

[root@master ~]# getenforce
Disabled

# Set SELinux to permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

4. Disable the swap partition

vim /etc/fstab  # comment out the swap line
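
Equivalently, non-interactively (swapoff takes effect immediately, while the sed edit keeps swap off across reboots; the pattern assumes a standard fstab layout):

sudo swapoff -a
# comment out any uncommented line whose mount type is swap
sudo sed -i '/^[^#].*\sswap\s/s/^/#/' /etc/fstab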

Install and configure Containerd

Install Containerd

Newer Kubernetes versions no longer use Docker as the default runtime, so here we only need to install containerd.

sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sudo sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
sudo yum makecache fast

Install a newer version of container-selinux

[root@master ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@master ~]# yum install epel-release -y
[root@master ~]# yum clean all # this step is required: clean the yum cache
[root@master ~]# yum install container-selinux -y


Configure yum source

# change the baseurl to https
[root@master ~]# vim /etc/yum.repos.d/CentOS-Base.repo
baseurl=https://mirrors.aliyun.com/centos/$releasever/extras/$basearch/
https://mirrors.aliyuncs.com/centos/$releasever/extras/$basearch/
https://mirrors.cloud.aliyuncs.com/centos/$releasever/extras/$basearch/


Install containerd

[root@master ~]# yum install containerd.io -y

Configure hosts

192.168.10.101 master
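
If worker nodes will join the cluster later, add entries for them too. A sketch (the node1 address below is an assumption for this lab; adjust to your environment):

cat >> /etc/hosts <<EOF
192.168.10.102 node1
EOF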

Configure containerd

View the default configuration

[root@master ~]# containerd config default

View the contents of the containerd configuration file

[root@master ~]# vim /etc/containerd/config.toml


Generate the initial configuration file from the command line

[root@master ~]# containerd config default > /etc/containerd/config.toml

Modify the containerd configuration file

[root@master ~]# vim /etc/containerd/config.toml
[plugins]
	...
	[plugins."io.containerd.grpc.v1.cri"]
	...
    sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"   # use a domestic mirror for the pause image

	[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
			...
            SystemdCgroup = true													# use the systemd cgroup driver

Start containerd

[root@master ~]# systemctl start containerd
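
It is also worth enabling the service so that it comes back after a reboot:

[root@master ~]# systemctl enable containerd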

Install and configure Kubeadm

Install Kubeadm

1. Configure the yum repository

Official source

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

Alibaba Cloud Source

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

2. Install kubeadm

sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
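
The official installation steps also enable the kubelet so it restarts on boot (until kubeadm init runs it will crash-loop, which is expected):

sudo systemctl enable --now kubelet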

Configure crictl

At this point crictl has no runtime endpoint configured, so we point its configuration at containerd's socket:

[root@master ~]# crictl config runtime-endpoint unix:///run/containerd/containerd.sock
[root@master ~]# crictl config image-endpoint unix:///run/containerd/containerd.sock
[root@master ~]# cat /etc/crictl.yaml
runtime-endpoint: "unix:///run/containerd/containerd.sock"
image-endpoint: "unix:///run/containerd/containerd.sock"
timeout: 0
debug: false
pull-image-on-create: false
disable-pull-on-run: false
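
With containerd running, a quick way to confirm that crictl can reach it over this socket:

# prints the crictl client version plus the runtime name and version reported by containerd
[root@master ~]# crictl version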

Configure the cgroup driver

Both the container runtime and the kubelet have a property named "cgroup driver", which is very important for managing cgroups on a Linux machine.

Note: you need to make sure that the container runtime and the kubelet use the same cgroup driver, otherwise the kubelet process will fail.

Configure container runtime cgroup driver

Since kubeadm manages the kubelet as a system service, the systemd driver is recommended for kubeadm-based installations; the kubelet's default cgroupfs driver is not recommended.

In version 1.22 and later, if the user does not set the cgroupDriver field in KubeletConfiguration, kubeadm sets it to the default value systemd.

Configuration example

# kubeadm-config.yaml
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
kubernetesVersion: v1.21.0
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
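
As a quick sanity check that the containerd side matches, the SystemdCgroup setting made earlier should still be in place:

[root@master ~]# grep SystemdCgroup /etc/containerd/config.toml
            SystemdCgroup = true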

kubeadm configuration

Generate initial configuration file

[root@master ~]# kubeadm config print init-defaults > kubeadm.yml

View the list of required images

[root@master ~]# kubeadm config images list --config kubeadm.yml
registry.k8s.io/kube-apiserver:v1.27.0
registry.k8s.io/kube-controller-manager:v1.27.0
registry.k8s.io/kube-scheduler:v1.27.0
registry.k8s.io/kube-proxy:v1.27.0
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/coredns/coredns:v1.10.1

Modify the kubeadm configuration file

[root@master ~]# vim kubeadm.yml 

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.10.101		# change to the master's IP
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: master    # change the node name
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers		# change to the Alibaba Cloud registry
kind: ClusterConfiguration
kubernetesVersion: 1.27.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}

Start the kubelet service

At this point, starting the kubelet reports an error:

[root@master ~]# systemctl start kubelet

[root@master ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; disabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since Fri 2023-07-07 10:53:24 CST; 1s ago
     Docs: https://kubernetes.io/docs/
  Process: 27867 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
 Main PID: 27867 (code=exited, status=1/FAILURE)

Jul 07 10:53:24 master systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE
Jul 07 10:53:24 master systemd[1]: Unit kubelet.service entered failed state.
Jul 07 10:53:24 master systemd[1]: kubelet.service failed.

Pull the images

[root@master ~]# kubeadm config images pull --config kubeadm.yml
[root@master ~]# crictl images
IMAGE                                                             TAG                 IMAGE ID            SIZE
registry.aliyuncs.com/google_containers/coredns                   v1.10.1             ead0a4a53df89       16.2MB
registry.aliyuncs.com/google_containers/etcd                      3.5.7-0             86b6af7dd652c       102MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.27.0             6f707f569b572       33.4MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.27.0             95fe52ed44570       31MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.27.0             5f82fc39fa816       23.9MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.27.0             f73f1b39c3fe8       18.2MB
registry.aliyuncs.com/google_containers/pause                     3.9                 e6f1816883972       322kB

View the pulled images

[root@master ~]# crictl image list
IMAGE                                                             TAG                 IMAGE ID            SIZE
registry.aliyuncs.com/google_containers/coredns                   v1.10.1             ead0a4a53df89       16.2MB
registry.aliyuncs.com/google_containers/etcd                      3.5.7-0             86b6af7dd652c       102MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.27.0             6f707f569b572       33.4MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.27.0             95fe52ed44570       31MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.27.0             5f82fc39fa816       23.9MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.27.0             f73f1b39c3fe8       18.2MB
registry.aliyuncs.com/google_containers/pause                     3.9                 e6f1816883972       322kB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause         3.6                 6270bb605e12e       302kB

Initialize the cluster

[root@master ~]# kubeadm init --config=/data/kubeadm.yml --upload-certs --v=6
...
[addons] Applied essential addon: kube-proxy
I0709 13:49:53.255150   29494 loader.go:373] Config loaded from file:  /etc/kubernetes/admin.conf
I0709 13:49:53.255484   29494 loader.go:373] Config loaded from file:  /etc/kubernetes/admin.conf

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.10.101:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:6ef00961d04157a81e8e60e04673835350a3995db28f5fb5395cfc13a21bb8f8


Next steps

[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

View the kubelet status:

The kubelet is now up, but it still reports an error: the network is not ready, because no network plugin has been installed yet.

[root@master ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; disabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Sun 2023-07-09 13:49:52 CST; 20min ago
     Docs: https://kubernetes.io/docs/
 Main PID: 29979 (kubelet)
    Tasks: 11
   Memory: 44.0M
   CGroup: /system.slice/kubelet.service
           └─29979 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint...

Jul 09 14:09:48 master kubelet[29979]: E0709 14:09:48.250500   29979 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady messag...t initialized"
Jul 09 14:09:53 master kubelet[29979]: E0709 14:09:53.251686   29979 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady messag...t initialized"
Jul 09 14:09:58 master kubelet[29979]: E0709 14:09:58.252762   29979 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady messag...t initialized"


View started containers

[root@master ~]# crictl ps
CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
f79b2b82e24aa       5f82fc39fa816       11 minutes ago      Running             kube-proxy                0                   a05ef7d2d7a3b       kube-proxy-bdzh6
f48bd5d031a38       95fe52ed44570       12 minutes ago      Running             kube-controller-manager   0                   451a59c46907e       kube-controller-manager-master
4ca2bf73d8a19       6f707f569b572       12 minutes ago      Running             kube-apiserver            0                   e27c124e4422e       kube-apiserver-master
7c6edea985069       86b6af7dd652c       12 minutes ago      Running             etcd                      0                   285d4ec481dae       etcd-master
d1feeb8768dbd       f73f1b39c3fe8       12 minutes ago      Running             kube-scheduler            0                   f66a11a4ebd7f       kube-scheduler-master

View node status

[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES           AGE   VERSION
master   NotReady   control-plane   13m   v1.27.3

Command-line comparison

Operation                                docker               ctr (containerd)                 crictl (kubernetes)
List running containers                  docker ps            ctr task ls / ctr container ls   crictl ps
List images                              docker images        ctr image ls                     crictl images
View container logs                      docker logs          none                             crictl logs
View container details                   docker inspect       ctr container info               crictl inspect
View container resource usage            docker stats         none                             crictl stats
Start/stop an existing container         docker start/stop    ctr task start/kill              crictl start/stop
Run a new container                      docker run           ctr run                          none (the smallest unit is a pod)
Tag an image                             docker tag           ctr image tag                    none
Create a new container                   docker create        ctr container create             crictl create
Import an image                          docker load          ctr image import                 none
Export an image                          docker save          ctr image export                 none
Delete a container                       docker rm            ctr container rm                 crictl rm
Delete an image                          docker rmi           ctr image rm                     crictl rmi
Pull an image                            docker pull          ctr image pull                   crictl pull
Push an image                            docker push          ctr image push                   none
Execute a command inside a container     docker exec          none                             crictl exec
Prune unused images                      docker image prune   none                             crictl rmi --prune

Install the network plugin

[root@master data]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
--2023-07-09 14:34:07--  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.109.133, 185.199.111.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4615 (4.5K) [text/plain]
Saving to: ‘kube-flannel.yml’

100%[=======================================================================================================================================================================>] 4,615       --.-K/s   in 0s      

2023-07-09 14:34:07 (89.3 MB/s) - ‘kube-flannel.yml’ saved [4615/4615]

[root@master data]# ll
total 12
-rw-r--r-- 1 root root  840 Jul  9 13:23 kubeadm.yml
-rw-r--r-- 1 root root 4615 Jul  9 14:34 kube-flannel.yml
[root@master data]# 
[root@master data]# 
[root@master data]# sed -i 's/quay.io/quay-mirror.qiniu.com/g' kube-flannel.yml

[root@master data]# 
[root@master data]# 
[root@master data]# kubectl apply -f /data/kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
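
Once the flannel DaemonSet pod reaches Running, the node should flip from NotReady to Ready; this can be verified with:

[root@master data]# kubectl get pods -n kube-flannel
[root@master data]# kubectl get nodes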


Node operations

Join the node

[root@node1 ~]# kubeadm join 192.168.10.101:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:6ef00961d04157a81e8e60e04673835350a3995db28f5fb5395cfc13a21bb8f8
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
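
The bootstrap token expires after the ttl set in kubeadm.yml (24h here). If it has expired, a fresh join command can be printed on the master at any time:

[root@master ~]# kubeadm token create --print-join-command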

Troubleshooting

kubeadm reports an error when pulling images

[root@master ~]# kubeadm config images pull --config kubeadm.yml
failed to pull image "registry.k8s.io/kube-apiserver:v1.27.0": output: E0707 11:05:07.804670   28217 remote_image.go:171] "PullImage from image service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/containerd/containerd.sock: connect: no such file or directory\"" image="registry.k8s.io/kube-apiserver:v1.27.0"
time="2023-07-07T11:05:07+08:00" level=fatal msg="pulling image: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/containerd/containerd.sock: connect: no such file or directory\""
, error: exit status 1

Cause: the containerd service is not running

Solution:

[root@master ~]# systemctl start containerd

Another error

[root@master ~]# kubeadm config images pull --config kubeadm.yml
failed to pull image "registry.k8s.io/kube-apiserver:v1.27.0": output: time="2023-07-07T11:06:14+08:00" level=fatal msg="validate service connection: CRI v1 image API is not implemented for endpoint \"unix:///var/run/containerd/containerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService"
, error: exit status 1
To see the stack trace of this error execute with --v=5 or higher

[root@master ~]# ll /var/run/containerd/containerd.sock
srw-rw---- 1 root root 0 Jul  7 11:05 /var/run/containerd/containerd.sock


crictl is part of the Kubernetes cri-tools project. It is built specifically for Kubernetes to work with containerd, and provides management commands for resources such as pods, containers, and images.


Note that crictl cannot see or debug containers and images created outside Kubernetes. For example, a container started with ctr run without specifying a namespace is invisible to crictl. ctr, on the other hand, can pass -n k8s.io to operate in the k8s.io namespace, and can then see and operate on the containers, images, and other resources of the Kubernetes cluster. You can think of it this way: crictl always operates in containerd's k8s.io namespace.

Containerd, which replaces the Docker runtime, could be integrated directly with the kubelet as early as Kubernetes 1.7, but most of the time we were familiar with Docker and used the default dockershim when deploying clusters. As of v1.24, the kubelet has completely removed dockershim and uses Containerd by default. (Alternatively, the cri-dockerd adapter can be used to integrate Docker Engine with Kubernetes.)


After switching to Containerd, the docker commands we used in the past no longer apply; they are replaced by two command-line clients, crictl and ctr.

[root@master ~]# ctr -v
ctr containerd.io 1.6.21
  • crictl is a command-line tool that follows the CRI interface specification, usually used to inspect and manage container runtimes and images on kubelet nodes.
  • ctr is a client tool for containerd.
  • ctr -v outputs the version of containerd, while crictl -v outputs the version of the crictl that ships with the current k8s; from the results it is obvious that crictl is tied to k8s.
  • Generally speaking, the crictl command only appears on a host after k8s has been installed on it, whereas ctr has nothing to do with k8s: it is available as soon as the containerd service is installed on the host.

Initialization error

[root@master ~]# kubeadm init --config=kubeadm.yml --upload-certs --v=6
[preflight] Some fatal errors occurred:
	[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	[ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
error execution phase preflight

Solution:

[root@master ~]# modprobe br_netfilter
[root@master ~]# cat /proc/sys/net/ipv4/ip_forward
0
[root@master ~]# echo 1 > /proc/sys/net/ipv4/ip_forward
[root@master ~]# cat /proc/sys/net/ipv4/ip_forward
1
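
The echo fix above does not survive a reboot. The standard prerequisite steps from the Kubernetes documentation make both settings persistent:

# load br_netfilter at every boot
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

# apply the required sysctls at every boot
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

sudo sysctl --system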

References:
  • Kubernetes Chinese Community
  • An error is reported when using crictl pull: "unknown service runtime.v1alpha2.ImageService"
  • Containerd ctr, crictl, nerdctl client command introduction and actual operation
