[Kubernetes Deployment] Building a Kubernetes 1.26.0 cluster with kubeadm

1. Cluster planning and structure

Official documentation:

Binary download address

Environmental planning:

  • Pod network segment: 10.244.0.0/16
  • Service network segment: 10.10.0.0/16
  • Note: the Pod and Service network segments must not overlap; if they conflict, the K8s cluster installation will fail.
  • The container runtime used this time is containerd.

Host name    IP address      Operating system
master-1     16.32.15.200    CentOS 7.8
node-1       16.32.15.201    CentOS 7.8
node-2       16.32.15.202    CentOS 7.8

2. System initialization preparation (perform on all nodes)

1. Turn off the firewall and disable SELinux

systemctl disable firewalld --now
setenforce 0
sed -i -r 's/SELINUX=[ep].*/SELINUX=disabled/g' /etc/selinux/config

2. Configure domain name resolution

cat >> /etc/hosts << EOF
16.32.15.200 master-1
16.32.15.201 node-1
16.32.15.202 node-2
EOF

Set the hostname on each corresponding host:

hostnamectl set-hostname master-1 && bash
hostnamectl set-hostname node-1 && bash
hostnamectl set-hostname node-2 && bash

3. Keep the server time consistent across nodes

yum -y install ntpdate
ntpdate ntp1.aliyun.com

Add a cron job to synchronize the time automatically at 1 a.m. every day:

echo "0 1 * * * ntpdate ntp1.aliyun.com" >> /var/spool/cron/root
crontab -l

4. Disable the swap partition (Kubernetes requires swap to be disabled)

swapoff --all

Prevent the swap partition from being mounted at boot:

sed -i -r '/swap/ s/^/#/' /etc/fstab
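
To double-check that swap is now fully off, the standard tools should report no active swap:

free -m      # the Swap line should show 0 total
swapon -s    # prints nothing when no swap is active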

5. Modify Linux kernel parameters to enable bridge filtering and IP forwarding

Load the bridge filter module first, otherwise the net.bridge.* parameters below are not available:

modprobe br_netfilter
lsmod | grep br_netfilter # verify the module is loaded

cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

sysctl -p /etc/sysctl.d/kubernetes.conf

6. Configure ipvs function

In Kubernetes, a Service can be proxied in two modes: one based on iptables and one based on ipvs. ipvs offers better performance, but to use ipvs mode the ipvs kernel modules must be loaded manually.

yum -y install ipset ipvsadm

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4  
EOF

chmod +x /etc/sysconfig/modules/ipvs.modules
# run the script
/etc/sysconfig/modules/ipvs.modules

# verify that the ipvs modules are loaded
lsmod | grep -e ip_vs -e nf_conntrack_ipv4

7. Install Docker container components

Note: Docker is only used here for convenience, such as pulling images and building images from a Dockerfile; it does not conflict with containerd.

curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum makecache

# yum-utils provides the yum-config-manager utility
yum install -y yum-utils

# use yum-config-manager to add the Aliyun Docker CE repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum install docker-ce-20.10.6 docker-ce-cli-20.10.6 -y

Configure a Docker registry mirror (accelerator):

mkdir /etc/docker
cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://aoewjvel.mirror.aliyuncs.com"]
}
EOF

# start docker and enable it at boot
systemctl enable docker --now
systemctl status docker

8. Restart the server (this step can be skipped)

reboot

3. Install and configure the Containerd container runtime

Perform the following on all three servers.

1. Install containerd

yum -y install containerd.io-1.6.6

2. Generate the containerd configuration file and modify the configuration file

mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml

Edit the configuration file; the main changes are the following two settings:

vim /etc/containerd/config.toml

SystemdCgroup = true        # under [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"        # under [plugins."io.containerd.grpc.v1.cri"]
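
If you prefer not to edit the file by hand, the same two changes can be made with sed; this is only a sketch that assumes the file still contains the default values generated above:

# switch the runc cgroup driver to systemd
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# point the pause (sandbox) image at the Aliyun mirror
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"#' /etc/containerd/config.toml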

3. Start containerd

systemctl enable containerd  --now

4. Add the configuration file of the crictl tool

crictl is a command-line tool for interacting with CRI (Container Runtime Interface)-compliant container runtimes. In Kubernetes the kubelet talks to the container runtime through the CRI, and crictl lets you interact with that runtime directly.

cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
  • runtime-endpoint: the socket address of the container runtime's CRI runtime service
  • image-endpoint: the socket address of the container runtime's CRI image service (here both point to the containerd socket)
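
Once this file is in place, a few common crictl queries look like the following (read-only examples for reference):

crictl info      # show the runtime status reported over CRI
crictl images    # list images known to the CRI image service
crictl ps -a     # list all containers (none yet on a fresh node)
crictl pods      # list pod sandboxes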

5. Configure containerd mirror accelerator

In /etc/containerd/config.toml, point config_path (under [plugins."io.containerd.grpc.v1.cri".registry]) at a registry configuration directory:

vim /etc/containerd/config.toml

config_path = "/etc/containerd/certs.d"

Configure acceleration information:

mkdir /etc/containerd/certs.d/docker.io/ -p
vim /etc/containerd/certs.d/docker.io/hosts.toml

[host."https://vh3bm52y.mirror.aliyuncs.com",host."https://registry.docker-cn.com"]
  capabilities = ["pull"]

restart containerd

systemctl restart containerd
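
To check that the mirror configuration is actually being used, you can try pulling a test image through the CRI plugin (the image below is only an example):

crictl pull docker.io/library/busybox:1.28
crictl images | grep busybox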

4. Install kubeadm (perform on all nodes)

1. Configure a domestic (Aliyun) yum repository and install kubeadm, kubelet, and kubectl

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

yum install -y kubelet-1.26.0 kubeadm-1.26.0 kubectl-1.26.0

2. kubeadm uses the kubelet service to run the main Kubernetes components as containers, so enable and start the kubelet service first (it will keep restarting until kubeadm init runs; that is expected)

systemctl enable kubelet.service --now

5. Initialize the cluster

Operate on the master-1 host

1. Configure container runtime

crictl config runtime-endpoint unix:///run/containerd/containerd.sock

2. Generate initialization default configuration file

kubeadm config print init-defaults > kubeadm.yaml

We adjust the default configuration file to our needs; the main changes are:

  • advertiseAddress: changed to the master's IP address
  • criSocket: specifies the container runtime socket
  • imageRepository: set to a domestic (Aliyun) image repository address
  • podSubnet: the Pod network segment
  • serviceSubnet: the Service network segment
  • Appended at the end: a KubeProxyConfiguration section (ipvs mode) and a KubeletConfiguration section (systemd cgroup driver)
  • nodeRegistration.name: changed to the current host name

The final initialization configuration file is as follows:

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 16.32.15.200
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: master-1
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.26.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd

3. Initialize

kubeadm init --config=kubeadm.yaml --ignore-preflight-errors=SystemVerification

After successful initialization, the output is as follows:

[init] Using Kubernetes version: v1.26.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local node] and IPs [10.96.0.1 16.32.15.200]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost node] and IPs [16.32.15.200 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost node] and IPs [16.32.15.200 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 12.003782 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node node as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node node as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 16.32.15.200:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:02dc7cc7702f9814d01f9a4d5957da3053e74adcc2f583415e516a4b81fb37bc

4. Configure the kubectl config file; this authorizes kubectl to use the admin certificate to manage the k8s cluster

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Verify that you can use the kubectl command

kubectl get nodes
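
At this point the master will normally show a NotReady status, because no CNI network plugin has been installed yet; the output looks roughly like this:

NAME       STATUS     ROLES           AGE   VERSION
master-1   NotReady   control-plane   2m    v1.26.0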

5. Viewing images: the images were pulled by containerd, so the docker images command cannot see them. Use ctr (or crictl) instead:

ctr -n k8s.io images list
crictl images
  • -n specifies the containerd namespace; Kubernetes keeps its images in the k8s.io namespace

6. Add worker nodes to the cluster

Operate on the two worker nodes

1. Run the join command printed at the end of the kubeadm init output on both worker nodes:

kubeadm join 16.32.15.200:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:6839bcc102c7ab089554871dd1a8f3d4261e1482ff13eafdf32fc092ebaf9f7e

If you no longer have it, you can generate a new token and print the join command with:

kubeadm token create --print-join-command

If the command succeeds, the output will report that the node has joined the cluster.


2. Label the two worker nodes

Execute on the master-1 host

kubectl label nodes node-1 node-role.kubernetes.io/work=work
kubectl label nodes node-2 node-role.kubernetes.io/work=work
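
You can confirm the labels with kubectl get nodes: the ROLES column of the two workers should now show work (STATUS may still be NotReady until the network plugin is installed in the next section). The output will look roughly like this:

kubectl get nodes

NAME       STATUS     ROLES           AGE   VERSION
master-1   NotReady   control-plane   20m   v1.26.0
node-1     NotReady   work            5m    v1.26.0
node-2     NotReady   work            5m    v1.26.0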

7. Install the network component Calico

Calico online document address:

Calico.yaml download link:

1. Upload the calico.yaml file (downloaded from the link above) to the server.

Execute on the master host

kubectl apply -f  calico.yaml
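
Applying the manifest takes a while; you can watch the rollout until the Calico Pods are Running (the DaemonSet name calico-node is the one used in the standard manifest):

kubectl get pods -n kube-system -w
# or, non-interactively:
kubectl rollout status daemonset/calico-node -n kube-system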

2. View the cluster status && view the built-in Pod status

kubectl get nodes


3. Check that all kube-system components are in the Running state:

kubectl get pods -n kube-system


8. Test the availability of CoreDNS resolution

1. Download busybox:1.28 image

ctr -n k8s.io images pull docker.io/library/busybox:1.28

2. Test coredns

kubectl run busybox --image=busybox:1.28 --restart=Never --rm -it -- sh

If you don't see a command prompt, try pressing enter.

/ # nslookup kubernetes.default.svc.cluster.local
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default.svc.cluster.local
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
  • Note: use the busybox:1.28 image specifically, not the latest version; with the latest busybox image, nslookup cannot correctly resolve the DNS name and IP.

9. Additional notes

1. The differences between the ctr and crictl commands

Both ctr and crictl are command-line tools for managing a container runtime, but they differ in the following ways:

  • ctr is the native CLI shipped with containerd, while crictl comes from the Kubernetes cri-tools project.

  • ctr talks only to containerd, through its native API, while crictl works with any CRI (Container Runtime Interface)-compatible runtime, such as containerd and CRI-O.

  • ctr exposes lower-level containerd features (namespaces, images, content, snapshots, tasks), while crictl presents the Kubernetes-oriented view the kubelet sees: pods, containers, and images, with basic operations such as create, delete, start, and stop.

  • crictl's commands follow Kubernetes concepts and naming, which makes it the more natural tool for debugging a cluster node, while ctr is better suited to inspecting containerd itself.

To sum up, both are useful container management tools; which one to choose depends on the specific scenario and requirements.
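
As a quick illustration, here are equivalent read-only queries with both tools (Kubernetes keeps its containers and images in containerd's k8s.io namespace):

# list images
ctr -n k8s.io images list
crictl images

# list running containers / tasks
ctr -n k8s.io tasks list
crictl ps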

2. Calico multi-network card configuration

A server may have multiple network cards, but only one of them can reach the external network. In that case you need to add a configuration item to calico.yaml to tell Calico which interface to use. The following entry pins it to the ens33 interface; a regular expression can also be used.

 - name: IP_AUTODETECTION_METHOD
   value: "interface=ens33"

Re-apply the manifest for the change to take effect:

kubectl apply -f calico.yaml


Origin blog.csdn.net/weixin_45310323/article/details/130381980