Kubernetes certification exam self-study series | Installing a Kubernetes cluster

Book source: "CKA/CKAD Exam Guide: A Complete Guide from Docker to Kubernetes"

These reading notes were organized while studying and are shared here for reference. If there is any copyright infringement, the post will be removed. Thank you for your support!

See also the series summary post: Kubernetes certification exam self-study series | Summary (COCOgsta's Blog, CSDN)


3.2.1 Experiment topology and environment

Figure 3-5 shows the topology and configuration.

3.2.2 Experimental preparation

Before installing Kubernetes, you need to configure the yum repositories, disable SELinux, disable swap, and so on.

Step 1: It is recommended that all nodes run CentOS 7.4; keep /etc/hosts synchronized across all nodes.

[root@vmsX ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.26.10   vms10.rhce.cc          vms10
192.168.26.11   vms11.rhce.cc          vms11
192.168.26.12   vms12.rhce.cc          vms12
[root@vmsX ~]#
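
If /etc/hosts is maintained on one node, it can be pushed to the other two. A minimal sketch, assuming the file is already correct on vms10 and root SSH access to the other nodes is available:

# Run on vms10: copy the hosts file to the two worker nodes.
for host in vms11 vms12; do
  scp /etc/hosts root@${host}:/etc/hosts
done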

Step 2: Make sure the firewall default zone is set to trusted and SELinux is disabled on all nodes.

[root@vmsX ~]# firewall-cmd --get-default-zone 
trusted 
[root@vmsX ~]# getenforce 
Disabled 
[root@vmsX ~]#
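
The output above only verifies the desired state (trusted zone, SELinux disabled). If a node does not look like this yet, it can be brought into that state with commands along these lines; a sketch, not from the book, and note that fully disabling SELinux still requires a reboot:

# Set the firewalld default zone to trusted (applies to both runtime and permanent config).
firewall-cmd --set-default-zone=trusted

# Switch SELinux to permissive now and disable it permanently in the config file.
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config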

Step 3: Turn off swap on all nodes, and comment out the swap-related entries in /etc/fstab.

[root@vms10 ~]# swapon -s 
Filename          Type         Size         Used    Priority
/dev/sda2         partition    10485756       12      -1
[root@vmsX ~]# swapoff -a 
[root@vmsX ~]# sed -i '/swap/s/UUID/#UUID/g' /etc/fstab
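
A quick way to confirm that swap is really off after these commands (not shown in the book):

# swapon should list no swap devices, and free should report 0 for swap.
swapon -s
free -m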

Step 4: Configure yum sources on all nodes.

[root@vmsX ~]# rm -rf /etc/yum.repos.d/* ; wget -P /etc/yum.repos.d/ ftp://ftp.rhce.cc/k8s/*
[root@vmsX ~]#
...
[root@vmsX ~]#
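
The repo files under ftp://ftp.rhce.cc/k8s/ belong to the book's lab environment. If that server is not reachable, roughly equivalent repos can be created by hand; a sketch using the Aliyun mirrors (the repo names and URLs here are my assumption, not taken from the book):

# Kubernetes repo (Aliyun mirror).
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

# Docker CE repo (Aliyun mirror).
wget -P /etc/yum.repos.d/ https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo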

Step 5: Install and start docker on all nodes, and set docker to start automatically.

yum install docker-ce -y 
systemctl enable docker --now
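
One detail the book's steps do not show: kubeadm 1.21 configures the kubelet to use the systemd cgroup driver, while Docker defaults to cgroupfs, which can produce warnings or an unstable kubelet. If you run into that, a daemon.json along these lines is the usual fix (an assumption, adjust for your environment):

# Make Docker use the systemd cgroup driver to match the kubelet.
mkdir -p /etc/docker
cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker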

Step 6: Set kernel parameters on all nodes.

[root@vmsX ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
[root@vmsX ~]#

Make the settings take effect immediately.

[root@vmsX ~]# sysctl -p /etc/sysctl.d/k8s.conf 
[root@vmsX ~]#
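
The two bridge-nf-call keys only exist once the br_netfilter kernel module is loaded, so if sysctl complains that they are unknown, load the module first (a sketch, not shown in the book):

# Load br_netfilter now and make sure it is loaded on every boot.
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl -p /etc/sysctl.d/k8s.conf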

Step 7: Install the kubeadm, kubelet, and kubectl packages on all nodes.

[root@vmsX ~]# yum install -y kubelet-1.21.1-0 kubeadm-1.21.1-0 kubectl-1.21.1-0 --disableexcludes=kubernetes
Loaded plugins: fastestmirror
...
Updated:
  yum.noarch 0:3.4.3-167.el7.centos

Complete!
[root@vmsX ~]#
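
A quick optional check that all three components ended up at the expected version:

kubeadm version -o short          # expected v1.21.1
kubelet --version                 # expected Kubernetes v1.21.1
kubectl version --client --short  # expected Client Version: v1.21.1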

Step 8: Start kubelet on all nodes and set it to start automatically at boot.

[root@vmsX ~]# systemctl restart kubelet ; systemctl enable kubelet 
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@vmsX ~]# 
Note: at this point the kubelet status is activating.
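
This activating state is expected: the kubelet has no configuration yet and systemd keeps restarting it; it only settles into active (running) after kubeadm init or kubeadm join generates /var/lib/kubelet/config.yaml. If you want to watch it, something like:

systemctl status kubelet
journalctl -u kubelet -n 20 --no-pager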

3.2.3 Install the master

The following operations are performed on vms10 in order to configure it as the master.

Step 1: Perform initialization on the master.

[root@vms10 ~]# kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.21.1 --pod-network-cidr=10.244.0.0/16
...output...
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf
...output...
Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.26.10:6443 --token 524g6o.cpzywevx4ojems69 \
    --discovery-token-ca-cert-hash sha256:6b19ba9d3371c0ac474e8e70569dfc8ac93c76fd841ac8df025a43d49d8cd860
[root@vms10 ~]#

Step 2: Copy the kubeconfig file.

[root@vms10 ~]# mkdir -p $HOME/.kube
[root@vms10 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@vms10 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@vms10 ~]#
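
The join token printed by kubeadm init is valid for 24 hours by default. If the workers are added later and the token has expired, a fresh join command can be generated on the master (not part of the book's steps):

kubeadm token create --print-join-command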

3.2.4 Configure workers to join the cluster

The following steps add vms11 and vms12 to the Kubernetes cluster as workers.

Step 1: Execute the following command on both vms11 and vms12.

[root@vmsX ~]# kubeadm join 192.168.26.10:6443 --token w6v53s.16xt8ssokjuswlzx --discovery-token-ca-cert-hash sha256:6b19ba9d3371c0ac474e8e70569dfc8ac93c76fd841ac8df025a43d49d8cd860
[preflight] Running pre-flight checks
   [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
...output...
Run 'kubectl get nodes' on the master to see this node join the cluster.

[root@vmsX ~]#

Step 2: Switch to the master, and you can see that all nodes have joined the cluster.

[root@vms10 ~]# kubectl get nodes
NAME            STATUS     ROLES                  AGE     VERSION
vms10.rhce.cc   NotReady   control-plane,master   2m27s   v1.21.1
vms11.rhce.cc   NotReady   <none>                 21s     v1.21.1
vms12.rhce.cc   NotReady   <none>                 19s     v1.21.1
[root@vms10 ~]#

Here you can see that all nodes are in the NotReady state; we need to install the Calico network plugin for the cluster to work properly.
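
Before the network plugin is installed, the cause of NotReady can also be seen from the system pods: the coredns pods stay Pending because no CNI plugin is ready yet. A quick optional look:

# coredns remains Pending until a CNI plugin such as Calico is installed.
kubectl get pods -n kube-system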

3.2.5 Install calico network

Because pods in a Kubernetes cluster are distributed across different hosts, a CNI network plugin must be installed so that these pods can communicate across hosts; here we choose the Calico network.

Step 1: On the master, download the YAML file that configures the Calico network.

[root@vms10 ~]# wget https://docs.projectcalico.org/v3.19/manifests/calico.yaml 
...output...
[root@vms10 ~]#

Step 2: Modify the pod network segment in calico.yaml.

Change the pod network segment in calico.yaml to the segment specified by the --pod-network-cidr option of kubeadm init. Open the file with vim and search for "192"; you will find the following lines.

# no effect. This should fall within '--cluster-cidr'.
# - name: CALICO_IPV4POOL_CIDR 
#   value: "192.168.0.0/16"
# Disable file logging so 'kubectl logs' works.
- name: CALICO_DISABLE_FILE_LOGGING 
  value: "true"

Remove the # and the space after it from both lines, and change 192.168.0.0/16 to 10.244.0.0/16.

# no effect. This should fall within '--cluster-cidr'.
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
# Disable file logging so 'kubectl logs' works.
- name: CALICO_DISABLE_FILE_LOGGING
  value: "true"
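
The same change can be made non-interactively with sed; a sketch based on the two lines shown above (verify the result with grep before applying the file, since the exact indentation in calico.yaml may differ):

sed -i -e 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' \
       -e 's|#   value: "192.168.0.0/16"|  value: "10.244.0.0/16"|' calico.yaml
grep -A1 CALICO_IPV4POOL_CIDR calico.yaml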

Step 3: Download the required images in advance.

Check which images this file uses.

[root@vms10 ~]# grep image calico.yaml
         image: calico/cni:v3.19.1
         image: calico/cni:v3.19.1
         image: calico/pod2daemon-flexvol:v3.19.1
         image: calico/node:v3.19.1
         image: calico/kube-controllers:v3.19.1
[root@vms10 ~]#

Download these images on all nodes (including the master).

[root@vmsX ~]# for i in calico/cni:v3.19.1 calico/pod2daemon-flexvol:v3.19.1 calico/node:v3.19.1 calico/kube-controllers:v3.19.1 ; do docker pull $i ; done 
...(lots of output)...
[root@vmsX ~]#

Step 4: Install calico network.

Execute the following command on the master.

[root@vms10 ~]# kubectl apply -f calico.yaml 
...(lots of output)...
[root@vms10 ~]#
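
The Calico pods need a little while to start on every node; their progress can be watched before checking the nodes again (optional):

kubectl get pods -n kube-system -l k8s-app=calico-node -o wide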

Step 5: Verify the result.

Run kubectl get nodes on the master again to check the result.

[root@vms10 ~]# kubectl get nodes 
NAME            STATUS   ROLES                  AGE   VERSION
vms10.rhce.cc   Ready    control-plane,master   13m   v1.21.1
vms11.rhce.cc   Ready    <none>                 11m   v1.21.1
vms12.rhce.cc   Ready    <none>                 11m   v1.21.1
[root@vms10 ~]#

You can see that the status of all nodes has changed to Ready.
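
As a final optional check, the system pods (including coredns, which was Pending earlier) should now all be Running:

kubectl get pods -n kube-system -o wide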

 

Origin: blog.csdn.net/guolianggsta/article/details/130469950