2. Build K8S environment based on kubeadm

Table of contents

1. Environment description
2. Initialize all nodes
3. Set the hostnames of the three servers and add them to the hosts file
4. Adjust kernel parameters
5. Install Docker on all nodes
6. Configure the K8s yum source on all nodes
7. Install kubeadm, kubelet and kubectl on all nodes
8. Deploy the Kubernetes master node
9. Join the k8s-node nodes to the master
10. Deploy the CNI network plugin on the master node
11. Check the master node
12. Test the Kubernetes cluster
13. Token creation

kubeadm is a tool provided by the Kubernetes community for quickly deploying Kubernetes clusters. It can stand up a cluster with just two commands:

  • Create a master node: kubeadm init
  • Add a node to the cluster: kubeadm join <master IP and port>

1. Environment description

Virtual machine configuration:

| Server     | Role   | IP address   |
| ---------- | ------ | ------------ |
| k8s-master | master | 192.168.1.33 |
| k8s-node01 | node   | 192.168.1.31 |
| k8s-node02 | node   | 192.168.1.32 |

2. Initialize all nodes

All nodes need to disable the firewall, disable SELinux, and turn off swap.

# Stop the firewall
[root@localhost ~]# systemctl stop firewalld
# Disable the firewalld service
[root@localhost ~]# systemctl disable firewalld
# Temporarily disable SELinux
[root@localhost ~]# setenforce 0
[root@localhost ~]# iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
# Turn off swap
# Temporary (takes effect immediately); check with the free command
[root@localhost ~]# swapoff -a
# Permanent (takes effect after reboot)
[root@localhost ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab
[root@localhost ~]# free -g
              total        used        free      shared  buff/cache   available
Mem:              7           4           1           0           0           2
Swap:             0           0           0

The above uses the master as an example; the same commands must also be run on node01 and node02.
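The sed command above comments out every swap entry in /etc/fstab. A minimal sketch of the same edit, applied to a throwaway copy rather than the real file (the fstab contents below are a hypothetical example):

```shell
# Demonstrate the swap-disabling sed edit on a throwaway copy of fstab
# (hypothetical contents; do not confuse this with your real /etc/fstab)
tmp_fstab=$(mktemp)
cat > "$tmp_fstab" << 'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF
# Same pattern as above: comment out every line mentioning swap
sed -ri 's/.*swap.*/#&/' "$tmp_fstab"
swap_line=$(grep '^#' "$tmp_fstab")
echo "$swap_line"
rm -f "$tmp_fstab"
```

Only the swap line gains a leading `#`; the root filesystem line is untouched, so the machine still boots normally.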

3. Set the hostnames of the three servers and add them to the hosts file

cat >> /etc/hosts << EOF
192.168.1.33 master
192.168.1.31 node01
192.168.1.32 node02
EOF

Alternatively, you can set the hostname with hostnamectl set-hostname xxx:

# [run on the master node]
hostnamectl set-hostname master
# [run on the node01 node]
hostnamectl set-hostname node01
# [run on the node02 node]
hostnamectl set-hostname node02

4. Adjust kernel parameters

Pass bridged IPv4 traffic to iptables chains.

cat > /etc/sysctl.d/kubernetes.conf << EOF
# Enable bridge mode so bridged traffic is passed to iptables chains
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
# Disable the IPv6 protocol
net.ipv6.conf.all.disable_ipv6=1
net.ipv4.ip_forward=1
EOF

The above uses the master node as an example; node01 and node02 must run it as well. Then load the parameters on all three nodes with sysctl --system.
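Before installing the fragment under /etc/sysctl.d, it can be useful to sanity-check that every line parses as key=value. A small sketch of such a check, run against a temporary copy of the file (the temporary directory is a stand-in for /etc/sysctl.d):

```shell
# Sketch: write the kubernetes.conf sysctl fragment to a temporary directory
# and verify every non-comment line has the key=value shape expected by sysctl
tmp_dir=$(mktemp -d)
cat > "$tmp_dir/kubernetes.conf" << 'EOF'
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.ipv6.conf.all.disable_ipv6=1
net.ipv4.ip_forward=1
EOF
# Count lines that are neither comments nor key=value pairs
bad_lines=$(grep -vE '^(#|[a-z0-9._-]+=[0-9]+$)' "$tmp_dir/kubernetes.conf" | wc -l)
key_count=$(grep -c '=' "$tmp_dir/kubernetes.conf")
echo "keys: $key_count, malformed: $bad_lines"
rm -rf "$tmp_dir"
```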

5. Install Docker on all nodes

Kubernetes here uses Docker as the container runtime (CRI), so install Docker first. The following uses the master node as an example; node01 and node02 must run it as well:

# Configure Docker's yum source [Aliyun]
cat >/etc/yum.repos.d/docker.repo<<EOF
[docker-ce-edge]
name=Docker CE Edge - \$basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/\$basearch/edge
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
EOF

# Install Docker via yum
yum -y install docker-ce
# Check the Docker version
docker --version

# Configure Docker's registry mirror [Aliyun]
# (use > rather than >>, so an existing file is not turned into invalid JSON)
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF

# Enable and start Docker
systemctl enable docker
systemctl start docker
systemctl status docker
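A malformed daemon.json prevents the Docker daemon from starting, so it is worth validating the file before restarting. A sketch of such a check on a temporary copy (it uses python3 for JSON parsing when available, with a rough grep fallback otherwise):

```shell
# Sketch: validate the daemon.json mirror config on a temporary copy
# before writing it to /etc/docker (mirror URL taken from the step above)
tmp_daemon=$(mktemp)
cat > "$tmp_daemon" << 'EOF'
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
if command -v python3 > /dev/null 2>&1; then
  # Strict check: the file must parse as JSON
  python3 -m json.tool "$tmp_daemon" > /dev/null 2>&1 && daemon_ok=yes || daemon_ok=no
else
  # Very rough fallback check when python3 is unavailable
  grep -q '"registry-mirrors"' "$tmp_daemon" && daemon_ok=yes || daemon_ok=no
fi
echo "daemon.json valid: $daemon_ok"
rm -f "$tmp_daemon"
```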

6. Configure the K8s yum source on all nodes

Run the following command to configure the k8s yum source [Aliyun]:

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

The above uses the master as an example; the same command must also be run on node01 and node02.

7. Install kubeadm, kubelet and kubectl on all nodes

Install kubelet, kubeadm, kubectl, and specify the version:

[root@localhost ~]# yum install -y kubelet-1.21.3 kubeadm-1.21.3 kubectl-1.21.3

Taking the master node as an example, node01 and node02 also need to execute:

Note: depending on network speed, this may take some time.

After the installation completes, the installed packages are listed (screenshot omitted).

The components that kubeadm installs run as Pods, i.e. at the bottom layer they run as containers, so kubelet must be set to start automatically.

[root@localhost ~]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@localhost ~]# systemctl start kubelet

8. Deploy the kubernetes Master node

This step is executed only on the master node. Run the following on the master (192.168.1.33) to initialize the cluster:

kubeadm init \
--apiserver-advertise-address=192.168.1.33 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.21.3 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16

Since the default image registry k8s.gcr.io cannot be reached from within China, we specify Alibaba Cloud's mirror repository instead. The command runs slowly because it pulls images in the background; you can watch the pulled images with docker images.

Parameter description:

kubeadm init \
--apiserver-advertise-address=192.168.1.33 \                   # address the master listens on; change to your own master IP
--image-repository registry.aliyuncs.com/google_containers \   # use the Aliyun image source; a domestic mirror is faster
--kubernetes-version v1.21.3 \                                 # k8s version; must match the kubeadm/kubelet version installed above
--service-cidr=10.96.0.0/12 \                                  # internal cluster (Service) network
--pod-network-cidr=10.244.0.0/16                               # Pod network
# Keep these service-cidr and pod-network-cidr values; otherwise the kube-flannel.yaml file used later must be modified
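Since most of these flags are environment-specific, one way to keep them adaptable is to assemble the command from variables. A sketch, using the values from this article, that prints the command instead of running it (run the printed command manually on the master):

```shell
# Sketch: build the kubeadm init command from variables so it is easy
# to adapt to your own environment (values below are from this article)
MASTER_IP=192.168.1.33
IMAGE_REPO=registry.aliyuncs.com/google_containers
K8S_VERSION=v1.21.3
SERVICE_CIDR=10.96.0.0/12
POD_CIDR=10.244.0.0/16
init_cmd="kubeadm init --apiserver-advertise-address=${MASTER_IP} --image-repository ${IMAGE_REPO} --kubernetes-version ${K8S_VERSION} --service-cidr=${SERVICE_CIDR} --pod-network-cidr=${POD_CIDR}"
# Print the assembled command rather than executing it here
echo "$init_cmd"
```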

After running this command, the images are pulled in the background, which takes some time.

The following figure shows the result after the pull completes:

Looking back through the kubeadm init output, there are several key pieces of information.

As the output explains, to start using the cluster you need to run:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
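The three commands above just copy the admin kubeconfig into your home directory so kubectl can find it. A sketch of the same steps simulated in a throwaway directory (fake_home and fake_admin are stand-ins for $HOME and /etc/kubernetes/admin.conf):

```shell
# Sketch: simulate the kubeconfig setup in a throwaway HOME
# (fake_home / fake_admin are hypothetical stand-ins for the real paths)
fake_home=$(mktemp -d)
fake_admin=$(mktemp)
echo 'apiVersion: v1' > "$fake_admin"
mkdir -p "$fake_home/.kube"
cp "$fake_admin" "$fake_home/.kube/config"
# On a real master you would also chown the copy to your login user:
# sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubeconfig_exists=no
[ -s "$fake_home/.kube/config" ] && kubeconfig_exists=yes
echo "kubeconfig in place: $kubeconfig_exists"
rm -rf "$fake_home" "$fake_admin"
```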

The kubeadm join command in the output is what adds node01 and node02 to the master's cluster; we will use it in the next step.

9. Join the k8s-node nodes to the master

This step is executed only on the node01 (192.168.1.31) and node02 (192.168.1.32) nodes. Copy the kubeadm join command from the init output above and run it on node01 and node02 to add them to the master's cluster. Do not run it on the master node, or it will report an error.

kubeadm join 192.168.1.33:6443 --token ailzq6.z3r7d3u0ov225p99 \
        --discovery-token-ca-cert-hash sha256:45c01d464d97fe9d14d42c91b629bbe561aba2508e9db823f81b00b911c8ccfa 

A token is valid for 24 hours by default. Once it expires it can no longer be used and a new one must be created, as follows:

kubeadm token create --print-join-command
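If you need the token or CA hash on their own (for scripting or troubleshooting), they can be pulled out of the printed join command with sed. A sketch, using the join command shown earlier in this article as sample input:

```shell
# Sketch: extract the token and CA cert hash from a join command string
# (join_cmd below is the example join command printed earlier in this article)
join_cmd='kubeadm join 192.168.1.33:6443 --token ailzq6.z3r7d3u0ov225p99 --discovery-token-ca-cert-hash sha256:45c01d464d97fe9d14d42c91b629bbe561aba2508e9db823f81b00b911c8ccfa'
# Capture the word following each flag
token=$(printf '%s\n' "$join_cmd" | sed -n 's/.*--token \([^ ]*\).*/\1/p')
ca_hash=$(printf '%s\n' "$join_cmd" | sed -n 's/.*--discovery-token-ca-cert-hash \([^ ]*\).*/\1/p')
echo "token=$token"
echo "hash=$ca_hash"
```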

After both nodes have joined, view the current node information on the master with the kubectl get nodes command:

node01 and node02 have successfully joined the cluster, but their status is still NotReady: a network plugin must be installed before the network works.

10. Deploy the CNI network plugin on the master node

First apply kube-flannel.yml directly from the GitHub repository:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Downloads from sites outside China are slow and the connection often fails, so a domestic mirror of the file can be used instead:

Install:

kubectl apply -f http://120.78.77.38/file/kube-flannel.yaml

After the CNI network plugin is installed, the status of all three nodes becomes Ready.

Check the pod status with kubectl get pod -n kube-system; all pods should show Running.

11. Check the master node

Check the master component status with kubectl get cs.

The status of controller-manager and scheduler is shown as unhealthy.

We need to modify two configuration files:

  • vim /etc/kubernetes/manifests/kube-scheduler.yaml: comment out the --port=0 line
  • vim /etc/kubernetes/manifests/kube-controller-manager.yaml: comment out the --port=0 line
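Instead of editing by hand in vim, the same change can be made with sed. A sketch applied to a throwaway snippet (a hypothetical excerpt of the manifest's command list; on a real master you would run the sed against the files under /etc/kubernetes/manifests):

```shell
# Demonstrate commenting out --port=0 on a throwaway manifest snippet
# (hypothetical excerpt; edit the real files under /etc/kubernetes/manifests)
tmp_manifest=$(mktemp)
cat > "$tmp_manifest" << 'EOF'
    - kube-scheduler
    - --bind-address=127.0.0.1
    - --port=0
EOF
# Prefix the --port=0 list entry with a YAML comment marker, as described above
sed -i 's/^\([[:space:]]*\)- --port=0/\1# - --port=0/' "$tmp_manifest"
commented_ports=$(grep -c '^[[:space:]]*# - --port=0' "$tmp_manifest")
cat "$tmp_manifest"
rm -f "$tmp_manifest"
```

The kubelet watches the manifests directory, so the static pods restart automatically after the files change.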

Then check again whether the services are healthy:

1) kubectl get pods -A: check whether all pods are running normally
2) kubectl get cs: check whether the master components are healthy
3) kubectl get nodes: check whether the node machines are Ready

12. Test the Kubernetes cluster

Create a pod in the Kubernetes cluster. Here we deploy nginx to verify that everything runs normally.

# Deploy nginx (this pulls the nginx image over the network)
kubectl create deployment nginx --image=nginx

# Expose the port so it can be accessed from outside the cluster
kubectl expose deployment nginx --port=80 --type=NodePort

# Check the status
kubectl get pods

Through kubectl get pods and kubectl get svc we can see that k8s has pulled the nginx image and is running the container, with the service exposed on NodePort 30140 (the NodePort is assigned at random, so yours may differ). Note that the kubectl expose step above is required for the port to be reachable.

Test it: master, node01 and node02 can all access nginx via port 30140 on their own IP (screenshots omitted):

  • master node
  • node01 node
  • node02 node

nginx has been started successfully.

13. Token creation

A node needs a token to join the cluster. A token is valid for 24 hours and must be recreated after it expires.
The creation command is: kubeadm token create --print-join-command

Then run the printed kubeadm join command to add the node to the master's cluster.

Origin blog.csdn.net/Weixiaohuai/article/details/131686649