1. Installation requirements
Before starting, the machines used to deploy the Kubernetes cluster must meet the following requirements:
- One or more machines running CentOS 7.x (x86_64)
- Hardware: 2 GB of RAM or more, 2 CPUs or more, 30 GB of disk or more
- Network connectivity between all machines in the cluster
- Internet access on every machine (required to pull images)
- Swap disabled
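The checks above can be scripted. A minimal sketch, assuming a Linux host with /proc mounted (the thresholds mirror the requirements listed here):

```shell
# Pre-flight check for the requirements above (run on each candidate machine).
cpus=$(nproc)
mem_mb=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) / 1024 ))
swap_kb=$(awk '/SwapTotal/ {print $2}' /proc/meminfo)
echo "CPUs:   $cpus (need >= 2)"
echo "Memory: ${mem_mb} MB (need >= 2048)"
if [ "$swap_kb" -eq 0 ]; then
  echo "Swap:   off"
else
  echo "Swap:   ON - disable it before continuing"
fi
```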
2. Learning objectives
- Install Docker and kubeadm on all nodes
- Deploy Kubernetes Master
- Deploy the container network plugin
- Deploy Kubernetes Node and add the node to the Kubernetes cluster
- Deploy the Dashboard Web page to visually view Kubernetes resources
3. Prepare the environment
Role | IP
---|---
k8s-master | 172.31.53.143 (intranet)
k8s-node1 | 172.31.53.144 (intranet)
Disable the firewall:
$ systemctl stop firewalld
$ systemctl disable firewalld
Disable SELinux:
$ sed -i 's/enforcing/disabled/' /etc/selinux/config # permanent
$ setenforce 0 # temporary
Disable swap:
$ swapoff -a # temporary
$ vim /etc/fstab # permanent (comment out the swap line)
Set the hostname:
$ hostnamectl set-hostname <hostname>
Add hosts entries on the master:
$ cat >> /etc/hosts << EOF
172.31.53.143 k8s-master
172.31.53.144 k8s-node1
EOF
Pass bridged IPv4 traffic to the iptables chains:
$ cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sysctl --system # apply
Synchronize time:
$ yum install ntpdate -y # CentOS 8: yum -y install chrony
$ ntpdate time.windows.com
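To confirm the bridge settings took effect, they can be read back from /proc. A sketch; note the paths only exist once the br_netfilter kernel module is loaded, so the script reports rather than fails when they are absent:

```shell
# Read back the bridge-netfilter sysctls written above; expect 1 on cluster nodes.
for key in net/bridge/bridge-nf-call-iptables net/bridge/bridge-nf-call-ip6tables; do
  f="/proc/sys/$key"
  if [ -r "$f" ]; then
    echo "$key = $(cat "$f")"
  else
    echo "$key missing (is br_netfilter loaded?)"
  fi
done
```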
4. Install Docker/kubeadm/kubelet on all nodes
The default CRI (container runtime) for Kubernetes is Docker, so install Docker first.
4.1 Install Docker
$ wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
$ yum -y install docker-ce-18.06.1.ce-3.el7
$ systemctl enable docker && systemctl start docker
$ docker --version
Docker version 18.06.1-ce, build e68fc7a
$ cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors": ["https://a9b10k37.mirror.aliyuncs.com"]
}
EOF
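A typo in daemon.json prevents the Docker daemon from starting, so it is worth validating the file before restarting. A sketch that checks a copy of the content in a temp file (python3 is assumed to be available; the restart commands are shown commented for reference):

```shell
# Validate the daemon.json content as JSON before applying it.
tmp=$(mktemp)
cat > "$tmp" << 'EOF'
{
  "registry-mirrors": ["https://a9b10k37.mirror.aliyuncs.com"]
}
EOF
python3 -m json.tool "$tmp" > /dev/null && echo "daemon.json: valid JSON"
rm -f "$tmp"
# systemctl restart docker                      # apply the mirror config
# docker info | grep -A1 'Registry Mirrors'    # confirm the mirror is active
```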
4.2 Add Alibaba Cloud YUM Software Source
$ cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
4.3 Install kubeadm, kubelet and kubectl
Because versions are updated frequently, a specific version is pinned here:
$ yum install -y kubelet-1.17.0 kubeadm-1.17.0 kubectl-1.17.0
$ systemctl enable kubelet
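A quick sanity check that all three binaries landed on each node; a sketch (on a machine without them it simply reports NOT found):

```shell
# Check that kubelet, kubeadm and kubectl are all on PATH.
for bin in kubelet kubeadm kubectl; do
  if command -v "$bin" > /dev/null 2>&1; then
    echo "$bin: $(command -v "$bin")"
  else
    echo "$bin: NOT found"
  fi
done
```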
5. Deploy Kubernetes Master
Run on 172.31.53.143 (the master; 172.31.53.143 is its intranet IP).
Note: --apiserver-advertise-address must be the intranet IP! Advertising an address the machine cannot bind to (such as a cloud VM's external IP) will cause the initialization to fail.
$ kubeadm init \
--apiserver-advertise-address=172.31.53.143 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.17.0 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16
Since the default image registry k8s.gcr.io is not reachable from China, the Alibaba Cloud mirror repository is specified here.
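The two CIDR flags also deserve a note: the service network (--service-cidr) and the pod network (--pod-network-cidr) must not overlap with each other or with the node network (172.31.x.x here), and 10.244.0.0/16 matches flannel's default network. A sketch of the ranges the masks above cover:

```shell
# 10.96.0.0/12 fixes the top 12 bits, so the second octet spans 96..111.
blocks=$(( 1 << (16 - 12) ))   # a /12 covers 16 consecutive /16 blocks
echo "service range: 10.96.0.0 - 10.$(( 96 + blocks - 1 )).255.255"
echo "pod range:     10.244.0.0 - 10.244.255.255   # a single /16"
```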
Set up kubectl for the current user:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl get nodes
6. Install Pod Network Plug-in (CNI)
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# check whether the pods have started
$ kubectl get pods -n kube-system
# list pods
$ kubectl get pods #--all-namespaces
PS: if https://raw.githubusercontent.com is not accessible, install offline:
# offline download: https://pan.baidu.com/s/1dmk2ZnJCrrxoReex3FwUkg (extraction code: vniv)
# after downloading, load the image first, then install flannel
docker load -i flannel-v0.11.0-linux-amd64.tar.gz
kubectl create -f kube-flannel.yml
# check whether the installation succeeded
kubectl get pod -n kube-system
# if STATUS is not Running, the install failed; delete and reinstall:
#kubectl delete -f kube-flannel.yml
Make sure you can access the quay.io registry.
If the image pull fails, you can switch to this mirror image: lizhenliang/flannel:v0.11.0-amd64
7. Join Kubernetes Node
Run on 172.31.53.144 (the node).
To add a new node to the cluster, run the kubeadm join command printed by kubeadm init:
$ kubeadm join 172.31.53.143:6443 --token 5fuqco.mxzqki2me183mssm \
--discovery-token-ca-cert-hash sha256:0aaceebc4bc3bd5cc1653666683eb72b9e328747c7d077e473e7a130a6463586
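The token printed by kubeadm init expires (after 24 hours by default). If the join fails with an expired token, a fresh join command can be generated on the master. A sketch, guarded so it is a no-op on machines without kubeadm:

```shell
# Regenerate a join command on the master when the original token has expired.
if command -v kubeadm > /dev/null 2>&1; then
  kubeadm token create --print-join-command
else
  echo "kubeadm not installed on this machine"
fi
```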
A possible problem: the master stays in the NotReady state after kubeadm init.
Troubleshooting: view the kubelet log with systemctl status kubelet. It reports:
plugin flannel does not support config version ""
Fix: add the CNI version number to /etc/cni/net.d/10-flannel.conflist:
$ vim /etc/cni/net.d/10-flannel.conflist
The file content should look like this:
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
After modifying the file, run:
$ systemctl daemon-reload
Check the cluster status again:
$ kubectl get nodes
The master is now Ready, but node1 is still NotReady. On node1, check the kubelet log; it still reports an error:
no valid networks found in /etc/cni/net.d
Apply the same CNI version fix on node1 and restart; the cluster status then becomes normal.
8. Test the kubernetes cluster
Create a pod in the Kubernetes cluster and verify that it is running normally:
$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --port=80 --type=NodePort
$ kubectl get pod,svc
# scale out to 3 nginx replicas
$ kubectl scale deployment nginx --replicas=3
Example output:
[root@master ~]# kubectl get pod,svc
NAME READY STATUS RESTARTS AGE
pod/nginx-86c57db685-glwd5 1/1 Running 0 19s
pod/nginx-86c57db685-j8d94 1/1 Running 0 19s
pod/nginx-86c57db685-z22p4 1/1 Running 0 3m52s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 131m
service/nginx NodePort 10.96.139.211 <none> 80:32618/TCP 3m43s
Access URL: http://NodeIP:Port (here, http://<external IP>:32618)
A successful visit shows the nginx welcome page.
At this point, the k8s cluster installation and deployment is complete!