Cloud native | Kubernetes: first use of the cluster deployment artifact KubeKey (KubeKey on CentOS 7)

Foreword:
Previously I used KubeKey to deploy a simple, non-highly-available Kubernetes cluster with a single etcd instance. After some research I found that the deployment process can be simplified and part of the download step (mainly downloading the Kubernetes components) can be skipped, although the Kubernetes version is then fixed at v1.23.16. The etcd cluster can be deployed as an external cluster suitable for production, the apiserver and other control-plane components are highly available, and the deployment itself is very simple, which is quite nice.
One: Offline installation package
#### Note: this offline package is for CentOS 7 and has passed the full series of verifications on CentOS 7; some versions of Euler should also work.
Link: https://pan.baidu.com/s/1d4YR_a244iZj5aj2DJLU2w?pwd=kkey
Extraction code: kkey
The kkey package from the network-drive link contains roughly the following files (a command sketch for steps 1 to 3 follows the list):
1. The KubeKey installer itself; nothing special here. After extracting it, check whether the kk binary has execute permission and add it if it does not.
2. The Kubernetes component binaries, which can be extracted directly to the root directory.
3. The hard dependencies (RPM packages). After extracting, enter the extraction directory and run rpm -ivh *.
4. The deployment manifest. Fill in the IP addresses and server passwords according to your actual environment; the rest basically does not need to be changed.
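To make steps 1 to 3 concrete, here is a minimal sketch of the commands involved; the archive names are assumptions, since they depend on how the files in the network-drive package are actually named:

# 1. Unpack KubeKey and make sure the kk binary is executable (archive name assumed)
tar -zxvf kubekey-linux-amd64.tar.gz
chmod +x kk

# 2. Extract the Kubernetes component binaries directly to the root directory (archive name assumed)
tar -zxvf kubernetes-binaries.tar.gz -C /

# 3. Install the hard RPM dependencies (archive name assumed)
tar -zxvf dependencies-rpms.tar.gz
cd dependencies-rpms/
rpm -ivh *.rpm
cd -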
Then you can start the deployment. The only downloads left are a handful of container images, which are pulled from KubeSphere's official source by default; if pulling the images is too slow, run export KKZONE=cn and the images will be pulled from Alibaba Cloud instead.
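Under those assumptions, the deployment itself then boils down to the following; config-sample.yaml stands for the manifest shipped in the package (the file name is an assumption), and the logs mentioned in the next section end up under /root/kubekey/logs:

export KKZONE=cn                           # optional: pull the images from Alibaba Cloud
./kk create cluster -f config-sample.yaml  # run the deployment against the manifest
# progress and errors are written to /root/kubekey/logs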
Two: Parsing the deployment manifest file
The full content of the file is shown after the notes below; the parts that matter most are the hosts and roleGroups sections.
Under hosts, add one entry per node. In my experiment I used four VMware virtual machines, each with 4 GB of memory and 2 CPUs; fill in the IP addresses and passwords according to your actual environment.
The user is root, mainly to avoid failures: root has the highest privileges, and it is better not to do deployment and installation work as an ordinary user (yum installs are never done as an ordinary user either, again simply to avoid failures).
Under roleGroups, the .11, .12 and .13 nodes (node1 to node3) serve as both the control-plane (master) nodes and the members of the etcd cluster.
I have not yet analyzed the implementation details of the haproxy used for high availability; a quick way to at least see it running is sketched right after the manifest below.
The detailed installation and deployment logs are in /root/kubekey/logs.
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 192.168.123.11, internalAddress: 192.168.123.11, user: root, password: "password"}
  - {name: node2, address: 192.168.123.12, internalAddress: 192.168.123.12, user: root, password: "password"}
  - {name: node3, address: 192.168.123.13, internalAddress: 192.168.123.13, user: root, password: "password"}
  - {name: node4, address: 192.168.123.14, internalAddress: 192.168.123.14, user: root, password: "password"}
  roleGroups:
    etcd:
    - node1
    - node2
    - node3
    control-plane:
    - node1
    - node2
    - node3
    worker:
    - node4
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.23.16
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.244.0.0/18
    kubeServiceCIDR: 10.96.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []
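As promised above, here is a quick way to at least look at the haproxy-based internal load balancer without digging into its implementation. The pod name is taken from the output in section three; the static-pod path in the comment is only an assumption, not something I have verified:

kubectl -n kube-system get pod haproxy-node4 -owide     # haproxy runs on the worker node (node4)
kubectl -n kube-system logs haproxy-node4 | head -n 20  # show the beginning of the haproxy pod's log
# On node4 itself, the static pod definition should live in the kubelet manifest
# directory (path assumed): /etc/kubernetes/manifests/haproxy.yaml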
Three: Status check after deployment completes
[root@centos1 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-2 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
[root@centos1 ~]# kubectl get po -A -owide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-kube-controllers-84897d7cdf-hrj4f 1/1 Running 0 152m 10.244.28.2 node3 <none> <none>
kube-system calico-node-2m7hp 1/1 Running 0 152m 192.168.123.11 node1 <none> <none>
kube-system calico-node-5ztjk 1/1 Running 0 152m 192.168.123.14 node4 <none> <none>
kube-system calico-node-96dmb 1/1 Running 0 152m 192.168.123.13 node3 <none> <none>
kube-system calico-node-rqp2p 1/1 Running 0 152m 192.168.123.12 node2 <none> <none>
kube-system coredns-b7c47bcdc-bbxck 1/1 Running 0 152m 10.244.28.3 node3 <none> <none>
kube-system coredns-b7c47bcdc-qtvhf 1/1 Running 0 152m 10.244.28.1 node3 <none> <none>
kube-system haproxy-node4 1/1 Running 0 152m 192.168.123.14 node4 <none> <none>
kube-system kube-apiserver-node1 1/1 Running 0 152m 192.168.123.11 node1 <none> <none>
kube-system kube-apiserver-node2 1/1 Running 0 152m 192.168.123.12 node2 <none> <none>
kube-system kube-apiserver-node3 1/1 Running 0 152m 192.168.123.13 node3 <none> <none>
kube-system kube-controller-manager-node1 1/1 Running 0 152m 192.168.123.11 node1 <none> <none>
kube-system kube-controller-manager-node2 1/1 Running 0 152m 192.168.123.12 node2 <none> <none>
kube-system kube-controller-manager-node3 1/1 Running 0 152m 192.168.123.13 node3 <none> <none>
kube-system kube-proxy-649mn 1/1 Running 0 152m 192.168.123.14 node4 <none> <none>
kube-system kube-proxy-7q7ts 1/1 Running 0 152m 192.168.123.13 node3 <none> <none>
kube-system kube-proxy-dmd7v 1/1 Running 0 152m 192.168.123.12 node2 <none> <none>
kube-system kube-proxy-fpb6z 1/1 Running 0 152m 192.168.123.11 node1 <none> <none>
kube-system kube-scheduler-node1 1/1 Running 0 152m 192.168.123.11 node1 <none> <none>
kube-system kube-scheduler-node2 1/1 Running 0 152m 192.168.123.12 node2 <none> <none>
kube-system kube-scheduler-node3 1/1 Running 0 152m 192.168.123.13 node3 <none> <none>
kube-system nodelocaldns-565pz 1/1 Running 0 152m 192.168.123.12 node2 <none> <none>
kube-system nodelocaldns-dpwlx 1/1 Running 0 152m 192.168.123.13 node3 <none> <none>
kube-system nodelocaldns-ndlbw 1/1 Running 0 152m 192.168.123.14 node4 <none> <none>
kube-system nodelocaldns-r8gjl 1/1 Running 0 152m 192.168.123.11 node1 <none> <none>
[root@centos1 ~]# kubectl get no -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
node1 Ready control-plane,master 152m v1.23.16 192.168.123.11 <none> CentOS Linux 7 (Core) 3.10.0-1062.el7.x86_64 docker://20.10.8
node2 Ready control-plane,master 152m v1.23.16 192.168.123.12 <none> CentOS Linux 7 (Core) 3.10.0-1062.el7.x86_64 docker://20.10.8
node3 Ready control-plane,master 152m v1.23.16 192.168.123.13 <none> CentOS Linux 7 (Core) 3.10.0-1062.el7.x86_64 docker://20.10.8
node4 Ready worker 152m v1.23.16 192.168.123.14 <none> CentOS Linux 7 (Core) 3.10.0-1062.el7.x86_64 docker://20.10.8
After shutting down the .12 node (node2), you can see that the Kubernetes cluster still runs normally. (The .11 node, node1, cannot be shut down in this test, because it is the node I manage the cluster from, and its kubeconfig has not been copied to the other nodes.)
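For reference, this is roughly how that check looks from node1; the shutdown on node2 is an ordinary power-off, nothing KubeKey-specific:

# On node2 (192.168.123.12), simulate losing one control-plane node:
#   shutdown -h now
# Back on node1, the apiserver should keep answering through the remaining masters:
kubectl get nodes -owide                 # node2 turns NotReady after a short while
kubectl -n kube-system get pods -owide   # pods on the surviving nodes stay Running
kubectl get cs                           # the etcd member on node2 may show as unhealthy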