Foreword:
Installing and deploying a Kubernetes cluster is the first hurdle when learning Kubernetes. Deployment is genuinely difficult, especially in binary mode. minikube exists, and kubeadm greatly lowers the bar, but for a learning or testing environment, how can we deploy a Kubernetes cluster quickly, simply, and elegantly?
At present, the best answer is the kubekey project, known as kk for short.
The project targets exactly this deployment difficulty and dramatically lowers the threshold for standing up a Kubernetes cluster. It can deploy a single-master cluster or a multi-master high-availability cluster very quickly: an online installation usually takes around 10 minutes, and an offline installation can shorten that to roughly one to two minutes.
This article gives a brief walkthrough of using kubekey to deploy a single-master Kubernetes cluster.
One.
Where to download the kubekey project
Releases · kubesphere/kubekey · GitHub
Roughly translating the project's About section: kubekey can install a Kubernetes cluster alone, or Kubernetes together with KubeSphere, and it supports multi-cloud architectures, multiple worker nodes, and highly available Kubernetes clusters.
Note that since kubekey is a sub-project of the KubeSphere company, it is not fully decoupled from KubeSphere. In other words, you either use kubekey to install only a Kubernetes cluster, or to install Kubernetes and KubeSphere together; you cannot use kubekey to install KubeSphere alone.
OK, this article uses kubekey only to install and deploy a single-master Kubernetes cluster, via online deployment (note: not the offline method, which I have not yet researched).
As with any installation tool, the latest version is usually the better choice: it supports more Kubernetes versions, carries more bug fixes, and has more features.
Since this is a demonstration, I picked a release more or less at random; the version used in this article is kubekey-v3.1.0-alpha.0-linux-amd64.tar.gz.
As you can see, the latest stable release, 3.0.8, is also quite attractive and tracks current technology closely; if you want the pleasure of a newer Kubernetes version, that release is naturally the better fit.
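For reference, the release tarball can be fetched and unpacked along these lines (a sketch: the URL simply follows the naming convention of the GitHub releases page, so swap in whichever version you choose; the download lines are left commented so you can inspect the URL first):

```shell
# Build the release URL from the chosen version, following the naming
# convention of the kubekey GitHub releases page.
VERSION="v3.1.0-alpha.0"
URL="https://github.com/kubesphere/kubekey/releases/download/${VERSION}/kubekey-${VERSION}-linux-amd64.tar.gz"
echo "$URL"
# curl -LO "$URL"
# tar -zxf "kubekey-${VERSION}-linux-amd64.tar.gz" && ./kk version
```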
Two.
Prerequisites for using kubekey
Roughly translated, the requirements are:
First, each server needs at least a 2-core CPU, 4 GB of memory, and 20 GB of disk space.
Second, the ssh service works normally and every node is reachable over ssh.
Third, if deploying as an ordinary user, that user can run the curl and openssl commands via sudo.
Fourth, a docker environment.
Fifth, selinux is disabled or properly configured; disabling it outright is recommended.
Sixth, a freshly installed, clean server is best.
Seventh, socat and conntrack must be installed; these two are hard dependencies. ebtables, ipset, and ipvsadm are softer dependencies: you can skip them, but installing them is recommended.
To sum up, on CentOS 7 you need: a docker environment (optional, since kubekey can install it for you), a time server, sshd with known server passwords, selinux and the firewall disabled, and a working external yum repository, ideally including the epel and base repositories.
Install the dependencies with the following command:
yum install conntrack socat ipset ipvsadm ebtables -y
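A quick pre-flight check can catch missing dependencies before kubekey's own pre-check does; this is only a sketch covering the points above:

```shell
# Warn about any missing hard dependency; kubekey's pre-check will not
# proceed without socat and conntrack.
for cmd in socat conntrack curl openssl; do
  command -v "$cmd" >/dev/null 2>&1 || echo "missing: $cmd"
done
# selinux should report Disabled (or at least Permissive); getenforce may
# not exist on non-selinux systems, hence the fallback.
getenforce 2>/dev/null || echo "getenforce not available"
```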
This example uses two servers:
IPs: 192.168.123.11 and 192.168.123.12, both running CentOS 7
[root@node1 ~]# cat /etc/redhat-release
CentOS Linux release 7.7.1908 (Core)
When deploying with kubeadm or in binary mode, we often upgrade the kernel first, so why is that step not mentioned here? The kernel upgrade only serves to make the cluster more stable; since this deployment targets a test or learning environment, upgrading the kernel is unnecessary.
Three.
Generating a deployment configuration file with kubekey
Basically, kubekey works much like kubeadm: you can drive it with a configuration file, writing down how Kubernetes should be deployed and installed, and then handing that file to kubekey.
Following kubekey's official documentation, we use its advanced deployment method, namely the configuration-file approach.
#### Note: the kubekey binary package is best placed on the master node; after unpacking it can be used directly
Generate the configuration file:
./kk create config [--with-kubernetes version] [--with-kubesphere version] [(-f | --filename) path]
Following that usage, run the command below to generate the configuration file 1.22.yaml:
./kk create config --with-kubernetes 1.22.16 -f 1.22.yaml
The content of the file is as follows:
[root@node1 ~]# vim 1.22.yaml
[root@node1 ~]# cat 1.22.yaml
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 172.16.0.2, internalAddress: 172.16.0.2, user: ubuntu, password: "Qcloud@123"}
  - {name: node2, address: 172.16.0.3, internalAddress: 172.16.0.3, user: ubuntu, password: "Qcloud@123"}
  roleGroups:
    etcd:
    - node1
    control-plane:
    - node1
    worker:
    - node1
    - node2
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    # internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.23.10
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []
Obviously, parts of this generated file do not match our environment and need modification. The modified file looks like this:
### Mainly modify the IP addresses, passwords, and CIDRs
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 192.168.123.11, internalAddress: 192.168.123.11, user: root, password: "<password>"}
  - {name: node2, address: 192.168.123.12, internalAddress: 192.168.123.12, user: root, password: "<password>"}
  roleGroups:
    etcd:
    - node1
    control-plane:
    - node1
    worker:
    - node1
    - node2
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    # internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: 1.22.16
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.244.0.0/24
    kubeServiceCIDR: 10.96.0.0/24
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []
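Before deploying, it is worth a quick sanity check that none of the template defaults survived the edit. A simple grep for the sample values works (a hypothetical check, matching the defaults shown in the generated file above):

```shell
# If any of the template's sample values (the 172.16.0.x addresses, the
# sample password, the default Kubernetes version) remain in the file,
# the edit is incomplete.
if grep -nE '172\.16\.0\.|Qcloud@123|v1\.23\.10' 1.22.yaml 2>/dev/null; then
  echo "template defaults still present -- edit 1.22.yaml again"
else
  echo "looks customized"
fi
```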
Four.
Deploying the cluster from the modified configuration file
Before starting the deployment, because of network restrictions we need to set an environment variable so kubekey pulls from domestic (China) mirrors:
export KKZONE=cn
Then start the deployment:
./kk create cluster -f 1.22.yaml
The output of the command looks roughly like this:
#### Note: as long as the pre-check table shows y for the required items, simply enter yes to start the installation
[root@centos1 ~]# ./kk create cluster -f 123.yaml
_ __ _ _ __
| | / / | | | | / /
| |/ / _ _| |__ ___| |/ / ___ _ _
| \| | | | '_ \ / _ \ \ / _ \ | | |
| |\ \ |_| | |_) | __/ |\ \ __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
__/ |
|___/
11:52:54 CST [GreetingsModule] Greetings
11:52:54 CST message: [node2]
Greetings, KubeKey!
11:52:55 CST message: [node1]
Greetings, KubeKey!
11:52:55 CST success: [node2]
11:52:55 CST success: [node1]
11:52:55 CST [NodePreCheckModule] A pre-check on nodes
11:53:01 CST success: [node2]
11:53:01 CST success: [node1]
11:53:01 CST [ConfirmModule] Display confirmation form
+-------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| name | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker | containerd | nfs client | ceph client | glusterfs client | time |
+-------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| node1 | y | y | y | y | y | y | | y | | | | | | | CST 11:53:01 |
| node2 | y | y | y | y | y | y | | y | | | | | | | CST 11:52:55 |
+-------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
This is a simple check of your environment.
Before installation, ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations
Continue this installation? [yes/no]:
After answering yes, kubekey starts downloading components and runs the installation. You can see that kubeadm is downloaded, along with many helper scripts:
11:54:24 CST success: [LocalHost]
11:54:24 CST [NodeBinariesModule] Download installation binaries
11:54:24 CST message: [localhost]
downloading amd64 kubeadm v1.22.16 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 43.7M 100 43.7M 0 0 998k 0 0:00:44 0:00:44 --:--:-- 1031k
11:55:09 CST message: [localhost]
downloading amd64 kubelet v1.22.16 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 115M 100 115M 0 0 1017k 0 0:01:56 0:01:56 --:--:-- 1078k
11:57:06 CST message: [localhost]
downloading amd64 kubectl v1.22.16 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 44.7M 100 44.7M 0 0 1017k 0 0:00:45 0:00:45 --:--:-- 1151k
11:57:51 CST message: [localhost]
downloading amd64 helm v3.9.0 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 44.0M 100 44.0M 0 0 1012k 0 0:00:44 0:00:44 --:--:-- 1082k
11:58:36 CST message: [localhost]
downloading amd64 kubecni v1.2.0 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 38.6M 100 38.6M 0 0 1008k 0 0:00:39 0:00:39 --:--:-- 1143k
11:59:16 CST message: [localhost]
downloading amd64 crictl v1.24.0 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 13.8M 100 13.8M 0 0 1044k 0 0:00:13 0:00:13 --:--:-- 1154k
11:59:29 CST message: [localhost]
downloading amd64 etcd v3.4.13 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 16.5M 100 16.5M 0 0 1012k 0 0:00:16 0:00:16 --:--:-- 1066k
11:59:46 CST message: [localhost]
downloading amd64 docker 20.10.8 ...
The final output is as follows:
poddisruptionbudget.policy/calico-kube-controllers created
12:05:00 CST success: [node1]
12:05:00 CST [ConfigureKubernetesModule] Configure kubernetes
12:05:00 CST success: [node1]
12:05:00 CST [ChownModule] Chown user $HOME/.kube dir
12:05:00 CST success: [node2]
12:05:00 CST success: [node1]
12:05:00 CST [AutoRenewCertsModule] Generate k8s certs renew script
12:05:01 CST success: [node1]
12:05:01 CST [AutoRenewCertsModule] Generate k8s certs renew service
12:05:02 CST success: [node1]
12:05:02 CST [AutoRenewCertsModule] Generate k8s certs renew timer
12:05:02 CST success: [node1]
12:05:02 CST [AutoRenewCertsModule] Enable k8s certs renew service
12:05:03 CST success: [node1]
12:05:03 CST [SaveKubeConfigModule] Save kube config as a configmap
12:05:03 CST success: [LocalHost]
12:05:03 CST [AddonsModule] Install addons
12:05:03 CST success: [LocalHost]
12:05:03 CST Pipeline[CreateClusterPipeline] execute successfully
Installation is complete.
Please check the result using the command:
kubectl get pod -A
Very simple: after about 10 minutes of waiting, a usable Kubernetes 1.22.16 cluster is deployed.
Five.
Summary:
What exactly does kubekey do during a Kubernetes cluster installation? And does the resulting cluster have any flaws?
1.
kubekey mainly downloads the binary Kubernetes components, such as kubelet, kubeadm, and helm, plus various scripts and configuration files.
They land under the kubekey directory:
[root@node1 kubekey]# ll
total 12
drwxr-xr-x. 3 root root 20 Jul 16 11:58 cni
-rw-r--r--. 1 root root 5667 Jul 16 12:04 config-sample
drwxr-xr-x. 3 root root 21 Jul 16 11:59 crictl
drwxr-xr-x. 3 root root 21 Jul 16 11:59 docker
drwxr-xr-x. 3 root root 21 Jul 16 11:59 etcd
drwxr-xr-x. 3 root root 20 Jul 16 11:57 helm
drwxr-xr-x. 3 root root 22 Jul 16 11:54 kube
drwxr-xr-x. 2 root root 53 Jul 16 11:52 logs
drwxr-xr-x. 2 root root 4096 Jul 16 12:37 node1
drwxr-xr-x. 2 root root 137 Jul 16 12:04 node2
drwxr-xr-x. 3 root root 18 Jul 16 12:03 pki
More installation details live in the log files under the logs directory; interested readers can dig in. The script that initializes the system is particularly worth reading.
[root@node1 node1]# ls
10-kubeadm.conf backup-etcd.timer daemon.json etcd-backup.sh etcd.service k8s-certs-renew.service k8s-certs-renew.timer kubelet.service nodelocaldnsConfigmap.yaml
backup-etcd.service coredns-svc.yaml docker.service etcd.env initOS.sh k8s-certs-renew.sh kubeadm-config.yaml network-plugin.yaml nodelocaldns.yaml
[root@node1 node1]# pwd
/root/kubekey/node1
As you can see, the initialization script turns off the firewall and selinux and performs kernel tuning:
[root@node1 node1]# cat initOS.sh
#!/usr/bin/env bash
# Copyright 2020 The KubeSphere Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
swapoff -a
sed -i /^[^#]*swap*/s/^/\#/g /etc/fstab
# See https://github.com/kubernetes/website/issues/14457
if [ -f /etc/selinux/config ]; then
  sed -ri 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
fi
# for ubuntu: sudo apt install selinux-utils
# for centos: yum install selinux-policy
if command -v setenforce &> /dev/null
then
  setenforce 0
  getenforce
fi
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-arptables = 1' >> /etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-ip6tables = 1' >> /etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-iptables = 1' >> /etc/sysctl.conf
echo 'net.ipv4.ip_local_reserved_ports = 30000-32767' >> /etc/sysctl.conf
echo 'vm.max_map_count = 262144' >> /etc/sysctl.conf
echo 'vm.swappiness = 1' >> /etc/sysctl.conf
echo 'fs.inotify.max_user_instances = 524288' >> /etc/sysctl.conf
echo 'kernel.pid_max = 65535' >> /etc/sysctl.conf
#See https://imroc.io/posts/kubernetes/troubleshooting-with-kubernetes-network/
sed -r -i "s@#{0,}?net.ipv4.tcp_tw_recycle ?= ?(0|1)@net.ipv4.tcp_tw_recycle = 0@g" /etc/sysctl.conf
sed -r -i "s@#{0,}?net.ipv4.ip_forward ?= ?(0|1)@net.ipv4.ip_forward = 1@g" /etc/sysctl.conf
sed -r -i "s@#{0,}?net.bridge.bridge-nf-call-arptables ?= ?(0|1)@net.bridge.bridge-nf-call-arptables = 1@g" /etc/sysctl.conf
sed -r -i "s@#{0,}?net.bridge.bridge-nf-call-ip6tables ?= ?(0|1)@net.bridge.bridge-nf-call-ip6tables = 1@g" /etc/sysctl.conf
sed -r -i "s@#{0,}?net.bridge.bridge-nf-call-iptables ?= ?(0|1)@net.bridge.bridge-nf-call-iptables = 1@g" /etc/sysctl.conf
sed -r -i "s@#{0,}?net.ipv4.ip_local_reserved_ports ?= ?([0-9]{1,}-{0,1},{0,1}){1,}@net.ipv4.ip_local_reserved_ports = 30000-32767@g" /etc/sysctl.conf
sed -r -i "s@#{0,}?vm.max_map_count ?= ?([0-9]{1,})@vm.max_map_count = 262144@g" /etc/sysctl.conf
sed -r -i "s@#{0,}?vm.swappiness ?= ?([0-9]{1,})@vm.swappiness = 1@g" /etc/sysctl.conf
sed -r -i "s@#{0,}?fs.inotify.max_user_instances ?= ?([0-9]{1,})@fs.inotify.max_user_instances = 524288@g" /etc/sysctl.conf
sed -r -i "s@#{0,}?kernel.pid_max ?= ?([0-9]{1,})@kernel.pid_max = 65535@g" /etc/sysctl.conf
tmpfile="$$.tmp"
awk ' !x[$0]++{print > "'$tmpfile'"}' /etc/sysctl.conf
mv $tmpfile /etc/sysctl.conf
systemctl stop firewalld 1>/dev/null 2>/dev/null
systemctl disable firewalld 1>/dev/null 2>/dev/null
systemctl stop ufw 1>/dev/null 2>/dev/null
systemctl disable ufw 1>/dev/null 2>/dev/null
modinfo br_netfilter > /dev/null 2>&1
if [ $? -eq 0 ]; then
  modprobe br_netfilter
  mkdir -p /etc/modules-load.d
  echo 'br_netfilter' > /etc/modules-load.d/kubekey-br_netfilter.conf
fi
modinfo overlay > /dev/null 2>&1
if [ $? -eq 0 ]; then
  modprobe overlay
  echo 'overlay' >> /etc/modules-load.d/kubekey-br_netfilter.conf
fi
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
cat > /etc/modules-load.d/kube_proxy-ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
EOF
modprobe nf_conntrack_ipv4 1>/dev/null 2>/dev/null
if [ $? -eq 0 ]; then
  echo 'nf_conntrack_ipv4' > /etc/modules-load.d/kube_proxy-ipvs.conf
else
  modprobe nf_conntrack
  echo 'nf_conntrack' > /etc/modules-load.d/kube_proxy-ipvs.conf
fi
sysctl -p
sed -i ':a;$!{N;ba};s@# kubekey hosts BEGIN.*# kubekey hosts END@@' /etc/hosts
sed -i '/^$/N;/\n$/N;//D' /etc/hosts
cat >>/etc/hosts<<EOF
# kubekey hosts BEGIN
192.168.123.11 node1.cluster.local node1
192.168.123.12 node2.cluster.local node2
192.168.123.11 lb.kubesphere.local
# kubekey hosts END
EOF
echo 3 > /proc/sys/vm/drop_caches
# Make sure the iptables utility doesn't use the nftables backend.
update-alternatives --set iptables /usr/sbin/iptables-legacy >/dev/null 2>&1 || true
update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy >/dev/null 2>&1 || true
update-alternatives --set arptables /usr/sbin/arptables-legacy >/dev/null 2>&1 || true
update-alternatives --set ebtables /usr/sbin/ebtables-legacy >/dev/null 2>&1 || true
ulimit -u 65535
ulimit -n 65535
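One subtle step above is the awk one-liner that de-duplicates /etc/sysctl.conf after all the echo and sed edits. A related pipeline (a sketch, not part of kubekey) lets you verify its effect by listing any sysctl key that still appears more than once:

```shell
# List sysctl keys that occur more than once; empty output means the dedup
# pass in initOS.sh did its job. The pipeline is harmless if the file is
# absent, thanks to the 2>/dev/null.
grep -v '^[[:space:]]*#' /etc/sysctl.conf 2>/dev/null \
  | sed 's/[[:space:]]*=[[:space:]]*/=/' \
  | cut -d= -f1 | sort | uniq -d
```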
2.
What is unreasonable about the Kubernetes cluster kubekey installs by default?
In my view the handling of the etcd component is the weak point: only a single etcd instance is deployed, so such a cluster cannot be used in production. Although etcd runs externally (as a systemd service outside the cluster), it is not itself a cluster, so overall stability cannot be guaranteed.
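For comparison, turning etcd into a real cluster is, at the configuration level, a matter of listing more members under roleGroups, something like this sketch (node names are assumptions; the corresponding hosts must also be declared under spec.hosts):

```yaml
  roleGroups:
    etcd:        # three members tolerate the loss of one (the usual quorum rule)
    - node1
    - node2
    - node3
```

An even number of etcd members buys no extra fault tolerance, so 3 or 5 is the usual choice.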
[root@node1 node1]# kubectl get po -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-69cfcfdf6c-2dl2z 1/1 Running 0 41m
kube-system calico-node-8l2vk 1/1 Running 0 41m
kube-system calico-node-plbbn 1/1 Running 0 41m
kube-system coredns-5495dd7c88-7746t 1/1 Running 0 42m
kube-system coredns-5495dd7c88-gzxl2 1/1 Running 0 42m
kube-system kube-apiserver-node1 1/1 Running 0 42m
kube-system kube-controller-manager-node1 1/1 Running 0 42m
kube-system kube-proxy-ld97n 1/1 Running 0 42m
kube-system kube-proxy-q7zzm 1/1 Running 0 41m
kube-system kube-scheduler-node1 1/1 Running 0 42m
kube-system nodelocaldns-9l8lf 1/1 Running 0 42m
kube-system nodelocaldns-hw4tn 1/1 Running 0 41m
[root@node1 node1]# systemctl status etcd
● etcd.service - etcd
Loaded: loaded (/etc/systemd/system/etcd.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2023-07-16 12:03:33 CST; 43min ago
Main PID: 4872 (etcd)
Tasks: 15
Memory: 50.6M
CGroup: /system.slice/etcd.service
└─4872 /usr/local/bin/etcd
Jul 16 12:24:24 node1 etcd[4872]: store.index: compact 1888
Jul 16 12:24:24 node1 etcd[4872]: finished scheduled compaction at 1888 (took 400.469µs)
Jul 16 12:29:24 node1 etcd[4872]: store.index: compact 2279
Jul 16 12:29:24 node1 etcd[4872]: finished scheduled compaction at 2279 (took 351.415µs)
Jul 16 12:34:24 node1 etcd[4872]: store.index: compact 2672
Jul 16 12:34:24 node1 etcd[4872]: finished scheduled compaction at 2672 (took 403.899µs)
Jul 16 12:39:24 node1 etcd[4872]: store.index: compact 3063
Jul 16 12:39:24 node1 etcd[4872]: finished scheduled compaction at 3063 (took 355.549µs)
Jul 16 12:44:24 node1 etcd[4872]: store.index: compact 3455
Jul 16 12:44:24 node1 etcd[4872]: finished scheduled compaction at 3455 (took 346.379µs)
Everywhere else, kubekey performs essentially flawlessly (it also supports installing highly available Kubernetes clusters, though this example does not use that feature).
The next article will describe how to use kubekey to deploy a highly available Kubernetes cluster and fix the etcd problem above.