Note: This article comes from Teacher Li Zhenliang.
"Deploy a Complete Enterprise K8s Cluster"
v1.20 , binary mode
Author | Li Zhenliang (A Liang), WeChat: xyz12366699, DevOps Practical Academy
Notes | Please credit the author when reprinting; unethical use of this document is not permitted.
Last updated | 2021-04-06
Table of contents
1. Pre-knowledge points
1.1 Two ways to deploy K8s clusters in the production environment
1.2 Prepare the environment
1.3 Operating system initialization configuration
2. Deploy Etcd cluster
2.1 Prepare the cfssl certificate generation tool
2.2 Generate Etcd certificates
2.3 Download binaries from GitHub
2.4 Deploy Etcd cluster
3. Install Docker
3.1 Decompress the binary package
3.2 systemd manages Docker
3.3 Create a configuration file
3.4 Start and set to start at boot
4. Deploy Master Node
4.1 Generate kube-apiserver certificates
4.2 Download binaries from GitHub
4.3 Decompress the binary package
4.4 Deploy kube-apiserver
4.5 Deploy kube-controller-manager
4.6 Deploy kube-scheduler
5. Deploy Worker Node
5.1 Create a working directory and copy binary files
5.2 Deploy kubelet
5.3 Approve the kubelet certificate request and join the cluster
5.4 Deploy kube-proxy
5.5 Deploy network components
5.6 Authorize apiserver to access kubelet
5.7 Add Worker Node
6. Deploy Dashboard and CoreDNS
6.1 Deploy Dashboard
6.2 Deploy CoreDNS
7. Multi-Master expansion (high availability architecture)
7.1 Deploy Master2 Node
7.2 Deploy Nginx + Keepalived high availability load balancer
7.3 Modify all Worker Nodes to connect to the LB VIP
1. Pre-knowledge points
1.1 Two ways to deploy K8s cluster in production environment
- kubeadm
Kubeadm is a K8s deployment tool that provides kubeadm init and kubeadm join for rapid deployment of Kubernetes clusters.
- binary package
Download the release binary packages from GitHub and manually deploy each component to form a Kubernetes cluster.
Summary: kubeadm lowers the deployment threshold but hides many details, which makes troubleshooting harder. If you want something more transparent and controllable, deploying Kubernetes from binary packages is recommended. Manual deployment is more work, but you learn a lot about how the components fit together along the way, which also helps with later maintenance. (A minimal kubeadm sketch follows for comparison.)
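For comparison only, the kubeadm path mentioned above usually comes down to two commands. The addresses, token, and hash below are illustrative placeholders, not part of this deployment:
# On the first master (illustrative only; this document uses the binary method)
kubeadm init --apiserver-advertise-address=<MASTER_IP> --pod-network-cidr=10.244.0.0/16
# On each worker, using the token printed by kubeadm init
kubeadm join <MASTER_IP>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>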
1.2 Prepare the environment
Server requirements:
- Recommended minimum hardware configuration: 2 CPU cores, 2 GB RAM, 30 GB disk
- The servers should ideally have Internet access, since images need to be pulled from the Internet; if they cannot reach the Internet, download the required images in advance and import them onto the nodes
Software Environment:
Software | Version
Operating system | CentOS 7.x_x64 (minimal)
Container engine | Docker CE 19
Kubernetes | Kubernetes v1.20
Overall planning of the server:
Role | IP | Components
k8s-master01 | 192.168.31.71 | kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, docker, etcd, nginx, keepalived
k8s-master2 | 192.168.31.74 | kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, docker, nginx, keepalived
k8s-node1 | 192.168.31.72 | kubelet, kube-proxy, docker, etcd
k8s-node2 | 192.168.31.73 | kubelet, kube-proxy, docker, etcd
Load balancer IP | 192.168.31.88 (VIP) |
Note: considering that some readers' machines cannot run four VMs at once, this high-availability cluster is built in two stages. First deploy a single-Master architecture (3 machines), then expand it into a multi-Master architecture (4 or more machines), which also walks you through the Master scale-out process.
Single Master architecture diagram:
Single Master server planning:
Role | IP | Components
k8s-master | 192.168.31.71 | kube-apiserver, kube-controller-manager, kube-scheduler, etcd
k8s-node1 | 192.168.31.72 | kubelet, kube-proxy, docker, etcd
k8s-node2 | 192.168.31.73 | kubelet, kube-proxy, docker, etcd
1.3 Operating system initialization configuration
Run the following operations on all three servers.
# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable selinux
sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary

# Disable swap
swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab  # permanent

# Set the hostname according to the plan
hostnamectl set-hostname <hostname>

# Add hosts entries on the master
cat >> /etc/hosts << EOF
192.168.16.80 k8s-master01
192.168.16.81 k8s-master02
192.168.16.82 k8s-node01
192.168.16.83 k8s-node02
EOF

# Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system  # take effect

# Time synchronization
yum install ntpdate -y
ntpdate time.windows.com
ntpdate cn.pool.ntp.org
# cron entry that keeps time in sync (shown via crontab -l)
crontab -l
0 5 * * * /usr/sbin/ntpdate -u cn.pool.ntp.org
2. Deploy Etcd cluster
Etcd is a distributed key-value store. Kubernetes uses Etcd as its data store, so prepare an Etcd database first. To avoid a single point of failure, Etcd should be deployed as a cluster: here 3 machines form a cluster that can tolerate 1 machine failure; 5 machines would tolerate 2 failures (see the quorum example after the node list below).
Node name | IP
etcd-1 | 192.168.31.71
etcd-2 | 192.168.31.72
etcd-3 | 192.168.31.73
Note: to save machines, etcd is co-located here with the K8s node machines. It can also be deployed independently of the K8s cluster, as long as the apiserver can reach it.
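The fault tolerance above follows from etcd's majority (quorum) rule: a cluster of n members needs floor(n/2)+1 members alive to keep serving. A quick worked example:
Cluster size | Quorum | Failures tolerated
1 | 1 | 0
3 | 2 | 1
5 | 3 | 2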
2.1 Prepare the cfssl certificate generation tool
cfssl is an open source certificate management tool that uses json files to generate certificates, which is more convenient to use than openssl.
Find any server to operate, here use the Master node.
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
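Optionally verify the tools are in place (a quick sanity check; version output will vary):
cfssl version
ls -l /usr/local/bin/cfssl* /usr/bin/cfssl-certinfo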
2.2 Generate Etcd certificate
1. Self-signed certificate authority (CA)
Create a working directory:
mkdir -p ~/TLS/{etcd,k8s}
cd ~/TLS/etcd
Self-signed CA:
cat > ca-config.json << EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"www": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
cat > ca-csr.json << EOF
{
"CN": "etcd CA",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing"
}
]
}
EOF
Generate the certificate:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
The ca.pem and ca-key.pem files will be generated.
2. Use self-signed CA to issue Etcd HTTPS certificate
Create a certificate request file:
cat > server-csr.json << EOF
{
"CN": "etcd",
"hosts": [
"192.168.16.80",
"192.168.16.81",
"192.168.16.82",
"192.168.16.83",
"192.168.16.90"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing"
}
]
}
EOF
Note: the IPs in the hosts field above are the internal communication IPs of all etcd nodes; none of them may be missing. To make later expansion easier, you can also list a few reserved IPs.
Generate the certificate:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
The server.pem and server-key.pem files will be generated.
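To double-check the SANs baked into the certificate (optional; uses openssl, which ships with CentOS):
openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"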
2.3 Download binaries from Github
Download address: https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz
2.4 Deploy Etcd cluster
The following operations are performed on node 1. To simplify the operation, all files generated by node 1 will be copied to node 2 and node 3 later.
1. Create a working directory and unzip the binary package
mkdir /opt/etcd/{bin,cfg,ssl} -p
tar zxvf etcd-v3.4.9-linux-amd64.tar.gz
mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
2. Create etcd configuration file
cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.31.71:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.31.71:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.31.71:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.31.71:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.31.71:2380,etcd-2=https://192.168.31.72:2380,etcd-3=https://192.168.31.73:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
- ETCD_NAME: node name, unique in the cluster
- ETCD_DATA_DIR: data directory
- ETCD_LISTEN_PEER_URLS: cluster communication listening address
- ETCD_LISTEN_CLIENT_URLS: client access listening address
- ETCD_INITIAL_ADVERTISE_PEER_URLS: peer advertise address for the cluster
- ETCD_ADVERTISE_CLIENT_URLS: client advertise address
- ETCD_INITIAL_CLUSTER: cluster node addresses
- ETCD_INITIAL_CLUSTER_TOKEN: cluster token
- ETCD_INITIAL_CLUSTER_STATE: state when joining the cluster; new for a new cluster, existing to join an existing cluster
3. systemd manages etcd
cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
--logger=zap
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
4. Copy the certificate just generated
Copy the newly generated certificate to the path in the configuration file:
cp ~/TLS/etcd/ca*pem ~/TLS/etcd/server*pem /opt/etcd/ssl/
5. Start and set to start at boot (starting etcd on the first node may block until the other members come up; that is expected)
systemctl start etcd
systemctl enable etcd
6. Copy all the files generated by node 1 above to node 2 and node 3
scp -r /opt/etcd/ [email protected]:/opt/
scp /usr/lib/systemd/system/etcd.service [email protected]:/usr/lib/systemd/system/
scp -r /opt/etcd/ [email protected]:/opt/
scp /usr/lib/systemd/system/etcd.service [email protected]:/usr/lib/systemd/system/
Then modify the node name and current server IP in the etcd.conf configuration file on node 2 and node 3 respectively:
vi /opt/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd-1"   # change here: node 2 uses etcd-2, node 3 uses etcd-3
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.31.71:2380"   # change to the current server IP
ETCD_LISTEN_CLIENT_URLS="https://192.168.31.71:2379"   # change to the current server IP
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.31.71:2380"   # change to the current server IP
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.31.71:2379"   # change to the current server IP
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.31.71:2380,etcd-2=https://192.168.31.72:2380,etcd-3=https://192.168.31.73:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
Finally, start etcd on nodes 2 and 3 and enable it at boot, as above.
7. Check the cluster status
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.16.80:2379,https://192.168.16.82:2379,https://192.168.16.83:2379" endpoint health --write-out=table
+----------------------------+--------+-------------+-------+
| ENDPOINT | HEALTH | TOOK | ERROR |
+----------------------------+--------+-------------+-------+
| https://192.168.31.71:2379 | true | 10.301506ms | |
| https://192.168.31.73:2379 | true | 12.87467ms | |
| https://192.168.31.72:2379 | true | 13.225954ms | |
+----------------------------+--------+-------------+-------+
If the above information is output, it means that the cluster deployment is successful.
If there is a problem, the first step is to look at the logs: /var/log/messages or journalctl -u etcd
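You can also list the cluster members with the same certificate flags (optional check):
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.16.80:2379,https://192.168.16.82:2379,https://192.168.16.83:2379" member list --write-out=table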
3. Install Docker
Here, Docker is used as the container engine, or it can be replaced with another one, such as containerd
Download address: https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz
The following operates on all nodes. Binary installation is used here, and the same is true for installation with yum.
3.1 Decompress the binary package
tar zxvf docker-19.03.9.tgz
[root@k8s-master01 DOCKER]# chown -R root.root docker
[root@k8s-master01 DOCKER]# mv docker/* /usr/bin
3.2 systemd management docker
cat > /usr/lib/systemd/system/docker.service << EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOF
3.3 Create a configuration file
mkdir /etc/docker
cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
- registry-mirrors: Alibaba Cloud image registry mirror (accelerator)
3.4 Start and set to start at boot
systemctl daemon-reload
systemctl start docker
systemctl enable docker
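Optionally confirm Docker is running and the mirror was picked up (a quick check; exact output varies by Docker version):
docker info | grep -A1 "Registry Mirrors"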
4. Deploy Master Node
4.1 Generate kube-apiserver certificate
1. Self-signed certificate authority (CA)
cd ~/TLS/k8s
[root@k8s-master01 k8s]# mkdir kube-apiserver
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
cat > ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
Generate the certificate:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
This generates the ca.pem, ca-key.pem, and ca.csr files.
2. Use self-signed CA to issue kube-apiserver HTTPS certificate
Create a certificate request file:
cat > server-csr.json << EOF
{
"CN": "kubernetes",
"hosts": [
"10.0.0.1",
"127.0.0.1",
"192.168.16.80",
"192.168.16.81",
"192.168.16.82",
"192.168.16.83", "192.168.16.90", "kubernetes", "kubernetes.default", "kubernetes.default.svc", "kubernetes.default.svc.cluster", "kubernetes.default.svc.cluster.local" ], "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "L": "BeiJing", "ST": "BeiJing", "O": "k8s", "OU": "System" } ] } EOF
Note: The IPs in the hosts field of the above file are all Master/LB/VIP IPs, and none of them can be missing! In order to facilitate later expansion, you can write a few more reserved IPs.
Generate the certificate:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
This generates the server.pem, server-key.pem, and server.csr files.
4.2 Download binaries from Github
Download address: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md
Note: Open the link and you will find that there are many packages in it. It is enough to download a server package, which includes the Master and Worker Node binary files.
4.3 Decompress the binary package
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin
cp kubectl /usr/bin/
4.4 Deploy kube-apiserver
1. Create a configuration file
cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--etcd-servers=https://192.168.16.80:2379,https://192.168.16.82:2379,https://192.168.16.83:2379 \\
--bind-address=192.168.16.80 \\
--secure-port=6443 \\
--advertise-address=192.168.16.80 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-32767 \\
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--service-account-issuer=api \\
--service-account-signing-key-file=/opt/kubernetes/ssl/server-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \\
--requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--proxy-client-cert-file=/opt/kubernetes/ssl/server.pem \\
--proxy-client-key-file=/opt/kubernetes/ssl/server-key.pem \\
--requestheader-allowed-names=kubernetes \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-username-headers=X-Remote-User \\
--enable-aggregator-routing=true \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
EOF
Note: in the double backslash \\ above, the first backslash is an escape character and the second is the line-continuation character; the escape is needed so that the line continuation is preserved when the file is written through the EOF heredoc.
- --logtostderr: enable logging
- --v: log level
- --log-dir: log directory
- --etcd-servers: etcd cluster address
- --bind-address: listen address
- --secure-port: https secure port
- --advertise-address: cluster advertisement address
- --allow-privileged: allow privileged containers
- --service-cluster-ip-range: Service virtual IP address segment
- --enable-admission-plugins: admission control module
- --authorization-mode: authorization mode; enable RBAC authorization and Node self-management
- --enable-bootstrap-token-auth: enable TLS bootstrap mechanism
- --token-auth-file: bootstrap token file
- --service-node-port-range: Service nodeport type assigns port range by default
- --kubelet-client-xxx: apiserver access kubelet client certificate
- --tls-xxx-file: apiserver https certificate
- Parameters that must be added in version 1.20: --service-account-issuer, --service-account-signing-key-file
- --etcd-xxxfile: certificates for connecting to the Etcd cluster
- --audit-log-xxx: audit log
- Aggregation layer settings: --requestheader-client-ca-file, --proxy-client-cert-file, --proxy-client-key-file, --requestheader-allowed-names, --requestheader-extra-headers-prefix, --requestheader-group-headers, --requestheader-username-headers, --enable-aggregator-routing
2. Copy the certificate just generated
Copy the newly generated certificate to the path in the configuration file:
cp ~/TLS/k8s/ca*pem ~/TLS/k8s/server*pem /opt/kubernetes/ssl/
cp ~/TLS/k8s/kube-apiserver/ca*pem ~/TLS/k8s/kube-apiserver/server*pem /opt/kubernetes/ssl/
3. Enable TLS Bootstrapping mechanism
TLS Bootstrapping: after the Master apiserver enables TLS authentication, the kubelet and kube-proxy on Node nodes must use valid certificates issued by the CA to communicate with kube-apiserver. When there are many Nodes, issuing these client certificates by hand is a lot of work and also complicates cluster expansion. To simplify this, Kubernetes introduces the TLS bootstrapping mechanism to issue client certificates automatically: the kubelet applies to the apiserver for a certificate as a low-privileged user, and the kubelet certificate is signed dynamically by the apiserver. This approach is therefore strongly recommended on Nodes; currently it is mainly used for the kubelet, while kube-proxy still gets a statically issued certificate.
TLS bootstraping workflow:
Create the token file in the above configuration file:
cat > /opt/kubernetes/cfg/token.csv << EOF
bfe8e191df6b2afaf9cc9e83f78b5d50,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF
Format: token, username, UID, user group
token can also be generated and replaced by itself:
head -c 16 /dev/urandom | od -An -t x | tr -d ' '
bfe8e191df6b2afaf9cc9e83f78b5d50
4. systemd management apiserver
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
5. Start and set to start at boot
systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver
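A quick way to confirm kube-apiserver came up (optional; port and unit name match the configuration above):
systemctl status kube-apiserver
ss -lntp | grep 6443
journalctl -u kube-apiserver -f   # follow the logs if anything looks wrong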
4.5 Deploy kube-controller-manager
1. Create a configuration file
cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect=true \\
--kubeconfig=/opt/kubernetes/cfg/kube-controller-manager.kubeconfig \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--cluster-signing-duration=87600h0m0s"
EOF
- --kubeconfig: connect to the apiserver configuration file
- --leader-elect: Automatic election (HA) when the component starts multiple
- --cluster-signing-cert-file/--cluster-signing-key-file: CA that automatically issues certificates for kubelet, consistent with apiserver
Generate kube-controller-manager certificate:
# Switch to the working directory
cd ~/TLS/k8s

# Create the certificate request file
cat > kube-controller-manager-csr.json << EOF
{
  "CN": "system:kube-controller-manager",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
Generate the kubeconfig file (the following are shell commands, executed directly in the terminal):
KUBE_CONFIG="/opt/kubernetes/cfg/kube-controller-manager.kubeconfig"
KUBE_APISERVER="https://192.168.16.80:6443"
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-controller-manager \
--client-certificate=./kube-controller-manager.pem \
--client-key=./kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-controller-manager \
--kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
3. systemd management controller-manager
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
4. Start and set to start at boot
systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager
4.6 Deploy kube-scheduler
1. Create a configuration file
cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect \\
--kubeconfig=/opt/kubernetes/cfg/kube-scheduler.kubeconfig \\
--bind-address=127.0.0.1"
EOF
- --kubeconfig: connect to the apiserver configuration file
- --leader-elect: Automatic election (HA) when the component starts multiple
Generate kube-scheduler certificate:
# Switch to the working directory
cd ~/TLS/k8s

# Create the certificate request file
cat > kube-scheduler-csr.json << EOF
{
  "CN": "system:kube-scheduler",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
Generate the kubeconfig file (the following are shell commands, executed directly in the terminal):
KUBE_CONFIG="/opt/kubernetes/cfg/kube-scheduler.kubeconfig"
KUBE_APISERVER="https://192.168.16.80:6443"
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-scheduler \
--client-certificate=./kube-scheduler.pem \
--client-key=./kube-scheduler-key.pem \
--embed-certs=true \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-scheduler \
--kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
3. systemd management scheduler
cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
4. Start and set to start at boot
systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler
Generate a certificate for kubectl to connect to the cluster:
cat > admin-csr.json <<EOF
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "system:masters",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
Generate kubeconfig file:
mkdir /root/.kube
KUBE_CONFIG="/root/.kube/config"
KUBE_APISERVER="https://192.168.16.80:6443"
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials cluster-admin \
--client-certificate=/root/TLS/k8s/kube-apiserver/admin.pem \
--client-key=/root/TLS/k8s/kube-apiserver/admin-key.pem \
--embed-certs=true \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
--cluster=kubernetes \
--user=cluster-admin \
--kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
View the current cluster component status through the kubectl tool:
kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-2 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
The above output shows that the Master node components are running normally.
6. Authorize the kubelet-bootstrap user to allow certificate requests
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
If the binding was created with the wrong parameters, delete it and create it again:
kubectl delete clusterrolebindings kubelet-bootstrap
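To confirm the binding exists (optional check):
kubectl get clusterrolebinding kubelet-bootstrap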
5. Deploy Worker Node
The following operations are still performed on the Master node; that is, the Master also acts as a Worker node.
5.1 Create working directory and copy binary files
Create working directories on all worker nodes:
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
Copy from master node:
cd /root/TLS/k8s/kubernetes/server/bin
cp kubelet kube-proxy /opt/kubernetes/bin   # local copy
5.2 Deploy kubelet
1. Create a configuration file
cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=k8s-master01 \\
--network-plugin=cni \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"
EOF
- --hostname-override: display name, unique in the cluster
- --network-plugin: enable CNI
- --kubeconfig: Empty path, will be automatically generated, later used to connect to apiserver
- --bootstrap-kubeconfig: Apply for a certificate to the apiserver at the first startup
- --config: configuration parameter file
- --cert-dir: kubelet certificate generation directory
- --pod-infra-container-image: Manage the image of the Pod network container
2. Configuration parameter file
cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
anonymous:
enabled: false
webhook:
cacheTTL: 2m0s
enabled: true
x509:
clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
mode: Webhook
webhook:
cacheAuthorizedTTL: 5m0s
cacheUnauthorizedTTL: 30s
evictionHard:
imagefs.available: 15%
memory.available: 100Mi
nodefs.available: 10%
nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF
3. Generate the bootstrap kubeconfig file used by the kubelet to join the cluster for the first time
KUBE_CONFIG="/opt/kubernetes/cfg/bootstrap.kubeconfig"
KUBE_APISERVER="https://192.168.16.80:6443"    # apiserver IP:PORT
TOKEN="bfe8e191df6b2afaf9cc9e83f78b5d50"       # must match token.csv

# Generate the kubelet bootstrap kubeconfig file
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials "kubelet-bootstrap" \
  --token=${TOKEN} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user="kubelet-bootstrap" \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
4. systemd manages kubelet
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
5. Start and set to start at boot
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
5.3 Approve the kubelet certificate application and join the cluster
# View the kubelet certificate request
kubectl get csr
NAME                                                   AGE    SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-uCEGPOIiDdlLODKts8J658HrFq9CZ--K6M4G7bjhk8A   6m3s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
# Problem encountered: the request may not show up, e.g.
[root@k8s-master01 server]# kubectl get csr
No resources found
# In that case, check that kubelet started successfully and that the bootstrap token matches token.csv.
# Approve the request
kubectl certificate approve node-csr-uCEGPOIiDdlLODKts8J658HrFq9CZ--K6M4G7bjhk8A

# View nodes
kubectl get node
NAME           STATUS     ROLES    AGE   VERSION
k8s-master01   NotReady   <none>   7s    v1.18.3
Note: Since the network plugin has not been deployed, the node will not be ready NotReady
5.4 Deploy kube-proxy
1. Create a configuration file
cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
EOF
2. Configuration parameter file
cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-master01
clusterCIDR: 10.0.0.0/24
EOF
3. Generate kube-proxy.kubeconfig file
# Switch to the working directory
cd ~/TLS/k8s

# Create the certificate request file
cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
Generate kubeconfig file:
KUBE_CONFIG="/opt/kubernetes/cfg/kube-proxy.kubeconfig"
KUBE_APISERVER="https://192.168.16.80:6443"
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-proxy \
--client-certificate=./kube-proxy.pem \
--client-key=./kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
4. systemd manages kube-proxy
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
5. Start and set to start at boot
systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy
5.5 Deploying network components
Calico is a pure Layer 3 data center networking solution and is currently one of the mainstream network options for Kubernetes.
Deploy Calico:
[root@k8s-master01 opt]# mkdir calico
[root@k8s-master01 calico]# wget https://docs.projectcalico.org/manifests/calico-etcd.yaml
Download the required mirror in the configuration file:
[root@k8s-master01 calico]# docker pull docker.io/calico/cni:v3.19.1
[root@k8s-master01 calico]# docker pull docker.io/calico/pod2daemon-flexvol:v3.19.1
[root@k8s-master01 calico]# docker pull docker.io/calico/node:v3.19.1
[root@k8s-master01 calico]# docker pull docker.io/calico/kube-controllers:v3.19.1
If the images are not prepared in advance, the nodes are very likely to fail to pull them and the installation will fail, so do not skip this step. An offline alternative using docker save/load is sketched below.
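If you would rather not run a private registry (covered in the next subsection), one simple alternative is to export the images on a machine with Internet access and import them on every node. A minimal sketch; the file name and node address are illustrative:
# On the machine that pulled the images
docker save calico/cni:v3.19.1 calico/pod2daemon-flexvol:v3.19.1 calico/node:v3.19.1 calico/kube-controllers:v3.19.1 -o calico-v3.19.1.tar
scp calico-v3.19.1.tar [email protected]:/root/

# On each node
docker load -i calico-v3.19.1.tar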
Deploy a private Docker registry
Reference: building a local private Docker registry (Fat Peter Pan, cnblogs)
Create a directory on the host to store container images:
mkdir -p /opt/data/registry
Download and start a registry container
[root@k8s-master01 ~]# docker search registry
[root@k8s-master01 DOCKER]# docker pull registry
[root@k8s-master01 DOCKER]# docker run -itd -p 5000:5000 -v /opt/data/registry:/var/lib/registry --name private_registry registry
ab2b50736d5110b5483d7adf5e0e540000788d397bd4c3f3af11bb213b961699
Configure http access
[root@k8s-master01 ~]# cat /etc/docker/daemon.json
{
"registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
"insecure-registries": ["192.168.16.80:5000"]
}
Restart docker, restart registry service
[root@k8s-master01 calico]# systemctl restart docker
[root@k8s-master01 calico]# docker restart private_registry
private_registry
[root@k8s-master01 calico]# docker ps   # confirm the registry container is running
Tag the images for the private registry
docker tag calico/node:v3.19.1 192.168.16.80:5000/calico/node:v3.19.1
docker tag calico/pod2daemon-flexvol:v3.19.1 192.168.16.80:5000/calico/pod2daemon-flexvol:v3.19.1
docker tag calico/cni:v3.19.1 192.168.16.80:5000/calico/cni:v3.19.1
docker tag calico/kube-controllers:v3.19.1 192.168.16.80:5000/calico/kube-controllers:v3.19.1
Push them to the registry
docker push 192.168.16.80:5000/calico/node:v3.19.1
docker push 192.168.16.80:5000/calico/pod2daemon-flexvol:v3.19.1
docker push 192.168.16.80:5000/calico/cni:v3.19.1
docker push 192.168.16.80:5000/calico/kube-controllers:v3.19.1
View the images in the local registry
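The registry's HTTP API can list what was pushed (standard Docker Registry v2 endpoints):
curl http://192.168.16.80:5000/v2/_catalog
curl http://192.168.16.80:5000/v2/calico/node/tags/list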
Other machines can then pull images from the local registry at 192.168.16.80:5000.
Modify the Docker startup script on the other machines:
/usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd --insecure-registry 192.168.16.80:5000
Then reload and restart Docker and pull the images from the local registry:
[root@k8s-node01 ~]# systemctl daemon-reload
[root@k8s-node01 ~]# systemctl restart docker
[root@k8s-node01 ~]# docker pull 192.168.16.80:5000/calico/node:v3.19.1
Modify the configuration file calico-etcd.yaml
① Replace the image in the configuration file /opt/calico/calico-etcd.yaml with the local image address
② Modify etcd database address and certificate authentication
[root@k8s-master01 calico]# mkdir -p /opt/calico/calicoTSL
[root@k8s-master01 calico]# ll
-rw-r--r-- 1 root root 19068 Jun 8 14:29 calico-etcd.yaml
drwxr-xr-x 2 root root 6 Jun 8 17:08 calicoTSL
Copy the authentication file in /opt/etcd/ssl/ to /opt/calico/calicoTSL
[root@k8s-master01 calicoTSL]# cp /opt/etcd/ssl/ca.pem etcd-ca
[root@k8s-master01 calicoTSL]# cp /opt/etcd/ssl/server-key.pem etcd-key
[root@k8s-master01 calicoTSL]# cp /opt/etcd/ssl/server.pem etcd-cert
③ Modify the cluster Pod network segment (CALICO_IPV4POOL_CIDR should match the --cluster-cidr=10.244.0.0/16 configured in kube-controller-manager, not the Service range --service-cluster-ip-range=10.0.0.0/24)
- name: CALICO_IPV4POOL_CIDR
value: "10.244.0.0/16"
④ Configure the calico environment variable
env:
# The address of the Kubernetes API server, used by the Calico components.
- name: KUBERNETES_SERVICE_HOST
value: "192.168.16.80"
- name: KUBERNETES_SERVICE_PORT
value: "6443"
- name: KUBERNETES_SERVICE_PORT_HTTPS
value: "6443"
kubectl apply -f calico-etcd.yaml
kubectl get pods -n kube-system
When the Calico Pods are running, the node will be ready:
kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master Ready <none> 37m v1.20.4
5.6 Authorize apiserver to access kubelet
Application scenarios: such as kubectl logs
cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: system:kube-apiserver-to-kubelet
rules:
- apiGroups:
- ""
resources:
- nodes/proxy
- nodes/stats
- nodes/log
- nodes/spec
- nodes/metrics
- pods/log
verbs:
- "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system:kube-apiserver
namespace: ""
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:kube-apiserver-to-kubelet
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: kubernetes
EOF
kubectl apply -f apiserver-to-kubelet-rbac.yaml
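After applying the binding, commands that proxy through the apiserver to the kubelet (kubectl logs, kubectl exec, etc.) should work. A quick check against the calico-node Pods; the label selector assumes the labels used by the calico-etcd.yaml manifest, so adjust it if yours differ:
kubectl logs -n kube-system -l k8s-app=calico-node --tail=20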
5.7 Add Worker Node
1. Copy the deployed Node related files to the new node
Copy the Worker Node related files to the new node 192.168.31.72/73 on the Master node
scp -r /opt/kubernetes [email protected]:/opt/
scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service [email protected]:/usr/lib/systemd/system
scp /opt/kubernetes/ssl/ca.pem [email protected]:/opt/kubernetes/ssl
2. Delete the kubelet certificate and kubeconfig file
rm -f /opt/kubernetes/cfg/kubelet.kubeconfig
rm -f /opt/kubernetes/ssl/kubelet*
Note: these files are generated automatically after the certificate request is approved; they are unique to each Node, so they must be deleted before copying.
3. Modify the hostname
vi /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-node1
vi /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-node1
4. Start and set to start at boot
systemctl daemon-reload
systemctl start kubelet kube-proxy
systemctl enable kubelet kube-proxy
5. Approve the new Node kubelet certificate application on the Master
# View the certificate request
kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-4zTjsaVSrhuyhIGqsefxzVoZDCNKei-aE2jyTP81Uro   89s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

# Approve the request
kubectl certificate approve node-csr-4zTjsaVSrhuyhIGqsefxzVoZDCNKei-aE2jyTP81Uro
kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready <none> 47m v1.20.4
k8s-node1 Ready <none> 6m49s v1.20.4
Node2 (192.168.31.73) is the same as above. Remember to modify the hostname!
6. Deploy Dashboard and CoreDNS
6.1 Deploy Dashboard
kubectl apply -f kubernetes-dashboard.yaml
#View deployment
kubectl get pods,svc -n kubernetes-dashboard
Access address: https://NodeIP:30001
Create a service account and bind the default cluster-admin administrator cluster role:
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Use the output token to log in to Dashboard.
6.2 Deploy CoreDNS
CoreDNS is used for Service name resolution within the cluster.
kubectl apply -f coredns.yaml
kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-5ffbfd976d-j6shb 1/1 Running 0 32s
DNS resolution test:
kubectl run -it --rm dns-test --image=busybox:1.28.4 sh
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes
Server: 10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local
Parsing is fine.
So far, a single-Master cluster has been built! This environment is enough for learning and experimentation. If your servers have enough capacity, you can continue and expand it into a multi-Master cluster.
7. Multi-Master expansion (high availability architecture)
As a container cluster system, Kubernetes achieves Pod failure self-healing through health checks plus restart policies, distributes Pods across nodes through its scheduling algorithm while maintaining the desired number of replicas, and automatically reschedules Pods onto other Nodes when a Node fails, providing high availability at the application layer.
For Kubernetes clusters, high availability should also include the following two considerations: high availability of Etcd database and high availability of Kubernetes Master components. For Etcd, we have used three nodes to form a cluster to achieve high availability. This section will describe and implement the high availability of the Master node.
The Master node plays the role of the master control center, and maintains the healthy working status of the entire cluster by constantly communicating with Kubelet and kube-proxy on the worker nodes. If the Master node fails, you will not be able to use the kubectl tool or API for any cluster management.
The Master node runs three main services: kube-apiserver, kube-controller-manager, and kube-scheduler. kube-controller-manager and kube-scheduler already achieve high availability through their leader-election mechanism, so Master high availability is mainly about kube-apiserver. Since kube-apiserver serves an HTTP API, its high availability works like that of an ordinary web server: put a load balancer in front of it to balance the load, and it can be scaled horizontally.
Multi-Master architecture diagram:
7.1 Deploy Master2 Node
Now you need to add a new server, as Master2 Node, IP is 192.168.31.74.
In order to save resources, you can also reuse the previously deployed Worker Node1 as the Master2 Node role (that is, deploy the Master component)
All operations of Master2 are consistent with those of the deployed Master1. So we only need to copy all the K8s files of Master1, and then modify the server IP and host name to start.
1. Install Docker
scp /usr/bin/docker* [email protected]:/usr/bin
scp /usr/bin/runc [email protected]:/usr/bin
scp /usr/bin/containerd* [email protected]:/usr/bin
scp /usr/lib/systemd/system/docker.service [email protected]:/usr/lib/systemd/system
scp -r /etc/docker [email protected]:/etc

# Start Docker on Master2
systemctl daemon-reload
systemctl start docker
systemctl enable docker
2. Create etcd certificate directory
Create etcd certificate directory on Master2:
mkdir -p /opt/etcd/ssl
3. Copy files (Master1 operation)
Copy all K8s files and etcd certificates on Master1 to Master2:
scp -r /opt/kubernetes [email protected]:/opt
scp -r /opt/etcd/ssl [email protected]:/opt/etcd
scp /usr/lib/systemd/system/kube* [email protected]:/usr/lib/systemd/system
scp /usr/bin/kubectl [email protected]:/usr/bin
scp -r ~/.kube [email protected]:~
4. Delete the certificate file
Delete the kubelet certificate and kubeconfig file:
rm -f /opt/kubernetes/cfg/kubelet.kubeconfig
rm -f /opt/kubernetes/ssl/kubelet*
5. Modify configuration file IP and hostname
Modify the apiserver, kubelet and kube-proxy configuration files to local IP:
vi /opt/kubernetes/cfg/kube-apiserver.conf
...
--bind-address=192.168.31.74 \
--advertise-address=192.168.31.74 \
...
vi /opt/kubernetes/cfg/kube-controller-manager.kubeconfig
server: https://192.168.31.74:6443
vi /opt/kubernetes/cfg/kube-scheduler.kubeconfig
server: https://192.168.31.74:6443
vi /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-master2
vi /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-master2
vi ~/.kube/config
...
server: https://192.168.31.74:6443
6. Start and set to start at boot
systemctl daemon-reload
systemctl start kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy
systemctl enable kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy
7. View the cluster status
kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-1 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
8. Approve the kubelet certificate request
# View the certificate request
kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-JYNknakEa_YpHz797oKaN-ZTk43nD51Zc9CJkBLcASU   85m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

# Approve the request
kubectl certificate approve node-csr-JYNknakEa_YpHz797oKaN-ZTk43nD51Zc9CJkBLcASU

# View Nodes
kubectl get node
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    <none>   34h   v1.20.4
k8s-master2    Ready    <none>   2m    v1.20.4
k8s-node1      Ready    <none>   33h   v1.20.4
k8s-node2      Ready    <none>   33h   v1.20.4
7.2 Deploy Nginx+Keepalived High Availability Load Balancer
kube-apiserver high availability architecture diagram:
- Nginx is a mainstream web server and reverse proxy; here its Layer 4 (stream) module is used to load balance the apiservers.
- Keepalived is a mainstream high-availability tool that implements an active/standby server pair by binding a VIP. In the topology above, Keepalived decides whether to fail over (move the VIP) based on the Nginx running state: for example, when the Nginx master node goes down, the VIP is automatically bound to the Nginx backup node, so the VIP stays reachable and Nginx remains highly available.
Note 1: to save machines, Nginx/Keepalived are co-located here with the K8s Master nodes. They can also be deployed outside the K8s cluster, as long as nginx can reach the apiservers.
Note 2: if you are on a public cloud, Keepalived is generally not supported; in that case use the cloud provider's load balancer product to balance the Master kube-apiservers directly. The architecture is otherwise the same.
Perform the following on both Master nodes.
1. Install the packages (master/backup)
yum install epel-release -y
yum install nginx keepalived -y
2. Nginx configuration file (same for master and backup)
cat > /etc/nginx/nginx.conf << "EOF"
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

# Layer 4 load balancing for the two Master apiserver components
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        server 192.168.31.71:6443;   # Master1 APISERVER IP:PORT
        server 192.168.31.74:6443;   # Master2 APISERVER IP:PORT
    }

    server {
        listen 16443;   # Since nginx is co-located with the master node, it cannot listen on 6443, otherwise it would conflict
        proxy_pass k8s-apiserver;
    }
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    server {
        listen 80 default_server;
        server_name _;

        location / {
        }
    }
}
EOF
3. Keepalived configuration file (Nginx Master)
cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
   notification_email {
     [email protected]
     [email protected]
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33            # change to the actual NIC name
    virtual_router_id 51       # VRRP route ID instance, unique per instance
    priority 100               # priority; set to 90 on the backup server
    advert_int 1               # VRRP heartbeat advertisement interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # virtual IP
    virtual_ipaddress {
        192.168.31.88/24
    }
    track_script {
        check_nginx
    }
}
EOF
- vrrp_script: specifies the script to check the working status of nginx (judging whether to failover according to the status of nginx)
- virtual_ipaddress: Virtual IP (VIP)
Prepare the script to check the running status of nginx in the above configuration file:
cat > /etc/keepalived/check_nginx.sh << "EOF"
#!/bin/bash
count=$(ss -antp |grep 16443 |egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
exit 1
else
exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh
4. Keepalived configuration file (Nginx Backup)
cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
   notification_email {
     [email protected]
     [email protected]
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_BACKUP
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51       # VRRP route ID instance, unique per instance
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.31.88/24
    }
    track_script {
        check_nginx
    }
}
EOF
Prepare the script to check the running status of nginx in the above configuration file:
cat > /etc/keepalived/check_nginx.sh << "EOF"
#!/bin/bash
count=$(ss -antp |grep 16443 |egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
exit 1
else
exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh
Note: keepalived judges whether to fail over according to the status code returned by the script (0 means normal work, non-0 abnormal).
5. Start and set to start at boot
systemctl daemon-reload
systemctl start nginx keepalived
systemctl enable nginx keepalived
6. View keepalived working status
ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:04:f7:2c brd ff:ff:ff:ff:ff:ff
inet 192.168.31.80/24 brd 192.168.31.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet 192.168.31.88/24 scope global secondary ens33
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe04:f72c/64 scope link
valid_lft forever preferred_lft forever
It can be seen that the 192.168.31.88 virtual IP is bound to the ens33 network card, indicating that it is working normally.
7. Nginx+Keepalived high availability test
Shut down Nginx on the active node, and test whether the VIP has drifted to the server on the standby node.
Execute pkill nginx on Nginx Master;
on Nginx Backup, use the ip addr command to check that the VIP has been successfully bound.
Find any node in the K8s cluster, use curl to view the K8s version test, and use VIP access:
curl -k https://192.168.31.88:16443/version
{
"major": "1",
"minor": "20",
"gitVersion": "v1.20.4",
"gitCommit": "e87da0bd6e03ec3fea7933c4b5263d151aafd07c",
"gitTreeState": "clean",
"buildDate": "2021-02-18T16:03:00Z",
"goVersion": "go1.15.8",
"compiler": "gc",
"platform": "linux/amd64"
}
The K8s version information can be obtained correctly, indicating that the load balancer is set up normally. The request data flow: curl -> vip(nginx) -> apiserver
You can also see the forwarding apiserver IP by looking at the Nginx log:
tail /var/log/nginx/k8s-access.log -f
192.168.31.71 192.168.31.71:6443 - [02/Apr/2021:19:17:57 +0800] 200 423
192.168.31.71 192.168.31.72:6443 - [02/Apr/2021:19:18:50 +0800] 200 423
This is not over yet, there is the most critical step below.
7.3 Modify all Worker Nodes to connect to LB VIP
Think about it: although we have added Master2 Node and a load balancer, we expanded from a single-Master architecture, which means all Worker Node components still point at Master1 Node. If that stays as-is, the Master is still a single point of failure.
Therefore, the next step is to change the component configuration files on all Worker Nodes (the nodes shown by kubectl get node) from the original 192.168.31.71:6443 to 192.168.31.88:16443 (the VIP and the nginx listen port).
Execute on all Worker Nodes:
sed -i 's#192.168.31.71:6443#192.168.31.88:16443#' /opt/kubernetes/cfg/*
systemctl restart kubelet kube-proxy
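Optionally confirm the replacement took effect before checking node status (a quick grep over the config directory):
grep -r "192.168.31.88:16443" /opt/kubernetes/cfg/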
Check node status:
kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready <none> 32d v1.20.4
k8s-master2 Ready <none> 10m v1.20.4
k8s-node1 Ready <none> 31d v1.20.4
k8s-node2 Ready <none> 31d v1.20.4
So far, a complete Kubernetes high-availability cluster has been deployed!