Configuring a highly available Kubernetes cluster (stacked etcd) with kubeadm

Operating system: Ubuntu 18.04
Kubernetes version: v1.15.1

By default, kubeadm runs etcd as a single-member cluster on the control-plane node, as a static pod managed by the kubelet. That is not a highly available setup: a highly available etcd cluster requires at least three members.

etcd uses ports 2379 (client) and 2380 (peer) by default, and all three etcd nodes must be able to reach each other on both ports. The defaults can be changed in the kubeadm configuration file.
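For reference, here is a minimal sketch of a kubeadm ClusterConfiguration showing where etcd flags can be overridden (this file is not used in the experiment below, and the listen URLs are illustrative placeholders, not recommendations):

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.1
etcd:
  local:
    # any etcd flag can be overridden here, e.g. to move the
    # client/peer listeners off the default 2379/2380 ports
    extraArgs:
      listen-client-urls: "https://0.0.0.0:2379"
      listen-peer-urls: "https://0.0.0.0:2380"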

This experiment uses five servers. I ran it on Tencent Cloud servers in Hong Kong: the network is fast and SSH is stable. I did not test Baidu Cloud, and Alibaba Cloud did not perform well when I tried it. I recommend Tencent Cloud.
k8s1: Master1 
k8s2: node1
k8s3: HAProxy
k8s4: Master2
k8s5: Master3

1. First install kubeadm, kubelet, and kubectl on k8s1 (master1), run kubeadm init, and finally run kubectl get nodes to confirm that k8s1 (master1) is Ready.
k8s installation guide: Kubernetes v1.15 on Ubuntu 18.04
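The original post does not show the exact init invocation; given that the join commands in step 4 point at k8s1:6443 and include a certificate key (which implies --upload-certs), and that step 3 reads the output from a file named kubeadm-init.out, it was presumably something like:

kubeadm init --control-plane-endpoint "k8s1:6443" --upload-certs | tee kubeadm-init.out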

2. Install kubeadm, kubectl, and kubelet on k8s2 (node1), k8s4 (master2), and k8s5 (master3).
k8s installation guide: Kubernetes v1.15 on Ubuntu 18.04

3. In the kubeadm-init.out file on k8s1 (master1), find the join commands for worker nodes and for control-plane nodes.
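The join commands can be pulled out of the saved output with a quick grep (assuming the file name from step 1; each join command spans a few lines):

grep -A 2 "kubeadm join" kubeadm-init.out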

4. Run the control-plane join command on k8s4 (master2) and k8s5 (master3); note the --control-plane flag:
kubeadm join k8s1:6443 --token 8vqitz.g1qyah1wpd3n723o --discovery-token-ca-cert-hash sha256:abd9a745b8561df603ccd58e162d7eb11b416feb4a7bbe1216a3aa114f4fecd9 --control-plane --certificate-key 0e1e2844d565e657465f41707d8995b2d9d64246d5f2bf90f475b7782343254f
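Note that the bootstrap token expires after 24 hours and the certificate key after 2 hours. If they have expired, fresh ones can be generated on master1 with standard kubeadm commands (not part of the original post):

kubeadm token create --print-join-command
kubeadm init phase upload-certs --upload-certs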

5. Run the worker join command on k8s2 (node1):
kubeadm join k8s1:6443 --token 8vqitz.g1qyah1wpd3n723o --discovery-token-ca-cert-hash sha256:abd9a745b8561df603ccd58e162d7eb11b416feb4a7bbe1216a3aa114f4fecd9

6. Now master1, master2, and master3 can all manage the k8s cluster with kubectl.
kubectl get nodes
shows three masters and one worker node.
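The output should look roughly like the following (the ages are illustrative, not from the original run):

NAME   STATUS   ROLES    AGE   VERSION
k8s1   Ready    master   40m   v1.15.1
k8s2   Ready    <none>   12m   v1.15.1
k8s4   Ready    master   20m   v1.15.1
k8s5   Ready    master   18m   v1.15.1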

7. Install HAProxy as a load balancer
apt-get update 
apt-get install haproxy -y
cd /etc/haproxy
cp haproxy.cfg haproxy.cfg.bak
In haproxy.cfg, under the defaults section:
log global
mode tcp
option tcplog
frontend proxynode
    bind *:80
    stats uri /proxystats
    default_backend k8s-qq
backend k8s-qq
    balance roundrobin
    server master1 172.19.0.12:6443 check
    server master2 172.19.0.8:6443 check
    server master3 172.19.0.4:6443 check
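Before restarting, the configuration can be syntax-checked (a standard HAProxy option, not shown in the original):

haproxy -c -f /etc/haproxy/haproxy.cfg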
systemctl restart haproxy
systemctl enable haproxy

8. Check HAProxy
http://k8s3.example.com/proxystats
You should see the three back-end servers.
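From the HAProxy node you can also confirm that each backend API server port is reachable (a quick check, not part of the original steps):

nc -zv 172.19.0.12 6443
nc -zv 172.19.0.8 6443
nc -zv 172.19.0.4 6443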

9. Check that the etcd pods are running
kubectl -n kube-system get pods | grep etcd
Here we can see etcd running on k8s1, k8s4, and k8s5.
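Adding -o wide shows which node each etcd pod is scheduled on:

kubectl -n kube-system get pods -o wide | grep etcd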

10. View the etcd logs
kubectl -n kube-system logs etcd-k8s1
kubectl -n kube-system logs -f etcd-k8s1

11. Log into one of the etcd pods and check the status of the etcd cluster
kubectl -n kube-system exec -it etcd-k8s4 -- /bin/sh
/ # ETCDCTL_API=3 etcdctl -w table --endpoints 172.19.0.12:2379,172.19.0.4:2379,172.19.0.8:2379 --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key endpoint status
This displays our three etcd members; in the IS LEADER column, one is true and two are false.
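The same check can be run non-interactively, which is convenient for re-checking during the failover test below (a sketch assuming the same endpoints and certificate paths):

kubectl -n kube-system exec etcd-k8s1 -- sh -c "ETCDCTL_API=3 etcdctl -w table --endpoints 172.19.0.12:2379,172.19.0.4:2379,172.19.0.8:2379 --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key endpoint status"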

12. Test Failover

12.1. Suppose k8s4 is currently the leader (IS LEADER is true). SSH into k8s4, then stop Docker:
systemctl stop docker

12.2. View the node information
kubectl get node
k8s4 is shown in NotReady state.

12.3. Run endpoint status again from one of the remaining etcd pods: k8s4 is now false, and k8s5 is now the leader (true).

12.4. Do not run the stop-Docker test on k8s1: if k8s1 goes down, the entire cluster becomes unavailable, because the nodes joined through k8s1:6443 rather than through the HAProxy address, so the API endpoint still depends on k8s1.

12.5. View the HAProxy statistics report again.

12.6. After the test, start Docker again.
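To recover and verify (the exact commands are implied rather than spelled out in the original):

systemctl start docker
kubectl get nodes

All nodes should return to Ready shortly afterwards.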

Origin www.cnblogs.com/51redhat/p/11951537.html