k8s high-availability cluster implementation (keepalived)

One. Environment planning

Approximate topology (the diagram from the original post is omitted here): three master nodes (192.168.1.210, 192.168.1.200, 192.168.1.211) share the keepalived VIP 192.168.1.222, plus a separate worker node. Note that in this setup etcd and the master components run on the same machines (a stacked etcd topology).

Two. System initialization

See https://www.cnblogs.com/huningfei/p/12697310.html

Three. Install k8s and docker

See https://www.cnblogs.com/huningfei/p/12697310.html
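The linked post covers this step in detail. As a quick sketch, and assuming the Kubernetes yum repository is already configured as described there, the packages would be pinned to the same version the cluster uses below (v1.15.1):

```bash
# Pin kubelet/kubeadm/kubectl to the cluster version used in this post
yum install -y kubelet-1.15.1 kubeadm-1.15.1 kubectl-1.15.1
systemctl enable kubelet
```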

Four. Install keepalived

Install it on all three master nodes:

```bash
yum -y install keepalived
```

Configuration files. All three nodes share virtual_router_id 50 and the VIP 192.168.1.222; the priority values (100/90/80) determine the failover order.

master1

```
[root@k8s-master01 keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
   router_id master01
}
vrrp_instance VI_1 {
    state MASTER              # primary node
    interface ens33           # network interface name (adjust per machine)
    virtual_router_id 50
    priority 100              # priority; the highest-priority live node holds the VIP
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.222         # the VIP
    }
}
```



master2

```
! Configuration File for keepalived
global_defs {
   router_id master02
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens32
    virtual_router_id 50
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.222
    }
}
```



master3

```
! Configuration File for keepalived
global_defs {
   router_id master03
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens32
    virtual_router_id 50
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.222
    }
}
```
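Optionally, keepalived can also track the health of the local kube-apiserver, so the VIP fails over when the apiserver process dies even while the node itself stays up. A minimal sketch, not part of the original post (the check command and thresholds are illustrative):

```
# Mark this node unhealthy if the local apiserver stops answering /healthz
vrrp_script check_apiserver {
    script "curl -sfk https://127.0.0.1:6443/healthz"
    interval 3
    fall 3
}
```

Each `vrrp_instance` would then reference it with a `track_script { check_apiserver }` block.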

Start keepalived and enable it at boot:

```bash
systemctl start keepalived
systemctl enable keepalived
```
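To confirm which node currently holds the VIP, check the interface on each master (ens33 on master01 per the configuration above; adjust the interface name to the machine):

```bash
# The VIP should appear on exactly one master at a time
ip addr show ens33 | grep 192.168.1.222
```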

Five. Initialize the first master node

Execute this on any one of the three masters (here, master01):

```bash
kubeadm init --config=kubeadm-config.yaml
```

The initialization configuration file is as follows:
```yaml
[root@k8s-master01 load-k8s]# cat kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.1
apiServer:
  certSANs:    # hostnames, IPs, and the VIP of every kube-apiserver node (possibly only the VIP is strictly required)
  - k8s-master01
  - k8s-node1
  - k8s-node2
  - 192.168.1.210
  - 192.168.1.200
  - 192.168.1.211
  - 192.168.1.222
controlPlaneEndpoint: "192.168.1.222:6443" # the VIP
imageRepository: registry.aliyuncs.com/google_containers

networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12

---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
```
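Optionally, the control-plane images can be pre-pulled from the mirror configured above before running `kubeadm init`:

```bash
# Pull all control-plane images listed for this config up front
kubeadm config images pull --config kubeadm-config.yaml
```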
Output like that in the original screenshot (omitted here) indicates successful initialization; it ends with the `kubeadm join` commands used in the later steps, so save them.

Then run the commands from the init output:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
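At this point kubectl can reach the cluster. The first master will report `NotReady` until a network plug-in is installed in the next step:

```bash
kubectl get nodes
```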

Six. Install the flannel network plug-in

```bash
kubectl apply -f kube-flannel.yml
```
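This assumes kube-flannel.yml is already present locally. If it is not, it could be fetched from the flannel project (this was the usual path in the v1.15 era; the repository has since moved, so the URL may differ), and the rollout watched until the flannel and CoreDNS pods are Running:

```bash
# Fetch the manifest, apply it, then watch the kube-system pods come up
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
kubectl get pods -n kube-system -w
```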

Seven. Copy the certificates (key step)

Copy the certificates from master01 to the remaining two master nodes. I use a script for this:

```bash
[root@k8s-master01 load-k8s]# cat cert-master.sh
USER=root # customizable
CONTROL_PLANE_IPS="192.168.1.200 192.168.1.211"
for host in ${CONTROL_PLANE_IPS}; do
    scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
    # Comment out the next line if you are using external etcd
    scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
done
```
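A possible invocation, assuming root SSH access to the other two masters:

```bash
chmod +x cert-master.sh
./cert-master.sh
```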

Then, on the other two master nodes, move the certificates into the /etc/kubernetes/pki directory. Again I use a script:
```bash
[root@k8s-node1 load-k8s]# cat mv-cert.sh 
USER=root # customizable
mkdir -p /etc/kubernetes/pki/etcd
mv /${USER}/ca.crt /etc/kubernetes/pki/
mv /${USER}/ca.key /etc/kubernetes/pki/
mv /${USER}/sa.pub /etc/kubernetes/pki/
mv /${USER}/sa.key /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.key /etc/kubernetes/pki/
mv /${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
# Comment out the next line if you are using external etcd
mv /${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
```
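Before joining, it is worth verifying the layout on each of the two masters:

```bash
# Both directories should now contain the copied CA and SA files
ls /etc/kubernetes/pki /etc/kubernetes/pki/etcd
```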

Eight. Join the remaining two master nodes to the cluster

```bash
kubeadm join 192.168.1.222:6443 --token zi3lku.0jmskzstc49429cu \
    --discovery-token-ca-cert-hash sha256:75c2e15f51e23490a0b042d72d6ac84fc18ba63c230f27882728f8832711710b \
    --control-plane
```
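The token printed by `kubeadm init` expires after 24 hours by default. If it has expired by the time another node joins, a fresh worker join command can be generated on master01 (append `--control-plane` to it for a master):

```bash
kubeadm token create --print-join-command
```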

Note that the IP here is the virtual IP managed by keepalived; the exact command comes from the `kubeadm init` output in step Five. Output like the original screenshot (omitted here) indicates the join succeeded.

After joining, run `kubectl get nodes` on any of the three masters to confirm that all of them show up.
Note: for convenience I did not rename these hosts to master-style hostnames; despite the names, all three are in fact master nodes.

Nine. Join the worker nodes to the cluster

```bash
kubeadm join 192.168.1.222:6443 --token zi3lku.0jmskzstc49429cu \
    --discovery-token-ca-cert-hash sha256:75c2e15f51e23490a0b042d72d6ac84fc18ba63c230f27882728f8832711710b
```

A success message in the join output (screenshot omitted here) indicates the node has joined.


Check the node status again with `kubectl get nodes`: node3 is my worker node; the rest are masters.

Ten. Cluster high-availability test

1. Shut down master01: the VIP floats to master02 and everything keeps working.
2. Also shut down master02: the VIP floats to master03 and existing pods keep running, but kubectl commands no longer work, because with two of the three masters down etcd loses quorum.

Conclusion: the cluster continues to work normally when any single master fails.
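To watch a failover as it happens, keepalived's state transitions can be followed in the systemd journal on the backup nodes:

```bash
# Shows MASTER/BACKUP transitions as the VIP moves
journalctl -u keepalived -f
```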


Original article: https://www.cnblogs.com/huningfei/p/12759833.html