Kubernetes (k8s) is the current standard for enterprise container scheduling and is heavily used in production, so the cluster must be made highly available. The most critical component is the api-server: it is the single entry point to the entire cluster, so keeping it running keeps the cluster running.
This guide uses kubeadm for the deployment. Since version 1.13 kubeadm has been usable in production, so version 1.15 is used here; a binary deployment would also work, depending on your preference.
Basic environment preparation (firewall, SELinux, yum repositories) is not covered here; refer to the earlier documents.
Installation:
yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0
systemctl enable kubelet
Run the above on all masters.
On the nodes, install the same packages: yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0
systemctl enable kubelet
After that, pull the required images. `kubeadm config images list` shows which images are needed:
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.15.10
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.15.10
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.15.10
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.15.10
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.10
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.3.1
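If your kubeadm configuration does not point `imageRepository` at the Aliyun registry, kubeadm will look for the images under their `k8s.gcr.io` names, so you would need to retag the pulls above. A sketch (the image list mirrors the pull commands above):

```shell
#!/bin/sh
# Retag the Aliyun mirror images to the k8s.gcr.io names kubeadm expects.
ALIYUN=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in kube-apiserver:v1.15.10 kube-controller-manager:v1.15.10 \
           kube-scheduler:v1.15.10 kube-proxy:v1.15.10 \
           pause:3.1 etcd:3.3.10 coredns:1.3.1; do
  docker tag "$ALIYUN/$img" "k8s.gcr.io/$img"
done
```

This step is unnecessary if the kubeadm config file sets `imageRepository` to the Aliyun registry, as the config used later in this guide does.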
Aliyun mirror images are used here.
I have put all the configuration files on Baidu Netdisk:
link: https://pan.baidu.com/s/167-yXwqK28gXDUBw3E0BxA
extraction code: mlr7
Install keepalived
On all masters:
yum install keepalived -y
After installation, modify the configuration; master1 is used as the example here.
The point to note is `priority`, the VRRP weight: on the other two machines it is 90 and 70 respectively, and everything else stays the same.
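A minimal sketch of `/etc/keepalived/keepalived.conf` for master1 (the interface name `ens33`, the VIP `192.168.1.200`, and the `virtual_router_id` are assumptions here; adapt them to your network):

```conf
vrrp_instance VI_1 {
    state MASTER            # BACKUP on the other two masters
    interface ens33         # assumed NIC name; check yours with `ip addr`
    virtual_router_id 51
    priority 100            # 90 on master2, 70 on master3
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass k8s
    }
    virtual_ipaddress {
        192.168.1.200       # assumed VIP
    }
}
```

The highest-priority machine that is alive holds the VIP; when it goes down, VRRP moves the VIP to the next-highest priority.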
Then just start it:
systemctl restart keepalived
You can stop keepalived on one machine to test whether the VIP drifts; in my case it does.
Next, install haproxy on all master nodes:
yum install haproxy -y
The configuration is the same on all three machines; copy it directly, only the server list at the end needs to match your masters.
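A sketch of the relevant part of `/etc/haproxy/haproxy.cfg` (the frontend port 16443 and the master IPs 192.168.1.201-203 are assumptions; the VIP plus this port is what the kubeadm config's `controlPlaneEndpoint` should point at):

```conf
frontend k8s-apiserver
    bind *:16443            # must differ from 6443, since haproxy runs on the masters themselves
    mode tcp
    default_backend k8s-masters

backend k8s-masters
    mode tcp
    balance roundrobin
    server master1 192.168.1.201:6443 check   # assumed master IPs
    server master2 192.168.1.202:6443 check
    server master3 192.168.1.203:6443 check
```

Mode must be `tcp`: the api-server speaks TLS, so haproxy only balances the raw connections and the certificate checks still happen end to end.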
Then just start it:
systemctl start haproxy
Write the kubeadm configuration file
On one master only.
The most important field is `podSubnet`; flannel will not start later without it, so be sure to add it.
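A minimal `kubeadm-init.yaml` sketch (the VIP:port in `controlPlaneEndpoint` and the pod subnet 10.244.0.0/16, flannel's default, are assumptions; match them to your keepalived/haproxy setup):

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.0
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
controlPlaneEndpoint: "192.168.1.200:16443"   # assumed VIP + haproxy port
networking:
  podSubnet: "10.244.0.0/16"                  # must match flannel's network
```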
Then run:
kubeadm init --config=kubeadm-init.yaml --experimental-upload-certs
(In kubeadm 1.15 and later this flag is also available as --upload-certs.)
The output prints two join commands: the first is for additional masters, the second is for nodes.
Then install the flannel network plugin and check the cluster status.
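Installing flannel is typically a single apply of its manifest; the URL below is flannel's usual upstream location at the time and may have moved since, so verify it first:

```shell
# Apply the flannel manifest (URL is an assumption; confirm the current location)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Watch the kube-system pods until flannel and coredns are Running
kubectl get pods -n kube-system -w
```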
Then have the other two masters join the cluster.
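The control-plane join command is printed by `kubeadm init`; it looks roughly like the sketch below (the token, hash, and certificate key are placeholders, copy the real values from your own init output; older kubeadm versions call the flag `--experimental-control-plane`):

```shell
kubeadm join 192.168.1.200:16443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <key>
```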
After that completes, check the result on another master.
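To use kubectl on a master, copy the admin kubeconfig into place first; these are the standard kubeadm post-init steps:

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes   # all three masters should now show up
```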
With all the masters up, start adding the worker nodes.
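The worker join command from the init output looks roughly like this (token and hash are placeholders again):

```shell
kubeadm join 192.168.1.200:16443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```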
OK, the nodes have been added.
Find any master and start a test container; the nginx image pull will take a while.
Then test the result by accessing it.
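One way to run the test just described (the deployment name and the NodePort service type are my choices here, not from the original):

```shell
# Start an nginx pod and expose it outside the cluster
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods,svc -o wide
# Then curl any node's IP at the NodePort shown by `get svc`
```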
Right now the VIP is on master1, so simulate a failure to see whether the VIP drifts and whether the cluster stays usable.
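A simple way to run the simulation (the NIC name `ens33` is an assumption, as in the keepalived setup):

```shell
# On master1: stop keepalived (or shut the machine down)
systemctl stop keepalived
# On master2/master3: watch for the VIP to arrive
ip addr show ens33
# From anywhere with kubectl configured: confirm the cluster still answers
kubectl get nodes
```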
The IP address has drifted over.
After recovery, it rejoins the cluster as before.
That's it for the high-availability deployment; if you run into problems, feel free to send me a private message.