Build a high-availability Kubernetes cluster

On Alibaba Cloud, three servers (k8s-master0, k8s-master1, k8s-master2) are used as master nodes to build a high-availability cluster, with Alibaba Cloud SLB providing load balancing. Note that Alibaba Cloud SLB does not support a backend server accessing itself through the load balancer, so on the master nodes the control-plane-endpoint cannot point to the load balancer.
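Concretely, k8s-api has to be resolved differently on the two kinds of nodes. A sketch of the resulting hosts entries, where only 10.0.1.81 (k8s-master0) comes from this article and the SLB address is a placeholder:

# on each master node: k8s-api resolves to that node's own intranet IP
10.0.1.81       k8s-api

# on each worker node: k8s-api resolves to the SLB intranet address (placeholder)
10.0.1.100      k8s-api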

First install k8s on k8s-master0 (for the installation steps see "Installing the k8s trio kubeadm, kubectl and kubelet on Ubuntu"), then take a snapshot and create an Alibaba Cloud ECS image from it.
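For reference only, a minimal sketch of that installation on Ubuntu using the Aliyun apt mirror; the repository URL and package list here are assumptions, not the steps from the linked article:

# add the Aliyun Kubernetes apt repository (assumed mirror URL)
apt-get update && apt-get install -y apt-transport-https curl
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" \
    > /etc/apt/sources.list.d/kubernetes.list
# install the trio and pin the versions
apt-get update && apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl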

Decide on a host name for the control-plane-endpoint, here assumed to be k8s-api, and add an entry to the hosts file on k8s-master0 resolving k8s-api to k8s-master0's IP address:

10.0.1.81       k8s-api

Create the cluster on k8s-master0:

kubeadm init \
    --control-plane-endpoint "k8s-api:6443" --upload-certs \
    --image-repository registry.aliyuncs.com/google_containers \
    --pod-network-cidr=192.168.0.0/16 \
    --v=6

Once the cluster is created, the following output appears:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join k8s-api:6443 --token ****** \
    --discovery-token-ca-cert-hash ****** \
    --control-plane --certificate-key ******

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-api:6443 --token ****** \
    --discovery-token-ca-cert-hash ******
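If the token or the certificate key has expired by the time you join the remaining nodes (the uploaded certs are deleted after two hours, as noted above), they can be regenerated on k8s-master0; these are standard kubeadm commands rather than steps from the original article:

# print a fresh join command with a new token
kubeadm token create --print-join-command
# re-upload the control-plane certificates and print a new certificate key
kubeadm init phase upload-certs --upload-certs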

Using the Alibaba Cloud ECS image created earlier, create two more servers, k8s-master1 and k8s-master2, as the other master nodes, and in the hosts file of these two servers resolve k8s-api to the IP address of k8s-master0.
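For example, using the k8s-master0 address from above:

# run on k8s-master1 and k8s-master2 before joining
echo "10.0.1.81       k8s-api" >> /etc/hosts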

Then log in to k8s-master1 and k8s-master2 and run the control-plane kubeadm join command obtained above to add these two servers to the cluster as master nodes.

Then log in to each of the three master nodes and run the following commands:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

If you do not run the commands above, kubectl get nodes will fail with the following error:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

Next, log in to k8s-master1 and k8s-master2 and change the hosts entry so that k8s-api resolves to each machine's own IP address.
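A sketch of that change; 10.0.1.82 and 10.0.1.83 are placeholder addresses for k8s-master1 and k8s-master2, which are not given in this article:

# on k8s-master1 (replace 10.0.1.82 with its real intranet IP)
sed -i 's/^10\.0\.1\.81\(.*k8s-api\)/10.0.1.82\1/' /etc/hosts
# on k8s-master2 (replace 10.0.1.83 with its real intranet IP)
sed -i 's/^10\.0\.1\.81\(.*k8s-api\)/10.0.1.83\1/' /etc/hosts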

Now all three master nodes have joined the cluster, but kubectl get nodes shows them in the NotReady state, because no CNI network plugin has been deployed yet.

The next step is to deploy a network plugin; here we use the Calico network:

kubectl apply -f calico.yaml
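If calico.yaml still needs to be downloaded, the commonly used manifest location at the time of writing was https://docs.projectcalico.org/manifests/calico.yaml (an assumption here, not taken from the article). The rollout can then be watched until the nodes turn Ready:

# watch the calico-node and calico-kube-controllers pods until they are Running
kubectl get pods -n kube-system -w
# the three master nodes should then report Ready
kubectl get nodes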

Once the Calico network is deployed successfully, the three master nodes enter the Ready state and the master node deployment is complete.

The next step is to deploy worker nodes.

Worker nodes access the api-server on the master nodes through an Alibaba Cloud load balancer. First create an intranet SLB instance on Alibaba Cloud, add a layer-4 (TCP) forwarding rule for port 6443, and attach the three master node servers as backends.
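A quick connectivity check from any machine that resolves k8s-api to the SLB address (a plain TCP probe, no credentials needed):

# should report the connection to port 6443 as succeeded
nc -vz k8s-api 6443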

Next, create the worker node servers from the Alibaba Cloud ECS image created earlier, and in each server's hosts file resolve k8s-api to the IP address of the Alibaba Cloud load balancer. Then run the worker-node kubeadm join k8s-api:6443 command obtained earlier to add these servers to the cluster.
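Put together, the steps on each worker node look like this; the SLB intranet address is a placeholder, and the token and hash are the values printed by kubeadm init above:

# resolve k8s-api to the SLB intranet address (placeholder IP)
echo "10.0.1.100      k8s-api" >> /etc/hosts
# join the cluster as a worker node
kubeadm join k8s-api:6443 --token ****** \
    --discovery-token-ca-cert-hash ******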

With that, the high-availability k8s cluster is built and applications can be deployed on it. Note that with three masters, according to the Raft consensus algorithm, the cluster can only function properly when at least two masters are healthy.
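With the stacked etcd topology that kubeadm sets up here, quorum can be checked from one of the masters; a minimal sketch using the etcd static pod and the standard kubeadm certificate paths:

# check etcd endpoint health from the etcd pod on k8s-master0
kubectl -n kube-system exec etcd-k8s-master0 -- etcdctl \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key \
    endpoint health

If only one of the three etcd members remains healthy, quorum is lost and the api-server stops accepting writes.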
