Deploying a highly available Kubernetes master with kubeadm

Preparation Phase

Hosts: master1, master2, node1

Disable SELinux and the firewall

setenforce  0

sed -i 's/SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

systemctl stop firewalld

systemctl disable firewalld
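
Verify both are off: getenforce should print Permissive, and firewalld should be inactive.

getenforce

systemctl is-active firewalld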

Disable swap (required since v1.8; the goal is to keep swap from interfering with the memory limits pods are allowed to use)

swapoff -a

sed -ri 's/.*swap.*/#&/' /etc/fstab
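
Confirm swap is disabled; the Swap row should show all zeros:

free -m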

Adjust kernel parameters; otherwise traffic crossing the Linux bridge may bypass iptables and break request routing

cat <<EOF >  /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

EOF

sysctl --system
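
If sysctl reports that the net.bridge keys do not exist, the br_netfilter kernel module is likely not loaded; load it and spot-check one value:

modprobe br_netfilter

sysctl net.bridge.bridge-nf-call-iptables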

 

 

Install kubeadm and Docker

Switch the Kubernetes yum source to the Aliyun mirror, which is easier to reach from networks inside China

cat << EOF > /etc/yum.repos.d/kubernetes.repo

[kubernetes]

name=Kubernetes

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

enabled=1

gpgcheck=1

repo_gpgcheck=1

gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

EOF
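
Refresh the yum metadata so the new repository is picked up:

yum makecache fast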

 

Install docker-ce

wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum install -y docker-ce
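
Docker must be running before kubeadm can pull images, so start it and enable it at boot:

systemctl enable docker

systemctl start docker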

 

Install kubelet, kubeadm, and kubectl

yum install -y kubeadm-1.15.0-0.x86_64 kubectl-1.15.0-0.x86_64 kubelet-1.15.0-0.x86_64

 

kubectl command auto-completion

yum install bash-completion* -y

# load the completion script now and persist it for future shells

source <(kubectl completion bash)

echo "source <(kubectl completion bash)" >> ~/.bashrc

 

 

Deploy Kubernetes

On the master-1 node:

Prepare the cluster configuration file. The API version currently in use is v1beta1; see the official documentation for the full set of options.

cat << EOF > /root/kubeadm-config.yaml

apiVersion: kubeadm.k8s.io/v1beta1

kind: ClusterConfiguration

kubernetesVersion: v1.15.0 # the Kubernetes version to install

controlPlaneEndpoint: 192.168.41.232:6443 # haproxy address and port

imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers # pull control-plane images from the Aliyun mirror

networking:

  podSubnet: 10.244.0.0/16 # pod network segment and mask for the flannel plugin we plan to use

EOF
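
Optionally, list the exact images this config resolves to; it is a quick way to confirm the Aliyun mirror and the version were picked up:

kubeadm config images list --config /root/kubeadm-config.yaml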

Initialize the node

systemctl enable kubelet

systemctl start kubelet

kubeadm config images pull --config kubeadm-config.yaml # pull the images from the Aliyun mirror ahead of time

kubeadm init --config=kubeadm-config.yaml --upload-certs --ignore-preflight-errors=all

The same initialization can be expressed with command-line flags instead of a config file (note that --control-plane-endpoint must be included for HA):

kubeadm init --kubernetes-version=v1.15.0 --control-plane-endpoint=192.168.41.232:6443 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=all --upload-certs

 

On a successful installation you will see output like the following:

You can now join any number of control-plane nodes by running the following command on each as root:

# A master node joins the cluster with:

  kubeadm join 192.168.41.232:6443 --token ocb5tz.pv252zn76rl4l3f6 \

    --discovery-token-ca-cert-hash sha256:141bbeb79bf58d81d551f33ace207c7b19bee1cfd7790112ce26a6a300eee5a2 \

    --control-plane --certificate-key 20366c9cdbfdc1435a6f6d616d988d027f2785e34e2df9383f784cf61bab9826 --ignore-preflight-errors=all

# A worker node joins the cluster with:

kubeadm join 192.168.41.232:6443 --token ocb5tz.pv252zn76rl4l3f6 \

    --discovery-token-ca-cert-hash sha256:141bbeb79bf58d81d551f33ace207c7b19bee1cfd7790112ce26a6a300eee5a2 --ignore-preflight-errors=all 

 

In earlier kubeadm versions the join command could only add worker nodes; newer versions add the --control-plane flag, so control-plane (master) nodes can also join the cluster through kubeadm join.
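
Note that the bootstrap token expires after 24 hours and the uploaded certificate key after 2 hours. If either has expired, fresh ones can be generated on master-1 (flag names as of v1.15):

kubeadm token create --print-join-command

kubeadm init phase upload-certs --upload-certs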

Set up kubectl access

mkdir -p $HOME/.kube

cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

chown $(id -u):$(id -g) $HOME/.kube/config
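
A quick sanity check that the credentials work:

kubectl cluster-info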

 

 

Add another master node

On the master-2 node:

kubeadm join 192.168.41.232:6443 --token ocb5tz.pv252zn76rl4l3f6 \

    --discovery-token-ca-cert-hash sha256:141bbeb79bf58d81d551f33ace207c7b19bee1cfd7790112ce26a6a300eee5a2 \

    --control-plane --certificate-key 20366c9cdbfdc1435a6f6d616d988d027f2785e34e2df9383f784cf61bab9826 \

    --ignore-preflight-errors=all

Set up kubectl access

mkdir -p $HOME/.kube

cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

chown $(id -u):$(id -g) $HOME/.kube/config

 

 

NotReady state

If nodes stay in the NotReady state, edit /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (on some systems it is /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf), remove $KUBELET_NETWORK_ARGS from the last line, then reload and restart the kubelet:

systemctl daemon-reload

systemctl restart kubelet

 

 

Install the network plugin
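
If kube-flannel.yml is not already on the node, it can usually be fetched from the flannel repository (this URL is the commonly used location and may change; check the flannel project for the current manifest):

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml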

kubectl apply -f kube-flannel.yml

 

 

Check the node status again

kubectl get nodes
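
The nodes should report Ready once flannel is up; you can also confirm the flannel and CoreDNS pods are running:

kubectl get pods -n kube-system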

 

 

References:

kube-flannel network plugin: https://www.wanghaiqing.com/article/aa3ac027-7ae8-43ff-821e-49f6dfcd17e8/

kubeadm high availability: https://segmentfault.com/a/1190000018741112?utm_source=tag-newest

NotReady state: https://www.cnblogs.com/zhongyuanzhao000/p/11401031.html
