k8s high availability cluster construction (3)


Since these notes were migrated from Youdao Notes, some of the code formatting is messy; if you need more detail, you can refer to my Youdao Notes.

Document: k8s high-availability cluster construction (study notes from Shang Silicon Valley)
Link: http://note.youdao.com/noteshare?id=294d02f4133dccf37f6313bc12268134&sub=28BB5A24488F4253A0FD991F8A86BE7A
 

Install Docker, kubeadm, kubectl

Install Docker, kubeadm, and kubelet on all nodes. The default CRI (container runtime) of Kubernetes is Docker, so install Docker first.

Install Docker

First, configure Docker's Aliyun yum repository:

```shell
cat > /etc/yum.repos.d/docker.repo << EOF
[docker-ce-edge]
name=Docker CE Edge - \$basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/\$basearch/edge
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
EOF
```

Then install Docker with yum:

```shell
# Install docker-ce via yum
yum -y install docker-ce
# Check the Docker version
docker --version
# Enable and start Docker
systemctl enable docker
systemctl start docker
```

Configure the Docker registry mirror:

```shell
cat >> /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
```

Then restart Docker:

```shell
systemctl restart docker
```
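One gotcha here: a malformed `daemon.json` prevents dockerd from starting at all. A small guard of my own (not from the original notes) that validates the file before restarting, using `python3 -m json.tool` as the parser:

```shell
# Succeeds only if the given file parses as valid JSON.
check_daemon_json() {
  python3 -m json.tool "$1" > /dev/null 2>&1
}

# On a real node:
#   check_daemon_json /etc/docker/daemon.json && systemctl restart docker
```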

Add the Kubernetes software source

Next, we also need to configure the Kubernetes yum repository:

```shell
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```

Install kubeadm, kubelet and kubectl

Because versions update frequently, the version number is pinned here:

```shell
# Install kubelet, kubeadm, and kubectl at a pinned version
yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
# Start kubelet on boot
systemctl enable kubelet
```
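Since a version skew between nodes causes confusing failures later, it can be worth confirming each tool actually reports the pinned version. A hypothetical helper sketching that check:

```shell
# $1 = reported version string, $2 = required version substring
expect_version() {
  case "$1" in
    *"$2"*) return 0 ;;
    *) echo "version mismatch: got '$1', want '$2'" >&2; return 1 ;;
  esac
}

# On a node:
#   expect_version "$(kubelet --version)" v1.18.0
#   expect_version "$(kubeadm version -o short)" v1.18.0
```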

Deploy Kubernetes Master [master node]

Create kubeadm configuration file

Run the initialization on the master node that holds the VIP; here that is master1.

```shell
# Create the folder
mkdir -p /usr/local/kubernetes/manifests
# Go to the manifests directory
cd /usr/local/kubernetes/manifests/
# Create a new yaml file
vi kubeadm-config.yaml
```

The yaml content is as follows:

```yaml
apiServer:
  certSANs:
    - master1
    - master2
    - master.k8s.io
    - 192.168.44.158
    - 192.168.44.155
    - 192.168.44.156
    - 127.0.0.1
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "master.k8s.io:16443"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.1.0.0/16
scheduler: {}
```

Then we execute on the master1 node:

```shell
kubeadm init --config kubeadm-config.yaml
```

Execute the commands shown in the output:

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

Follow the prompts to save the following content, which will be used later:

```shell
kubeadm join master.k8s.io:16443 --token rhbcl9.rzv0rktd95tsry01 \
    --discovery-token-ca-cert-hash sha256:e24060c1bd81641d3c7dedfda94640ac13c1cf1040dc39cb0869e51b96e6d4fb \
    --control-plane
```

--control-plane: only used when joining a master (control-plane) node.
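If this join command gets lost, a fresh one can be printed on master1 with `kubeadm token create --print-join-command`. The CA hash alone can also be recomputed from the cluster CA certificate; a sketch using the standard openssl pipeline, assuming the default RSA CA key:

```shell
# Recompute the --discovery-token-ca-cert-hash value from a CA certificate.
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | awk '{print $NF}'
}

# On master1:
#   echo "sha256:$(ca_cert_hash /etc/kubernetes/pki/ca.crt)"
```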

View cluster status

```shell
# View cluster status
kubectl get cs
# View pods
kubectl get pods -n kube-system
```

Install cluster network

Get the flannel yaml from the official address and apply it on master1:

```shell
# Create a folder
mkdir flannel
cd flannel
# Download the yaml file
wget -c https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Apply it
kubectl apply -f kube-flannel.yml
```
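To confirm the network plugin actually came up, a tiny helper of my own that reads `kubectl get pods -n kube-system --no-headers` output on stdin and succeeds once a kube-flannel pod is Running:

```shell
# Succeed if at least one kube-flannel pod reports Running (column 3).
flannel_running() {
  awk '$1 ~ /kube-flannel/ && $3 == "Running"' | grep -q .
}

# On master1:
#   kubectl get pods -n kube-system --no-headers | flannel_running && echo "flannel is up"
```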

If raw.githubusercontent.com refuses the connection, the workaround is to map the domain to an IP in the hosts file. The IP changes over time, so look it up on an IP-lookup site before each download.

```shell
vim /etc/hosts
# add the line:
185.199.109.133 raw.githubusercontent.com
```
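Since the IP has to be refreshed before each download, a hypothetical helper that appends the mapping only when the host name is not already present, so repeated runs don't duplicate the line:

```shell
# $1 = hosts file, $2 = ip, $3 = host name
add_host_entry() {
  grep -q "$3" "$1" || echo "$2 $3" >> "$1"
}

# add_host_entry /etc/hosts 185.199.109.133 raw.githubusercontent.com
```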

The master2 node joins the cluster

Copy key and related files

Copy the key and related files from master1 to master2

```shell
# On master2, create the target directory first
ssh [email protected] mkdir -p /etc/kubernetes/pki/etcd
# Then copy from master1
scp /etc/kubernetes/admin.conf [email protected]:/etc/kubernetes
scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} [email protected]:/etc/kubernetes/pki
scp /etc/kubernetes/pki/etcd/ca.* [email protected]:/etc/kubernetes/pki/etcd
```
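The copy above can also be scripted as a dry-run generator (my own sketch; the file list expands the glob patterns above), which prints the scp commands so they can be reviewed before piping to `sh`:

```shell
# $1 = destination, e.g. [email protected]
print_cert_copy_cmds() {
  local dest="$1"
  echo "scp /etc/kubernetes/admin.conf $dest:/etc/kubernetes"
  for f in ca.crt ca.key sa.key sa.pub front-proxy-ca.crt front-proxy-ca.key; do
    echo "scp /etc/kubernetes/pki/$f $dest:/etc/kubernetes/pki"
  done
  echo "scp /etc/kubernetes/pki/etcd/ca.crt /etc/kubernetes/pki/etcd/ca.key $dest:/etc/kubernetes/pki/etcd"
}

# Review, then execute:
#   print_cert_copy_cmds [email protected] | sh
```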

master2 joins the cluster

Execute the join command that kubeadm init printed on master1. The --control-plane flag indicates that a master (control-plane) node is joining the cluster:

```shell
kubeadm join master.k8s.io:16443 --token rhbcl9.rzv0rktd95tsry01 \
    --discovery-token-ca-cert-hash sha256:e24060c1bd81641d3c7dedfda94640ac13c1cf1040dc39cb0869e51b96e6d4fb \
    --control-plane
```

Check status

```shell
kubectl get node
kubectl get pods --all-namespaces
```
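For scripting this check, a helper of my own that reads `kubectl get nodes --no-headers` output on stdin and succeeds only when every node reports Ready:

```shell
# Fail if any node's STATUS column (column 2) does not start with "Ready"
# (the prefix match also accepts e.g. "Ready,SchedulingDisabled").
all_nodes_ready() {
  ! awk '{print $2}' | grep -qv '^Ready'
}

# kubectl get nodes --no-headers | all_nodes_ready && echo "all nodes Ready"
```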

On master2, execute the commands shown in the join output:

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

Join Kubernetes Node

Execute on node1

To add a new node to the cluster, execute the kubeadm join command output by kubeadm init:

Note: node1's hosts file needs the same entries as master2's.

```shell
kubeadm join master.k8s.io:16443 --token rhbcl9.rzv0rktd95tsry01 \
    --discovery-token-ca-cert-hash sha256:e24060c1bd81641d3c7dedfda94640ac13c1cf1040dc39cb0869e51b96e6d4fb
```

Because a new worker node has joined, the cluster network needs to be reinstalled.

Reinstall flannel following the installation steps above.

Check status

```shell
kubectl get node
kubectl get pods --all-namespaces
```

If the node stays NotReady for a long time, inspect it:

```shell
kubectl describe node node1
```

If the output shows "Kubelet stopped posting node status", you can try restarting the kubelet on node1, then test again.
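A scriptable version of that check (my own sketch): scan the describe output for the "stopped posting" condition, and if it appears, restart the kubelet on node1:

```shell
# Read `kubectl describe node <name>` output on stdin; succeed if the node's
# conditions report that the kubelet stopped posting status.
kubelet_stopped_posting() {
  grep -q "Kubelet stopped posting node status"
}

# On master1:
#   kubectl describe node node1 | kubelet_stopped_posting && echo "restart kubelet on node1"
# Then on node1:
#   systemctl restart kubelet
```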


Origin blog.csdn.net/weixin_42821448/article/details/125449899