k8s study notes (1): k8s single-master architecture deployment

k8s single master architecture installation and deployment

Environment preparation

Three fresh CentOS 7.9 systems, each with a 4-core CPU, 2 GB of RAM, and a 200 GB disk

Environment notes

  • podSubnet (pod network segment): 10.244.0.0/16
  • serviceSubnet (service network segment): 10.96.0.0/12 (matching the kubeadm.yaml below)
  • Physical machine network segment: 192.168.31.0/24

Cluster architecture

Role            IP               Hostname      Installed components
Control node    192.168.31.180   k8s-master    apiserver, controller-manager, scheduler, etcd, kube-proxy, docker, calico
Worker node     192.168.31.181   k8s-node-1    kubelet, kube-proxy, docker, calico, coredns
Worker node     192.168.31.182   k8s-node-2    kubelet, kube-proxy, docker, calico, coredns

Installation steps

1. Initialize the k8s cluster environment (must be done on all three hosts; for brevity, only one host's commands are recorded)

1. Configure the host IP and host name according to the cluster architecture.

[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33
BOOTPROTO=static 
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.31.180
PREFIX=24
GATEWAY=192.168.31.1
DNS1=114.114.114.114
[root@localhost ~]# hostnamectl set-hostname k8s-master
[root@localhost ~]# su
[root@k8s-master ~]#

The other two machines are configured the same way; set the IP and hostname according to the cluster architecture table above.
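For reference, the only values that differ on the other two machines (everything else in ifcfg-ens33 stays the same; these follow the cluster architecture table above):

# k8s-node-1
IPADDR=192.168.31.181
[root@localhost ~]# hostnamectl set-hostname k8s-node-1

# k8s-node-2
IPADDR=192.168.31.182
[root@localhost ~]# hostnamectl set-hostname k8s-node-2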

2. Turn off the firewall and SELinux on all nodes

[root@k8s-master ~]# service firewalld stop                # stop the firewall for this session
Redirecting to /bin/systemctl stop firewalld.service
[root@k8s-master ~]# systemctl disable firewalld           # keep the firewall off after reboot
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@k8s-master ~]# setenforce 0                          # switch SELinux to permissive for this session
[root@k8s-master ~]# getenforce 
Permissive
[root@k8s-master ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config   # disable SELinux permanently

Purpose: prevents the firewall and SELinux from blocking communication between the cluster hosts.

Repeat the same steps on the other two machines.

3. Edit the /etc/hosts file so the hosts can reach one another by hostname.

Run on all nodes

[root@k8s-master ~]# vi /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.31.180 k8s-master
192.168.31.181 k8s-node-1
192.168.31.182 k8s-node-2
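A quick sanity check worth adding here (not in the original steps): ping each peer by name to confirm /etc/hosts took effect.

[root@k8s-master ~]# ping -c 1 k8s-node-1          # should resolve to 192.168.31.181
[root@k8s-master ~]# ping -c 1 k8s-node-2          # should resolve to 192.168.31.182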

4. Set up passwordless SSH between all nodes

k8s-master

[root@k8s-master ~]# ssh-keygen
[root@k8s-master ~]# ssh-copy-id k8s-node-1
[root@k8s-master ~]# ssh-copy-id k8s-node-2

k8s-node-1

[root@k8s-node-1 ~]# ssh-keygen
[root@k8s-node-1 ~]# ssh-copy-id k8s-master
[root@k8s-node-1 ~]# ssh-copy-id k8s-node-2

k8s-node-2

[root@k8s-node-2 ~]# ssh-keygen
[root@k8s-node-2 ~]# ssh-copy-id k8s-master
[root@k8s-node-2 ~]# ssh-copy-id k8s-node-1
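To verify the passwordless channels, a small loop (my addition) should print all three hostnames without ever prompting for a password:

[root@k8s-master ~]# for h in k8s-master k8s-node-1 k8s-node-2; do ssh $h hostname; done
k8s-master
k8s-node-1
k8s-node-2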

5. Turn off the swap partition to improve performance

Run on all three hosts

[root@k8s-master ~]# swapoff -a

Purpose: use memory as much as possible rather than the disk swap partition, for better performance.
Kubernetes disallows swap by default for performance reasons, so kubeadm checks during initialization whether swap is off and fails if it is not. If you really do not want to turn off the swap partition, you can pass --ignore-preflight-errors=Swap when installing k8s.

Turn off the swap partition permanently

[root@k8s-master ~]# vim /etc/fstab

Comment out the swap entry (e.g. the /dev/mapper/centos-swap line) so swap stays off after a reboot.
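If you prefer not to edit the file by hand, a sed one-liner can do the same thing (a sketch, assuming the standard CentOS fstab layout; check the file afterwards):

[root@k8s-master ~]# sed -i '/^[^#].*swap/s/^/#/' /etc/fstab       # comment out every active swap entry
[root@k8s-master ~]# grep swap /etc/fstab                          # the swap line should now start with #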

6. Modify machine kernel parameters

Run on all three hosts

[root@k8s-master ~]# modprobe br_netfilter                 # loads the module; avoids the error sysctl -p /etc/sysctl.d/k8s.conf would otherwise raise
[root@k8s-master ~]# echo "modprobe br_netfilter" >> /etc/profile     # reload the module on boot
[root@k8s-master ~]# cat > /etc/sysctl.d/k8s.conf <<EOF
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> net.ipv4.ip_forward = 1
> EOF
[root@k8s-master ~]# sysctl -p /etc/sysctl.d/k8s.conf      # load the kernel parameters from the given file
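You can confirm the module and parameters took effect (my addition):

[root@k8s-master ~]# lsmod | grep br_netfilter             # the module should be listed
[root@k8s-master ~]# sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1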

7. Configure Alibaba Cloud’s repo source

[root@k8s-master ~]# yum install -y yum-utils
[root@k8s-master ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Configuring a domestic (China-based) docker mirror makes the subsequent image pulls much faster.

8. Install all dependency packages required by the cluster

[root@k8s-master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet

9. Configure the Alibaba Cloud repo source required to install k8s components

[root@k8s-master ~]# vim /etc/yum.repos.d/kubernetes.repo

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
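Equivalently, a heredoc writes the same repo file in one shot, which is convenient for repeating on all three hosts:

[root@k8s-master ~]# cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF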

10. Configure time synchronization

Install ntpdate command

[root@k8s-master ~]# yum install -y ntpdate

Synchronize network time

[root@k8s-master ~]# ntpdate cn.pool.ntp.org

Make time synchronization a scheduled task

[root@k8s-master ~]# crontab -e

0 */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org      # sync once an hour

Restart the crond service

[root@k8s-master ~]# service crond restart

2. Install the docker service (on all three k8s hosts)

1. Install docker-ce

Install docker

[root@k8s-master ~]# yum install docker-ce-20.10.6 -y

Start docker now and enable it at boot

[root@k8s-master ~]# systemctl start docker && systemctl enable docker.service

2. Configure the docker image accelerator and cgroup driver

Add image accelerator
[root@k8s-master ~]# vim /etc/docker/daemon.json

{
 "registry-mirrors":["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}

Set docker's cgroup driver to systemd (docker's default is cgroupfs). The kubelet is configured to use systemd in the kubeadm.yaml below, and the two must be consistent.

Refresh systemd and restart docker
[root@k8s-master ~]# systemctl daemon-reload  && systemctl restart docker
Check docker running status
[root@k8s-master ~]# systemctl status docker
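To confirm the cgroup driver change took effect (a quick check, not in the original):

[root@k8s-master ~]# docker info | grep -i 'cgroup driver'         # should print: Cgroup Driver: systemd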


3. Install the software packages required to initialize k8s (on all three k8s hosts)

1. Install kubeadm, kubelet, kubectl

[root@k8s-master ~]# yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6

2. Start kubelet automatically after booting

[root@k8s-master ~]# systemctl enable kubelet

Package   Purpose
kubeadm   Tool used to initialize (bootstrap) the k8s cluster
kubelet   Installed on every node in the cluster; starts and manages Pods
kubectl   CLI used to deploy and manage applications and to create, delete, update, and view all kinds of resources
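A quick version check (my addition) confirms all three components landed at the expected version:

[root@k8s-master ~]# kubeadm version -o short              # v1.20.6
[root@k8s-master ~]# kubectl version --client --short      # Client Version: v1.20.6
[root@k8s-master ~]# kubelet --version                     # Kubernetes v1.20.6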

4. Import the offline image package to speed up subsequent image pulls

1. Upload the offline image package required to initialize the k8s cluster to three k8s hosts

The image tarball is uploaded to the home directory of the k8s master via xftp.

[root@k8s-master ~]# ls
anaconda-ks.cfg  k8simage-1-20-6.tar.gz

Copy it with scp to the other two hosts

[root@k8s-master ~]# scp k8simage-1-20-6.tar.gz 192.168.31.181:/root
[root@k8s-master ~]# scp k8simage-1-20-6.tar.gz 192.168.31.182:/root

2. Import the images with docker load (on all three hosts)

[root@k8s-master ~]# docker load -i k8simage-1-20-6.tar.gz
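After the import, confirm the images are present (the list should include the registry.aliyuncs.com/google_containers/* control-plane images plus the calico images):

[root@k8s-master ~]# docker images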

5. Kubeadm initializes the k8s cluster (only done on the master)

1. Generate kubeadm.yaml file

[root@k8s-master ~]# kubeadm config print init-defaults > kubeadm.yaml

2. Modify the relevant configuration of the kubeadm.yaml file

[root@k8s-master ~]# vim kubeadm.yaml 

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.31.180
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd


Special reminder: imageRepository: registry.aliyuncs.com/google_containers makes sure the images are not pulled from foreign sites. kubeadm pulls from k8s.gcr.io by default, so the repository is pointed at registry.aliyuncs.com/google_containers manually instead. Since we already imported the offline images, the local copies are used first anyway.

mode: ipvs sets kube-proxy's proxy mode to ipvs. If it is not specified, kube-proxy defaults to iptables mode, which is less efficient, so enabling ipvs is recommended in production.
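Once the cluster is initialized, you can verify that kube-proxy really is in ipvs mode (a check I'm adding; it relies on the ipvsadm package installed earlier and on kube-proxy's metrics endpoint on its default port):

[root@k8s-master ~]# curl -s 127.0.0.1:10249/proxyMode     # should print: ipvs
[root@k8s-master ~]# ipvsadm -Ln                           # should list the ipvs virtual servers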

3. Initialize k8s based on the kubeadm.yaml file

[root@k8s-master ~]# kubeadm init --config=kubeadm.yaml --ignore-preflight-errors=SystemVerification

On success, kubeadm init prints the commands for configuring kubectl along with a kubeadm join command for adding worker nodes.

4. Configure the kubectl config file. This in effect authorizes kubectl: the kubectl command uses this certificate to manage the k8s cluster.

[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
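Alternatively, for the root user it is enough to point KUBECONFIG at the admin config (kubeadm's own output suggests this variant):

[root@k8s-master ~]# export KUBECONFIG=/etc/kubernetes/admin.conf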

View node information

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES                  AGE    VERSION
k8s-master   NotReady   control-plane,master   177m   v1.20.6

Only the master is listed, because the other two nodes have not yet joined the cluster.

6. Scale out the cluster: add the two worker nodes

Generate the join command on the master

[root@k8s-master ~]# kubeadm token create --print-join-command
kubeadm join 192.168.31.180:6443 --token bnokqt.324m0mxrhv7z7je3     --discovery-token-ca-cert-hash sha256:7013fd8145494ded4993d3d1d96cde925fe06e14c953841fcdb85fd33a315298

Join k8s-node-1 (the default token from kubeadm.yaml is used here; the freshly created token above works just as well):

[root@k8s-node-1 ~]# kubeadm join 192.168.31.180:6443 --token abcdef.0123456789abcdef     --discovery-token-ca-cert-hash sha256:7013fd8145494ded4993d3d1d96cde925fe06e14c953841fcdb85fd33a315298

Join k8s-node-2:

[root@k8s-node-2 ~]# kubeadm join 192.168.31.180:6443 --token abcdef.0123456789abcdef     --discovery-token-ca-cert-hash sha256:7013fd8145494ded4993d3d1d96cde925fe06e14c953841fcdb85fd33a315298

View node information on the master

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES                  AGE     VERSION
k8s-master   NotReady   control-plane,master   3h4m    v1.20.6
k8s-node-1   NotReady   <none>                 5m4s    v1.20.6
k8s-node-2   NotReady   <none>                 4m55s   v1.20.6

Both nodes joined successfully! They show NotReady because the network plugin is not installed yet; section 7 takes care of that.

You can change the ROLES of k8s-node-1 and k8s-node-2 to worker:

[root@k8s-master ~]# kubectl label node k8s-node-1 node-role.kubernetes.io/worker=worker
node/k8s-node-1 labeled
[root@k8s-master ~]# kubectl label node k8s-node-2 node-role.kubernetes.io/worker=worker
node/k8s-node-2 labeled
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES                  AGE     VERSION
k8s-master   NotReady   control-plane,master   3h10m   v1.20.6
k8s-node-1   NotReady   worker                 10m     v1.20.6
k8s-node-2   NotReady   worker                 10m     v1.20.6

7. Install the Kubernetes network plugin Calico (on the master)

1. Upload the calico.yaml file to your home directory

[root@k8s-master ~]# ls
anaconda-ks.cfg  calico.yaml  k8simage-1-20-6.tar.gz  kubeadm.yaml

2. Apply the calico.yaml file to install the Calico network plugin

[root@k8s-master ~]# kubectl apply -f  calico.yaml

3. Check node status

[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES                  AGE     VERSION
k8s-master   Ready    control-plane,master   3h16m   v1.20.6
k8s-node-1   Ready    worker                 16m     v1.20.6
k8s-node-2   Ready    worker                 16m     v1.20.6


All nodes are now in the Ready state. Congratulations, the k8s one-master-two-workers architecture has been built successfully!
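As a final smoke test (my addition, not part of the original steps), check that the calico and coredns pods are Running and that in-cluster DNS resolves:

[root@k8s-master ~]# kubectl get pods -n kube-system -o wide       # calico-* and coredns-* pods should be Running
[root@k8s-master ~]# kubectl run dns-test --image=busybox:1.28 --restart=Never --rm -it -- nslookup kubernetes.default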
