Kubernetes (k8s) ultra-detailed installation steps

Table of contents

1. Environment settings

2. Basic environment configuration

(1) Host name configuration

1. Operate on the master virtual machine

2. Operate on the node1 virtual machine

3. Operate on the node2 virtual machine

(2) VMware network configuration

(3) Virtual machine network configuration

1. Operate on the master virtual machine

2. Operate on the node1 virtual machine

3. Operate on the node2 virtual machine

4. Internal testing of the virtual machine

 (4) Modify the hosts file (executed on the three nodes separately)

 (5) Configure SSH password-free login (executed on three nodes separately)

 (6) Turn off the firewall and SELINUX (executed on the three nodes separately)

(7) Close the swap partition (executed on the three nodes separately)

1. Temporarily close: swapoff -a

2. Permanently close: sed -i '/swap/d' /etc/fstab

3. Test: free shows all zeros in the Swap row

 (8) Modify kernel parameters (executed on three nodes separately)

 (9) Configure time synchronization

(10) Load the ip_vs module (executed on three nodes separately)

 (11) Configuration source

1. Execute separately on three nodes

(1) yum install lrzsz openssh-clients -y > /dev/null && wget -P /etc/yum.repos.d/ http://mirrors.aliyun.com/repo/Centos-7.repo && ls -l /etc/yum.repos.d/Centos-7.repo

 (2)  yum -y install yum-utils && yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo && yum -y install epel-release

 2. Operate on the master

(1)echo -e "[kubernetes]\nname=Kubernetes\nbaseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/\nenabled=1\ngpgcheck=0" > /etc/yum.repos.d/kubernetes.repo

(2) for i in 2 3;do scp /etc/yum.repos.d/kubernetes.repo  hd${i}.com:/etc/yum.repos.d/;done

 (12) Installation dependencies (executed on three nodes separately)

3. docker-ce

(1) Execute the installation on the three nodes separately

 (2) Configure warehouse acceleration

(3) Reload systemd and restart Docker (executed on the three nodes separately)

4. Install kubernetes

(1) Install initialization software (execute on three nodes separately)

(2) Start and view the status (executed on the three nodes separately)

 (3) Copy the k8s image (operated on the master)

 (4) Load the docker image of k8s

 (5) Initialize the cluster (operate on the master)

1. kubeadm init --kubernetes-version=1.20.6  --apiserver-advertise-address=192.168.115.11  --image-repository registry.aliyuncs.com/google_containers  --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=SystemVerification

 2. If it fails, roll back

3. Copy the token information after success

4. If you forget to copy the token information

5. Copy the configuration file

6. View node status

 (6) Add node to the cluster (operate on node1 and node2)

1. Remote copy token.txt

2. Execute the join command saved in token.txt; it reports success (operate on node1 and node2)

 3. Verify on the control node (master)

 (7) Define the node role (operated in the master)

 (8) Install network components

1. Copy calico.yaml to the master node

 2. Execute on master, hd1.com

 3. Verification: kubectl get node

 (9) Verify the cluster

1. View the cluster status

 2. Check the namespace

 3. Use the busybox image to verify network communication

(1) Copy busybox package to master

(2) Distribute the busybox image to the worker nodes and load it into Docker

(3) Start busybox and enter


1. Environment settings

Role      IP address           Host name

master    192.168.115.11/24    hd1.com

node1     192.168.115.12/24    hd2.com

node2     192.168.115.13/24    hd3.com

Note: give each host at least 4 GB of RAM and a two-core processor

2. Basic environment configuration

(1) Host name configuration

1. Operate on the master virtual machine

hostnamectl set-hostname hd1.com && bash

2. Operate on the node1 virtual machine

hostnamectl set-hostname hd2.com && bash

3. Operate on the node2 virtual machine

hostnamectl set-hostname hd3.com && bash

(2) VMware network configuration

Set the network adapter to NAT mode and configure VMware's NAT network to assign addresses from the 192.168.115.0/24 segment

(3) Virtual machine network configuration

1. Operate on the master virtual machine

systemctl stop NetworkManager

systemctl disable NetworkManager

vim  /etc/sysconfig/network-scripts/ifcfg-ens33

TYPE=Ethernet
BOOTPROTO=static
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.115.11
PREFIX=24
GATEWAY=192.168.115.2
DNS1=192.168.115.2

2. Operate on the node1 virtual machine

systemctl stop NetworkManager

systemctl disable NetworkManager

vim  /etc/sysconfig/network-scripts/ifcfg-ens33

TYPE=Ethernet
BOOTPROTO=static
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.115.12
PREFIX=24
GATEWAY=192.168.115.2
DNS1=192.168.115.2

3. Operate on the node2 virtual machine

systemctl stop NetworkManager

systemctl disable NetworkManager

vim /etc/sysconfig/network-scripts/ifcfg-ens33 #Manually configure the IP address

TYPE=Ethernet
BOOTPROTO=static
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.115.13
PREFIX=24
GATEWAY=192.168.115.2
DNS1=192.168.115.2

4. Internal testing of the virtual machine

ping www.baidu.com      #Need to test on all three nodes

 (4) Modify the hosts file (executed on the three nodes separately)

vim  /etc/hosts

192.168.115.11    hd1.com
192.168.115.12    hd2.com
192.168.115.13    hd3.com

Test: ping hd1.com && ping hd2.com && ping hd3.com         #It needs to be tested on all three nodes

 (5) Configure SSH password-free login (executed on three nodes separately)

ssh-keygen

for i in 1 2 3;do ssh-copy-id  hd${i}.com;done

 Test: ssh hd1.com ssh hd2.com ssh hd3.com                   #Need to test on all three nodes

 (6) Turn off the firewall and SELINUX (executed on the three nodes separately)

 systemctl stop firewalld && systemctl disable firewalld && setenforce 0 && sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
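Because the one-liner above rewrites /etc/selinux/config in place, the substitution can be rehearsed on a scratch copy first. A minimal sketch (the temp file and its contents merely stand in for the real config):

```shell
# Illustrative dry run: apply the same SELINUX substitution to a scratch
# copy before touching the real /etc/selinux/config
tmpcfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$tmpcfg"

# The same substitution the one-liner above applies to /etc/selinux/config
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' "$tmpcfg"

result=$(grep '^SELINUX=' "$tmpcfg")
echo "$result"
rm -f "$tmpcfg"
```

Note that setenforce 0 takes effect immediately, while the config-file change only applies after the next reboot.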

(7) Close the swap partition (executed on the three nodes separately)

1. Temporarily close: swapoff -a

2. Permanently close: sed -i '/swap/d' /etc/fstab

3. Test: run free; the Swap row should show all zeros
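The sed deletion above removes every fstab line containing "swap"; a throwaway copy shows its effect (the entries below are made-up examples, not your real /etc/fstab):

```shell
# Illustrative preview of sed -i '/swap/d' on a scratch fstab copy
tmpfstab=$(mktemp)
cat > "$tmpfstab" <<'EOF'
/dev/mapper/centos-root /     xfs  defaults 0 0
/dev/mapper/centos-swap swap  swap defaults 0 0
EOF

sed -i '/swap/d' "$tmpfstab"          # deletes every line containing "swap"

remaining=$(grep -c swap "$tmpfstab") # expect 0: no swap entry survives a reboot
rm -f "$tmpfstab"
```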

 (8) Modify kernel parameters (executed on three nodes separately)

modprobe br_netfilter && echo "modprobe br_netfilter" >> /etc/profile && echo -e "net.bridge.bridge-nf-call-ip6tables = 1\nnet.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n" > /etc/sysctl.d/k8s.conf && sysctl -p /etc/sysctl.d/k8s.conf
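The echo -e one-liner above writes a three-parameter file; rendering the same content to a scratch path (with printf, which behaves identically here) shows exactly what lands in /etc/sysctl.d/k8s.conf:

```shell
# Illustrative: write the same three parameters to a scratch file to see
# exactly what the one-liner puts in /etc/sysctl.d/k8s.conf
tmpconf=$(mktemp)
printf 'net.bridge.bridge-nf-call-ip6tables = 1\nnet.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' > "$tmpconf"

cat "$tmpconf"
params=$(grep -c ' = 1' "$tmpconf")   # all three parameters enabled
rm -f "$tmpconf"
```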

 (9) Configure time synchronization

for i in 1 2 3;do ssh hd${i}.com "yum install -y ntpdate && ntpdate cn.pool.ntp.org";done      #Quote the remote command, otherwise ntpdate runs only on the local host

(10) Load the ip_vs module (executed on three nodes separately)

#ipvs.modules can be found in resources

for i in 1 2 3;do scp /root/ipvs.modules hd${i}.com:/etc/sysconfig/modules/ && ssh hd${i}.com "bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs" && echo -e "-----------------------------\nhd${i}.com";done      #Quote the remote command so the module load and lsmod check run on each remote host

The trailing echo -e "-----------------------------\nhd${i}.com" prints a separator line and the remote host's name after each host's output, making the per-host results easy to tell apart.
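The guide assumes ipvs.modules comes from the author's resource bundle; a typical version of that script (an assumption, not the author's exact file) simply loads the IPVS scheduler modules plus conntrack:

```shell
#!/bin/bash
# Typical /etc/sysconfig/modules/ipvs.modules (illustrative -- the bundled
# file may differ; nf_conntrack_ipv4 applies to the CentOS 7 3.10 kernel)
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
```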

 (11) Configuration source

1. Execute separately on three nodes

(1) yum install lrzsz openssh-clients -y > /dev/null && wget -P /etc/yum.repos.d/ http://mirrors.aliyun.com/repo/Centos-7.repo && ls -l /etc/yum.repos.d/Centos-7.repo

 

 (2)  yum -y install yum-utils && yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo && yum -y install epel-release

 2. Operate on the master

(1)echo -e "[kubernetes]\nname=Kubernetes\nbaseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/\nenabled=1\ngpgcheck=0" > /etc/yum.repos.d/kubernetes.repo

(2) for i in 2 3;do scp /etc/yum.repos.d/kubernetes.repo  hd${i}.com:/etc/yum.repos.d/;done
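The repo file that step (1) generates on the master (and step (2) copies to the other nodes) can be previewed in a scratch directory before touching /etc/yum.repos.d/:

```shell
# Illustrative: render the kubernetes.repo content into a scratch directory
# to inspect it (the real destination is /etc/yum.repos.d/kubernetes.repo)
tmpdir=$(mktemp -d)
printf '[kubernetes]\nname=Kubernetes\nbaseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/\nenabled=1\ngpgcheck=0\n' > "$tmpdir/kubernetes.repo"

cat "$tmpdir/kubernetes.repo"
enabled=$(grep -c '^enabled=1' "$tmpdir/kubernetes.repo")   # repo is enabled
nocheck=$(grep -c '^gpgcheck=0' "$tmpdir/kubernetes.repo")  # GPG check is off
rm -rf "$tmpdir"
```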

 (12) Installation dependencies (executed on three nodes separately)

yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat conntrack ntpdate telnet ipvsadm

3. docker-ce

(1) Execute the installation on the three nodes separately

 yum install docker-ce-20.10.6 docker-ce-cli-20.10.6 containerd.io  -y && systemctl start docker && systemctl enable docker.service

 (2) Configure warehouse acceleration

mkdir -p /etc/docker && vim /etc/docker/daemon.json      #Docker reads its configuration from /etc/docker/daemon.json, not /root/

{
  "registry-mirrors":["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com","http://qtid6917.mirror.aliyuncs.com", "https://rncxm540.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
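A syntax error in daemon.json stops dockerd from starting, so it is worth validating the JSON before the restart in the next step. A small sketch, assuming python3 is available (the mirror list here is shortened for illustration):

```shell
# Illustrative: validate daemon.json syntax before restarting Docker --
# a malformed file prevents the daemon from starting
tmpjson=$(mktemp)
cat > "$tmpjson" <<'EOF'
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

ok=$(python3 -m json.tool "$tmpjson" > /dev/null 2>&1 && echo valid || echo invalid)
echo "daemon.json is $ok"
rm -f "$tmpjson"
```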

(3) Reload systemd and restart Docker (executed on the three nodes separately)

systemctl daemon-reload

systemctl restart docker

4. Install kubernetes

(1) Install initialization software (execute on three nodes separately)

 yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6

(2) Start and view the status (executed on the three nodes separately)

 systemctl start kubelet && systemctl enable kubelet && systemctl status kubelet

 (3) Copy the k8s image (operated on the master)

for i in 2 3;do scp k8simage-1-20-6.tar.gz hd${i}.com:/root/;done

 (4) Load the docker image of k8s

for i in 1 2 3;do ssh hd${i}.com docker load -i /root/k8simage-1-20-6.tar.gz;done

 Verification: docker images (executed separately on three nodes)

 (5) Initialize the cluster (operate on the master)

1. kubeadm init --kubernetes-version=1.20.6  --apiserver-advertise-address=192.168.115.11  --image-repository registry.aliyuncs.com/google_containers  --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=SystemVerification

 

 2. If it fails, roll back

kubeadm reset

3. Copy the token information (the kubeadm join command printed at the end of the output) after success

 vim  token.txt          #paste the join command here

4. If you forget to copy the token information

kubeadm token create --print-join-command

5. Copy the configuration file

mkdir -p $HOME/.kube

cp /etc/kubernetes/admin.conf $HOME/.kube/config

6. View node status

kubectl get nodes

 (6) Add node to the cluster (operate on node1 and node2)

1. Remote copy token.txt from the master to node1 and node2

 2. Execute the join command saved in token.txt; on success it reports that the node has joined the cluster (operate on node1, node2)

 3. Verify on the control node (master)

kubectl  get node

 (7) Define the node roles (operate on the master)

kubectl label node hd2.com node-role.kubernetes.io/worker=worker

kubectl label node hd3.com node-role.kubernetes.io/worker=worker

 Verification: kubectl get node

 (8) Install network components

1. Copy calico.yaml to the master node

 2. Execute on master, hd1.com

kubectl apply -f  calico.yaml

 3. Verification: kubectl get node

 (9) Verify the cluster

1. View the cluster status

kubectl get pod -n kube-system

 2. Check the namespace

kubectl get ns          #shorthand for kubectl get namespace

 3. Use the busybox image to verify network communication

(1) Copy busybox package to master

(2) Distribute the busybox image to the worker nodes and load it into Docker

for i in 2 3;do scp busybox-1-28.tar.gz  hd${i}.com:/root/ && ssh hd${i}.com  docker load -i /root/busybox-1-28.tar.gz;done

 

 

(3) Start busybox and enter

kubectl run busybox --image=busybox:1.28 --restart=Never --rm -it -- sh

Open a second terminal to watch the pod (kubectl get pod -o wide); it may be scheduled onto either worker node.

 

Origin blog.csdn.net/wuds_158/article/details/131568554