k8s installation process notes

Reference tutorials:
https://blog.csdn.net/weixin_57250068/article/details/123368726
https://blog.csdn.net/kity9420/article/details/123674582

1. System preparation

I used two virtual machines: one as the cloud node (master) and the other as the worker node (slave).

1.1 Modify host name and mapping (all nodes)

vi /etc/hostname

Usage of vi: press i to enter insert mode and Esc to leave it; type ":q!" to quit without saving, or ":wq" to save and quit.
Change the master host's name from ubuntu to master; do the same on the slave (naming it slave).
Check the IP address:

ifconfig

Modify the mapping by appending each node's IP address and hostname to the end of the file:

vi /etc/hosts

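After editing, the appended entries look like the following (the master IP matches the one used later in this guide; the slave IP is an assumed example, so substitute the addresses that ifconfig reports on your own machines):

192.168.62.129 master
192.168.62.130 slave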
Other references: /etc/hosts file configuration for IP address and hostname mapping

1.2 Configure password-free login (SSH)

First check whether the sshd service is installed:

ps -e | grep ssh

If the output shows only ssh-agent (and no sshd), the sshd service is not installed.
Install ssh:

sudo apt-get install openssh-server
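
To confirm the daemon is running after installation (on Ubuntu the service is named ssh), you can check its status:

sudo systemctl status ssh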

In the user's home directory, enter the .ssh folder

cd ~
cd .ssh

If the .ssh directory does not exist yet (if it does, skip this step), run the ssh localhost command and enter your password when prompted; the first login creates the directory.
Now that the .ssh directory exists, generate a key pair by entering:

ssh-keygen -t rsa

To distribute the public key to other hosts, enter

ssh-copy-id <hostname>    # from my master host, the target is slave

Verification: run ssh slave; the master host can now log in to the slave host without a password.
Reference link: solution for "ssh: connect to host localhost port 22: Connection refused"

1.3 Turn off swap memory

Execute the following command to disable swap:

sudo swapoff -a
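
Note that swapoff -a only disables swap until the next reboot. To make the change persistent, comment out the swap entry in /etc/fstab; one common way to do this (a sketch, so verify the resulting file afterwards):

sudo sed -i '/ swap / s/^/#/' /etc/fstab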

2 Install docker

Install

sudo apt install docker.io

Check whether the installation succeeded by querying the Docker version:

docker --version

Configuration

Kubernetes sets the cgroup driver (cgroupdriver) to "systemd" by default, while the Docker service's cgroup driver defaults to "cgroupfs". It is recommended to change Docker's to "systemd" so that it is consistent with Kubernetes. You can set this in Docker's /etc/docker/daemon.json file:

mkdir -p /etc/docker
cat <<EOF > /etc/docker/daemon.json
{
    "exec-opts": ["native.cgroupdriver=systemd"],
    "registry-mirrors": ["https://kn0t2bca.mirror.aliyuncs.com"]
}
EOF

Start the Docker service and enable it at boot:

systemctl start docker
systemctl enable docker
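
If Docker was already running when you wrote daemon.json, restart it so the change takes effect, and confirm the driver; a quick sanity check (assuming Docker is running):

systemctl restart docker
docker info | grep -i cgroup    # should report: Cgroup Driver: systemd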

3 Install kubeadm, kubectl and kubelet

After installing Docker, you can install the three main components of k8s: kubelet, kubeadm and kubectl. This step must be performed on both hosts. Briefly:
kubelet: the core node agent of k8s; it runs on every node and manages the pods there.
kubeadm: an integrated tool for quickly installing k8s; all k8s deployment on both the master and the worker is done with it.
kubectl: the k8s command-line tool; once deployment is complete, subsequent operations are performed with it.

apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -

Add the k8s apt package source

cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

Update source list

apt-get update

Install kubectl, kubeadm and kubelet

apt-get install kubelet=1.18.0-00 kubeadm=1.18.0-00 kubectl=1.18.0-00
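
Optionally, pin these packages so a routine apt upgrade does not move them to a newer version:

apt-mark hold kubelet kubeadm kubectl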

Download images

List the images that will be downloaded:

kubeadm config images list --kubernetes-version v1.18.0

Download the images:

kubeadm config images pull --image-repository=registry.aliyuncs.com/google_containers

Retag the images to their k8s.gcr.io names (this can be skipped):

docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.0 k8s.gcr.io/kube-apiserver:v1.18.0
docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.0 k8s.gcr.io/kube-controller-manager:v1.18.0
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.0 k8s.gcr.io/kube-scheduler:v1.18.0
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.18.0 k8s.gcr.io/kube-proxy:v1.18.0
docker tag registry.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2
docker tag registry.aliyuncs.com/google_containers/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
docker tag registry.aliyuncs.com/google_containers/coredns:1.6.7 k8s.gcr.io/coredns:1.6.7

Delete the now-redundant registry.aliyuncs.com tags (if you skipped the retagging step above, you must skip this step too):

docker rmi registry.aliyuncs.com/google_containers/etcd:3.4.3-0
docker rmi registry.aliyuncs.com/google_containers/coredns:1.6.7
docker rmi registry.aliyuncs.com/google_containers/pause:3.2
docker rmi registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.0
docker rmi registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.0
docker rmi registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.0
docker rmi registry.aliyuncs.com/google_containers/kube-proxy:v1.18.0

After the download completes, you can list the local images with docker images:

docker images


4 Configure the cloud (master) node

kubeadm init --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=192.168.0.0/16 --kubernetes-version=v1.18.0

Configuration item description:

  • --apiserver-advertise-address: the address the apiserver service is deployed on; if omitted, it defaults to the local machine (see the example after this list).
  • --image-repository: the registry to pull docker images from. During initialization kubeadm pulls many k8s components, so a domestic mirror must be specified here, otherwise the images cannot be pulled.
  • --pod-network-cidr: the pod network used by k8s. Because we will use flannel as the k8s network, fill in 192.168.0.0/16 here (my master's IP address is 192.168.XXX.XXX; use the ifconfig command to check yours).
  • --kubernetes-version: specifies the k8s version to deploy. It can usually be omitted, but if an installation error occurs during initialization because of a wrong version, use this parameter to specify it manually.
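
For example, to pin the apiserver explicitly to the master's address (a sketch using the master IP from this setup, 192.168.62.129; substitute your own):

kubeadm init --apiserver-advertise-address=192.168.62.129 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=192.168.0.0/16 --kubernetes-version=v1.18.0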

The installation takes a while. After it is successful, you will see the following message at the end.

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.62.129:6443 --token paua0a.yeygd3v81lovxn2a \
    --discovery-token-ca-cert-hash sha256:e146ea71fa645698e4da8370ab34ee52fefc5fb7d4fa082b168ec93465607356 

The last two lines of this message (the kubeadm join command) are very important; save them for joining worker nodes later.
Set up the kubectl configuration file as follows (here as root):

mkdir -p /root/.kube
cp -i /etc/kubernetes/admin.conf /root/.kube/config
chown root:root /root/.kube/config

If you forget the token, use the following command to look it up:

kubeadm token list
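
If the token has expired (the default lifetime is 24 hours), you can generate a fresh token together with the complete join command:

kubeadm token create --print-join-command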

5 Join the worker (slave) node

Note that kubeadm init is run only on the cloud (master) node; the worker joins with the kubeadm join command saved earlier. If a join attempt fails with problems such as a port already being occupied (for example, left over from an earlier attempt), reset the node first:

kubeadm reset

Join the cluster by entering, on the worker, the last two lines output by the cloud node:

kubeadm join 192.168.62.129:6443 --token paua0a.yeygd3v81lovxn2a \
    --discovery-token-ca-cert-hash sha256:e146ea71fa645698e4da8370ab34ee52fefc5fb7d4fa082b168ec93465607356 

After the setup is complete, you can use the kubectl command-line tool to access and operate the Kubernetes cluster.

# list the joined nodes
kubectl get nodes
# check the cluster component status
kubectl get cs
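
On a healthy v1.18 cluster, kubectl get cs typically reports something like:

NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}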


6 Deploy the Flannel network service

You can see that master and slave above are both in NotReady status. To make them Ready, install a Pod network plug-in (a CNI plug-in). There are many kinds of Pod network plug-ins; see the official documentation. This article deploys the Flannel plug-in so that services inside Pods can communicate with each other. Run the following on the master.

The first method is to install it directly

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Second, if the previous step gets stuck, download the file on a machine with Internet access, copy it over, and then execute the following (as root) in the folder containing the download:

kubectl apply -f kube-flannel.yml

Output:
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
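
One caveat: the stock kube-flannel.yml assumes the pod network 10.244.0.0/16, while this cluster was initialized with --pod-network-cidr=192.168.0.0/16; a mismatch here is a plausible cause of the flannel CrashLoopBackOff visible in the pod listing further below. If you hit that, edit the Network field of the kube-flannel-cfg ConfigMap in kube-flannel.yml to match your CIDR, roughly:

  net-conf.json: |
    {
      "Network": "192.168.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }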

Check the joined nodes again

kubectl get node
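
Once the flannel pods are Running, both nodes should report Ready, along these lines (ages illustrative):

NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   40m   v1.18.0
slave    Ready    <none>   25m   v1.18.0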


7 Install the Dashboard plug-in

All the operations above are performed on the command line, which gives no overall picture of the cluster and is not convenient. A visual interface improves on this: the Dashboard plug-in.

Kubernetes Dashboard is a common, web-based UI for Kubernetes clusters. It allows users to manage and troubleshoot applications running in the cluster, as well as manage the cluster itself. It needs to be installed on the master node.

As above, if you have direct Internet access, install it directly:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml

If you can't download it directly, fetch the file on a machine with Internet access first and then copy it to the machine where it will be installed.
Here we take one extra step: with the default configuration, the Dashboard can only be accessed from inside the cluster, i.e. only through a browser on the cluster's master host, which is too inconvenient. Therefore, change the Service to the NodePort type so the UI is exposed outside the cluster:

vi recommended.yaml    # or: gedit recommended.yaml

Add two lines of configuration to the kubernetes-dashboard Service:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort          # ---- add this line
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001     # ---- add this line
  selector:
    k8s-app: kubernetes-dashboard

Apply the modified file:

kubectl apply -f recommended.yaml

Check whether the Dashboard pods are present:

kubectl get pods --all-namespaces

In the bottom two lines of the output below, the Dashboard pods appear:

NAMESPACE              NAME                                         READY   STATUS              RESTARTS   AGE
kube-system            coredns-7ff77c879f-vh6qr                     0/1     ContainerCreating   0          36m
kube-system            coredns-7ff77c879f-xx4qr                     0/1     ContainerCreating   0          36m
kube-system            etcd-master                                  1/1     Running             0          36m
kube-system            kube-apiserver-master                        1/1     Running             0          36m
kube-system            kube-controller-manager-master               1/1     Running             0          36m
kube-system            kube-flannel-ds-szg68                        0/1     CrashLoopBackOff    6          6m48s
kube-system            kube-flannel-ds-v4fqm                        0/1     CrashLoopBackOff    6          6m48s
kube-system            kube-proxy-wcnn7                             1/1     Running             0          23m
kube-system            kube-proxy-zxm6d                             1/1     Running             0          36m
kube-system            kube-scheduler-master                        1/1     Running             0          36m
kubernetes-dashboard   dashboard-metrics-scraper-55bc59dffc-8rwvx   0/1     ContainerCreating   0          6m12s
kubernetes-dashboard   kubernetes-dashboard-5c6dff6c6f-s8hbz        0/1     ContainerCreating   0          6m12s
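
Once the Dashboard pods reach Running, you can confirm the NodePort exposure and open the UI in a browser at https://<master-ip>:30001 (expect a warning about the self-signed certificate). A quick check, using the master IP from this setup:

kubectl get svc -n kubernetes-dashboard    # PORT(S) should show 443:30001/TCP
curl -k https://192.168.62.129:30001       # should return the Dashboard's HTML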
