Docker (16)--Docker k8s--Kubernetes cluster deployment

1. Introduction and architecture of Kubernetes

1.1 Introduction to Kubernetes

  • While Docker has developed rapidly as an advanced container engine, Google has applied container technology internally for many years: its Borg system runs and manages thousands of containerized applications.

  • The Kubernetes project originated from Borg; it can be seen as a distillation of Borg's design ideas that also absorbs the experience and lessons learned from the Borg system.

  • Kubernetes abstracts computing resources at a higher level and delivers the final application service to users through a fine-grained combination of containers.

  • The benefits of Kubernetes:
    Resource management and error handling are hidden, so users only need to focus on application development.
    Services are highly available and highly reliable.
    Workloads can run on clusters composed of thousands of machines.

1.2 kubernetes design architecture

  • A Kubernetes cluster consists of the node agent kubelet and the Master components (apiserver, scheduler, etc.), all built on top of a distributed storage system.

  • Kubernetes is mainly composed of the following core components:

    • etcd: stores the state of the entire cluster
    • apiserver: provides the single entry point for resource operations, along with authentication, authorization, access control, API registration, and discovery mechanisms
    • controller manager: maintains the state of the cluster, handling fault detection, automatic scaling, rolling updates, etc.
    • scheduler: handles resource scheduling, placing Pods onto the appropriate machines according to the configured scheduling policy
    • kubelet: maintains the container life cycle and manages volumes (CVI) and networking (CNI)
    • Container runtime: manages images and the actual running of Pods and containers (CRI)
    • kube-proxy: provides in-cluster service discovery and load balancing for Services
  • In addition to the core components, there are some recommended Add-ons:

    • kube-dns: provides DNS services for the entire cluster
    • Ingress Controller: provides external network access for Services
    • Heapster: provides resource monitoring
    • Dashboard: provides a GUI
    • Federation: provides clusters spanning availability zones
    • Fluentd-elasticsearch: provides cluster log collection, storage, and querying
  • The design philosophy and functionality of Kubernetes form a layered architecture, similar to that of Linux:

  • Core layer: the core functionality of Kubernetes. It exposes APIs externally for building higher-level applications and provides a pluggable application execution environment internally.

  • Application layer: deployment (stateless applications, stateful applications, batch tasks, cluster applications, etc.) and routing (service discovery, DNS resolution, etc.)

  • Management layer: system measurement (such as infrastructure, container and network measurement), automation (such as automatic expansion, dynamic provision, etc.) and policy management (RBAC, Quota, PSP, NetworkPolicy, etc.)

  • Interface layer: kubectl command line tool, client SDK and cluster federation

  • Ecosystem: the broad ecosystem for container cluster management and scheduling above the interface layer, which can be divided into two categories

    • Outside Kubernetes: logs, monitoring, configuration management, CI, CD, Workflow, FaaS, OTS applications, ChatOps, etc.
    • Inside Kubernetes: CRI, CNI, CVI, image registry, Cloud Provider, configuration and management of the cluster itself, etc.

2. Environment cleanup

(Screenshots of the environment cleanup steps omitted.)

3. Kubernetes deployment

3.1 Deploy the Docker engine on all nodes

- Disable selinux and the iptables firewall on all nodes
	Deploy the docker engine on all nodes (done previously, so the screenshots suffice)
	# yum install -y docker-ce docker-ce-cli	
	# vim /etc/sysctl.d/docker.conf
		net.bridge.bridge-nf-call-ip6tables = 1
		net.bridge.bridge-nf-call-iptables = 1	
	# sysctl --system	
	# systemctl enable docker
	# systemctl start docker

-  Official docs: https://kubernetes.io/docs/setup/production-environment/container-runtimes/#docker
	# vim /etc/docker/daemon.json
		{
		  "exec-opts": ["native.cgroupdriver=systemd"],
		  "log-driver": "json-file",
		  "log-opts": {
		    "max-size": "100m"
		  },
		  "storage-driver": "overlay2",
		  "storage-opts": [
		    "overlay2.override_kernel_check=true"
		  ]
		}
	
	# mkdir -p /etc/systemd/system/docker.service.d
	
	# systemctl daemon-reload
	# systemctl restart docker
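Before running `systemctl restart docker`, it can be worth checking that the file just written is valid JSON and really selects the systemd cgroup driver, since a malformed daemon.json stops the daemon from starting. A minimal sketch, which validates a throwaway copy of the content above; on a real node you would point `cfg` at /etc/docker/daemon.json instead:

```shell
# Sketch: sanity-check a daemon.json before restarting Docker.
# Here a temporary copy is checked; on a real node use cfg=/etc/docker/daemon.json.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2",
  "storage-opts": [ "overlay2.override_kernel_check=true" ]
}
EOF
python3 -m json.tool "$cfg" >/dev/null && echo "valid JSON"
grep -q 'native.cgroupdriver=systemd' "$cfg" && echo "systemd cgroup driver set"
rm -f "$cfg"
```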

server2 configuration (screenshots omitted)

server3 has the same configuration as server2 (screenshots omitted)

server4 configuration (screenshots omitted)

3.2 Disable the swap partition

- Disable the swap partition:
	# swapoff -a
	Comment out the swap entry in /etc/fstab

server2

[root@server3 ~]# swapon -s 
Filename				Type		Size	Used	Priority
/dev/dm-1                              	partition	1572860	0	-2
[root@server3 ~]# swapoff -a 
[root@server3 ~]# vim /etc/fstab 
[root@server3 ~]# cat /etc/fstab 
#
# /etc/fstab
# Created by anaconda on Thu Dec 31 18:38:54 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/rhel-root   /                       xfs     defaults        0 0
UUID=a31aabff-3bbf-4e8a-90eb-f1b494de384f /boot                   xfs     defaults        0 0
#/dev/mapper/rhel-swap   swap                    swap    defaults        0 0
[root@server3 ~]# swapon -s 
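The manual vim edit of /etc/fstab shown above can also be scripted. The sed sketch below comments out any non-comment line that mounts a swap device; it runs against a temporary copy here, so on a real node you would point it at /etc/fstab (after taking a backup):

```shell
# Sketch: comment out the swap entry in fstab automatically
# (the same edit the walkthrough performs by hand in vim).
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/mapper/rhel-root   /     xfs     defaults        0 0
/dev/mapper/rhel-swap   swap  swap    defaults        0 0
EOF
# prefix '#' to any non-comment line whose mount point / type is swap
sed -i '/^[^#].*[[:space:]]swap[[:space:]]/s/^/#/' "$fstab"
cat "$fstab"
rm -f "$fstab"
```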

The same is done on server3 and server4 (screenshots omitted).

3.3 Install and deploy kubeadm

[root@server2 yum.repos.d]# ls
CentOS-Base.repo  docker.repo  redhat.repo  rhel7.6.repo
[root@server2 yum.repos.d]# cat CentOS-Base.repo | grep enabled  ## disable unused repos first; cross-repo dependencies can cause conflicts

[root@server2 yum.repos.d]# vim k8s.repo
[root@server2 yum.repos.d]# cat k8s.repo 
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
[root@server2 yum.repos.d]# yum repolist 

[root@server2 yum.repos.d]# yum install -y kubelet kubeadm kubectl   ## install the k8s packages
[root@server2 yum.repos.d]# systemctl enable --now kubelet.service   ## enable kubelet at boot

## configure the repository for server3 and server4
[root@server2 yum.repos.d]# scp k8s.repo server3:/etc/yum.repos.d/
[root@server2 yum.repos.d]# scp k8s.repo server4:/etc/yum.repos.d/

## install and start k8s on server3 and server4
[root@server3 yum.repos.d]# yum install -y kubelet kubeadm kubectl  
[root@server3 ~]# systemctl enable --now kubelet.service
[root@server4 yum.repos.d]# yum install -y kubelet kubeadm kubectl
[root@server4 ~]# systemctl enable --now kubelet.service

Configure the repository on server3 and server4, then install and start kubelet (screenshots omitted).

3.4 View the default configuration information

[root@server2 ~]# kubeadm config print init-defaults


3.5 Change the image repository

- By default, the component images are downloaded from k8s.gcr.io, which is only reachable through a proxy, so the image repository needs to be changed:
- 	# kubeadm config images list --image-repository registry.aliyuncs.com/google_containers	// list the required images
	# kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers	// pull the images
	# kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers  --kubernetes-version=v1.20.2   ## a specific kubernetes version can also be given

	docker images | grep registry.aliyuncs.com

server2 is the management (control-plane) node (screenshots omitted).

3.6 Initialize the cluster

- # kubeadm init --pod-network-cidr=10.244.0.0/16 --image-repository registry.aliyuncs.com/google_containers		// initialize the cluster

--pod-network-cidr=10.244.0.0/16	// required when using the flannel network add-on
--kubernetes-version 	// specify the k8s version to install
[root@server2 ~]# kubeadm init --pod-network-cidr=10.244.0.0/16 --image-repository registry.aliyuncs.com/google_containers

## the generated command for joining the cluster
kubeadm join 172.25.13.2:6443 --token ca3esn.f4h8djsl6cdvhw19 \
    --discovery-token-ca-cert-hash sha256:9e80ead05c9fb466665e6c89023771fdabcddc8d8ac2b8c3b7d13252750e4fdd 


3.6 Master view status:

# kubectl get cs                    ## check component status
# kubectl get node                  ## check whether the nodes are ready
# kubectl get pod -n kube-system    ## check the status of the system pods
# kubectl get ns                    ## list namespaces
# If docker is removed on the master, the cluster must be re-initialized!!

3.7 Install the flannel network add-on (configure kubectl first, then install the network add-on)

-  flannel network add-on
- 	Download page: https://github.com/coreos/flannel
- 	Install with the following command (it can be fetched directly from the internet; a locally downloaded copy is used here):
-  		$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

- Other network add-ons:
	https://kubernetes.io/zh/docs/concepts/cluster-administration/addons/
[root@server2 ~]# ll kube-flannel.yml 
-rwxr-xr-x 1 root root 14366 Jan 30 15:56 kube-flannel.yml
[root@server2 ~]# kubectl  apply -f kube-flannel.yml



3.7 Configure kubectl (before configuring the network)

- # useradd kubeadm

	# vim /etc/sudoers
	kubeadm  ALL=(ALL)       NOPASSWD: ALL
	
	# mkdir -p $HOME/.kube
	# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@server2 ~]# mkdir -p $HOME/.kube
[root@server2 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

[root@server2 ~]# kubectl get pod --namespace kube-system
[root@server2 ~]# kubectl get node
NAME      STATUS   ROLES                  AGE   VERSION
server2   Ready    control-plane,master   13m   v1.20.2


3.8 Configure kubectl command completion function:

[root@server2 ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc


3.9 Package the images required by the worker nodes

[root@server2 ~]# docker images
[root@server2 ~]# docker save quay.io/coreos/flannel:v0.12.0-amd64 registry.aliyuncs.com/google_containers/pause:3.2 registry.aliyuncs.com/google_containers/coredns:1.7.0 registry.aliyuncs.com/google_containers/kube-proxy:v1.20.2 > node.tar
[root@server2 ~]# ll node.tar


##send the node.tar archive to nodes server3 and server4
[root@server2 ~]# scp node.tar server3:
[root@server2 ~]# scp node.tar server4:

[root@server3 ~]# docker load  -i node.tar 
[root@server4 ~]# docker load  -i node.tar 

Operation on server2

Node server3 and server4 operation

3.10 Node expansion

Configure node server3 and server4

[root@server3 ~]# kubeadm join 172.25.13.2:6443 --token ca3esn.f4h8djsl6cdvhw19 \
>     --discovery-token-ca-cert-hash sha256:9e80ead05c9fb466665e6c89023771fdabcddc8d8ac2b8c3b7d13252750e4fdd        ## run the join command generated when server2 initialized the cluster


[root@server4 ~]# kubeadm join 172.25.13.2:6443 --token ca3esn.f4h8djsl6cdvhw19 \
>     --discovery-token-ca-cert-hash sha256:9e80ead05c9fb466665e6c89023771fdabcddc8d8ac2b8c3b7d13252750e4fdd 
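Note that the token printed by `kubeadm init` expires after 24 hours by default; on the master, `kubeadm token create --print-join-command` prints a fresh join command. The `--discovery-token-ca-cert-hash` value can also be recomputed from the cluster CA (/etc/kubernetes/pki/ca.crt) with the openssl pipeline described in the kubeadm documentation. A sketch of that pipeline, demonstrated on a throwaway self-signed certificate so it only needs openssl:

```shell
# Sketch: recompute a kubeadm discovery-token-ca-cert-hash from a CA cert.
# On the master the real CA is /etc/kubernetes/pki/ca.crt; a throwaway
# self-signed cert is generated here just to show the pipeline.
key=$(mktemp); crt=$(mktemp)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$key" -out "$crt" \
  -subj "/CN=kubernetes" -days 1 2>/dev/null
# hash the DER-encoded public key of the cert (prints 64 hex chars)
openssl x509 -pubkey -in "$crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
rm -f "$key" "$crt"
```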


3.11 kubectl command guide:

https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands

Origin blog.csdn.net/qwerty1372431588/article/details/113429128