Kubernetes - Detailed explanation of Kubernetes; installation and deployment (1)

1. Kubernetes

The word Kubernetes is derived from Greek and means "helmsman" or "pilot".

Kubernetes, also known as K8s (the 8 stands for the eight letters "ubernete" between the K and the s), is a container orchestration engine open sourced by Google in 2014 and one of the most important projects of the CNCF (Cloud Native Computing Foundation)

Kubernetes is used to automate the deployment, planning, scaling, and management of containerized applications. It groups the containers that make up an application into logical units for easy management and discovery, and is used to manage containerized applications across multiple hosts on a cloud platform.

The goal of Kubernetes is to make deploying containerized applications simple and efficient; many details no longer require operations personnel to perform complex manual configuration and handling

Kubernetes official website:

Kubernetes

GitHub address:

GitHub - kubernetes/kubernetes: Production-Grade Container Scheduling and Management

Kubernetes Chinese documentation:

What is Kubernetes? | Kubernetes

Kubernetes is developed in Go (an open source programming language released by Google in 2009)

2. Kubernetes architecture

(1) Master

The Master is the k8s cluster control node; it schedules and manages the cluster and accepts requests from users outside the cluster to operate on it

The Master Node consists of the API Server, Scheduler, ClusterState Store (etcd database) and Controller Manager Server

1. Cluster API: kube-apiserver

kube-apiserver provides the sole entry point for resource operations, and provides the authentication, authorization, access control, and API registration and discovery mechanisms.

When you need to interact with a Kubernetes cluster, you go through the API. The Kubernetes API is a front-end to the Kubernetes Control Plane for handling internal and external requests. The API server determines whether the request is valid, and if so, processes it. Users access the API through REST calls, the kubectl command-line interface, or other command-line tools such as kubeadm.
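
As a minimal illustration of the two access paths described above (assuming kubectl is already configured against a cluster, as set up later in this post), the same list of Pods can be requested through kubectl or through the REST API that kubectl wraps:

# kubectl translates this into a REST call to kube-apiserver
kubectl get pods -n kube-system

# the same request sent to the REST API directly, reusing kubectl's credentials
kubectl get --raw /api/v1/namespaces/kube-system/pods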

2. Cluster scheduling: kube-scheduler

kube-scheduler is responsible for resource scheduling; it schedules Pods onto the appropriate machines according to predetermined scheduling policies

The scheduler takes into account the pod's resource requirements (such as CPU or memory) as well as the health of the cluster. It then schedules the pods to the appropriate compute nodes.
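
A minimal sketch of what "resource requirements" means in practice (the file name, Pod name and values are illustrative, not from the original post): a Pod that declares CPU and memory requests, which kube-scheduler uses when choosing a node:

cat > pod-with-requests.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-requests-demo
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      requests:
        cpu: "250m"       # scheduler only places the Pod on a node with 0.25 CPU free
        memory: "128Mi"   # and at least 128 MiB of allocatable memory
EOF
kubectl apply -f pod-with-requests.yaml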

3. Cluster controller: kube-controller-manager

kube-controller-manager is responsible for maintaining the state of the cluster, handling things such as fault detection, automatic scaling, and rolling updates.

The controllers are what actually run the cluster, and the Kubernetes controller manager rolls several controller functions into one. Controllers query the scheduler and make sure the correct number of Pods is running; if a Pod goes down, another controller notices and responds. Controllers connect Services to Pods so that requests reach the correct endpoints, and there are also controllers for creating accounts and API access tokens.
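
As a small, hedged illustration of this self-healing behaviour (the deployment name "demo" is made up for this example): if the Pods of a Deployment are deleted, the controller immediately recreates them to match the desired replica count:

kubectl create deployment demo --image=nginx   # one replica by default
kubectl scale deployment demo --replicas=3     # desired state: 3 Pods
kubectl delete pod -l app=demo                 # delete the running Pods
kubectl get pods -l app=demo                   # the controller has already created replacements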

4. Key-value storage database: etcd

etcd saves the state of the entire cluster

The application's configuration data and information about the state of the cluster live in etcd, a key-value database. etcd is distributed and fault-tolerant by design and is considered the ultimate source of truth for the cluster
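
As a hedged sketch (not from the original post): on a kubeadm cluster the etcd certificates live under /etc/kubernetes/pki/etcd and the etcd Pod is usually named etcd-<master hostname> (here assumed to be etcd-k8smaster), so the keys Kubernetes stores can be listed with etcdctl (API v3 is the default in etcd 3.4+):

kubectl -n kube-system exec etcd-k8smaster -- etcdctl \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key \
    get /registry --prefix --keys-only | head -20    # first 20 keys, e.g. /registry/pods/...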

(2) Nodes

The cluster's worker nodes, which run the user's business application containers

Nodes are also called Worker Nodes; each one includes a kubelet, kube-proxy and Pods (container runtime)

1、Pod

Pod is the smallest and simplest unit in the Kubernetes object model

It represents a single instance of an application. Each Pod consists of a container (or a group of tightly coupled containers) plus several options that control how the containers run. Pods can connect to persistent storage to run stateful applications
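
A few standard kubectl commands for inspecting Pods once the cluster built below is running (<pod-name> is a placeholder for an actual Pod name):

kubectl get pods -o wide            # which node each Pod landed on, plus its Pod IP
kubectl describe pod <pod-name>     # events, container states, mounted volumes
kubectl logs <pod-name>             # stdout/stderr of the (first) container in the Pod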

2、Container Runtime Engine

The container runtime is responsible for image management and for actually running Pods and containers (CRI)

To run containers, each Node has a container runtime engine, such as Docker. Kubernetes also supports other Open Container Initiative (OCI)-compliant runtimes such as rkt and CRI-O; in general, Kubernetes supports most runtimes that implement the CRI interface

3、kubelet

The kubelet is responsible for maintaining the container life cycle, and also for managing volumes (CVI) and the network (CNI)

Each compute node contains a kubelet, a small agent that communicates with the control plane. The kubelet ensures that containers are running inside a Pod; when the control plane needs to perform an operation on the node, the kubelet carries it out
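
The kubelet runs as a systemd service on each node rather than as a Pod, so it can be checked with the usual systemd tools; this is handy later if a node stays NotReady:

systemctl status kubelet      # is the kubelet running?
journalctl -u kubelet -f      # follow the kubelet logs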

4、kube-proxy

kube-proxy is responsible for providing service discovery and load balancing within the cluster for Service

Each compute node also contains kube-proxy, a network proxy for Kubernetes network services. kube-proxy handles network communication inside and outside the cluster, relying on the operating system's packet filtering layer or forwarding the traffic itself
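
kube-proxy runs as a DaemonSet, one Pod per node. A hedged way to see it and the rules it programs (the KUBE-SERVICES iptables chain exists when kube-proxy uses its default iptables mode; under IPVS mode the rules are visible with ipvsadm instead):

kubectl -n kube-system get pods -l k8s-app=kube-proxy -o wide   # one kube-proxy Pod per node
iptables -t nat -L KUBE-SERVICES -n | head                      # Service rules programmed by kube-proxy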

3. Kubeadm deploys Kubernetes

(1) How to build the Kubernetes environment

1、minikube

Minikube is a tool that can run Kubernetes locally. Minikube can run a single-node Kubernetes cluster on personal computers (including Windows, macOS and Linux PCs) so that you can try out Kubernetes or do daily development work.

Hello Minikube | Kubernetes

2、kind

kind is a tool similar to minikube that lets you run Kubernetes on your local computer. It requires Docker to be installed and configured.

kind

3、kubeadm

Kubeadm is a K8s deployment tool that provides the two commands kubeadm init and kubeadm join to quickly deploy a Kubernetes cluster;

Official address:

Kubeadm | Kubernetes

Installing kubeadm | Kubernetes

Install kubeadm | Kubernetes

4. Binary package

Download the binary packages of a released version from GitHub, deploy and install each component manually, and assemble a Kubernetes cluster. The steps are cumbersome, but you gain a clearer understanding of each component.

5. yum installation

Install each Kubernetes component through yum to form a Kubernetes cluster; however, the k8s version in the yum source is relatively old, so this method is rarely used

6. Third-party tools

Some experienced users have packaged the installation into third-party tools, which can be used to set up a k8s environment;

7. Paid cloud services

Buy a managed Kubernetes offering directly from a public cloud platform such as Alibaba Cloud

(2) Kubeadm deploys Kubernetes

kubeadm is a tool launched by the official community for rapidly deploying Kubernetes clusters. A cluster can be deployed with two commands:

1. Create a Master node: kubeadm init

2. Add a Node to the Master's cluster: kubeadm join <IP and port of the Master node>

(3) Kubernetes deployment environment requirements

(1) One or more machines, operating system CentOS 7.x-86_x64

(2) Hardware configuration: at least 2 GB of memory and at least 2 CPU cores

(3) All machines in the cluster can communicate with each other

(4) Disable the swap partition

(5) Each machine in the cluster can access the external network, which is needed to pull images

If the environment does not meet these requirements, kubeadm will report an error during its preflight checks.

(4) Kubernetes deployment environment preparation

1. Turn off the firewall

systemctl stop firewalld
systemctl disable firewalld

2. Close selinux

sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary (current session only)

3. Turn off swap (k8s prohibits virtual memory to improve performance)

sed -ri 's/.*swap.*/#&/' /etc/fstab   # permanent
swapoff -a                            # temporary (current session only)

4. Add hosts to the master

cat >> /etc/hosts << EOF
192.168.132.129 k8smaster
192.168.132.130 k8snode
EOF

5. Set bridge parameters

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system  # apply the settings
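
If sysctl reports "No such file or directory" for the two bridge settings, the br_netfilter kernel module is most likely not loaded yet; a hedged fix is to load it and re-apply:

modprobe br_netfilter          # load the bridge netfilter module
lsmod | grep br_netfilter      # confirm it is loaded
sysctl --system                # re-apply /etc/sysctl.d/k8s.conf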

6. Synchronization time

yum install ntpdate -y    # install ntpdate first if the command is missing
ntpdate time.windows.com

(5) Kubernetes installation specific steps

1. Install Docker

Install Docker/kubeadm/kubelet/kubectl on all server nodes

The default container runtime of Kubernetes is Docker, so Docker needs to be installed first

We can install a specific version of Docker as follows

# update the Docker yum repo
yum install wget -y

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

# install a specific Docker version:
yum install docker-ce-20.10.0 -y

For installation and uninstallation, please refer to the latest documentation on the official website

Install Docker Engine on CentOS | Docker Documentation

Docker release notes

Docker Engine release notes | Docker Documentation

Configure an image accelerator to speed up image downloads

/etc/docker/daemon.json (create this file first if it does not exist). JSON does not allow comments, so note here: the address below is a public mirror; you can register your own Alibaba Cloud image accelerator and paste its address instead.

{
    "registry-mirrors": ["https://registry.docker-cn.com"]
}
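
After writing daemon.json, restart Docker so the mirror configuration takes effect, then verify it (the "Registry Mirrors" section of docker info should list the configured address):

systemctl daemon-reload
systemctl restart docker
docker info | grep -A 1 "Registry Mirrors"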

  

Then execute:

systemctl enable docker.service

Components to be installed: kubeadm, kubelet, kubectl

kubelet:

Runs on every node of the cluster and is responsible for starting Pods and containers

Kubeadm:

A tool for initializing the cluster

kubectl:

kubectl is the Kubernetes command-line tool; with kubectl you can deploy and manage applications, view all kinds of resources, and create, delete and update components

2. Add the Alibaba Cloud YUM source of k8s

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
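
To confirm that the repo is reachable and to see which versions it offers before pinning one (a quick, optional check):

yum makecache
yum list --showduplicates kubeadm | tail -5   # available kubeadm versions in the Aliyun mirror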

3. Install kubeadm, kubelet and kubectl

Available Kubernetes versions are listed here:

Patch Version | Kubernetes

yum install kubelet-1.19.4 kubeadm-1.19.4 kubectl-1.19.4 -y

Check if installed:

yum list installed | grep kubelet

yum list installed | grep kubeadm

yum list installed | grep kubectl

Check the installed version:

kubeadm version
kubectl version --client
kubelet --version

Execute after installation

systemctl enable kubelet.service

Restart CentOS: reboot 

Linux (CentOS) restart commands:

1. reboot: normal restart

2. shutdown -r now: restart immediately (root user)

3. shutdown -r 10: restart automatically after 10 minutes (root user)

4. shutdown -r 10:30: restart at 10:30 (root user)

4. Deploy the Kubernetes Master node

(1) Execute on the master machine

kubeadm init \
    --apiserver-advertise-address=192.168.133.129 \
    --control-plane-endpoint=master \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.19.4 \
    --service-cidr=10.96.0.0/12 \
    --pod-network-cidr=10.244.0.0/16

Parameter Description:

        --apiserver-advertise-address: the advertised cluster address, i.e. the master node's IP

        --control-plane-endpoint: the shared endpoint (here the master's hostname) for the control plane

        --image-repository: the default registry k8s.gcr.io is not reachable from China, so the Alibaba Cloud image repository is specified here

        --kubernetes-version: the K8s version, consistent with the packages installed above

        --service-cidr: the cluster-internal virtual Service network, the unified entry point for accessing Pods

        --pod-network-cidr: the Pod network; keep it consistent with the CNI network component YAML deployed below

Note:

The service-cidr must not overlap or conflict with the Pod CIDR or the local network. In general, choose a private address range that neither the local network nor the Pod CIDR uses; for example, if the Pod CIDR is 10.244.0.0/16, then 10.96.0.0/12 can be chosen as the service CIDR, since the two ranges do not overlap or conflict.

If kubeadm init reports an error:

One suggestion is to restart CentOS and then run the kubeadm init command above again.

In practice, restarting CentOS still produces the same error. Reading the message carefully, the manifest files kube-apiserver.yaml, kube-controller-manager.yaml, kube-scheduler.yaml and etcd.yaml already exist from the previous attempt, so just reset first:

kubeadm reset
mv /etc/containerd/config.toml /tmp/
systemctl restart containerd

Then re-run the kubeadm init command above. On success, the output ends with a kubeadm join command of the form:

kubeadm join IP:PORT --token XX \
    --discovery-token-ca-cert-hash sha256:XX

Save this command as a backup: to add a new node to the cluster later, run exactly the kubeadm join command that kubeadm init printed at the end, for example:

kubeadm join 192.168.133.129:6443 --token vx912w.g8wyei3zo8qem1mq \
    --discovery-token-ca-cert-hash sha256:5c08ca34fed1e3fcef41e581970d3046ac85db10ee4a3875efb0bca5c8f8b104

In the kubeadm init output, 1 indicates that initialization succeeded

2 indicates the follow-up operations the cluster still needs

3 is the command for adding a node, including the token (valid for 24 hours; obtain a new token after it expires)

To regenerate the join token:

kubeadm token create --print-join-command

(2) Execute on the master machine (configure environment variables)

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
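
Alternatively, for the root user the kubeadm documentation suggests simply pointing KUBECONFIG at the admin config instead of copying it:

export KUBECONFIG=/etc/kubernetes/admin.conf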

kubectl get nodes

If you encounter the following error:

sudo: /etc/sudoers is world writable
sudo: no valid sudoers sources found, quitting
sudo: unable to initialize policy plugin

This happens when /etc/sudoers has been made world-writable (for example by running chmod 777 /etc/sudoers because the file is read-only); the file can then be edited, but sudo stops working for all users.

We can enter the command to modify the permissions of sudoers:

chmod 0440 /etc/sudoers 
reboot

Then enter reboot to restart

 kubectl get nodes

5. Add the Node to the Kubernetes master (execute on the Node machine)

To add a new node to the cluster, run the kubeadm join command that kubeadm init printed at the end:

kubeadm join 192.168.133.129:6443 --token vx912w.g8wyei3zo8qem1mq \
    --discovery-token-ca-cert-hash sha256:5c08ca34fed1e3fcef41e581970d3046ac85db10ee4a3875efb0bca5c8f8b104

 kubectl get nodes 

(6) Deploy the network plug-in (run on the master node)

1. Download the kube-flannel.yml file

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

2. Apply the kube-flannel.yml file to create the flannel Pods

kubectl apply -f kube-flannel.yml

3. View node status: kubectl get nodes

The node will not become Ready immediately; it takes a little while.
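
A hedged way to watch the progress (depending on the manifest version, the flannel Pods land in either the kube-system or the kube-flannel namespace):

kubectl get pods -A | grep flannel   # wait until the flannel Pods are Running
kubectl get nodes -w                 # watch the nodes switch from NotReady to Ready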

Check the status to confirm success:

kubectl get cs
kubectl cluster-info

View the running system Pods (a Pod can run multiple containers):

kubectl get pods -n kube-system

At this point, our k8s environment is set up and ready!

4. Kubernetes deploys a "containerized application" (testing the Kubernetes cluster)

Create a pod in the cluster and verify that it is running normally

1. Deploy an Nginx in the Kubernetes cluster

1. Pull the nginx image from the Internet and create a deployment

kubectl create deployment nginx --image=nginx

2. Expose port 80 to the outside world

kubectl expose deployment nginx --port=80 --type=NodePort

3. Check the exposed NodePort

kubectl get pod,svc

4. Access address: http://NodeIP:Port
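
For example, assuming kubectl get svc shows the nginx Service mapped to NodePort 30080 (the actual port is assigned from the 30000-32767 range, so substitute the one you get), the page can be fetched from any node's IP:

curl http://192.168.133.129:30080    # should return the "Welcome to nginx!" page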

 2. Deploy a Tomcat in the Kubernetes cluster

kubectl create deployment tomcat --image=tomcat

kubectl expose deployment tomcat --port=8080 --type=NodePort

kubectl get pod,svc

Access address: http://NodeIP:Port
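
If you want to clean up the two test workloads afterwards, the Deployments and the Services created by kubectl expose can be deleted again:

kubectl delete service nginx tomcat       # remove the NodePort Services
kubectl delete deployment nginx tomcat    # remove the Deployments and their Pods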


Origin blog.csdn.net/MinggeQingchun/article/details/126347932