Kubernetes definition and deployment

Table of contents

1. Definition

2. Architecture

1. Control Plane Components

2. Node components

3. Deploy the cluster


1. Definition

Kubernetes (k8s) is a large-scale container orchestration system, a system of systems: it automates the deployment, scaling, and management of containerized applications.

k8s features:

  • Service Discovery and Load Balancing
    Kubernetes can expose containers using DNS names or their own IP addresses. If traffic to a container is high, Kubernetes can load balance and distribute the network traffic so that the deployment stays stable.
  • Storage orchestration
    Kubernetes allows you to automatically mount a storage system of your choice, such as local storage, public cloud providers, and more.
  • Automated rollouts and rollbacks
    You describe the desired state of your deployed containers, and Kubernetes changes the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers, and adopt all of their resources into the new containers. (A minimal sketch follows this list.)
  • Automatic bin packing
    You tell Kubernetes how much CPU and memory (RAM) each container needs. When containers declare resource requests, Kubernetes can make better decisions about where to place them and how to manage their resources.
  • Self-healing
    Kubernetes restarts containers that fail, replaces containers, kills containers that do not respond to user-defined health checks, and does not advertise them to clients until they are ready to serve.
  • Secret and configuration management
    Kubernetes lets you store and manage sensitive information such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding container images and without exposing secrets in your stack configuration.
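Several of these features boil down to declaring a desired state and letting Kubernetes reconcile it. Below is a minimal, illustrative sketch (the names and image are placeholders, not from this article): a Deployment that declares replicas and resource requests, plus a Service that exposes it under a DNS name.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app               # illustrative name
spec:
  replicas: 3                  # desired state: keep 3 Pods running; failed Pods are replaced (self-healing)
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: nginx:1.21      # changing this triggers a rolling update at a controlled rate
        resources:
          requests:            # declared CPU/memory feed the automatic bin-packing decisions
            cpu: "250m"
            memory: "128Mi"
---
apiVersion: v1
kind: Service
metadata:
  name: demo-app               # other Pods can reach it via the DNS name "demo-app"
spec:
  selector:
    app: demo-app
  ports:
  - port: 80
    targetPort: 80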

2. Architecture

1. Control Plane Components

Components of the control plane make global decisions about the cluster (such as scheduling), as well as detect and respond to cluster events.

Control plane components can run on any node in the cluster. For simplicity, however, setup scripts usually start all control plane components on the same machine and do not run user containers on that machine. For details, see Create a high-availability cluster using kubeadm | Kubernetes.

  • kube-apiserver

The API server is the component of the Kubernetes control plane that exposes the Kubernetes API. It is the front end of the Kubernetes control plane, handling requests from clients outside the cluster as well as from the other components.

The primary implementation of the Kubernetes API server is kube-apiserver. kube-apiserver is designed to scale horizontally, that is, it scales by deploying more instances. You can run several instances of kube-apiserver and balance traffic between those instances. For details, see kube-apiserver | Kubernetes.

  • etcd

etcd is a consistent and highly available key-value store, used as the backing database for all Kubernetes cluster data. For details, see Documentation versions | etcd.

  • kube-scheduler

The control plane component that watches for newly created Pods with no assigned node, and selects a node for them to run on.

Factors considered in scheduling decisions include resource requirements of individual Pods and collections of Pods, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, interference among workloads, and deadlines.
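As an illustrative sketch of two of those inputs, the Pod below (names and labels are hypothetical) declares resource requests and a node affinity rule that the scheduler must satisfy:

apiVersion: v1
kind: Pod
metadata:
  name: sched-demo
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:   # hard constraint
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype            # only nodes labeled disktype=ssd are candidates
            operator: In
            values:
            - ssd
  containers:
  - name: app
    image: nginx:1.21
    resources:
      requests:                      # the chosen node must have this much unreserved capacity
        cpu: "500m"
        memory: "256Mi"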

  • kube-controller-manager

The component that runs the controller processes on the master node.

Logically, each controller is a separate process, but to reduce complexity, they are all compiled into the same executable and run in one process. For details, please see Controller | Kubernetes

These controllers include:

  • Node Controller: responsible for noticing and responding when nodes fail
  • Job Controller: watches Job objects that represent one-off tasks, then creates Pods to run those tasks to completion
  • Endpoints Controller: populates the Endpoints objects (that is, joins Services and Pods)
  • Service Account & Token Controllers: create default accounts and API access tokens for new namespaces

  • cloud-controller-manager

A cloud controller manager refers to a control plane component that embeds the control logic for a particular cloud. Cloud Controller Manager allows you to link clusters to cloud provider APIs and separate components that interact with that cloud platform from components that only interact with your cluster.

The cloud-controller-manager only runs control loops that are specific to your cloud provider. If you are running Kubernetes in your own environment, or running a learning environment on your local machine, the deployed cluster does not need a cloud controller manager.

Similar to kube-controller-manager, the cloud-controller-manager combines several logically independent control loops into a single executable that you run as a single process. You can scale it horizontally (run more than one replica) to improve performance or increase fault tolerance.

The following controllers all contain dependencies on cloud platform drivers:

  • Node Controller: used to check the cloud provider to determine whether a node has been deleted in the cloud after it stops responding
  • Route Controller: used to set up routing in the underlying cloud infrastructure
  • Service Controller: used to create, update and delete cloud provider load balancers

2. Node components

Node components run on every node, maintaining running Pods and providing the Kubernetes runtime environment.

  • Kubelet

An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.

The kubelet receives a set of PodSpecs provided to it through various mechanisms, and ensures that the containers described in these PodSpecs are running and healthy. The kubelet will not manage containers not created by Kubernetes.
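One of those mechanisms is a static Pod manifest dropped into the kubelet's manifest directory; the example below is a sketch assuming the default kubeadm path /etc/kubernetes/manifests:

# /etc/kubernetes/manifests/static-web.yaml
# The kubelet on this node runs this Pod directly, without going through the API server
apiVersion: v1
kind: Pod
metadata:
  name: static-web
spec:
  containers:
  - name: web
    image: nginx:1.21
    ports:
    - containerPort: 80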

  • kube-proxy

kube-proxy is a network proxy running on each node in the cluster, which implements part of the Kubernetes service (Service) concept.

kube-proxy maintains network rules on nodes. These network rules allow network communication with pods from network sessions inside or outside the cluster.
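If kube-proxy is running in its default iptables mode, these rules can be inspected on any node (a quick check; output varies by cluster):

# list the NAT chain kube-proxy programs for Services (iptables mode)
sudo iptables -t nat -L KUBE-SERVICES -n | head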

3. Deploy the cluster

Basic environment requirements

1. A Linux system with 2 GB or more of memory and 2 or more CPU cores

2. Full network connectivity between all machines in the cluster (public or private networks both work)

3. Unique hostname, MAC address, and product_uuid on every node (see the check commands after this list)

4. Swap disabled. You must disable the swap partition for the kubelet to work properly.

5. Docker installed; see Docker basic knowledge notes (1)_cbzhunian's blog-CSDN blog
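The uniqueness and sizing requirements above can be verified on each node with standard commands, for example:

# compare MAC addresses and product_uuid across nodes - they must be unique
ip link                                  # or: ifconfig -a
sudo cat /sys/class/dmi/id/product_uuid

# confirm at least 2 CPU cores and enough memory
nproc
free -h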

1. Configure the basic environment

# Set each machine's own hostname
hostnamectl set-hostname k8s-master
hostnamectl set-hostname k8s-node1
hostnamectl set-hostname k8s-node2



# One-off (takes effect immediately, lost after reboot)
sudo setenforce 0
# Set SELinux to permissive mode (effectively disabling it), permanently
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Disable swap
# One-off
swapoff -a
# Permanently
sed -ri 's/.*swap.*/#&/' /etc/fstab

# tee prints the command output to the terminal and also writes it into k8s.conf
# The command below writes the content between the EOF markers into the k8s.conf file
# Allow iptables to see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF


sudo sysctl --system
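To confirm the module is loaded and the sysctls took effect, a quick sanity check:

# br_netfilter must be loaded and both values must be 1
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables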

2. Install kubelet, kubeadm, kubectl

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
   http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF


sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes

# Start kubelet and enable it to start at boot
sudo systemctl enable kubelet --now
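Optionally verify the installed versions; note that the kubelet will keep crash-looping until kubeadm init or kubeadm join runs, which is expected at this point:

kubeadm version
kubelet --version
kubectl version --client
systemctl status kubelet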

3. Download the images required for the control plane (the Docker service must be running)

sudo tee ./images.sh <<-'EOF'
#!/bin/bash
images=(
kube-apiserver:v1.20.9
kube-proxy:v1.20.9
kube-controller-manager:v1.20.9
kube-scheduler:v1.20.9
coredns:1.7.0
etcd:3.4.13-0
pause:3.2
)
for imageName in ${images[@]} ; do
docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
done
EOF
   
chmod +x ./images.sh && ./images.sh
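After the script finishes, the pulled control-plane images can be listed, for example:

# list the images pulled from the mirror repository
docker images | grep lfy_k8s_images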

4. Initialize the management node

# Add the hostname mappings on all machines; replace the IPs with your own hosts' IPs
echo "172.31.0.1  cluster-endpoint" >> /etc/hosts
echo "172.31.0.2  node1" >> /etc/hosts
echo "172.31.0.3  node2" >> /etc/hosts



# Initialize the master (control-plane) node
kubeadm init \
--apiserver-advertise-address=172.31.0.1 \
--control-plane-endpoint=cluster-endpoint \
--image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
--kubernetes-version v1.20.9 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=192.168.0.0/16

# The IP addresses and CIDR ranges above must not conflict with each other

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join cluster-endpoint:6443 --token hums8f.vyx71prsg74ofce7 \
    --discovery-token-ca-cert-hash sha256:a394d059dd51d68bb007a532a037d0a477131480ae95f75840c461e85e2c6ae3 \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join cluster-endpoint:6443 --token hums8f.vyx71prsg74ofce7 \
    --discovery-token-ca-cert-hash sha256:a394d059dd51d68bb007a532a037d0a477131480ae95f75840c461e85e2c6ae3

5. After initialization succeeds, follow the printed instructions

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf

6. Install network components

curl https://docs.projectcalico.org/v3.8/manifests/calico.yaml -O
kubectl apply -f calico.yaml
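Then wait for the Calico Pods to become Running and the nodes to report Ready (this can take a few minutes):

# watch the network add-on and other system Pods come up
kubectl get pods -n kube-system -w

# nodes switch from NotReady to Ready once networking works
kubectl get nodes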

7. Join the other nodes

kubeadm join cluster-endpoint:6443 --token hums8f.vyx71prsg74ofce7 \
    --discovery-token-ca-cert-hash sha256:a394d059dd51d68bb007a532a037d0a477131480ae95f75840c461e85e2c6ae3

# The join token is valid for 24 hours; a new join command can be generated with:
kubeadm token create --print-join-command
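On the master node, confirm that the workers have joined:

kubectl get nodes -o wide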

Install the visual interface (Kubernetes Dashboard)

1. Start the dashboard

curl https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml -O

kubectl apply -f recommended.yaml

2. Set the external access type: after the editor opens, type /type to find the type field and change ClusterIP to NodePort

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

3. Check the external port that was assigned

         Access: https://<any cluster node IP>:<the assigned port>
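The assigned port can be read from the Service, for example:

# the PORT(S) column shows 443:<NodePort>/TCP
kubectl get svc kubernetes-dashboard -n kubernetes-dashboard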

4. Set up an access account

# Create the access account; prepare a yaml file: vi dash.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

kubectl apply -f dash.yaml

5. Generate token

kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"

Reference: Basic Concepts of Kubernetes Yuque
