[Cloud native] Introduction to k8s components & architecture and deployment of the latest version of K8s

 

 

 


Table of contents

1 Cluster components

1.1 Control Plane Components

1.2 Node components

1.3 Addons

2 Cluster Architecture Details

3 Cluster construction [emphasis]

3.1 minikube

3.2 Bare metal installation


  • Cluster components

  • Core ideas

  • Cluster installation

1 Cluster components

  • Cluster: organizing multiple nodes that run the same software service to provide that service together is called a cluster of that software, e.g. a Redis cluster, an Elasticsearch cluster, a MongoDB cluster, etc.

  • k8s cluster: multiple nodes in two node roles: 1. master node / control plane node (the control node); 2. worker node (the working node, which runs Pods, i.e. application containers).

        When Kubernetes is deployed, you have a complete cluster. A group of worker machines, called nodes, run containerized applications. Every cluster has at least one worker node. Worker nodes host Pods, and Pods are the components that carry the application workload. The control plane manages the worker nodes and the Pods in the cluster.

 

1.1 Control Plane Components

        Control plane components make global decisions for the cluster, such as resource scheduling, and detect and respond to cluster events, for example starting a new Pod when a Deployment's replicas field is not satisfied.
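
        As a hedged illustration of the replicas example above (the name web and the nginx image are placeholders, not something from this article), the control plane keeps the observed state in line with the declared state of a Deployment like this:

$ cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
EOF
# If a Pod is deleted, the controller manager notices the mismatch and the
# scheduler places a replacement Pod, bringing the count back to 3:
$ kubectl delete pod -l app=web --wait=false
$ kubectl get pods -l app=web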

        Control plane components can run on any node in the cluster. However, for simplicity, the setup script will usually start all control plane components on the same machine and will not run user containers on this machine.

  • kube-apiserver

    The API server is a component of the Kubernetes control plane that exposes the Kubernetes API and handles incoming requests. The API server is the front end of the Kubernetes control plane. The primary implementation of the Kubernetes API server is kube-apiserver. kube-apiserver is designed to scale horizontally, that is, it scales by deploying more instances. You can run multiple instances of kube-apiserver and balance traffic between those instances.
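
    Every kubectl call is just an HTTPS request to kube-apiserver. A minimal sketch, assuming a working kubeconfig on the machine running kubectl:

# Ask the API server for its version and for the resource types it serves
$ kubectl get --raw /version
$ kubectl api-resources | head
# Probe the health endpoint exposed by the API server
$ kubectl get --raw /healthz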

  • etcd

    A consistent and highly available key-value store, used as the backing store for all Kubernetes cluster data.
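
    On the kubeadm cluster built later in this article, etcd runs as a static Pod on the master. A hedged health check using the etcdctl bundled in that Pod (the Pod name and certificate paths are the kubeadm defaults for this cluster; adjust them if yours differ):

$ kubectl -n kube-system exec etcd-k8s-node1 -- etcdctl \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key \
    endpoint health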

  • kube-scheduler

    kube-scheduler is the control plane component responsible for watching newly created Pods that have no node assigned and selecting a node for them to run on. Scheduling decisions take into account the resource requirements of individual Pods and of sets of Pods, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, interference between workloads, and deadlines.
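
    A sketch of the constraints kube-scheduler weighs (the Pod name scheduling-demo and the values are illustrative assumptions): resource requests plus a node selector narrow down the candidate nodes before a final node is chosen.

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: scheduling-demo
spec:
  nodeSelector:
    kubernetes.io/os: linux
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "250m"
        memory: "64Mi"
EOF
# The node the scheduler picked shows up in the NODE column
$ kubectl get pod scheduling-demo -o wide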

  • kube-controller-manager

    kube-controller-manager is the component of the control plane responsible for running the controller process. Logically, each controller is a separate process, but to reduce complexity, they are all compiled into the same executable and run in the same process.

    These controllers include:

    • Node Controller: responsible for noticing and responding when a node goes down

    • Job Controller: watches Job objects that represent one-off tasks, then creates Pods to run those tasks to completion (see the example after this list)

    • EndpointSlice controller: Populates an EndpointSlice object (to provide links between Services and Pods).

    • ServiceAccount Controller: creates a default service account (ServiceAccount) for new namespaces.
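
    A minimal sketch of the Job controller at work (the Job name pi and its image follow the common documentation example and are assumptions here): submitting the Job makes the controller create a Pod labelled job-name=pi and run it to completion.

$ cat <<EOF | kubectl apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(200)"]
      restartPolicy: Never
  backoffLimit: 4
EOF
# The Pod created by the Job controller carries the job-name label
$ kubectl get pods -l job-name=pi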

  • cloud-controller-manager (optional)

    A Kubernetes control plane component that embeds cloud-specific control logic. The cloud controller manager lets you connect your cluster to a cloud provider's API and separates the components that interact with that cloud platform from the components that only interact with your cluster. cloud-controller-manager runs only controllers that are specific to your cloud provider. So if you run Kubernetes in your own environment, or in a learning environment on your local machine, the cluster does not need a cloud controller manager. Similar to kube-controller-manager, cloud-controller-manager combines several logically independent control loops into a single binary that runs as a single process. You can scale it horizontally (run more than one replica) to improve performance or fault tolerance.

    The following controllers all contain dependencies on cloud platform drivers:

    • Node Controller: used to check the cloud provider to determine whether a node has been deleted in the cloud after it stops responding

    • Route Controller: used to set up routing in the underlying cloud infrastructure

    • Service Controller: used to create, update and delete cloud provider load balancers

1.2 Node components

Node components run on every node; they are responsible for maintaining running Pods and providing the Kubernetes runtime environment.

  • Kubelet

    The kubelet runs on every node in the cluster. It ensures that containers are running in Pods.

    The kubelet receives a set of PodSpecs provided to it through various mechanisms, and ensures that the containers described in these PodSpecs are running and healthy. The kubelet will not manage containers not created by Kubernetes.
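
    One way to watch the kubelet consume a PodSpec directly is a static Pod: drop a manifest into the kubelet's staticPodPath (/etc/kubernetes/manifests is the kubeadm default assumed here) and the kubelet starts it without going through the scheduler. A minimal sketch (the file name and image are placeholders):

$ cat <<EOF > /etc/kubernetes/manifests/static-web.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-web
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
EOF
# The kubelet reports it to the API server as a mirror Pod named static-web-<node name>
$ kubectl get pods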

  • kube-proxy

    kube-proxy is a network proxy running on each node (node) in the cluster, and realizes part of the Kubernetes service (Service) concept.

    kube-proxy maintains network rules on nodes; these rules allow network communication to Pods from network sessions inside or outside the cluster.

    If the operating system provides a packet filtering layer, kube-proxy implements the network rules through it; otherwise, kube-proxy forwards the traffic itself.
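
    A quick way to see what kube-proxy is doing on the kubeadm cluster built below (assuming the default iptables mode and the kubeadm-generated kube-proxy ConfigMap):

# The proxy mode is recorded in kube-proxy's ConfigMap; an empty value means the default (iptables)
$ kubectl -n kube-system get configmap kube-proxy -o yaml | grep "mode:"
# In iptables mode, the Service rules kube-proxy programs are visible on every node
$ iptables-save | grep KUBE-SVC | head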

  • Container Runtime

    A container runtime is the software responsible for running containers.

    Kubernetes supports many container runtimes, such as containerd, CRI-O, Docker, and any other implementation of the Kubernetes CRI (Container Runtime Interface).
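
    Since this article installs containerd plus cri-tools later on, the runtime can be inspected straight through the CRI socket with crictl; a hedged sketch (the socket path matches the containerd default used in the install steps below):

$ crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps
$ crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock images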

1.3 Addons

  • DNS

    Although the other addons are not strictly required, almost all Kubernetes clusters should have cluster DNS, since many examples depend on DNS services (a quick check follows this list).

  • Web Interface (Dashboard)

    Dashboard is a generic, web-based user interface for Kubernetes clusters. It enables users to manage and troubleshoot the applications running in the cluster as well as the cluster itself.

  • Container resource monitoring

    Container resource monitoring saves some common time-series metrics about containers into a centralized database and provides an interface to browse these data.

  • Cluster-level logs

    The cluster-level logging mechanism is responsible for saving container logs in a centralized log store that provides a search and browsing interface.
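
A quick way to confirm the cluster DNS addon mentioned above is working is to resolve the built-in kubernetes.default Service name from a throwaway Pod; a minimal sketch (the busybox image tag is just a common choice, not something this article prescribes):

$ kubectl run dns-test --image=busybox:1.36 --restart=Never --rm -it -- nslookup kubernetes.default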

2 Cluster Architecture Details

 

  • Summary

    • A Kubernetes cluster is composed of multiple nodes of two kinds: master/control nodes (Master Node), which belong to the management plane, and worker nodes (Worker Node), which belong to the execution plane. The complex coordination work is handled by the control nodes, while the worker nodes are responsible for providing a stable runtime interface and capability abstraction.

3 Cluster construction [emphasis]

  • minikube is just a Kubernetes cluster simulator: a cluster with only one node, used for testing only, with master and worker on the same node.

  • Bare-metal installation requires at least two machines (one master node and one worker node); you install the Kubernetes components yourself, and the configuration is a bit more involved. Disadvantages: fiddly configuration and a lack of ecosystem support, such as load balancers and cloud storage.

  • A managed Kubernetes service on a cloud platform can be built visually; a cluster can be created in just a few simple steps. Advantages: easy installation and a complete ecosystem; load balancers, storage, and so on are provided for you and take only simple operations to set up.

  • k3s

    Installation is simple and an install script does everything automatically (see the sketch after this list).

    Advantages: lightweight, low resource requirements, simple installation, complete ecosystem.
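
For reference, the k3s quick-start flow mentioned above typically looks like the following sketch (the install script URL is the official one; review the script before piping it to a shell):

# Download and run the official install script on the server node
$ curl -sfL https://get.k3s.io | sh -
# k3s bundles its own kubectl
$ k3s kubectl get nodes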

3.1 minikube
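
A minimal sketch of spinning up and tearing down a single-node test cluster with minikube, assuming minikube and a driver such as Docker are already installed:

# Start a one-node cluster using the Docker driver
$ minikube start --driver=docker
# kubectl is pointed at the new cluster via the generated kubeconfig context
$ kubectl get nodes
# Remove the test cluster when finished
$ minikube delete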

 

3.2 Bare metal installation

0 Environment preparation
  • Number of nodes: 3 virtual machines running CentOS 7

  • Hardware configuration: 2 GB or more of RAM, 2 or more CPUs, at least 30 GB of disk

  • Network requirements: all nodes can reach one another, and every node can access the external network

1 Cluster planning
  • k8s-node1:10.15.0.5

  • k8s-node2:10.15.0.6

  • k8s-node3:10.15.0.7

2 Set the hostname
$ hostnamectl set-hostname k8s-node1  
$ hostnamectl set-hostname k8s-node2
$ hostnamectl set-hostname k8s-node3
3 Synchronize the hosts file

If DNS does not support hostname resolution, you need to add the hostname-to-IP mappings to the /etc/hosts file on each machine:

cat >> /etc/hosts <<EOF
10.15.0.5 k8s-node1
10.15.0.6 k8s-node2
10.15.0.7 k8s-node3
EOF
4 Turn off the firewall
$ systemctl stop firewalld && systemctl disable firewalld
5 Close SELINUX

Note: do not run this step on the ARM architecture; running it there can cause the node to fail to obtain an IP address!

$ setenforce 0 && sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
6 Close the swap partition
$ swapoff -a && sed -ri 's/.*swap.*/#&/' /etc/fstab
7 Sync time
$ yum install ntpdate -y
$ ntpdate time.windows.com
8 Install containerd
# Install dependencies needed by yum-config-manager
$ yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the containerd yum repository
$ yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install containerd
$ yum install -y containerd.io cri-tools
# Configure containerd
$ cat > /etc/containerd/config.toml <<EOF
disabled_plugins = ["restart"]
[plugins.linux]
shim_debug = true
[plugins.cri.registry.mirrors."docker.io"]
endpoint = ["https://frz7i079.mirror.aliyuncs.com"]
[plugins.cri]
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.2"
EOF
# Start the containerd service and enable it at boot
$ systemctl enable containerd && systemctl start containerd && systemctl status containerd
# Configure the kernel modules required by containerd
$ cat > /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
# Configure kernel network parameters for k8s
$ cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Load the overlay and br_netfilter modules
$ modprobe overlay
$ modprobe br_netfilter
# Check whether the settings have taken effect
$ sysctl -p /etc/sysctl.d/k8s.conf
9 Add source
  • View the configured yum repositories

$ yum repolist
  • Add the Kubernetes repository (x86_64)

$ cat <<EOF > kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
$ mv kubernetes.repo /etc/yum.repos.d/
  • Add the Kubernetes repository (aarch64)

$ cat << EOF > kubernetes.repo 
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-aarch64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
$ mv kubernetes.repo /etc/yum.repos.d/
11 Install k8s
# Install the latest version
$ yum install -y kubelet kubeadm kubectl
# Or install a specific version
# yum install -y kubelet-1.26.0 kubectl-1.26.0 kubeadm-1.26.0
# Enable and start kubelet
$ sudo systemctl enable kubelet && sudo systemctl start kubelet && sudo systemctl status kubelet
12 Initialize the cluster
  • Note: initializing the k8s cluster only needs to be done on the master node!

$ kubeadm init \
--apiserver-advertise-address=<master node IP> \
--pod-network-cidr=10.244.0.0/16 \
--image-repository registry.aliyuncs.com/google_containers \
--cri-socket=unix:///var/run/containerd/containerd.sock
# Add new nodes: generate the join command on the master, then run it on each worker node
$ kubeadm token create --print-join-command --ttl=0
$ kubeadm join 10.15.0.21:6443 --token xjm7ts.gu3ojvta6se26q8i --discovery-token-ca-cert-hash sha256:14c8ac5c04ff9dda389e7c6c505728ac1293c6aed5978c3ea9c6953d4a79ed34 
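
After kubeadm init finishes on the master node, it prints instructions similar to the sketch below for making kubectl usable by a regular user; the paths are the kubeadm defaults:

# Copy the admin kubeconfig generated by kubeadm into the current user's home
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config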
13 Configure cluster network

Create the configuration file kube-flannel.yml with the content below, then run kubectl apply -f kube-flannel.yml.

  • Note: run this only on the master node!

---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
       #image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
       #image: flannelcni/flannel:v0.20.2 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
       #image: flannelcni/flannel:v0.20.2 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
14 View cluster status
# Check the cluster nodes; if every node's status is Ready, the cluster was built successfully

$ kubectl get nodes
NAME        STATUS   ROLES           AGE   VERSION
k8s-node1   Ready    control-plane   21h   v1.26.0
k8s-node2   Ready    <none>          21h   v1.26.0
k8s-node3   Ready    <none>          21h   v1.26.0

# Check the running status of the cluster system pods. All the following pods are in Running status, which means the cluster is available
$ kubectl get pod -A
NAMESPACE      NAME                                READY   STATUS    RESTARTS   AGE
default        nginx                               1/1     Running   0          21h
kube-flannel   kube-flannel-ds-gtq49               1/1     Running   0          21h
kube-flannel   kube-flannel-ds-qpdl6               1/1     Running   0          21h
kube-flannel   kube-flannel-ds-ttxjb               1/1     Running   0          21h
kube-system    coredns-5bbd96d687-p7q2x            1/1     Running   0          21h
kube-system    coredns-5bbd96d687-rzcnz            1/1     Running   0          21h
kube-system    etcd-k8s-node1                      1/1     Running   0          21h
kube-system    kube-apiserver-k8s-node1            1/1     Running   0          21h
kube-system    kube-controller-manager-k8s-node1   1/1     Running   0          21h
kube-system    kube-proxy-mtsbp                    1/1     Running   0          21h
kube-system    kube-proxy-v2jfs                    1/1     Running   0          21h
kube-system    kube-proxy-x6vhn                    1/1     Running   0          21h
kube-system    kube-scheduler-k8s-node1            1/1     Running   0          21h
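
As a final smoke test, a Pod like the nginx one shown in the default namespace above can be created and exposed; a hedged sketch (the NodePort Service is just one convenient way to reach it):

$ kubectl run nginx --image=nginx
$ kubectl expose pod nginx --port=80 --type=NodePort
# The assigned NodePort is reachable on any node's IP
$ kubectl get pods,svc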
