Getting Started with Kubernetes 1: Introduction and Environment Preparation

Kubernetes study notes

This section mainly introduces:

  • Basic knowledge of Kubernetes
  • How to build a Kubernetes cluster

Preface

1. Evolution of deployment:

Deployment has evolved through three eras: traditional, virtualized, and containerized.

  • Traditional deployment era:

    • Applications run directly on physical servers
    • Resource boundaries cannot be defined for applications
    • This leads to resource-allocation problems

    If you run multiple applications on one physical server, one application may take up the majority of the resources and degrade the performance of the others. One solution is to run each application on its own physical server, but that does not scale: resources are underutilized, and maintaining many physical servers is expensive.

  • Virtualization deployment

    • Virtualization technology allows multiple virtual machines to be run on a single physical server's CPU
    • Virtualization isolates applications between virtual machines and provides a level of security, because one application cannot be accessed at will by another.
    • Virtualization technology can better utilize resources on physical servers
    • Each virtual machine is a complete computer, running all components on top of the virtualized hardware, including its own operating system, etc.

    Disadvantages: Each virtual machine has its own operating system, etc., and the virtual layer is redundant, resulting in resource waste and performance degradation.

  • Containerized deployment

    • Containers are similar to virtual machines, but applications share the operating system (there is only one copy of the operating system, shared by all containers)
    • Containers are similar to virtual machines and have their own file system, CPU, memory, process space, etc.
    • Because containers are decoupled from the underlying infrastructure, they can be easily ported across clouds and Linux distributions.

1. Kubernetes introduction

1.1 Overview

Kubernetes (k8s) is a portable, extensible open source platform for managing containerized workloads and services that facilitates declarative configuration and automation. Kubernetes has a large, rapidly growing ecosystem; Kubernetes services, support, and tools are widely available. The abbreviation k8s comes from the eight letters between the 'k' and the 's'.

Kubernetes gives us the following functionality:

  • Self-healing : Once a container crashes, a new container can be quickly started in about 1 second
  • Auto-scaling : The number of running containers in the cluster can be automatically adjusted as needed.
  • Service discovery : A service can find the services it depends on through automatic discovery
  • Load balancing : If a service starts multiple containers, the load balancing of requests can be automatically realized
  • Version rollback : If you find a problem with the newly released program version, you can immediately roll back to the original version.
  • Storage orchestration : Storage volumes can be automatically created according to the needs of the container itself
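
To get a feel for these capabilities, here is a small illustrative sketch; the deployment name myapp is hypothetical, and the commands are covered in detail in later sections:

# Adjust the number of running pods on demand
kubectl scale deployment myapp --replicas=5
# Keep between 2 and 10 replicas automatically, based on CPU usage
kubectl autoscale deployment myapp --min=2 --max=10 --cpu-percent=80
# Roll back to the previous version after a bad release
kubectl rollout undo deployment myapp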

1.2 kubernetes components

A kubernetes cluster is mainly composed of control nodes (master) and worker nodes (node), with different components installed on each type of node.


master: the control plane of the cluster, responsible for cluster decision-making (management)

ApiServer: the sole entry point for resource operations; it receives user commands and provides mechanisms such as authentication, authorization, registration, and discovery through its API.

Scheduler: responsible for cluster resource scheduling; it places Pods on the appropriate nodes according to the configured scheduling policy.

ControllerManager: responsible for maintaining the state of the cluster, such as arranging program deployments, detecting failures, auto-scaling, and rolling updates.

Etcd: Responsible for storing information about various resource objects in the cluster

node: the data plane of the cluster, responsible for providing the operating environment (work) for the container

Kubelet : responsible for maintaining the life cycle of containers, i.e., creating, updating, and destroying containers by controlling docker

KubeProxy: Responsible for providing service discovery and load balancing within the cluster

pod : a Pod is a further encapsulation around containers. One pod can contain multiple containers.

  • docker run starts a container; the container is Docker's basic unit, and an application runs as a container.
  • kubectl run starts an application called a Pod; the Pod is Kubernetes's basic unit.

Docker : Responsible for various operations of containers on nodes

Next, deploy an nginx service to illustrate the calling relationship between various components of the kubernetes system:

  1. First of all, note that once the kubernetes environment is started, both master and node store their own information in the etcd database.

  2. A request to install an nginx service is first sent to the apiServer component on the master node.

  3. The apiServer calls the scheduler component to decide which node the service should be installed on.

    The scheduler reads the information of each node from etcd, selects a node according to a scheduling algorithm, and reports the result back to the apiServer.

  4. The apiServer calls the controller-manager to schedule the chosen Node to install the nginx service.

  5. After the kubelet on that node receives the instruction, it notifies docker, and docker starts an nginx pod.

    The pod is the smallest unit of operation in kubernetes; containers must run inside a pod.

  6. An nginx service is now running. To access nginx, kube-proxy must generate an access proxy for the pod.

In this way, external users can access the nginx service in the cluster.
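
Once a cluster is available (built in section 2 below), this call flow can be observed from the command line; a minimal sketch:

# Send a deployment request to the apiServer; the scheduler picks a node
kubectl create deployment nginx --image=nginx:1.14-alpine
# The NODE column shows which node the scheduler placed the pod on
kubectl get pods -o wide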

1.3 Kubernetes glossary of terms

  • Master : Cluster control node. Each cluster requires at least one master node to be responsible for the management and control of the cluster.

  • Node : workload node; the master assigns containers to these worker nodes, and the docker on each node is responsible for running them.

  • Pod : the smallest control unit in kubernetes. All containers run in pods; a pod can contain one or more containers.

  • Controller : Controller, through which pods are managed, such as starting pods, stopping pods, scaling the number of pods, etc.

  • Service : the unified external entry point for pods; one service can front multiple pods of the same type.

  • Label : Label, used to classify pods. Pods of the same type will have the same label.

  • NameSpace: Namespace, used to isolate the running environment of pods
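
To make these terms concrete, here is a minimal sketch (all names such as demo are illustrative) of how a Label ties Pods, a Controller, and a Service together:

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment            # Controller: manages the pods defined below
metadata:
  name: demo
  namespace: default        # NameSpace: isolates the pods' running environment
spec:
  replicas: 2               # the controller keeps 2 pods running
  selector:
    matchLabels:
      app: demo             # the controller finds its pods by this Label
  template:
    metadata:
      labels:
        app: demo           # Label: classifies the pods
    spec:
      containers:
      - name: web
        image: nginx:1.14-alpine
---
apiVersion: v1
kind: Service               # Service: unified external entry point for the pods
metadata:
  name: demo
spec:
  selector:
    app: demo               # the Service also selects its pods by Label
  ports:
  - port: 80
EOF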

2. Cluster environment construction

2.1 Overall introduction

1. Cluster type:

Kubernetes clusters can generally be divided into two categories:

  • One master and multiple slaves: one master node and multiple node nodes. Simple to set up, but if the master fails, the entire cluster goes down; best suited for test environments.
  • Multi-master and multi-slave: multiple master nodes and multiple node nodes. More complicated to set up but highly available: if one master fails, another master takes over immediately and keeps the cluster running. Mostly used in production environments.

2. Construction method

Kubernetes has a variety of deployment methods. The current mainstream methods include kubeadm, minikube, and binary packages.

  • minikube: A Kubernetes tool for quickly building a single node.

  • kubeadm: A tool for quickly building Kubernetes clusters (can be used in production environments).

  • Binary package: Download the binary package of each component from the official website and install it in order (recommended for use in production environments).

kubeadm is a tool launched by the official community for rapid deployment of kubernetes clusters. This tool can complete the deployment of a kubernetes cluster through two instructions:

  • Create a Master node: kubeadm init
  • Add a Node to the current cluster: kubeadm join <IP and port of the Master node>

This construction is carried out using kubeadm.

3. Host planning

Role     IP address        Operating system                      Configuration
master   192.168.79.100    CentOS 7.9 (infrastructure server)    2 cores, 2 GB RAM, 20 GB disk
node1    192.168.79.101    CentOS 7.9 (infrastructure server)    2 cores, 2 GB RAM, 20 GB disk
node2    192.168.79.102    CentOS 7.9 (infrastructure server)    2 cores, 2 GB RAM, 20 GB disk

This cluster build uses 3 CentOS 7.9 (infrastructure server) machines: one master and two nodes (one master, two slaves).

Install docker (18.06.3), kubeadm (1.17.4), kubelet (1.17.4), and kubectl (1.17.4) on each server.

  • All machines in the cluster can communicate with each other over the network
  • The machines can access the external network, which is needed to pull images
  • The swap partition is disabled
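
These prerequisites can be spot-checked on each machine with a few commands; a quick sketch (adjust the addresses to your own plan):

# Network connectivity between the cluster machines
ping -c 1 192.168.79.101
# External network access, needed for pulling images
ping -c 1 mirrors.aliyun.com
# The Swap line should read 0 everywhere once swap is disabled (section 2.3)
free -m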

2.2 Install CentOS 7 on VMware

Steps to install CentOS7 on VMware and configure the IP address:

  1. First download the VMware software and the CentOS7 image.

     CentOS7: download the image from the Alibaba Cloud mirror.

     VMware: download from the cloud disk link (extraction code: 5lgm).

  2. Install 3 CentOS servers through the VMware software: one master and two nodes. The process is the same for each machine; see the separate article for the detailed CentOS7 installation steps.

2.3 Environment initialization

After the three CentOS servers are installed, connect to them with MobaXterm, which can operate all three machines at the same time.


  1. Check the installed CentOS version; this installation method requires CentOS 7.5 or above.

    # This way of installing a k8s cluster requires CentOS 7.5 or above; version 7.9 is used here
    [root@master ~]# cat /etc/centos-release
    CentOS Linux release 7.9.2009 (Core)
    [root@master ~]#
    
  2. Host domain name resolution

    To facilitate direct access between cluster nodes, configure the host domain name resolution.

    # Edit the /etc/hosts file on all three servers and add the following entries:
    192.168.79.100 master
    192.168.79.101 node1
    192.168.79.102 node2
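    # Optional check: verify that the host names now resolve between the machines
    [root@master ~]# ping -c 1 node1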
    
  3. Time synchronization

    k8s requires that the node clocks in the cluster be accurate; use the chronyd service to synchronize the time from the network.

    # Start the chronyd service
    [root@master ~]# systemctl start chronyd
    # Enable it at boot
    [root@master ~]# systemctl enable chronyd
    # Check the current time on the three machines (only the master is shown)
    [root@master ~]# date
    Fri Jul  1 13:04:36 CST 2022
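    # Optional check: list the time sources chronyd is synchronizing against
    [root@master ~]# chronyc sources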
    
  4. Disable iptables and firewalld services

    k8s and docker generate a large number of iptables rules while running. To keep them from being confused with the system rules, disable the system firewall services outright.

    # Stop firewalld
    [root@master ~]# systemctl stop firewalld
    # Disable it from starting at boot
    [root@master ~]# systemctl disable firewalld
    Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
    Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
    # Stop iptables (this machine does not have iptables installed)
    [root@master ~]# systemctl stop iptables
    Failed to stop iptables.service: Unit iptables.service not loaded.
    
  5. Disable selinux

    selinux is a security service of the Linux system. If it is not turned off, all sorts of problems arise when installing the cluster.

    # Edit /etc/selinux/config and set SELINUX=disabled; a reboot is required afterwards for the change to take effect
    
    # Currently enforcing (selinux is on)
    [root@master ~]# getenforce
    Enforcing
    # 
    [root@master ~]# vim /etc/selinux/config
    # Open the file and change the setting
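    # Equivalent non-interactive edit (a one-line alternative to vim):
    [root@master ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config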
    


  6. Disable the swap partition

    The swap partition is virtual memory: after physical memory is exhausted, disk space is used as memory. Enabling the swap device can negatively impact system performance, so disable the swap partition outright.

    # Edit the partition configuration file /etc/fstab and comment out the swap line; a reboot is required after the change
    [root@master ~]# vim /etc/fstab
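    # Alternative: comment out the swap line non-interactively, and turn swap off
    # for the running session so the change takes effect before the reboot
    [root@master ~]# sed -i '/ swap / s/^/#/' /etc/fstab
    [root@master ~]# swapoff -a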
    


  7. Modify Linux kernel parameters

    # Modify the Linux kernel parameters to enable bridge filtering and address forwarding
    # Edit (create) the file /etc/sysctl.d/kubernetes.conf and add the following settings
    net.bridge.bridge-nf-call-iptables=1
    net.bridge.bridge-nf-call-ip6tables=1
    net.ipv4.ip_forward=1
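
    # The same file can be created non-interactively with a heredoc, for example:
    [root@master ~]# cat <<EOF > /etc/sysctl.d/kubernetes.conf
    net.bridge.bridge-nf-call-iptables=1
    net.bridge.bridge-nf-call-ip6tables=1
    net.ipv4.ip_forward=1
    EOF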
    
    # After saving and exiting, reload the configuration
    # (plain "sysctl -p" only reads /etc/sysctl.conf, so pass the new file explicitly)
    [root@master ~]# sysctl -p /etc/sysctl.d/kubernetes.conf
    
    # Load the bridge filter module
    [root@master ~]# modprobe br_netfilter
    
    # Check whether the module loaded successfully
    [root@master ~]# lsmod | grep br_netfilter
    br_netfilter           22256  0
    bridge                151336  1 br_netfilter
    


  8. Configure the ipvs function

    There are two proxy modes for service in k8s: one based on iptables and the other based on ipvs. Comparing the two, ipvs has higher performance, but to use this mode you need to manually load the ipvs kernel modules.

    # 1. Install ipset and ipvsadm
    # Note: "ipvsadmin" in the transcript below is a typo -- the package is named
    # ipvsadm, so yum reports no such package. The corrected command is:
    #   yum install ipset ipvsadm -y
    [root@master ~]# yum install ipset ipvsadmin -y
    Loaded plugins: fastestmirror, langpacks
    Determining fastest mirrors
     * base: mirrors.aliyun.com
     * extras: mirrors.aliyun.com
     * updates: mirrors.aliyun.com
    base                                                                    | 3.6 kB  00:00:00
    extras                                                                  | 2.9 kB  00:00:00
    updates                                                                 | 2.9 kB  00:00:00
    (1/4): base/7/x86_64/primary_db                                         | 6.1 MB  00:00:00
    (2/4): base/7/x86_64/group_gz                                           | 153 kB  00:00:00
    (3/4): extras/7/x86_64/primary_db                                       | 247 kB  00:00:00
    (4/4): updates/7/x86_64/primary_db                                      |  16 MB  00:00:18
    Package ipset-7.1-1.el7.x86_64 already installed and latest version
    No package ipvsadmin available.
    Nothing to do
    
    # 2. Write the modules that need to be loaded into a script file
    [root@master ~]# cat <<EOF> /etc/sysconfig/modules/ipvs.modules
    > #!/bin/bash
    > modprobe -- ip_vs
    > modprobe -- ip_vs_rr
    > modprobe -- ip_vs_wrr
    > modprobe -- ip_vs_sh
    > modprobe -- nf_conntrack_ipv4
    > EOF
    
    # 3. Add execute permission to the script file
    [root@master ~]# chmod +x /etc/sysconfig/modules/ipvs.modules
    
    # 4. Execute the script file
    [root@master ~]# /bin/bash /etc/sysconfig/modules/ipvs.modules
    
    # 5. Check whether the corresponding modules loaded successfully
    [root@node2 ~]# lsmod | grep -e ip_vs -e nf_conntrack_ipv4
    nf_conntrack_ipv4      15053  0
    nf_defrag_ipv4         12729  1 nf_conntrack_ipv4
    ip_vs_sh               12688  0
    ip_vs_wrr              12697  0
    ip_vs_rr               12600  0
    ip_vs                 145458  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
    nf_conntrack          139264  2 ip_vs,nf_conntrack_ipv4
    libcrc32c              12644  3 xfs,ip_vs,nf_conntrack
    [root@master ~]#
    
  9. Restart the servers

    [root@master ~]# reboot
    
  10. After the reboot, check whether selinux was disabled successfully.

    # "Disabled" means selinux was turned off successfully
    [root@master ~]# getenforce
    Disabled
    
    

2.4 Install docker

  1. Docker's official image source is slow; switch to the Alibaba Cloud mirror source

    # Switch to the Alibaba Cloud mirror source
    [root@master ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
    
  2. Check the supported docker versions in the current image source

    # Supported docker versions; this time choose docker-ce-18.06.3.ce-3.el7
    [root@master ~]# yum list docker-ce --showduplicates
    [root@master ~]#
    
    


  3. Install the specified version of docker-ce

    # --setopt=obsoletes=0 means install the exact version specified
    [root@master ~]# yum install --setopt=obsoletes=0 docker-ce-18.06.3.ce-3.el7 -y
    
  4. Add a configuration file

    # Docker uses cgroupfs by default, while k8s recommends systemd instead; also switch the image download mirror
    [root@master ~]# mkdir /etc/docker
    
    [root@master ~]# cat <<EOF> /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=systemd"],
      "registry-mirrors": ["https://kn0t2bca.mirror.aliyuncs.com"]
    }
    EOF
    
  5. Start docker

    # Start (restart) docker
    [root@master ~]# systemctl restart docker
    # Enable it at boot
    [root@master ~]# systemctl enable docker
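    # Optional check: confirm docker is now using the systemd cgroup driver
    [root@master ~]# docker info | grep -i cgroup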
    
  6. Check docker status and version information

    # Check the version information
    [root@master ~]# docker version
    Client:
     Version:           18.06.3-ce
     API version:       1.38
     Go version:        go1.10.3
     Git commit:        d7080c1
     Built:             Wed Feb 20 02:26:51 2019
     OS/Arch:           linux/amd64
     Experimental:      false
    
    Server:
     Engine:
      Version:          18.06.3-ce
      API version:      1.38 (minimum version 1.12)
      Go version:       go1.10.3
      Git commit:       d7080c1
      Built:            Wed Feb 20 02:28:17 2019
      OS/Arch:          linux/amd64
      Experimental:     false
    [root@master ~]#
    
    

2.5 Install kubernetes components

  1. Switch the k8s package source

    # The k8s package source is hosted abroad and downloads slowly; switch to the Alibaba Cloud mirror
    # Edit (create) the file /etc/yum.repos.d/kubernetes.repo and add the following configuration:
    [root@master ~]# vim /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    


  2. Install kubeadm, kubelet, and kubectl

    # Install the specified version 1.17.4
    [root@master ~]# yum install --setopt=obsoletes=0 kubeadm-1.17.4-0 kubelet-1.17.4-0 kubectl-1.17.4-0 -y
    
  3. Modify the configuration file

    # Configure the kubelet cgroup driver: edit /etc/sysconfig/kubelet and add the following settings:
    [root@master ~]# vim /etc/sysconfig/kubelet
    KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
    KUBE_PROXY_MODE="ipvs"
    
  4. Set kubelet to start automatically at boot

    [root@master ~]# systemctl enable kubelet
    

All of the steps in sections 2.2-2.5 above must be performed on all three virtual machines to prepare the environment for building the cluster.

2.6 Cluster initialization

Now initialize the cluster and add the node machines to it. Except for the kubeadm join commands in step 3, which run on the node machines, the following steps are performed only on the master node.

  1. Create a cluster

    # The default image registry k8s.gcr.io is unreachable from China, so specify the Alibaba Cloud image repository instead
    [root@master ~]# kubeadm init --kubernetes-version=v1.17.4 --apiserver-advertise-address=192.168.79.100 --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16
    # The output of running the command follows
    W0701 15:41:54.850536    9562 validation.go:28] Cannot validate kube-proxy config - no validator is available
    W0701 15:41:54.850592    9562 validation.go:28] Cannot validate kubelet config - no validator is available
    [init] Using Kubernetes version: v1.17.4
    [preflight] Running pre-flight checks
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Starting the kubelet
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.79.100]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] Generating "etcd/ca" certificate and key
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [192.168.79.100 127.0.0.1 ::1]
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.79.100 127.0.0.1 ::1]
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "apiserver-etcd-client" certificate and key
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    W0701 15:42:21.370847    9562 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    W0701 15:42:21.371810    9562 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [apiclient] All control plane components are healthy after 15.006014 seconds
    [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
    [upload-certs] Skipping phase. Please see --upload-certs
    [mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
    [mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
    [bootstrap-token] Using token: l8o7vp.gbiatmb3iexxbppo
    [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy
    # The cluster downloaded and installed successfully
    Your Kubernetes control-plane has initialized successfully!
    # The following commands need to be run
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    # Join any number of worker nodes to the cluster
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 192.168.79.100:6443 --token l8o7vp.gbiatmb3iexxbppo \
        --discovery-token-ca-cert-hash sha256:dfa1740b693f91f1e3eaf889c50802195d7dc30a4f2a3c7a9b7101b295ad1fe9
    
    
  2. Create necessary files:

    # After the cluster is created, the init output displays the prompt below,
    # and these commands need to be run:
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
      
    [root@master ~]# mkdir -p $HOME/.kube
    [root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    [root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    # Checking the nodes now shows that the cluster has only the master node; the node machines still need to be added to the cluster
    [root@master ~]# kubectl get nodes
    NAME     STATUS     ROLES    AGE     VERSION
    master   NotReady   master   4m51s   v1.17.4
    [root@master ~]#
    
  3. To add the node machines to the cluster, execute the following commands on node1 and node2

    # Join any number of worker nodes to the cluster
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 192.168.79.100:6443 --token l8o7vp.gbiatmb3iexxbppo \
        --discovery-token-ca-cert-hash sha256:dfa1740b693f91f1e3eaf889c50802195d7dc30a4f2a3c7a9b7101b295ad1fe9
        
    # Run the join command on node1 and node2 respectively
    [root@node1 ~]# kubeadm join 192.168.79.100:6443 --token l8o7vp.gbiatmb3iexxbppo --discovery-token-ca-cert-hash sha256:dfa1740b693f91f1e3eaf889c50802195d7dc30a4f2a3c7a9b7101b295ad1fe9
    
    [root@node2 ~]# kubeadm join 192.168.79.100:6443 --token l8o7vp.gbiatmb3iexxbppo --discovery-token-ca-cert-hash sha256:dfa1740b693f91f1e3eaf889c50802195d7dc30a4f2a3c7a9b7101b295ad1fe9
    
    # After joining, check the cluster's node list on the master: all nodes are now listed, but they are still NotReady because the network plugin has not been installed yet
    [root@master ~]# kubectl get nodes
    NAME     STATUS     ROLES    AGE    VERSION
    master   NotReady   master   8m9s   v1.17.4
    node1    NotReady   <none>   14s    v1.17.4
    node2    NotReady   <none>   31s    v1.17.4
    [root@master ~]#
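
    # The bootstrap token printed by kubeadm init expires after 24 hours by default.
    # If it has expired or been lost, generate a fresh join command on the master:
    [root@master ~]# kubeadm token create --print-join-command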
    
    
  4. Configure the network plugin

    # Download the network plugin manifest to a yml file
    [root@master ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    
    # The console output is as follows
    --2022-07-01 15:53:27--  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.111.133, 185.199.108.133, 185.199.109.133, ...
    Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.111.133|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 5750 (5.6K) [text/plain]
    Saving to: 'kube-flannel.yml'

    100%[====================================================================>] 5,750       --.-K/s  in 0s

    2022-07-01 15:53:28 (18.3 MB/s) - 'kube-flannel.yml' saved [5750/5750]
    
    # Apply the file with the apply command
    [root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    
    # Execution output
    podsecuritypolicy.policy/psp.flannel.unprivileged created
    clusterrole.rbac.authorization.k8s.io/flannel created
    clusterrolebinding.rbac.authorization.k8s.io/flannel created
    serviceaccount/flannel created
    configmap/kube-flannel-cfg created
    daemonset.apps/kube-flannel-ds created
    
    # Open a new console window on the master to watch the rollout progress
    [root@master ~]# kubectl get pods -n kube-system
    NAME                             READY   STATUS              RESTARTS   AGE
    coredns-9d85f5447-4r5vf          0/1     ContainerCreating   0          12m
    coredns-9d85f5447-r9k4g          0/1     ContainerCreating   0          12m
    etcd-master                      1/1     Running             0          12m
    kube-apiserver-master            1/1     Running             0          12m
    kube-controller-manager-master   1/1     Running             0          12m
    kube-flannel-ds-298ct            1/1     Running             0          61s
    kube-flannel-ds-z6whb            1/1     Running             0          61s
    kube-flannel-ds-zl4mx            1/1     Running             0          61s
    kube-proxy-9m6xw                 1/1     Running             0          12m
    kube-proxy-mhssb                 1/1     Running             0          5m23s
    kube-proxy-mnn62                 1/1     Running             0          5m6s
    kube-scheduler-master            1/1     Running             0          12m
    # Checking the nodes now shows them in Ready status
    [root@master ~]# kubectl get nodes
    NAME     STATUS   ROLES    AGE     VERSION
    master   Ready    master   13m     v1.17.4
    node1    Ready    <none>   5m46s   v1.17.4
    node2    Ready    <none>   6m3s    v1.17.4
    
    # All components are in the running state
    [root@master ~]# kubectl get pods -n kube-system
    NAME                             READY   STATUS    RESTARTS   AGE
    coredns-9d85f5447-4r5vf          1/1     Running   0          13m
    coredns-9d85f5447-r9k4g          1/1     Running   0          13m
    etcd-master                      1/1     Running   0          13m
    kube-apiserver-master            1/1     Running   0          13m
    kube-controller-manager-master   1/1     Running   0          13m
    kube-flannel-ds-298ct            1/1     Running   0          111s
    kube-flannel-ds-z6whb            1/1     Running   0          111s
    kube-flannel-ds-zl4mx            1/1     Running   0          111s
    kube-proxy-9m6xw                 1/1     Running   0          13m
    kube-proxy-mhssb                 1/1     Running   0          6m13s
    kube-proxy-mnn62                 1/1     Running   0          5m56s
    kube-scheduler-master            1/1     Running   0          13m
    [root@master ~]#
    

At this point, the entire cluster has been set up.

2.7 Cluster testing

Deploy nginx to test the cluster environment.

# 1. Deploy nginx, version 1.14-alpine
[root@master ~]# kubectl create deployment nginx --image=nginx:1.14-alpine
deployment.apps/nginx created

# 2. Expose the port
[root@master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed

# 3. Check the service status: a pod is running and the service is up
[root@master ~]# kubectl get pods,svc
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-6867cdf567-ppmrj   1/1     Running   0          60s

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        18m
service/nginx        NodePort    10.104.243.191   <none>        80:30134/TCP   43s

# 4. Get the cluster's pod information
[root@master ~]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
nginx-6867cdf567-ppmrj   1/1     Running   0          81s
[root@master ~]#

# 5. Using the exposed NodePort, the deployed nginx can be accessed from a browser
# URL: 192.168.79.100:30134
# Result: the nginx welcome page loads, confirming a successful deployment
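
# 6. Optional check from the command line (same NodePort as above):
[root@master ~]# curl http://192.168.79.100:30134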


Origin: blog.csdn.net/weixin_43155804/article/details/125831675