Installing K8s on VMware

1. References

https://blog.csdn.net/witton/article/details/107085155

Note: plan the K8s IP range in advance, and change the hostnames and configure static IPs up front, otherwise you will run into a lot of problems later. That said, the installation can still complete without changing hostnames or setting static IPs.
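
If you do set these up front, here is a minimal sketch. It assumes the interface/connection name ens33 and the 192.168.187.0/24 network that appears later in this article; the gateway 192.168.187.2 is a typical VMware NAT default and is also an assumption — adjust all of these to your environment:

hostnamectl set-hostname k8s-master    # name this node; use k8s-node1 etc. on workers
nmcli con mod ens33 ipv4.method manual \
    ipv4.addresses 192.168.187.150/24 \
    ipv4.gateway 192.168.187.2 ipv4.dns 192.168.187.2
nmcli con up ens33                     # re-activate the connection with the new settings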

2. Installation

I installed on VMware. I consulted a lot of material along the way, and had to start over once because the install got into a mess; the eventual success relied entirely on the guide linked above. Get a single node working first instead of installing a cluster right away — earlier I had changed a lot of network configuration, which ended up breaking the cluster.

First install a base CentOS system, then clone a node from that base image and use it to deploy K8s.

The screenshot below shows VMware's snapshot-save operation; skip it if you don't need snapshots.

The screenshot shows the snapshot points I saved during my install.

As shown in the screenshot, switch to the root user first; every installation step runs as root. Then uninstall podman, which ships with CentOS 8 and can conflict with Docker.

[root@k8s-master centos]# yum update
Last metadata expiration check: 1 day, 21:57:46 ago on Tue 02 Mar 2021 04:07:05 AM PST.
Dependencies resolved.
Nothing to do.
Complete!
[root@k8s-master centos]# 
[root@k8s-master centos]# 
[root@k8s-master centos]# yum remove podman
No match for argument: podman
No packages marked for removal.
Dependencies resolved.
Nothing to do.
Complete!
[root@k8s-master centos]#

Disable swap

[root@k8s-master centos]# sudo swapoff -a
[root@k8s-master centos]# sudo sed -i 's/.*swap.*/#&/' /etc/fstab
[root@k8s-master centos]# vi /etc/fstab 
[root@k8s-master centos]#
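
To verify swap is really off, the following should report zero swap and print no entries, respectively:

free -h
swapon --show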

Disable SELinux

[root@k8s-master centos]# setenforce 0
[root@k8s-master centos]# sudo sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
[root@k8s-master centos]# vi /etc/selinux/config
[root@k8s-master centos]#
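
After setenforce 0, getenforce should report Permissive for the current boot (and Disabled after a reboot, thanks to the config change above):

getenforce
sestatus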

Disable the firewall

[root@k8s-master centos]# sudo systemctl stop firewalld.service
[root@k8s-master centos]# sudo systemctl disable firewalld.service
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@k8s-master centos]#

Everything up to here is system preparation; it is best to take a snapshot at this point so you can roll back if needed. Next, start installing K8s.

First, configure the base system package repository.

[root@k8s-master centos]# sudo curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-8.repo
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2595  100  2595    0     0  17185      0 --:--:-- --:--:-- --:--:-- 17072

Add the K8s package repository

[root@k8s-master centos]# cd /etc/yum.repos.d/
[root@k8s-master yum.repos.d]# vi kubernetes.repo
[root@k8s-master yum.repos.d]#

The contents added to the file are shown below.

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
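
After saving the repo file, you can optionally refresh the package metadata so the new repositories are picked up immediately:

yum clean all
yum makecache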

Now install Docker.

sudo yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum -y install docker-ce
[root@k8s-master centos]# sudo yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools
Repository extras is listed more than once in the configuration
CentOS-8 - Base - mirrors.aliyun.com                                                                        2.3 MB/s | 2.3 MB     00:00    
CentOS-8 - Extras - mirrors.aliyun.com                                                                       19 kB/s | 9.2 kB     00:00    
CentOS-8 - AppStream - mirrors.aliyun.com                                                                   3.6 MB/s | 6.3 MB     00:01    
Kubernetes                                                                                                  760  B/s | 844  B     00:01    
Kubernetes                                                                                                   16 kB/s | 3.6 kB     00:00    
Importing GPG key 0xA7317B0F:
 Userid     : "Google Cloud Packages Automatic Signing Key <[email protected]>"
 Fingerprint: D0BC 747F D8CA F711 7500 D6FA 3746 C208 A731 7B0F
 From       : https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Importing GPG key 0xBA07F4FB:
 Userid     : "Google Cloud Packages Automatic Signing Key <[email protected]>"
 Fingerprint: 54A6 47F9 048D 5688 D7DA 2ABE 6A03 0B21 BA07 F4FB
 From       : https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Importing GPG key 0x836F4BEB:
 Userid     : "gLinux Rapture Automatic Signing Key (//depot/google3/production/borg/cloud-rapture/keys/cloud-rapture-pubkeys/cloud-rapture-signing-key-2020-12-03-16_08_05.pub) <[email protected]>"
 Fingerprint: 59FE 0256 8272 69DC 8157 8F92 8B57 C5C2 836F 4BEB
 From       : https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Kubernetes                                                                                                  5.5 kB/s | 975  B     00:00    
Importing GPG key 0x3E1BA8D5:
 Userid     : "Google Cloud Packages RPM Signing Key <[email protected]>"
 Fingerprint: 3749 E1BA 95A8 6CE0 5454 6ED2 F09C 394C 3E1B A8D5
 From       : https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
Kubernetes                                                                                                  182 kB/s | 115 kB     00:00    
Package device-mapper-persistent-data-0.8.5-4.el8.x86_64 is already installed.
Package lvm2-8:2.03.09-5.el8.x86_64 is already installed.
Package net-tools-2.0-0.52.20160912git.el8.x86_64 is already installed.
Dependencies resolved.
============================================================================================================================================
 Package                           Architecture                   Version                                Repository                    Size
============================================================================================================================================
Installing:
 yum-utils                         noarch                         4.0.17-5.el8                           base                          68 k

Transaction Summary
============================================================================================================================================
Install  1 Package

Total download size: 68 k
Installed size: 20 k
Downloading Packages:
yum-utils-4.0.17-5.el8.noarch.rpm                                                                           182 kB/s |  68 kB     00:00    
--------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                       181 kB/s |  68 kB     00:00     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                    1/1 
  Installing       : yum-utils-4.0.17-5.el8.noarch                                                                                      1/1 
  Running scriptlet: yum-utils-4.0.17-5.el8.noarch                                                                                      1/1 
  Verifying        : yum-utils-4.0.17-5.el8.noarch                                                                                      1/1 
Installed products updated.

Installed:
  yum-utils-4.0.17-5.el8.noarch                                                                                                             

Complete!
[root@k8s-master centos]#
[root@k8s-master centos]# sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Repository extras is listed more than once in the configuration
Adding repo from: https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@k8s-master centos]#
[root@k8s-master centos]# yum -y install docker-ce
Repository extras is listed more than once in the configuration
Last metadata expiration check: 9:42:02 ago on Thu 04 Mar 2021 05:55:12 AM PST.
Dependencies resolved.
============================================================================================================================================
 Package                             Architecture     Version                                              Repository                  Size
============================================================================================================================================
Installing:
 docker-ce                           x86_64           3:20.10.5-3.el8                                      docker-ce-stable            27 M
Installing dependencies:
 container-selinux                   noarch           2:2.155.0-1.module_el8.3.0+699+d61d9c41              AppStream                   51 k
 containerd.io                       x86_64           1.4.3-3.1.el8                                        docker-ce-stable            33 M
 docker-ce-cli                       x86_64           1:20.10.5-3.el8                                      docker-ce-stable            33 M
 docker-ce-rootless-extras           x86_64           20.10.5-3.el8                                        docker-ce-stable           9.1 M
 fuse-overlayfs                      x86_64           1.3.0-2.module_el8.3.0+699+d61d9c41                  AppStream                   72 k
 fuse3                               x86_64           3.2.1-12.el8                                         base                        50 k
 fuse3-libs                          x86_64           3.2.1-12.el8                                         base                        94 k
 libslirp                            x86_64           4.3.1-1.module_el8.3.0+475+c50ce30b                  AppStream                   69 k
 slirp4netns                         x86_64           1.1.8-1.module_el8.3.0+699+d61d9c41                  AppStream                   51 k
Enabling module streams:
 container-tools                                      rhel8                                                                                

Transaction Summary
============================================================================================================================================
Install  10 Packages

Total size: 102 M
Total download size: 102 M
Installed size: 423 M
Downloading Packages:
(1/10): fuse3-3.2.1-12.el8.x86_64.rpm                                                                       151 kB/s |  50 kB     00:00    
(2/10): fuse3-libs-3.2.1-12.el8.x86_64.rpm                                                                  272 kB/s |  94 kB     00:00    
(3/10): container-selinux-2.155.0-1.module_el8.3.0+699+d61d9c41.noarch.rpm                                  144 kB/s |  51 kB     00:00    
(4/10): slirp4netns-1.1.8-1.module_el8.3.0+699+d61d9c41.x86_64.rpm                                          360 kB/s |  51 kB     00:00    
(5/10): fuse-overlayfs-1.3.0-2.module_el8.3.0+699+d61d9c41.x86_64.rpm                                       443 kB/s |  72 kB     00:00    
(6/10): libslirp-4.3.1-1.module_el8.3.0+475+c50ce30b.x86_64.rpm                                             361 kB/s |  69 kB     00:00                                                      
(7/10): containerd.io-1.4.3-3.1.el8.x86_64.rpm                                                              2.8 MB/s |  33 MB     00:12    
(8/10): docker-ce-rootless-extras-20.10.5-3.el8.x86_64.rpm                                                  1.2 MB/s | 9.1 MB     00:07    
(9/10): docker-ce-cli-20.10.5-3.el8.x86_64.rpm                                                              1.3 MB/s |  33 MB     00:24    
(10/10): docker-ce-20.10.5-3.el8.x86_64.rpm                                                                 448 kB/s |  27 MB     01:00    
--------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                       1.7 MB/s | 102 MB     01:00     
warning: /var/cache/dnf/docker-ce-stable-fa9dc42ab4cec2f4/packages/containerd.io-1.4.3-3.1.el8.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 621e9f35: NOKEY
Docker CE Stable - x86_64                                                                                   1.6 kB/s | 1.6 kB     00:01    
Importing GPG key 0x621E9F35:
 Userid     : "Docker Release (CE rpm) <[email protected]>"
 Fingerprint: 060A 61C5 1B55 8A7F 742B 77AA C52F EB6B 621E 9F35
 From       : https://download.docker.com/linux/centos/gpg
Key imported successfully
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                    1/1 
  Running scriptlet: container-selinux-2:2.155.0-1.module_el8.3.0+699+d61d9c41.noarch                                                  1/10 
  Installing       : container-selinux-2:2.155.0-1.module_el8.3.0+699+d61d9c41.noarch                                                  1/10 
  Running scriptlet: container-selinux-2:2.155.0-1.module_el8.3.0+699+d61d9c41.noarch                                                  1/10 
  Installing       : containerd.io-1.4.3-3.1.el8.x86_64                                                                                2/10 
  Running scriptlet: containerd.io-1.4.3-3.1.el8.x86_64                                                                                2/10 
  Installing       : docker-ce-cli-1:20.10.5-3.el8.x86_64                                                                              3/10 
  Running scriptlet: docker-ce-cli-1:20.10.5-3.el8.x86_64                                                                              3/10 
  Installing       : libslirp-4.3.1-1.module_el8.3.0+475+c50ce30b.x86_64                                                               4/10 
  Installing       : slirp4netns-1.1.8-1.module_el8.3.0+699+d61d9c41.x86_64                                                            5/10 
  Installing       : fuse3-libs-3.2.1-12.el8.x86_64                                                                                    6/10 
  Running scriptlet: fuse3-libs-3.2.1-12.el8.x86_64                                                                                    6/10 
  Installing       : fuse3-3.2.1-12.el8.x86_64                                                                                         7/10 
  Installing       : fuse-overlayfs-1.3.0-2.module_el8.3.0+699+d61d9c41.x86_64                                                         8/10 
  Running scriptlet: fuse-overlayfs-1.3.0-2.module_el8.3.0+699+d61d9c41.x86_64                                                         8/10 
  Installing       : docker-ce-3:20.10.5-3.el8.x86_64                                                                                  9/10 
  Running scriptlet: docker-ce-3:20.10.5-3.el8.x86_64                                                                                  9/10 
  Installing       : docker-ce-rootless-extras-20.10.5-3.el8.x86_64                                                                   10/10 
  Running scriptlet: docker-ce-rootless-extras-20.10.5-3.el8.x86_64                                                                   10/10 
  Running scriptlet: container-selinux-2:2.155.0-1.module_el8.3.0+699+d61d9c41.noarch                                                 10/10 
  Running scriptlet: docker-ce-rootless-extras-20.10.5-3.el8.x86_64                                                                   10/10 
  Verifying        : fuse3-3.2.1-12.el8.x86_64                                                                                         1/10 
  Verifying        : fuse3-libs-3.2.1-12.el8.x86_64                                                                                    2/10 
  Verifying        : container-selinux-2:2.155.0-1.module_el8.3.0+699+d61d9c41.noarch                                                  3/10 
  Verifying        : fuse-overlayfs-1.3.0-2.module_el8.3.0+699+d61d9c41.x86_64                                                         4/10 
  Verifying        : libslirp-4.3.1-1.module_el8.3.0+475+c50ce30b.x86_64                                                               5/10 
  Verifying        : slirp4netns-1.1.8-1.module_el8.3.0+699+d61d9c41.x86_64                                                            6/10 
  Verifying        : containerd.io-1.4.3-3.1.el8.x86_64                                                                                7/10 
  Verifying        : docker-ce-3:20.10.5-3.el8.x86_64                                                                                  8/10 
  Verifying        : docker-ce-cli-1:20.10.5-3.el8.x86_64                                                                              9/10 
  Verifying        : docker-ce-rootless-extras-20.10.5-3.el8.x86_64                                                                   10/10 
Installed products updated.

Installed:
  container-selinux-2:2.155.0-1.module_el8.3.0+699+d61d9c41.noarch         containerd.io-1.4.3-3.1.el8.x86_64                               
  docker-ce-3:20.10.5-3.el8.x86_64                                         docker-ce-cli-1:20.10.5-3.el8.x86_64                             
  docker-ce-rootless-extras-20.10.5-3.el8.x86_64                           fuse-overlayfs-1.3.0-2.module_el8.3.0+699+d61d9c41.x86_64        
  fuse3-3.2.1-12.el8.x86_64                                                fuse3-libs-3.2.1-12.el8.x86_64                                   
  libslirp-4.3.1-1.module_el8.3.0+475+c50ce30b.x86_64                      slirp4netns-1.1.8-1.module_el8.3.0+699+d61d9c41.x86_64           

Complete!
[root@k8s-master centos]#

To speed up docker pull, configure the Aliyun registry mirror.

[root@k8s-master centos]# sudo mkdir -p /etc/docker
[root@k8s-master centos]# sudo vim /etc/docker/daemon.json
[root@k8s-master centos]#

The contents added to the file are shown below.

{
   "registry-mirrors" : ["https://mj9kvemk.mirror.aliyuncs.com"]
}
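
The mirror URL above is tied to my Aliyun account; generate your own in the Aliyun container registry console. The mirror only takes effect once Docker (re)loads its configuration — enabling and starting Docker here also avoids the "docker service is not active" error hit later in this article:

sudo systemctl enable --now docker
docker info | grep -A1 "Registry Mirrors"   # verify the mirror is active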

Everything through the Docker installation is now complete. Again, a snapshot is recommended in case you need to roll back.

Next, install kubectl, kubelet, and kubeadm.

sudo yum install -y kubectl kubelet kubeadm
sudo systemctl enable kubelet
sudo systemctl start kubelet
kubeadm version
kubectl version --client
kubelet --version
[root@k8s-master centos]# sudo yum install -y kubectl kubelet kubeadm
Repository extras is listed more than once in the configuration
Last metadata expiration check: 9:49:51 ago on Thu 04 Mar 2021 05:55:12 AM PST.
Dependencies resolved.
============================================================================================================================================
 Package                                   Architecture              Version                            Repository                     Size
============================================================================================================================================
Installing:
 kubeadm                                   x86_64                    1.20.4-0                           kubernetes                    8.3 M
 kubectl                                   x86_64                    1.20.4-0                           kubernetes                    8.5 M
 kubelet                                   x86_64                    1.20.4-0                           kubernetes                     20 M
Installing dependencies:
 conntrack-tools                           x86_64                    1.4.4-10.el8                       base                          204 k
 cri-tools                                 x86_64                    1.13.0-0                           kubernetes                    5.1 M
 kubernetes-cni                            x86_64                    0.8.7-0                            kubernetes                     19 M
 libnetfilter_cthelper                     x86_64                    1.0.0-15.el8                       base                           24 k
 libnetfilter_cttimeout                    x86_64                    1.0.0-11.el8                       base                           24 k
 libnetfilter_queue                        x86_64                    1.0.4-3.el8                        base                           31 k
 socat                                     x86_64                    1.7.3.3-2.el8                      AppStream                     302 k

Transaction Summary
============================================================================================================================================
Install  10 Packages

Total download size: 61 M
Installed size: 263 M
Downloading Packages:
(1/10): libnetfilter_cttimeout-1.0.0-11.el8.x86_64.rpm                                                       92 kB/s |  24 kB     00:00    
(2/10): libnetfilter_cthelper-1.0.0-15.el8.x86_64.rpm                                                        85 kB/s |  24 kB     00:00    
(3/10): conntrack-tools-1.4.4-10.el8.x86_64.rpm                                                             669 kB/s | 204 kB     00:00    
(4/10): libnetfilter_queue-1.0.4-3.el8.x86_64.rpm                                                           155 kB/s |  31 kB     00:00    
(5/10): socat-1.7.3.3-2.el8.x86_64.rpm                                                                      918 kB/s | 302 kB     00:00    
(6/10): 14bfe6e75a9efc8eca3f638eb22c7e2ce759c67f95b43b16fae4ebabde1549f3-cri-tools-1.13.0-0.x86_64.rpm      5.4 MB/s | 5.1 MB     00:00    
(7/10): e2854f50118a9d9b091effc286aab053346d1fc6473b4d25f43067bfd185b211-kubectl-1.20.4-0.x86_64.rpm        4.8 MB/s | 8.5 MB     00:01    
(8/10): 41b736fab41de415734da929659fe7ed3b53b9c1d345cd8cf9a0570d3038b07b-kubeadm-1.20.4-0.x86_64.rpm        1.4 MB/s | 8.3 MB     00:06    
(9/10): db7cb5cb0b3f6875f54d10f02e625573988e3e91fd4fc5eef0b1876bb18604ad-kubernetes-cni-0.8.7-0.x86_64.rpm  4.5 MB/s |  19 MB     00:04    
(10/10): 1e577d68c58aa6fb846fdaa49737288f12dc78a2163a8470de930acece49974e-kubelet-1.20.4-0.x86_64.rpm       3.8 MB/s |  20 MB     00:05    
--------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                       9.3 MB/s |  61 MB     00:06     
warning: /var/cache/dnf/kubernetes-d03a9fe438e18cac/packages/14bfe6e75a9efc8eca3f638eb22c7e2ce759c67f95b43b16fae4ebabde1549f3-cri-tools-1.13.0-0.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 3e1ba8d5: NOKEY
Kubernetes                                                                                                   43 kB/s | 4.3 kB     00:00    
Importing GPG key 0xBA07F4FB:
 Userid     : "Google Cloud Packages Automatic Signing Key <[email protected]>"
 Fingerprint: 54A6 47F9 048D 5688 D7DA 2ABE 6A03 0B21 BA07 F4FB
 From       : https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Key imported successfully
Importing GPG key 0x836F4BEB:
 Userid     : "gLinux Rapture Automatic Signing Key (//depot/google3/production/borg/cloud-rapture/keys/cloud-rapture-pubkeys/cloud-rapture-signing-key-2020-12-03-16_08_05.pub) <[email protected]>"
 Fingerprint: 59FE 0256 8272 69DC 8157 8F92 8B57 C5C2 836F 4BEB
 From       : https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Key imported successfully
Importing GPG key 0x307EA071:
 Userid     : "Rapture Automatic Signing Key (cloud-rapture-signing-key-2021-03-01-08_01_09.pub)"
 Fingerprint: 7F92 E05B 3109 3BEF 5A3C 2D38 FEEA 9169 307E A071
 From       : https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Key imported successfully
Kubernetes                                                                                                   14 kB/s | 975  B     00:00    
Importing GPG key 0x3E1BA8D5:
 Userid     : "Google Cloud Packages RPM Signing Key <[email protected]>"
 Fingerprint: 3749 E1BA 95A8 6CE0 5454 6ED2 F09C 394C 3E1B A8D5
 From       : https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
Key imported successfully
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                    1/1 
  Installing       : kubectl-1.20.4-0.x86_64                                                                                           1/10 
  Installing       : cri-tools-1.13.0-0.x86_64                                                                                         2/10 
  Installing       : socat-1.7.3.3-2.el8.x86_64                                                                                        3/10 
  Installing       : libnetfilter_queue-1.0.4-3.el8.x86_64                                                                             4/10 
  Running scriptlet: libnetfilter_queue-1.0.4-3.el8.x86_64                                                                             4/10 
  Installing       : libnetfilter_cttimeout-1.0.0-11.el8.x86_64                                                                        5/10 
  Running scriptlet: libnetfilter_cttimeout-1.0.0-11.el8.x86_64                                                                        5/10 
  Installing       : libnetfilter_cthelper-1.0.0-15.el8.x86_64                                                                         6/10 
  Running scriptlet: libnetfilter_cthelper-1.0.0-15.el8.x86_64                                                                         6/10 
  Installing       : conntrack-tools-1.4.4-10.el8.x86_64                                                                               7/10 
  Running scriptlet: conntrack-tools-1.4.4-10.el8.x86_64                                                                               7/10 
  Installing       : kubernetes-cni-0.8.7-0.x86_64                                                                                     8/10 
  Installing       : kubelet-1.20.4-0.x86_64                                                                                           9/10 
  Installing       : kubeadm-1.20.4-0.x86_64                                                                                          10/10 
  Running scriptlet: kubeadm-1.20.4-0.x86_64                                                                                          10/10 
  Verifying        : conntrack-tools-1.4.4-10.el8.x86_64                                                                               1/10 
  Verifying        : libnetfilter_cthelper-1.0.0-15.el8.x86_64                                                                         2/10 
  Verifying        : libnetfilter_cttimeout-1.0.0-11.el8.x86_64                                                                        3/10 
  Verifying        : libnetfilter_queue-1.0.4-3.el8.x86_64                                                                             4/10 
  Verifying        : socat-1.7.3.3-2.el8.x86_64                                                                                        5/10 
  Verifying        : cri-tools-1.13.0-0.x86_64                                                                                         6/10 
  Verifying        : kubeadm-1.20.4-0.x86_64                                                                                           7/10 
  Verifying        : kubectl-1.20.4-0.x86_64                                                                                           8/10 
  Verifying        : kubelet-1.20.4-0.x86_64                                                                                           9/10 
  Verifying        : kubernetes-cni-0.8.7-0.x86_64                                                                                    10/10 
Installed products updated.

Installed:
  conntrack-tools-1.4.4-10.el8.x86_64            cri-tools-1.13.0-0.x86_64                       kubeadm-1.20.4-0.x86_64                   
  kubectl-1.20.4-0.x86_64                        kubelet-1.20.4-0.x86_64                         kubernetes-cni-0.8.7-0.x86_64             
  libnetfilter_cthelper-1.0.0-15.el8.x86_64      libnetfilter_cttimeout-1.0.0-11.el8.x86_64      libnetfilter_queue-1.0.4-3.el8.x86_64     
  socat-1.7.3.3-2.el8.x86_64                    

Complete!
[root@k8s-master centos]#
[root@k8s-master centos]# sudo systemctl enable kubelet
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
[root@k8s-master centos]# sudo systemctl start kubelet
[root@k8s-master centos]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:09:38Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
[root@k8s-master centos]# kubectl version --client
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:12:00Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
[root@k8s-master centos]# kubelet --version
Kubernetes v1.20.4
[root@k8s-master centos]#

Initialize the Kubernetes cluster. Mind the version to install (the version number queried in the previous step) and the node's IP range.
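
The required control-plane images can optionally be pre-pulled so the init step itself runs faster — a sketch using the same mirror repository and version as the init command below:

kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.20.4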

kubeadm init --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=127.0.0.1 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.20.4 --pod-network-cidr=192.168.0.0/16
[root@k8s-master centos]# kubeadm init --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=127.0.0.1 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.20.4 --pod-network-cidr=192.168.0.0/16
W0304 15:51:31.779162   10676 kubelet.go:200] cannot automatically set CgroupDriver when starting the Kubelet: cannot execute 'docker info -f {{.CgroupDriver}}': exit status 2
[init] Using Kubernetes version: v1.20.4
[preflight] Running pre-flight checks
	[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.18.0-240.10.1.el8_3.x86_64
CONFIG_NAMESPACES: enabled
CONFIG_NET_NS: enabled
CONFIG_PID_NS: enabled
CONFIG_IPC_NS: enabled
CONFIG_UTS_NS: enabled
CONFIG_CGROUPS: enabled
CONFIG_CGROUP_CPUACCT: enabled
CONFIG_CGROUP_DEVICE: enabled
CONFIG_CGROUP_FREEZER: enabled
CONFIG_CGROUP_SCHED: enabled
CONFIG_CPUSETS: enabled
CONFIG_MEMCG: enabled
CONFIG_INET: enabled
CONFIG_EXT4_FS: enabled (as module)
CONFIG_PROC_FS: enabled
CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled (as module)
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled (as module)
CONFIG_OVERLAY_FS: enabled (as module)
CONFIG_AUFS_FS: not set - Required for aufs.
CONFIG_BLK_DEV_DM: enabled (as module)
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR CRI]: container runtime is not running: output: Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Build with BuildKit (Docker Inc., v0.5.1-docker)

Server:
ERROR: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
errors pretty printing info
, error: exit status 1
	[ERROR Service-Docker]: docker service is not active, please run 'systemctl start docker.service'
	[ERROR IsDockerSystemdCheck]: cannot execute 'docker info -f {{.CgroupDriver}}': exit status 2
	[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	[ERROR SystemVerification]: error verifying Docker info: "Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
[root@k8s-master centos]#

Note the error messages from the previous step and resolve them one by one, finding solutions for the specific errors you encounter. Once they are resolved, rerun the previous step. The four errors I extracted are listed below.

1. [ERROR Service-Docker]: docker service is not active, please run 'systemctl start docker.service'
2. [ERROR IsDockerSystemdCheck]: cannot execute 'docker info -f {{.CgroupDriver}}': exit status 2
3. [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
4. [ERROR SystemVerification]: error verifying Docker info: "Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"

[root@k8s-master centos]# systemctl start docker.service
[root@k8s-master centos]# systemctl status docker.service
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2021-03-04 16:05:14 PST; 3s ago
     Docs: https://docs.docker.com
 Main PID: 12121 (dockerd)
    Tasks: 10
   Memory: 46.9M
   CGroup: /system.slice/docker.service
           └─12121 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

Mar 04 16:05:13 k8s-master dockerd[12121]: time="2021-03-04T16:05:13.777879065-08:00" level=error msg="Failed to built-in GetDriver graph b>
Mar 04 16:05:13 k8s-master dockerd[12121]: time="2021-03-04T16:05:13.799501504-08:00" level=warning msg="Your kernel does not support cgrou>
Mar 04 16:05:13 k8s-master dockerd[12121]: time="2021-03-04T16:05:13.799537279-08:00" level=warning msg="Your kernel does not support cgrou>
Mar 04 16:05:13 k8s-master dockerd[12121]: time="2021-03-04T16:05:13.799810767-08:00" level=info msg="Loading containers: start."
Mar 04 16:05:13 k8s-master dockerd[12121]: time="2021-03-04T16:05:13.923658021-08:00" level=info msg="Default bridge (docker0) is assigned >
Mar 04 16:05:14 k8s-master dockerd[12121]: time="2021-03-04T16:05:14.009526740-08:00" level=info msg="Loading containers: done."
Mar 04 16:05:14 k8s-master dockerd[12121]: time="2021-03-04T16:05:14.045157929-08:00" level=info msg="Docker daemon" commit=363e9a8 graphdr>
Mar 04 16:05:14 k8s-master dockerd[12121]: time="2021-03-04T16:05:14.045273909-08:00" level=info msg="Daemon has completed initialization"
Mar 04 16:05:14 k8s-master systemd[1]: Started Docker Application Container Engine.
Mar 04 16:05:14 k8s-master dockerd[12121]: time="2021-03-04T16:05:14.102022458-08:00" level=info msg="API listen on /var/run/docker.sock"

[root@k8s-master centos]# systemctl status docker.service
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2021-03-04 16:05:14 PST; 16s ago
     Docs: https://docs.docker.com
 Main PID: 12121 (dockerd)
    Tasks: 10
   Memory: 46.9M
   CGroup: /system.slice/docker.service
           └─12121 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

Mar 04 16:05:13 k8s-master dockerd[12121]: time="2021-03-04T16:05:13.777879065-08:00" level=error msg="Failed to built-in GetDriver graph b>
Mar 04 16:05:13 k8s-master dockerd[12121]: time="2021-03-04T16:05:13.799501504-08:00" level=warning msg="Your kernel does not support cgrou>
Mar 04 16:05:13 k8s-master dockerd[12121]: time="2021-03-04T16:05:13.799537279-08:00" level=warning msg="Your kernel does not support cgrou>
Mar 04 16:05:13 k8s-master dockerd[12121]: time="2021-03-04T16:05:13.799810767-08:00" level=info msg="Loading containers: start."
Mar 04 16:05:13 k8s-master dockerd[12121]: time="2021-03-04T16:05:13.923658021-08:00" level=info msg="Default bridge (docker0) is assigned >
Mar 04 16:05:14 k8s-master dockerd[12121]: time="2021-03-04T16:05:14.009526740-08:00" level=info msg="Loading containers: done."
Mar 04 16:05:14 k8s-master dockerd[12121]: time="2021-03-04T16:05:14.045157929-08:00" level=info msg="Docker daemon" commit=363e9a8 graphdr>
Mar 04 16:05:14 k8s-master dockerd[12121]: time="2021-03-04T16:05:14.045273909-08:00" level=info msg="Daemon has completed initialization"
Mar 04 16:05:14 k8s-master systemd[1]: Started Docker Application Container Engine.
Mar 04 16:05:14 k8s-master dockerd[12121]: time="2021-03-04T16:05:14.102022458-08:00" level=info msg="API listen on /var/run/docker.sock"

[root@k8s-master centos]#

The output above shows that Docker is now running, which resolves errors 1 and 4. Next, tackle the second error, the cgroup driver.

[root@k8s-master centos]# docker info | grep Cgroup
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
[root@k8s-master centos]# vi /usr/lib/systemd/system/docker.service
[root@k8s-master centos]# docker info | grep Cgroup
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
[root@k8s-master centos]# systemctl daemon-reload
[root@k8s-master centos]# systemctl restart docker
[root@k8s-master centos]# docker info | grep Cgroup
 Cgroup Driver: systemd
 Cgroup Version: 1
[root@k8s-master centos]#

As shown, the driver defaults to cgroupfs and needs to be changed to systemd.

Edit /usr/lib/systemd/system/docker.service and append "--exec-opt native.cgroupdriver=systemd" to the ExecStart command.
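
After the edit, the ExecStart line should look roughly like this (the flags other than the appended one are taken from the service status output above and may differ on your system):

ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd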

After changing the configuration, Docker must be restarted (systemctl daemon-reload followed by systemctl restart docker, as in the transcript above).
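
Error 3 above (/proc/sys/net/bridge/bridge-nf-call-iptables does not exist) is not fixed by restarting Docker. A common fix, per the upstream Kubernetes install docs, is to load the br_netfilter module and enable the bridge sysctls — a sketch:

sudo modprobe br_netfilter
echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf   # load the module on boot
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sudo sysctl --system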

Rerun the K8s initialization command.

The output of the rerun is shown below; it includes hints for steps to execute later.

[root@k8s-master centos]# kubeadm init --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=127.0.0.1 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.20.4 --pod-network-cidr=192.168.0.0/16
[init] Using Kubernetes version: v1.20.4
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.5. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.187.150 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.187.150 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.187.150 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 58.006390 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: fstpd4.39x3358s73yhyj3z
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.187.150:6443 --token fstpd4.39x3358s73yhyj3z \
    --discovery-token-ca-cert-hash sha256:8785788aea76155ad54e639af72e760d36c13d0abd079e2a7219fc54287ea727 
[root@k8s-master centos]#
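
If this join command is lost, or the token expires (bootstrap tokens are valid for 24 hours by default), a fresh one can be generated on the master at any time:

kubeadm token create --print-join-command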

At this point, do not rush straight into the next install step. Read every line of the log above carefully and note which files are involved — later tasks such as changing the hostname or renewing expired certificates all relate to them. The output also lists the follow-up commands to run; execute them as instructed.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get node
kubectl get pod --all-namespaces
[root@k8s-master centos]# mkdir -p $HOME/.kube
[root@k8s-master centos]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master centos]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master centos]# kubectl get node
NAME         STATUS     ROLES                  AGE     VERSION
k8s-master   NotReady   control-plane,master   3m51s   v1.20.4
[root@k8s-master centos]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-7f89b7bc75-t8ncc             0/1     Pending   0          3m39s
kube-system   coredns-7f89b7bc75-xc2bw             0/1     Pending   0          3m39s
kube-system   etcd-k8s-master                      1/1     Running   0          3m56s
kube-system   kube-apiserver-k8s-master            1/1     Running   0          3m56s
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          3m56s
kube-system   kube-proxy-czwnl                     1/1     Running   0          3m39s
kube-system   kube-scheduler-k8s-master            1/1     Running   0          3m56s
[root@k8s-master centos]#

The node's STATUS shows NotReady because the coredns pods have not started: the cluster is still missing a network pod. The earlier kubeadm output already hinted at the fix. Run the command below, choosing a network plugin suited to your environment.

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
[root@k8s-master centos]# kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created
[root@k8s-master centos]#

Query again and the node status changes to Ready. If it has not become Ready, do not make random changes; use kubectl describe pods -n kube-system calico-node-bmf86 to find the specific cause (the pod name itself is found with the kubectl get pod query below). If there is no progress for a long time because of (external) network problems, a system reboot is worth considering.

[root@k8s-master centos]# kubectl describe pods -n kube-system calico-node-bmf86
Name:                 calico-node-bmf86
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 k8s-master/192.168.187.150
Start Time:           Thu, 04 Mar 2021 16:30:44 -0800
Labels:               controller-revision-hash=8595d65b74
                      k8s-app=calico-node
                      pod-template-generation=1
Annotations:          <none>
Status:               Pending
IP:                   192.168.187.150
IPs:
  IP:           192.168.187.150
Controlled By:  DaemonSet/calico-node
Init Containers:
  upgrade-ipam:
    Container ID:  docker://4674c640297356646dd60cbd12c6f536def89bf047c16a21894a53f00b54fabb
    Image:         docker.io/calico/cni:v3.18.0
    Image ID:      docker-pullable://calico/cni@sha256:9d692b3ce9003469f5d50de91dc66ee37d76d78715ed5c1f4884b5d901411489
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/cni/bin/calico-ipam
      -upgrade
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 04 Mar 2021 17:13:28 -0800
      Finished:     Thu, 04 Mar 2021 17:13:28 -0800
    Ready:          True
    Restart Count:  0
    Environment Variables from:
      kubernetes-services-endpoint  ConfigMap  Optional: true
    Environment:
      KUBERNETES_NODE_NAME:        (v1:spec.nodeName)
      CALICO_NETWORKING_BACKEND:  <set to the key 'calico_backend' of config map 'calico-config'>  Optional: false
    Mounts:
      /host/opt/cni/bin from cni-bin-dir (rw)
      /var/lib/cni/networks from host-local-net-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from calico-node-token-5hjmd (ro)
  install-cni:
    Container ID:  docker://64511c7d99d1f510724377ca01a70a7fa88a479035157d2734c5d9e508c84132
    Image:         docker.io/calico/cni:v3.18.0
    Image ID:      docker-pullable://calico/cni@sha256:9d692b3ce9003469f5d50de91dc66ee37d76d78715ed5c1f4884b5d901411489
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/cni/bin/install
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 04 Mar 2021 17:13:29 -0800
      Finished:     Thu, 04 Mar 2021 17:13:30 -0800
    Ready:          True
    Restart Count:  0
    Environment Variables from:
      kubernetes-services-endpoint  ConfigMap  Optional: true
    Environment:
      CNI_CONF_NAME:         10-calico.conflist
      CNI_NETWORK_CONFIG:    <set to the key 'cni_network_config' of config map 'calico-config'>  Optional: false
      KUBERNETES_NODE_NAME:   (v1:spec.nodeName)
      CNI_MTU:               <set to the key 'veth_mtu' of config map 'calico-config'>  Optional: false
      SLEEP:                 false
    Mounts:
      /host/etc/cni/net.d from cni-net-dir (rw)
      /host/opt/cni/bin from cni-bin-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from calico-node-token-5hjmd (ro)
  flexvol-driver:
    Container ID:   docker://abd4f7693ee37c6860fbaeb7dde3075c14a7134b9ccfdbff3b5dd7229b2a250e
    Image:          docker.io/calico/pod2daemon-flexvol:v3.18.0
    Image ID:       docker-pullable://calico/pod2daemon-flexvol@sha256:63939cfbd430345c6add6548fc7a22e3082d0738f455ae747dd6264c834bcd4e
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 04 Mar 2021 17:15:50 -0800
      Finished:     Thu, 04 Mar 2021 17:15:50 -0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /host/driver from flexvol-driver-host (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from calico-node-token-5hjmd (ro)
Containers:
  calico-node:
    Container ID:   
    Image:          docker.io/calico/node:v3.18.0
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:      250m
    Liveness:   exec [/bin/calico-node -felix-live -bird-live] delay=10s timeout=1s period=10s #success=1 #failure=6
    Readiness:  exec [/bin/calico-node -felix-ready -bird-ready] delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment Variables from:
      kubernetes-services-endpoint  ConfigMap  Optional: true
    Environment:
      DATASTORE_TYPE:                     kubernetes
      WAIT_FOR_DATASTORE:                 true
      NODENAME:                            (v1:spec.nodeName)
      CALICO_NETWORKING_BACKEND:          <set to the key 'calico_backend' of config map 'calico-config'>  Optional: false
      CLUSTER_TYPE:                       k8s,bgp
      IP:                                 autodetect
      CALICO_IPV4POOL_IPIP:               Always
      CALICO_IPV4POOL_VXLAN:              Never
      FELIX_IPINIPMTU:                    <set to the key 'veth_mtu' of config map 'calico-config'>  Optional: false
      FELIX_VXLANMTU:                     <set to the key 'veth_mtu' of config map 'calico-config'>  Optional: false
      FELIX_WIREGUARDMTU:                 <set to the key 'veth_mtu' of config map 'calico-config'>  Optional: false
      CALICO_DISABLE_FILE_LOGGING:        true
      FELIX_DEFAULTENDPOINTTOHOSTACTION:  ACCEPT
      FELIX_IPV6SUPPORT:                  false
      FELIX_LOGSEVERITYSCREEN:            info
      FELIX_HEALTHENABLED:                true
    Mounts:
      /lib/modules from lib-modules (ro)
      /run/xtables.lock from xtables-lock (rw)
      /sys/fs/ from sysfs (rw)
      /var/lib/calico from var-lib-calico (rw)
      /var/log/calico/cni from cni-log-dir (ro)
      /var/run/calico from var-run-calico (rw)
      /var/run/nodeagent from policysync (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from calico-node-token-5hjmd (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:  
  var-run-calico:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/calico
    HostPathType:  
  var-lib-calico:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/calico
    HostPathType:  
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  sysfs:
    Type:          HostPath (bare host directory volume)
    Path:          /sys/fs/
    HostPathType:  DirectoryOrCreate
  cni-bin-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /opt/cni/bin
    HostPathType:  
  cni-net-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:  
  cni-log-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log/calico/cni
    HostPathType:  
  host-local-net-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/cni/networks
    HostPathType:  
  policysync:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/nodeagent
    HostPathType:  DirectoryOrCreate
  flexvol-driver-host:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
    HostPathType:  DirectoryOrCreate
  calico-node-token-5hjmd:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  calico-node-token-5hjmd
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  kubernetes.io/os=linux
Tolerations:     :NoSchedule op=Exists
                 :NoExecute op=Exists
                 CriticalAddonsOnly op=Exists
                 node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                 node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                 node.kubernetes.io/not-ready:NoExecute op=Exists
                 node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                 node.kubernetes.io/unreachable:NoExecute op=Exists
                 node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  47m                default-scheduler  Successfully assigned kube-system/calico-node-bmf86 to k8s-master
  Normal   Pulling    42m (x4 over 47m)  kubelet            Pulling image "docker.io/calico/cni:v3.18.0"
  Warning  Failed     41m (x4 over 46m)  kubelet            Failed to pull image "docker.io/calico/cni:v3.18.0": rpc error: code = Unknown desc = context canceled
  Warning  Failed     41m (x4 over 46m)  kubelet            Error: ErrImagePull
  Normal   BackOff    40m (x7 over 46m)  kubelet            Back-off pulling image "docker.io/calico/cni:v3.18.0"
  Warning  Failed     40m (x7 over 46m)  kubelet            Error: ImagePullBackOff
  Warning  Failed     36m (x2 over 38m)  kubelet            Failed to pull image "docker.io/calico/cni:v3.18.0": rpc error: code = Unknown desc = context canceled
  Warning  Failed     35m (x4 over 38m)  kubelet            Error: ErrImagePull
  Warning  Failed     35m (x2 over 38m)  kubelet            Failed to pull image "docker.io/calico/cni:v3.18.0": rpc error: code = Unknown desc = Error response from daemon: Head https://registry-1.docker.io/v2/calico/cni/manifests/v3.18.0: Get https://auth.docker.io/token?scope=repository%3Acalico%2Fcni%3Apull&service=registry.docker.io: net/http: TLS handshake timeout
  Normal   BackOff    34m (x7 over 37m)  kubelet            Back-off pulling image "docker.io/calico/cni:v3.18.0"
  Warning  Failed     34m (x7 over 37m)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    29m (x6 over 39m)  kubelet            Pulling image "docker.io/calico/cni:v3.18.0"
  Normal   Pulling    5m43s              kubelet            Pulling image "docker.io/calico/cni:v3.18.0"
  Normal   Pulled     5m2s               kubelet            Successfully pulled image "docker.io/calico/cni:v3.18.0" in 40.852077338s
  Normal   Started    5m2s               kubelet            Started container upgrade-ipam
  Normal   Created    5m2s               kubelet            Created container upgrade-ipam
  Normal   Pulled     5m1s               kubelet            Container image "docker.io/calico/cni:v3.18.0" already present on machine
  Normal   Created    5m1s               kubelet            Created container install-cni
  Normal   Started    5m1s               kubelet            Started container install-cni
  Warning  Failed     3m21s              kubelet            Failed to pull image "docker.io/calico/pod2daemon-flexvol:v3.18.0": rpc error: code = Unknown desc = context canceled
  Warning  Failed     3m21s              kubelet            Error: ErrImagePull
  Normal   BackOff    3m21s              kubelet            Back-off pulling image "docker.io/calico/pod2daemon-flexvol:v3.18.0"
  Warning  Failed     3m21s              kubelet            Error: ImagePullBackOff
  Normal   Pulling    3m9s (x2 over 5m)  kubelet            Pulling image "docker.io/calico/pod2daemon-flexvol:v3.18.0"
  Normal   Pulled     2m40s              kubelet            Successfully pulled image "docker.io/calico/pod2daemon-flexvol:v3.18.0" in 28.391346804s
  Normal   Created    2m40s              kubelet            Created container flexvol-driver
  Normal   Started    2m40s              kubelet            Started container flexvol-driver
  Normal   Pulling    2m39s              kubelet            Pulling image "docker.io/calico/node:v3.18.0"
[root@k8s-master centos]# 
[root@k8s-master centos]# kubectl get node
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   51m   v1.20.4
[root@k8s-master centos]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                       READY   STATUS              RESTARTS   AGE
kube-system   calico-kube-controllers-6949477b58-hhrql   0/1     ContainerCreating   0          44m
kube-system   calico-node-bmf86                          0/1     Init:ErrImagePull   0          44m
kube-system   coredns-7f89b7bc75-t8ncc                   0/1     ContainerCreating   0          51m
kube-system   coredns-7f89b7bc75-xc2bw                   0/1     ContainerCreating   0          51m
kube-system   etcd-k8s-master                            1/1     Running             1          51m
kube-system   kube-apiserver-k8s-master                  1/1     Running             1          51m
kube-system   kube-controller-manager-k8s-master         1/1     Running             1          51m
kube-system   kube-proxy-czwnl                           1/1     Running             1          51m
kube-system   kube-scheduler-k8s-master                  1/1     Running             1          51m
[root@k8s-master centos]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                       READY   STATUS              RESTARTS   AGE
kube-system   calico-kube-controllers-6949477b58-hhrql   0/1     ContainerCreating   0          47m
kube-system   calico-node-bmf86                          0/1     PodInitializing     0          47m
kube-system   coredns-7f89b7bc75-t8ncc                   0/1     ContainerCreating   0          53m
kube-system   coredns-7f89b7bc75-xc2bw                   0/1     ContainerCreating   0          53m
kube-system   etcd-k8s-master                            1/1     Running             1          53m
kube-system   kube-apiserver-k8s-master                  1/1     Running             1          53m
kube-system   kube-controller-manager-k8s-master         1/1     Running             1          53m
kube-system   kube-proxy-czwnl                           1/1     Running             1          53m
kube-system   kube-scheduler-k8s-master                  1/1     Running             1          53m
[root@k8s-master centos]#

The network components take a while to initialize, so be patient.
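
If you would rather not re-run kubectl get pod by hand while the images pull, you can stream status changes instead; this watch is just a convenience and was not part of the original session:

kubectl get pod --all-namespaces -w    # streams status changes; press Ctrl-C to stop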

[root@localhost centos]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                            READY   STATUS              RESTARTS   AGE
kube-system   calico-kube-controllers-6949477b58-wlgvj        0/1     ContainerCreating   0          15h
kube-system   calico-node-5zvll                               0/1     Init:2/3            0          15h
kube-system   coredns-7f89b7bc75-pcw6h                        0/1     ContainerCreating   0          15h
kube-system   coredns-7f89b7bc75-w4w74                        0/1     ContainerCreating   0          15h
kube-system   etcd-localhost.localdomain                      1/1     Running             0          15h
kube-system   kube-apiserver-localhost.localdomain            1/1     Running             1          15h
kube-system   kube-controller-manager-localhost.localdomain   1/1     Running             0          15h
kube-system   kube-proxy-h5472                                1/1     Running             0          15h
kube-system   kube-scheduler-localhost.localdomain            1/1     Running             0          15h
[root@localhost centos]# kubectl get node
NAME                    STATUS   ROLES                  AGE   VERSION
localhost.localdomain   Ready    control-plane,master   15h   v1.20.4
[root@localhost centos]# kubectl describe pods -n kube-system calico-node-5zvll
Name:                 calico-node-5zvll
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 localhost.localdomain/192.168.187.132
Start Time:           Wed, 03 Mar 2021 03:03:45 -0800
Labels:               controller-revision-hash=8595d65b74
                      k8s-app=calico-node
                      pod-template-generation=1
Annotations:          <none>
Status:               Running
IP:                   192.168.187.132
IPs:
  IP:           192.168.187.132
Controlled By:  DaemonSet/calico-node
Init Containers:
  upgrade-ipam:
    Container ID:  docker://fc823029adcda0ce60fcfc9a6b8c3ab426ae40a6bb8691c78b5cde37787b8c8c
    Image:         docker.io/calico/cni:v3.18.0
    Image ID:      docker-pullable://calico/cni@sha256:9d692b3ce9003469f5d50de91dc66ee37d76d78715ed5c1f4884b5d901411489
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/cni/bin/calico-ipam
      -upgrade
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Wed, 03 Mar 2021 18:05:23 -0800
      Finished:     Wed, 03 Mar 2021 18:05:23 -0800
    Ready:          True
    Restart Count:  1
    Environment Variables from:
      kubernetes-services-endpoint  ConfigMap  Optional: true
    Environment:
      KUBERNETES_NODE_NAME:        (v1:spec.nodeName)
      CALICO_NETWORKING_BACKEND:  <set to the key 'calico_backend' of config map 'calico-config'>  Optional: false
    Mounts:
      /host/opt/cni/bin from cni-bin-dir (rw)
      /var/lib/cni/networks from host-local-net-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from calico-node-token-btzmn (ro)
  install-cni:
    Container ID:  docker://7fe07cf00cc1945ac10e3c5b079a562a46205753cee0abdd0e26de1a7f0e418e
    Image:         docker.io/calico/cni:v3.18.0
    Image ID:      docker-pullable://calico/cni@sha256:9d692b3ce9003469f5d50de91dc66ee37d76d78715ed5c1f4884b5d901411489
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/cni/bin/install
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Wed, 03 Mar 2021 18:05:27 -0800
      Finished:     Wed, 03 Mar 2021 18:05:28 -0800
    Ready:          True
    Restart Count:  0
    Environment Variables from:
      kubernetes-services-endpoint  ConfigMap  Optional: true
    Environment:
      CNI_CONF_NAME:         10-calico.conflist
      CNI_NETWORK_CONFIG:    <set to the key 'cni_network_config' of config map 'calico-config'>  Optional: false
      KUBERNETES_NODE_NAME:   (v1:spec.nodeName)
      CNI_MTU:               <set to the key 'veth_mtu' of config map 'calico-config'>  Optional: false
      SLEEP:                 false
    Mounts:
      /host/etc/cni/net.d from cni-net-dir (rw)
      /host/opt/cni/bin from cni-bin-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from calico-node-token-btzmn (ro)
  flexvol-driver:
    Container ID:   docker://a9107694970d474dc0a578ba49c292fa1a55208f4a62b0683228382a5d623706
    Image:          docker.io/calico/pod2daemon-flexvol:v3.18.0
    Image ID:       docker-pullable://calico/pod2daemon-flexvol@sha256:63939cfbd430345c6add6548fc7a22e3082d0738f455ae747dd6264c834bcd4e
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Wed, 03 Mar 2021 18:05:54 -0800
      Finished:     Wed, 03 Mar 2021 18:05:54 -0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /host/driver from flexvol-driver-host (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from calico-node-token-btzmn (ro)
Containers:
  calico-node:
    Container ID:   docker://4b3acbad4c9731c90dc0dddaad8e1edbc448401f18226512638931bdff2b0451
    Image:          docker.io/calico/node:v3.18.0
    Image ID:       docker-pullable://calico/node@sha256:e207db848adb688ae3edac439c650b0ec44c0675efc3b735721f7541e0320100
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Wed, 03 Mar 2021 18:06:45 -0800
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:      250m
    Liveness:   exec [/bin/calico-node -felix-live -bird-live] delay=10s timeout=1s period=10s #success=1 #failure=6
    Readiness:  exec [/bin/calico-node -felix-ready -bird-ready] delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment Variables from:
      kubernetes-services-endpoint  ConfigMap  Optional: true
    Environment:
      DATASTORE_TYPE:                     kubernetes
      WAIT_FOR_DATASTORE:                 true
      NODENAME:                            (v1:spec.nodeName)
      CALICO_NETWORKING_BACKEND:          <set to the key 'calico_backend' of config map 'calico-config'>  Optional: false
      CLUSTER_TYPE:                       k8s,bgp
      IP:                                 autodetect
      CALICO_IPV4POOL_IPIP:               Always
      CALICO_IPV4POOL_VXLAN:              Never
      FELIX_IPINIPMTU:                    <set to the key 'veth_mtu' of config map 'calico-config'>  Optional: false
      FELIX_VXLANMTU:                     <set to the key 'veth_mtu' of config map 'calico-config'>  Optional: false
      FELIX_WIREGUARDMTU:                 <set to the key 'veth_mtu' of config map 'calico-config'>  Optional: false
      CALICO_DISABLE_FILE_LOGGING:        true
      FELIX_DEFAULTENDPOINTTOHOSTACTION:  ACCEPT
      FELIX_IPV6SUPPORT:                  false
      FELIX_LOGSEVERITYSCREEN:            info
      FELIX_HEALTHENABLED:                true
    Mounts:
      /lib/modules from lib-modules (ro)
      /run/xtables.lock from xtables-lock (rw)
      /sys/fs/ from sysfs (rw)
      /var/lib/calico from var-lib-calico (rw)
      /var/log/calico/cni from cni-log-dir (ro)
      /var/run/calico from var-run-calico (rw)
      /var/run/nodeagent from policysync (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from calico-node-token-btzmn (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:  
  var-run-calico:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/calico
    HostPathType:  
  var-lib-calico:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/calico
    HostPathType:  
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  sysfs:
    Type:          HostPath (bare host directory volume)
    Path:          /sys/fs/
    HostPathType:  DirectoryOrCreate
  cni-bin-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /opt/cni/bin
    HostPathType:  
  cni-net-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:  
  cni-log-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log/calico/cni
    HostPathType:  
  host-local-net-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/cni/networks
    HostPathType:  
  policysync:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/nodeagent
    HostPathType:  DirectoryOrCreate
  flexvol-driver-host:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
    HostPathType:  DirectoryOrCreate
  calico-node-token-btzmn:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  calico-node-token-btzmn
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  kubernetes.io/os=linux
Tolerations:     :NoSchedule op=Exists
                 :NoExecute op=Exists
                 CriticalAddonsOnly op=Exists
                 node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                 node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                 node.kubernetes.io/not-ready:NoExecute op=Exists
                 node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                 node.kubernetes.io/unreachable:NoExecute op=Exists
                 node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason          Age                From     Message
  ----     ------          ----               ----     -------
  Normal   SandboxChanged  18m                kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          18m                kubelet  Container image "docker.io/calico/cni:v3.18.0" already present on machine
  Normal   Created         18m                kubelet  Created container upgrade-ipam
  Normal   Started         18m                kubelet  Started container upgrade-ipam
  Normal   Pulled          18m                kubelet  Container image "docker.io/calico/cni:v3.18.0" already present on machine
  Normal   Created         18m                kubelet  Created container install-cni
  Normal   Started         17m                kubelet  Started container install-cni
  Normal   Pulling         17m                kubelet  Pulling image "docker.io/calico/pod2daemon-flexvol:v3.18.0"
  Normal   Started         17m                kubelet  Started container flexvol-driver
  Normal   Created         17m                kubelet  Created container flexvol-driver
  Normal   Pulled          17m                kubelet  Successfully pulled image "docker.io/calico/pod2daemon-flexvol:v3.18.0" in 24.078530014s
  Normal   Pulling         17m                kubelet  Pulling image "docker.io/calico/node:v3.18.0"
  Normal   Pulled          16m                kubelet  Successfully pulled image "docker.io/calico/node:v3.18.0" in 50.457868042s
  Normal   Created         16m                kubelet  Created container calico-node
  Normal   Started         16m                kubelet  Started container calico-node
  Warning  Unhealthy       15m (x2 over 16m)  kubelet  Readiness probe failed:
[root@localhost centos]# 

After the images finish pulling, every pod eventually reaches the Running state:

[root@k8s-master centos]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-6949477b58-hhrql   1/1     Running   0          70m
kube-system   calico-node-bmf86                          1/1     Running   0          70m
kube-system   coredns-7f89b7bc75-t8ncc                   1/1     Running   0          76m
kube-system   coredns-7f89b7bc75-xc2bw                   1/1     Running   0          76m
kube-system   etcd-k8s-master                            1/1     Running   1          76m
kube-system   kube-apiserver-k8s-master                  1/1     Running   1          76m
kube-system   kube-controller-manager-k8s-master         1/1     Running   1          76m
kube-system   kube-proxy-czwnl                           1/1     Running   1          76m
kube-system   kube-scheduler-k8s-master                  1/1     Running   1          76m

With the steps above, K8s installation is complete; it is a good idea to save a VM snapshot of this state.

Next, install the Dashboard. If the URL below is not reachable from your network, one workaround is to send the link to WeChat and open it there, from which the file content can be copied.

wget https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended.yaml

The full content of recommended.yaml follows. If network problems prevent you from downloading or viewing the file, you can use the content below directly. Note that two lines have been added relative to the upstream file: type: NodePort and nodePort: 30000. When adding them, make sure the colons are half-width ASCII characters rather than full-width Chinese ones. (A quick way to validate the edited file is sketched right after the YAML.)

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.2.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.6
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
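
If you edit recommended.yaml by hand (for example, to add the NodePort lines), a client-side dry run will catch YAML syntax mistakes such as full-width colons before anything is created. This validation step is optional and was not part of the original session:

kubectl apply -f recommended.yaml --dry-run=client    # parses and validates the file without creating resources
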
[root@k8s-master ~]# ls -al
total 44
dr-xr-x---.  4 root root  227 Mar  4 17:58 .
dr-xr-xr-x. 17 root root  244 Mar  4 17:11 ..
-rw-------.  1 root root 2739 Feb  9 17:31 anaconda-ks.cfg
-rw-------.  1 root root 3237 Mar  4 17:10 .bash_history
-rw-r--r--.  1 root root   18 May 11  2019 .bash_logout
-rw-r--r--.  1 root root  176 May 11  2019 .bash_profile
-rw-r--r--.  1 root root  176 May 11  2019 .bashrc
drwx------.  2 root root    6 Feb  9 17:31 .cache
-rw-r--r--.  1 root root  100 May 11  2019 .cshrc
drwxr-xr-x.  3 root root   33 Mar  4 16:27 .kube
-rw-------.  1 root root 2065 Feb  9 17:31 original-ks.cfg
-rw-r--r--   1 root root 7591 Mar  4 17:57 recommended.yaml 
-rw-r--r--.  1 root root  129 May 11  2019 .tcshrc
-rw-------.  1 root root 1107 Mar  4 15:43 .viminfo
[root@k8s-master ~]# kubectl create -f recommended.yaml 
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
[root@k8s-master ~]#
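
Before opening the browser, it can help to confirm that the dashboard pods come up and that the NodePort service is in place. These are standard kubectl queries, not part of the original transcript:

kubectl get pod -n kubernetes-dashboard    # both pods should eventually show 1/1 Running
kubectl get svc -n kubernetes-dashboard    # the kubernetes-dashboard service should show 443:30000/TCP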

Open https://192.168.187.150:30000/#/login in a browser, replacing the IP with the node IP of your own environment. Since the dashboard serves an auto-generated self-signed certificate, the browser may show a security warning that has to be accepted before the login page appears.
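
If the page does not load, a quick reachability test from the node itself can rule out port or firewall problems; the -k flag tells curl to accept the self-signed certificate:

curl -k https://192.168.187.150:30000/    # an HTML response means the NodePort is serving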

[root@localhost Downloads]# kubectl create sa dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
[root@localhost Downloads]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
[root@localhost Downloads]# ADMIN_SECRET=$(kubectl get secrets -n kube-system | grep dashboard-admin | awk '{print $1}')
[root@localhost Downloads]# DASHBOARD_LOGIN_TOKEN=$(kubectl describe secret -n kube-system ${ADMIN_SECRET} | grep -E '^token' | awk '{print $2}')
[root@localhost Downloads]# echo ${DASHBOARD_LOGIN_TOKEN}
eyJhbGciOiJSUzI1NiIsImtpZCI6IkRkbG5jMmNqUWdpOWZrVXp2S0szTX

Paste the generated token into the browser login page opened earlier, and the Dashboard can be accessed normally. The token is valid for 24 hours.
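
If the login session expires, the token can be extracted again with the same two commands used above; this is simply a repeat of the earlier steps, not a new procedure:

ADMIN_SECRET=$(kubectl get secrets -n kube-system | grep dashboard-admin | awk '{print $1}')
kubectl describe secret -n kube-system ${ADMIN_SECRET} | grep -E '^token' | awk '{print $2}'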

Reprinted from blog.csdn.net/u010313979/article/details/114314485