Install a k8s cluster with kubeadm on CentOS 7

Kubernetes officially provides several installation methods, such as kind, minikube, and kubeadm.
Installing with minikube was covered in previous articles and is relatively simple. The following introduces deploying a k8s cluster with kubeadm.
A variety of high-availability solutions are provided in production:

k8s official documentation
This article installs version 1.28.0.
It is recommended to read the official documentation carefully; the operations below largely follow it.

1. Environment preparation

Three CentOS 7 virtual machines, each with 2 CPUs and 4 GB RAM (the official minimum is 2 CPUs and 2 GB)
Kernel version:

 uname -r


Role     IP               Hostname
master 192.168.213.9 k8s-kubeadmin-1
node1 192.168.213.10 k8s-kubeadmin-2
node2 192.168.213.11 k8s-kubeadmin-3

Modify the hosts files in the three virtual machines:
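Based on the table above, the entries added to /etc/hosts on each machine look like this:

192.168.213.9   k8s-kubeadmin-1
192.168.213.10  k8s-kubeadmin-2
192.168.213.11  k8s-kubeadmin-3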
Make sure the machines can ping each other by hostname.
Modify the hostname:
Check the current hostname:

hostname

Set it (use the matching name from the table on each node):

sudo hostnamectl set-hostname k8s-kubeadmin-1
2. Installation


2.1. All node operations: turn off the firewall

Turn off the firewall to avoid configuring open ports one by one. You do not have to disable it; if the extra work doesn't bother you, refer to my previous blog on opening the required firewall ports.

systemctl stop firewalld    # stop the firewall
systemctl disable firewalld # do not start at boot
2.2. All node operations: disable selinux
# Set SELinux to permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

Or set SELINUX=disabled. You can verify the current mode as shown below.
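A quick check that the mode actually changed:

getenforce   # should print Permissive (or Disabled after a reboot with SELINUX=disabled)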

2.3. All node operations: close the swap partition
# Permanently disable swap: delete or comment out the swap mount line in /etc/fstab
nano /etc/fstab
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
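To turn swap off immediately, without waiting for the reboot below (the fstab edit only takes effect at boot), you can also run:

swapoff -a   # disable all swap devices right away
free -h      # verify the Swap line now shows 0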


Restart:

reboot
2.4. All node operations: enable time synchronization
yum -y install ntp
systemctl start ntpd
systemctl enable ntpd
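To verify that synchronization is working, query the NTP peers:

ntpq -p   # lists the NTP servers; an asterisk marks the currently selected source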
2.5. All node operations: enable bridge-nf-call-iptables

In the Kubernetes environment, both iptables and IPVS are used to implement network traffic forwarding and load balancing, but they have some differences in implementation methods and functions.

iptables is a tool built into the Linux system that can filter and forward traffic and support network functions such as NAT. In Kubernetes, iptables is mainly used to implement the ClusterIP and NodePort types of Service. When the Service is of ClusterIP type, iptables will add a rule for each Service IP on the node to forward traffic to the IP of the backend Pod. When the Service is of NodePort type, iptables will add a rule on each node to forward traffic from the host's NodePort to the Service IP.

In contrast, IPVS (IP Virtual Server) is a high-performance load balancing tool based on the Linux kernel. It can process traffic in the kernel state, supports multiple load balancing algorithms, and can maintain sessions. In Kubernetes, IPVS can be used to implement service load balancing. Compared with iptables, IPVS has higher performance and more load balancing algorithm choices, and can better cope with high traffic and high concurrency scenarios. IPVS proxy uses iptables for packet filtering, SNAT or masquerading.

In summary, both iptables and IPVS are used in Kubernetes to implement network traffic forwarding and load balancing. iptables is more suitable for implementing service-based load balancing, while IPVS is more suitable for high-performance and high-concurrency scenarios. In actual use, you can choose the appropriate tool according to your needs.
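As a concrete illustration, once the cluster is up you can inspect what kube-proxy has programmed. A quick look, assuming the default iptables mode (the ipvsadm command only shows entries if kube-proxy runs in IPVS mode and ipvsadm is installed):

iptables -t nat -L KUBE-SERVICES -n | head   # Service forwarding rules installed by kube-proxy
ipvsadm -Ln                                  # IPVS virtual servers, if IPVS mode is enabled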
Execute the following command:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# Set the required sysctl parameters; they persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply the sysctl parameters without rebooting
sudo sysctl --system

Confirm that the `br_netfilter` and `overlay` modules are loaded by running the following commands:

lsmod | grep br_netfilter
lsmod | grep overlay

Verify that the net.bridge.bridge-nf-call-iptables, net.bridge.bridge-nf-call-ip6tables, and net.ipv4.ip_forward system variables are set to 1 in your sysctl configuration by running the following commands:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
[root@k8s-kubeadmin-1 /]# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
2.6. All node operations: install the container runtime containerd

Install containerd:

yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum -y install containerd.io

Generate the default config.toml configuration:

containerd config default > /etc/containerd/config.toml

Configure the systemd cgroup driver by setting SystemdCgroup = true in /etc/containerd/config.toml:

sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

Start containerd and enable it at boot:

systemctl restart containerd && systemctl enable containerd
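A quick sanity check that containerd is running and the cgroup driver change was applied:

systemctl is-active containerd                   # should print: active
grep SystemdCgroup /etc/containerd/config.toml   # should show: SystemdCgroup = true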
2.7. All node operations: configure the Alibaba Cloud yum source for k8s

The official documentation configures a yum repository hosted abroad; it is slow or unreachable from some networks, so all nodes are configured with Alibaba Cloud's yum mirror instead.

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name = Kubernetes
baseurl = https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled = 1
gpgcheck = 0
repo_gpgcheck = 0
gpgkey = https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@k8s-kubeadmin-1 ~]# cd /etc/yum.repos.d
[root@k8s-kubeadmin-1 yum.repos.d]# ll
total 48
-rw-r--r--. 1 root root 1664 Nov 23  2020 CentOS-Base.repo
-rw-r--r--. 1 root root 1309 Nov 23  2020 CentOS-CR.repo
-rw-r--r--. 1 root root  649 Nov 23  2020 CentOS-Debuginfo.repo
-rw-r--r--. 1 root root  314 Nov 23  2020 CentOS-fasttrack.repo
-rw-r--r--. 1 root root  630 Nov 23  2020 CentOS-Media.repo
-rw-r--r--. 1 root root 1331 Nov 23  2020 CentOS-Sources.repo
-rw-r--r--. 1 root root 8515 Nov 23  2020 CentOS-Vault.repo
-rw-r--r--. 1 root root  616 Nov 23  2020 CentOS-x86_64-kernel.repo
-rw-r--r--. 1 root root 1919 Nov 21 03:56 docker-ce.repo
-rw-r--r--  1 root root  287 Nov 29 00:54 kubernetes.repo
[root@k8s-kubeadmin-1 yum.repos.d]#

Install Docker:

sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Start Docker:

sudo systemctl start docker 

Enable Docker at boot:

systemctl enable docker

Configure Alibaba Cloud image acceleration:
You can use the accelerator by modifying the daemon configuration file /etc/docker/daemon.json:

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://e6sj15e9.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
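Confirm the mirror is in effect:

docker info | grep -A1 'Registry Mirrors'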
2.8. All node operations: yum installs kubeadm, kubelet, and kubectl

This is the installation from the official website:
Remove any historical installation: if kubelet, kubeadm, or kubectl were installed before, uninstall them and reinstall:

yum -y remove kubelet kubeadm kubectl

Visit the Alibaba Cloud mirror to view the package details:

https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64

Version 1.28.0 is relatively recent (updated 2023-08-16), so choose it. Pin the version number when installing:

yum install -y kubelet-1.28.0 kubeadm-1.28.0 kubectl-1.28.0 --disableexcludes=kubernetes
systemctl enable kubelet
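Verify the installed versions:

kubeadm version -o short   # expect v1.28.0
kubelet --version
kubectl version --client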
2.9. View the required images

The following images must be prepared in advance.
You can point kubeadm at a custom image repository; here I check which images exist on the Alibaba Cloud accelerator, pull them from there, and re-tag them to the names kubeadm needs.

 kubeadm config images list
[root@k8s-kubeadmin-1 yum.repos.d]#  kubeadm config images list
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1

These dependent images cannot be pulled under their registry.k8s.io names:

[root@k8s-kubeadmin-1 yum.repos.d]# docker search registry.k8s.io/kube-apiserver:v1.28.4
Error response from daemon: Unexpected status code 404

So the kubeadm init command below may not succeed without preparation.
Pull the images from the Alibaba Cloud registry first, then change their tags to the names kubeadm requires.
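A minimal pull loop for those images, assuming the same Alibaba Cloud repository and the v1.28.0 tags used in the re-tag commands below:

for img in kube-apiserver:v1.28.0 kube-controller-manager:v1.28.0 \
           kube-scheduler:v1.28.0 kube-proxy:v1.28.0 \
           etcd:3.5.9-0 coredns:v1.10.1 pause:3.9; do
  docker pull registry.aliyuncs.com/google_containers/$img   # pull each required image
done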

docker tag  registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.0   registry.k8s.io/kube-apiserver:v1.28.4
docker tag  registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.0   registry.k8s.io/kube-controller-manager:v1.28.4
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.0   registry.k8s.io/kube-scheduler:v1.28.4
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.28.0   registry.k8s.io/kube-proxy:v1.28.4
docker tag registry.aliyuncs.com/google_containers/etcd:3.5.9-0 registry.k8s.io/etcd:3.5.9-0
docker tag registry.aliyuncs.com/google_containers/coredns:v1.10.1 registry.k8s.io/coredns/coredns:v1.10.1
docker tag registry.aliyuncs.com/google_containers/pause:3.9  registry.k8s.io/pause:3.6
[root@k8s-kubeadmin-1 yum.repos.d]# docker images
REPOSITORY                                                        TAG       IMAGE ID       CREATED         SIZE
flannel/flannel                                                   v0.22.3   e23f7ca36333   2 months ago    70.2MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.28.0   bb5e0dde9054   3 months ago    126MB
registry.k8s.io/kube-apiserver                                    v1.28.4   bb5e0dde9054   3 months ago    126MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.28.0   f6f496300a2a   3 months ago    60.1MB
registry.k8s.io/kube-scheduler                                    v1.28.4   f6f496300a2a   3 months ago    60.1MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.28.0   4be79c38a4ba   3 months ago    122MB
registry.k8s.io/kube-controller-manager                           v1.28.4   4be79c38a4ba   3 months ago    122MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.28.0   ea1030da44aa   3 months ago    73.1MB
registry.k8s.io/kube-proxy                                        v1.28.4   ea1030da44aa   3 months ago    73.1MB
flannel/flannel-cni-plugin                                        v1.2.0    a55d1bad692b   4 months ago    8.04MB
registry.aliyuncs.com/google_containers/etcd                      3.5.9-0   73deb9a3f702   6 months ago    294MB
registry.k8s.io/etcd                                              3.5.9-0   73deb9a3f702   6 months ago    294MB
registry.k8s.io/coredns/coredns                                   v1.10.1   ead0a4a53df8   9 months ago    53.6MB
registry.aliyuncs.com/google_containers/coredns                   v1.10.1   ead0a4a53df8   9 months ago    53.6MB
registry.aliyuncs.com/google_containers/pause                     3.9       e6f181688397   13 months ago   744kB
registry.k8s.io/pause                                             3.6       e6f181688397   13 months ago   744kB
registry.k8s.io/pause                                             3.9       e6f181688397   13 months ago   744kB
kubernetesui/dashboard                                            latest    07655ddf2eeb   14 months ago   246MB
kubernetesui/dashboard                                            v2.7.0    07655ddf2eeb   14 months ago   246MB
kubernetesui/metrics-scraper                                      latest    421615ce8dbd   2 years ago     34.4MB
kubernetesui/metrics-scraper                                      v1.0.8    421615ce8dbd   2 years ago     34.4MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.17.4   6dec7cfde1e5   3 years ago     116MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.17.4   2e1ba57fe95a   3 years ago     171MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.17.4   7f997fcf3e94   3 years ago     161MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.17.4   5db16c1c7aff   3 years ago     94.4MB
registry.aliyuncs.com/google_containers/coredns                   1.6.5     70f311871ae1   4 years ago     41.6MB
registry.aliyuncs.com/google_containers/etcd                      3.4.3-0   303ce5db0e90   4 years ago     288MB
registry.aliyuncs.com/google_containers/pause                     3.1       da86e6ba6ca1   5 years ago     742kB
kubernetes/pause                                                  latest    f9d5de079539   9 years ago     240kB
2.10. k8s-kubeadmin-1 node execution: install the master
kubeadm init \
--apiserver-advertise-address=192.168.213.9 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.28.0 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--cri-socket=unix:///var/run/cri-dockerd.sock \
--v=5 

Since both containerd and Docker Engine (via cri-dockerd) were installed as container runtimes above, the --cri-socket parameter must be specified to select one.


If an error occurs during installation, reset the kubeadm state before reinstalling:

kubeadm reset --cri-socket=unix:///var/run/cri-dockerd.sock

The reset process does not reset or clear iptables rules or IPVS tables. If you wish to reset iptables, you must do so manually:

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

If you want to reset the IPVS table, you must run the following command:

ipvsadm -C
On success, kubeadm init prints output similar to the following:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.213.9:6443 --token askdfkjsdfkljkldffj \
        --discovery-token-ca-cert-hash sha256:kjlksjdfkasdkjflksdfljdfkdf

Then follow the prompts:
To enable non-root users to run kubectl, run the following commands, which are also part of the kubeadm init output:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Or, if you are root, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

Now execute:

kubectl get node
[root@k8s-kubeadmin-1 yum.repos.d]# kubectl get node
NAME              STATUS   ROLES           AGE     VERSION
k8s-kubeadmin-1   NotReady   control-plane   4h31m   v1.28.0

The worker nodes join via the k8s-kubeadmin-1 node:
Format:

kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>
kubeadm join 192.168.213.9:6443 --token s5inwf.17rdxvhjalwyzj92 \
        --discovery-token-ca-cert-hash sha256:ce85d2ceaea7311ac3e58ee355d34ee9235702e3415d43b84f78da682210ee09 \
        --cri-socket=unix:///var/run/cri-dockerd.sock --v=5

The token may have expired. On k8s-kubeadmin-1, create a new one:

kubeadm token create

Will output:

5didvk.d09sbcov8ph2amjw

If you don't have a value for --discovery-token-ca-cert-hash, you can get it by executing the following chain of commands on the control plane node:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
   openssl dgst -sha256 -hex | sed 's/^.* //'

The output is similar to the following:

8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78
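Alternatively, kubeadm can generate a fresh token and print the complete join command in one step:

kubeadm token create --print-join-command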

Then execute:

kubectl get node
[root@k8s-kubeadmin-1 yum.repos.d]# kubectl get node
NAME              STATUS   ROLES           AGE     VERSION
k8s-kubeadmin-1   NotReady   control-plane   4h31m   v1.28.0
k8s-kubeadmin-2   NotReady   <none>          4h7m    v1.28.0
k8s-kubeadmin-3   NotReady   <none>          4h7m    v1.28.0

The nodes report NotReady because a Pod network add-on has not been installed yet.

3.0. k8s-kubeadmin-1 node execution: install a Pod network add-on - Container Network Interface (CNI)

Download and install:

wget https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz
mkdir -pv /opt/cni/bin
tar zxvf cni-plugins-linux-amd64-v1.3.0.tgz -C /opt/cni/bin/
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
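You can watch the flannel daemonset pods start on every node before re-checking node status:

kubectl get pods -n kube-flannel -w   # one kube-flannel-ds pod per node should reach Running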

Execute kubectl get node again:

[root@k8s-kubeadmin-1 yum.repos.d]# kubectl get node
NAME              STATUS   ROLES           AGE     VERSION
k8s-kubeadmin-1   Ready    control-plane   4h31m   v1.28.0
k8s-kubeadmin-2   Ready    <none>          4h7m    v1.28.0
k8s-kubeadmin-3   Ready    <none>          4h7m    v1.28.0

Check the status of the pods in the kube-system namespace:

[root@k8s-kubeadmin-1 yum.repos.d]# kubectl get pods -n kube-system
NAME                                         READY   STATUS             RESTARTS         AGE
coredns-66f779496c-9tqbt                     1/1     Running            0                4h42m
coredns-66f779496c-wzvts                     1/1     Running            0                4h42m
dashboard-metrics-scraper-5657497c4c-v2dn4   1/1     Running            0                3h
etcd-k8s-kubeadmin-1                         1/1     Running            0                4h42m
kube-apiserver-k8s-kubeadmin-1               1/1     Running            0                4h42m
kube-controller-manager-k8s-kubeadmin-1      1/1     Running            0                4h42m
kube-proxy-bwksp                             1/1     Running            0                4h19m
kube-proxy-gdd49                             1/1     Running            0                4h42m
kube-proxy-svj87                             1/1     Running            0                4h18m
kube-scheduler-k8s-kubeadmin-1               1/1     Running            0                4h42m
kubernetes-dashboard-76f4b5bc7d-gjm79        0/1     CrashLoopBackOff   26 (4m14s ago)   124m
3.1. Install kubernetes-dashboard

Pull the kubernetes-dashboard resource manifest (yaml file) and apply it:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

If your network cannot reach the outside world directly, the pull may be slow or fail. Below is the file I pulled, which can be copied and used.
It uses two images, kubernetesui/dashboard:v2.7.0 and kubernetesui/metrics-scraper:v1.0.8, which are not available on the Alibaba Cloud image accelerator. You can find what is on the accelerator and re-tag it to the required names.
Pull the images on all three machines.
The following file needs to be modified in a few places:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      name: https  # the source file has no name
      nodePort: 32001 # the source file has no nodePort
  type: NodePort # the source file has no type: NodePort
  selector:
    k8s-app: kubernetes-dashboard

Source File:

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.7.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.8
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

Deploy:

kubectl apply -f [your-local-path]/recommended.yaml

Create dashboard-adminuser.yaml locally:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

Apply it and create a login token:

kubectl apply -f [your-file-path]/dashboard-adminuser.yaml
kubectl -n kubernetes-dashboard create token admin-user

Save the output token; it is used for logging in later.
Get all pods across namespaces:

[root@k8s-kubeadmin-1 yum.repos.d]#  kubectl get pods --all-namespaces
NAMESPACE              NAME                                         READY   STATUS             RESTARTS         AGE
kube-flannel           kube-flannel-ds-5r52b                        1/1     Running            0                4h35m
kube-flannel           kube-flannel-ds-9jvk4                        1/1     Running            0                4h35m
kube-flannel           kube-flannel-ds-jbc85                        1/1     Running            0                4h35m
kube-system            coredns-66f779496c-9tqbt                     1/1     Running            0                5h8m
kube-system            coredns-66f779496c-wzvts                     1/1     Running            0                5h8m
kube-system            dashboard-metrics-scraper-5657497c4c-v2dn4   1/1     Running            0                3h27m
kube-system            etcd-k8s-kubeadmin-1                         1/1     Running            0                5h9m
kube-system            kube-apiserver-k8s-kubeadmin-1               1/1     Running            0                5h9m
kube-system            kube-controller-manager-k8s-kubeadmin-1      1/1     Running            0                5h9m
kube-system            kube-proxy-bwksp                             1/1     Running            0                4h45m
kube-system            kube-proxy-gdd49                             1/1     Running            0                5h8m
kube-system            kube-proxy-svj87                             1/1     Running            0                4h45m
kube-system            kube-scheduler-k8s-kubeadmin-1               1/1     Running            0                5h9m
kube-system            kubernetes-dashboard-76f4b5bc7d-gjm79        0/1     CrashLoopBackOff   30 (7m54s ago)   150m
kubernetes-dashboard   dashboard-metrics-scraper-5657497c4c-mk9hk   1/1     Running            0                4h28m
kubernetes-dashboard   kubernetes-dashboard-78f87ddfc-v6l57         1/1     Running            0                4h28m

View all Services across namespaces; the dashboard Service is of type NodePort:

 kubectl get svc --all-namespaces
NAMESPACE              NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
default                kubernetes                  ClusterIP   10.96.0.1        <none>        443/TCP                  5h10m
kube-system            kube-dns                    ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   5h10m
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   10.109.201.223   <none>        8000/TCP                 160m
kubernetes-dashboard   kubernetes-dashboard        NodePort    10.105.61.238    <none>        443:32001/TCP            157m

kubernetes-dashboard is deployed to the k8s-kubeadmin-2 node.
Insert image description here

Access: https://k8s-kubeadmin-2:32001/ and log in with the token saved earlier.

Origin blog.csdn.net/qq_22744093/article/details/134693973