CentOS 7.8: kubeadm deployment of a single-master cluster


To satisfy the k8s version requirements of the developers' various kubeflow versions, we choose Kubernetes 1.14.5.

Single-master kubeadm deployment of kubernetes-1.14.5

Environment:

Hostname    IP address         CPU/Memory    Role     Network mode      Platform
master1     192.168.221.133    2 CPU / 4 GB  master   NAT (static IP)   VMware
node1       192.168.221.132    2 CPU / 4 GB  node     NAT (static IP)   VMware
node2       192.168.221.131    2 CPU / 4 GB  node     NAT (static IP)   VMware

Execute the following commands on the respective nodes.

Execute the following command on 192.168.221.133

hostnamectl set-hostname master1

Execute the following command on 192.168.221.132

hostnamectl set-hostname node1

Execute the following command on 192.168.221.131

hostnamectl set-hostname node2

Execute the following commands on all three nodes.

Disable firewalld and SELinux

systemctl stop firewalld && systemctl disable firewalld
sed -i 's|SELINUX=enforcing|SELINUX=disabled|g' /etc/selinux/config
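
The sed edit takes effect only after a reboot; to also put SELinux into permissive mode for the current session, run:

setenforce 0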

Disable the swap partition

# comment out the swap entry in /etc/fstab so it stays disabled across reboots
sed -i '/ swap /s/^/#/' /etc/fstab
swapoff -a
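
To verify, the Swap row reported by free should read all zeros:

free -m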

Synchronize the server time

# Install the chrony service; CentOS 7.8 ships with it by default. If it is missing, install it as follows
yum install -y chrony
systemctl start chronyd
systemctl enable chronyd
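
To confirm that chrony is actually synchronizing against a time source:

chronyc sources -v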

Install common software

yum install -y net-tools iproute lrzsz vim bash-completion wget tree bridge-utils unzip bind-utils git gcc

Modify /etc/hosts to add the IP-to-hostname mapping of each node

cat >> /etc/hosts << EOF
192.168.221.133 master1
192.168.221.132 node1
192.168.221.131 node2
EOF
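
A quick sanity check that every hostname now resolves (run from any node; assumes the entries above are in place on it):

for h in master1 node1 node2; do ping -c1 -W1 $h >/dev/null && echo "$h ok"; done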

Configure the yum repositories

mkdir /etc/yum.repos.d/ori
mv /etc/yum.repos.d/CentOS-* /etc/yum.repos.d/ori/
cat > /etc/yum.repos.d/CentOS-Base.repo << "EOF"
# CentOS-Base.repo
#
# The mirror system uses the connecting IP address of the client and the
# update status of each mirror to pick mirrors that are updated to and
# geographically close to the client.  You should use this for CentOS updates
# unless you are manually picking other mirrors.
#
# If the mirrorlist= does not work for you, as a fall back you can try the
# remarked out baseurl= line instead.
#
#

[base]
name=CentOS-$releasever - Base
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/os/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#released updates
[updates]
name=CentOS-$releasever - Updates
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/updates/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/extras/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/centosplus/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus
gpgcheck=1
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
EOF

Install epel and configure the epel repository

yum install -y epel-release

cat > /etc/yum.repos.d/epel.repo <<"EOF"
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7

[epel-debuginfo]
name=Extra Packages for Enterprise Linux 7 - $basearch - Debug
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch/debug
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1

[epel-source]
name=Extra Packages for Enterprise Linux 7 - $basearch - Source
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/SRPMS
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1
EOF

yum clean all

yum makecache

Upgrade the kernel

Reason: CentOS 7.x ships with a 3.10.x kernel, which has bugs that make Docker and Kubernetes run unstably.

  1. Add the ELRepo repository

    rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm

  2. Install the kernel. When the installation finishes, check /boot/grub2/grub.cfg and confirm that the menuentry for the new kernel contains an initrd16 line; if it does not, install again

    yum --enablerepo=elrepo-kernel install -y kernel-lt 
    
  3. View the installed kernels

    [root@master1 k8s]# awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
    0 : CentOS Linux (4.4.243-1.el7.elrepo.x86_64) 7 (Core)
    1 : CentOS Linux (3.10.0-1127.el7.x86_64) 7 (Core)
    2 : CentOS Linux (0-rescue-7324391132a64a85a2043ee161564bc5) 7 (Core)
    
  4. Set the default boot kernel to 4.4.243

    grub2-set-default 0
    sed -i 's/saved/0/g'  /etc/default/grub
    
  5. Disable NUMA and regenerate the grub2 configuration file

    sed -i 's/quiet/quiet numa=off/g' /etc/default/grub
    
    grub2-mkconfig -o /boot/grub2/grub.cfg
    
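
The new kernel is only loaded after a reboot; afterwards, confirm the running version:

reboot
# after the machine comes back up:
uname -r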

Configure the IPVS kernel modules

Reason: by default, kube-proxy in a kubeadm-deployed cluster runs in iptables mode; we will switch it to IPVS later.

Note that on kernels later than 4.19 the nf_conntrack_ipv4 module has been removed; Kubernetes officially recommends loading nf_conntrack instead, otherwise modprobe reports an error that nf_conntrack_ipv4 cannot be found.

yum install -y ipset ipvsadm
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF

chmod +x /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
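
To confirm the modules are loaded:

lsmod | grep -e ip_vs -e nf_conntrack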

Modify kernel parameters

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF

modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
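
modprobe does not persist across reboots. Assuming systemd's standard modules-load.d mechanism (present on CentOS 7), br_netfilter can be loaded automatically at boot:

echo br_netfilter > /etc/modules-load.d/br_netfilter.conf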

Raise the limit on open files

echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf

Install docker

  1. Install required packages

    sudo yum install -y yum-utils \
      device-mapper-persistent-data \
      lvm2
    
  2. Add a domestic Docker package source

    sudo yum-config-manager \
        --add-repo \
        https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/docker-ce.repo
    
  3. Configure the Docker daemon: systemd cgroup driver, overlay2 storage, and Alibaba Cloud's image-acceleration address

    [ ! -d /etc/docker ] && mkdir /etc/docker
    cat > /etc/docker/daemon.json <<EOF
    {
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "100m"
      },
      "storage-driver": "overlay2",
      "storage-opts": [
        "overlay2.override_kernel_check=true"
      ],
      "registry-mirrors": ["https://95tfc660.mirror.aliyuncs.com"]
    }
    EOF
    
  4. Install docker-ce

    yum -y install docker-ce-18.06.0.ce
    
  5. Start Docker and enable it at boot

    systemctl start docker && systemctl enable docker
    
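The kubelet expects the systemd cgroup driver configured in daemon.json above, so it is worth verifying it took effect:

docker info | grep -i cgroup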

Install kubelet, kubeadm and kubectl

Reason: kubelet runs on all nodes of the cluster and is responsible for starting Pod containers; kubeadm is used to initialize the cluster; kubectl is the Kubernetes command-line tool, with which you can deploy and manage applications, view all kinds of resources, and create, delete, and update components.

  1. Add Alibaba Cloud's Kubernetes yum repository

    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    
  2. Install kubelet-1.14.5, kubeadm-1.14.5 and kubectl-1.14.5, consistent with the kubernetes version to be deployed (see the note after this list on enabling the kubelet service)

    yum install -y kubeadm-1.14.5 kubelet-1.14.5 kubectl-1.14.5
    
  3. Enable auto-completion for the kubectl command

    # Install and configure bash-completion
    yum install -y bash-completion
    echo 'source /usr/share/bash-completion/bash_completion' >> /etc/profile
    source /etc/profile
    echo "source <(kubectl completion bash)" >> ~/.bashrc
    source ~/.bashrc
    
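As noted above, the kubeadm docs have you enable the kubelet service right after installing the packages so it restarts on boot; until kubeadm init writes its configuration the kubelet will crash-loop, which is expected at this stage:

systemctl enable kubelet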

Initialize master1 (executed on 192.168.221.133)

kubeadm init is the key step

1. kubeadm config print init-defaults prints the default configuration file used for cluster initialization. After modifying parts of this file, we pass it to kubeadm init when initializing the cluster.

kubeadm config print init-defaults > kubeadm-config.yaml

Amend it as follows:

[root@master1 root]# cd /usr/local/install-k8s/core
[root@master1 core]# kubeadm config print init-defaults
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.221.133     # default is 1.2.3.4; change it to this host's internal IP, 192.168.221.133
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master1             # must match this node's hostname
  taints:
  - effect: NoSchedule    # the master node is not responsible for scheduling pods, i.e. it does not act as a worker node
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers # default is k8s.gcr.io; change it to the Alibaba Cloud address registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.14.5    # note: the generated file's version may differ from the k8s version we want to install; it must be v1.14.5
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"      # add the pod subnet; the default file does not include it, and without it the pod CIDR must be specified at init time
  serviceSubnet: 10.96.0.0/12               # the Service subnet
scheduler: {}
---
# appended as a separate document to switch kube-proxy to IPVS mode
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs

2. Before initializing, we can list all the component images kubeadm will install for kubernetes-1.14.5 (optional):

kubeadm config images list --kubernetes-version=1.14.5
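
These images can also be pre-pulled so that kubeadm init itself runs faster:

kubeadm config images pull --config kubeadm-config.yaml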

3. Initialize the master node using the configuration file generated above. kubeadm-init.log records the command nodes use to join the cluster; it can also be retrieved at any time with kubeadm token create --print-join-command.

kubeadm init --config=kubeadm-config.yaml | tee kubeadm-init.log

kubeadm-init.log records the full output of the initialization. From it you can see essentially all of the key steps required to manually bootstrap a Kubernetes cluster. (In the run below I did not use the kubeadm-config.yaml file; the series of kubeadm init flags is equivalent, simply overriding the corresponding defaults that kubeadm-config.yaml would have supplied.)

[root@master1 lib]# kubeadm init --apiserver-advertise-address 192.168.221.133 --kubernetes-version="v1.14.5" --pod-network-cidr=10.244.0.0/16 --image-repository=registry.aliyuncs.com/google_containers | tee kubeadm-init.log
[init] Using Kubernetes version: v1.14.5
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master1 localhost] and IPs [192.168.221.133 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master1 localhost] and IPs [192.168.221.133 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.221.133]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.502648 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node master1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: xzg9lr.3pcwp2ca81mi09pr
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.221.133:6443 --token xzg9lr.3pcwp2ca81mi09pr \
    --discovery-token-ca-cert-hash sha256:f66b4383e963b04314fca375da4682404d85bac5df7571513469ddf3dec726b6 

init: initialize the cluster at the specified version
preflight: pre-initialization checks and download of the required Docker images
kubelet-start: generate the kubelet configuration file /var/lib/kubelet/config.yaml; without this file the kubelet cannot start, so the kubelet started before init does not actually run successfully
certificates: generate the certificates Kubernetes uses, stored under /etc/kubernetes/pki
kubeconfig: generate the KubeConfig files under /etc/kubernetes, which the components use to communicate with each other
control-plane: install the master components from the YAML files under /etc/kubernetes/manifests
etcd: install the etcd service from /etc/kubernetes/manifests/etcd.yaml
wait-control-plane: wait for the master components deployed by control-plane to start
apiclient: check the health of the master component services
uploadconfig: upload the configuration that was used
kubelet: configure the kubelet via a ConfigMap
patchnode: record CNI information on the Node object, via annotations
mark-control-plane: label the current node with the master role and taint it unschedulable, so the master node is not used to run ordinary pods by default
bootstrap-token: generate and record the token, used later when kubeadm join adds nodes to the cluster
addons: install the CoreDNS and kube-proxy add-ons

After initialization completes, kubeadm-init.log contains the suggested command for joining the other two node machines to master1. We do not run this step yet: the flannel flat-network component has not been installed, so the network between the three nodes is not yet connected.

Configure the kubectl command

On master and node machines alike, kubectl requires the following configuration before its commands will work.

Configuration for the root user

cat << EOF >> ~/.bashrc
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
source ~/.bashrc

Configuration for a regular user

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

View the k8s node status

[root@master1 ~]# kubectl get node
NAME         STATUS     ROLES    AGE     VERSION
master1   NotReady   master   7m46s   v1.14.5

At this point the master node's status is NotReady, because a k8s cluster requires a flat network and we have not yet built the flannel network plugin. And since we only ran kubeadm init and have not yet run kubeadm join, no node machines show up yet.

Install flannel on master1

[root@master1 ~]# mkdir k8s
wget -P k8s/ https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# If this URL is unreachable and the download fails, an Alibaba Cloud ECS instance can fetch it

sed -i 's#quay.io#quay-mirror.qiniu.com#g' k8s/kube-flannel.yml
# The image quay.io/coreos/flannel:v0.13.0 referenced in kube-flannel.yml cannot be pulled; even after switching to the domestic mirror it may still fail

# Since it is just an image, download the archive from the flannel releases page (https://github.com/coreos/flannel/releases) and then load it:
docker load -i flanneld-v0.13.0-amd64.docker

# Run the flannel deployment again
kubectl apply -f k8s/kube-flannel.yml

The node status should now be Ready:

[root@master1 k8s]# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
master1   Ready    master   23m   v1.14.5

Check the status of each component on master1 and confirm that every component is healthy:

[root@master1 k8s]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}

Now we need to add the node machines to the cluster. There are two ways to obtain the kubeadm join command and its parameters:

Method 1: find it in the kubeadm-init.log file saved above

Method 2: Execute the following command

kubeadm token create --print-join-command

Execute the join command on both node1 and node2 (this is the only command run on the node machines):

kubeadm join 192.168.221.133:6443 --token xzg9lr.3pcwp2ca81mi09pr \
    --discovery-token-ca-cert-hash sha256:f66b4383e963b04314fca375da4682404d85bac5df7571513469ddf3dec726b6

On master1, view the status of each node and which node each component pod runs on

[root@master1 k8s]# kubectl get pod -n kube-system -o wide
NAME                              READY   STATUS    RESTARTS   AGE    IP                NODE      NOMINATED NODE   READINESS GATES
coredns-f7567f89b-nlv5q           1/1     Running   0          3h4m   10.244.0.2        master1   <none>           <none>
coredns-f7567f89b-zvhkc           1/1     Running   0          3h4m   10.244.0.3        master1   <none>           <none>
etcd-master1                      1/1     Running   0          3h3m   192.168.221.133   master1   <none>           <none>
kube-apiserver-master1            1/1     Running   0          3h3m   192.168.221.133   master1   <none>           <none>
kube-controller-manager-master1   1/1     Running   0          3h3m   192.168.221.133   master1   <none>           <none>
kube-flannel-ds-mp9tb             1/1     Running   0          157m   192.168.221.131   node2     <none>           <none>
kube-flannel-ds-s9knn             1/1     Running   0          157m   192.168.221.133   master1   <none>           <none>
kube-flannel-ds-zf8vk             1/1     Running   0          157m   192.168.221.132   node1     <none>           <none>
kube-proxy-dmk7g                  1/1     Running   0          144m   192.168.221.132   node1     <none>           <none>
kube-proxy-kcgvg                  1/1     Running   0          144m   192.168.221.133   master1   <none>           <none>
kube-proxy-wfx92                  1/1     Running   0          144m   192.168.221.131   node2     <none>           <none>
kube-scheduler-master1            1/1     Running   0          3h3m   192.168.221.133   master1   <none>           <none>
[root@master1 k8s]# kubectl get nodes
NAME      STATUS   ROLES    AGE    VERSION
master1   Ready    master   3h5m   v1.14.5
node1     Ready    <none>   3h     v1.14.5
node2     Ready    <none>   3h     v1.14.5

Test DNS

kubectl run curl --image=radial/busyboxplus:curl -it
# Inside the container, resolve the default DNS name. This must resolve; otherwise pods started later cannot be assigned IPs
nslookup kubernetes.default
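
With the default service subnet used above, the output should look roughly like this (kube-dns serves at 10.96.0.10, and kubernetes.default resolves to 10.96.0.1):

Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local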

Enable ipvs for kube-proxy

kubectl get configmap kube-proxy -n kube-system -o yaml > kube-proxy-configmap.yaml
sed -i 's/mode: ""/mode: "ipvs"/' kube-proxy-configmap.yaml
kubectl apply -f kube-proxy-configmap.yaml
rm -f kube-proxy-configmap.yaml
kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'

Or edit the kube-proxy ConfigMap directly:

kubectl edit configmap kube-proxy -n kube-system
kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
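
Either way, once the kube-proxy pods have been recreated, their logs should report an IPVS proxier (the selector below assumes the standard k8s-app=kube-proxy label):

kubectl -n kube-system logs $(kubectl -n kube-system get pod -l k8s-app=kube-proxy -o name | head -1) | grep -i proxier
# expected output includes: Using ipvs Proxier.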

View the IPVS configuration

yum install -y ipvsadm
ipvsadm -ln
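
With kube-proxy in IPVS mode, ipvsadm lists a virtual server per cluster service, along these lines (your service IPs and endpoints will differ):

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 192.168.221.133:6443         Masq    1      0          0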

Supplementary extension: kuboard, a graphical management tool for k8s

wget  https://kuboard.cn/install-script/kuboard.yaml
kubectl apply -f kuboard.yaml 
# If you installed Kubernetes following the documentation at www.kuboard.cn, run this command on the first master node

# Get the token, in preparation for logging in to kuboard
echo $(kubectl -n kube-system get secret $(kubectl -n kube-system get secret | grep kuboard-user | awk '{print $1}') -o go-template='{{.data.token}}' | base64 -d)

eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJvYXJkLXVzZXItdG9rZW4teHoyeGciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoia3Vib2FyZC11c2VyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZWE2OTdlN2EtMjk4NC0xMWViLWIxYjEtMDAwYzI5YzJkNzM1Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmt1Ym9hcmQtdXNlciJ9.E7ttJCiuI6PR3Kib2Sx0OP5uk2ECqQyXV_yJjFuRFN6pNn2hwcC2Rw0BXoBCjpGuO1YXIIrnA08tOacTLXpLNTtpgP8nI31368CkyiDBQTBHPIICXqpQJodMnLQICBF4FQuO1hbRzlc5AevWxBuCKGhMPgrrdvdztQT_i8f26GgeXoymD3NR_aP6FFZHKeJDYqF30ftxdqaTqmjXpKlvFkPW50mO06SWLikZw_pBKE42RvCap4scnKFwd5S6gfRm2buPl4ufoD-x3NwrAIrjw8mNfwKZds4NtE0Kq9BuDTrIe73XdZ6jMNCT4wS47GuLXL0FG6Uwz7tfGAfPGG8wAA

Visit address: http://192.168.221.133:32567/dashboard
