Deploying a single-master Kubernetes cluster with kubeadm on CentOS 7.8


To satisfy the Kubernetes requirements of the various Kubeflow versions our developers use, Kubernetes 1.14.5 was chosen.

Deploying kubernetes-1.14.5 with kubeadm (single master)

Environment:

Hostname   IP address        CPU/RAM        Role     Network mode      Platform
master1    192.168.221.133   2 CPU / 4 GB   master   NAT (static IP)   VMware
node1      192.168.221.132   2 CPU / 4 GB   node     NAT (static IP)   VMware
node2      192.168.221.131   2 CPU / 4 GB   node     NAT (static IP)   VMware

Run the following commands on the respective nodes.

On 192.168.221.133 run:

hostnamectl set-hostname master1

On 192.168.221.132 run:

hostnamectl set-hostname node1

On 192.168.221.131 run:

hostnamectl set-hostname node2

Run the following on all three nodes.

Disable firewalld and SELinux

systemctl stop firewalld && systemctl disable firewalld
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
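
The sed above only changes the config for the next boot; to also turn SELinux off for the current session:

setenforce 0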

Disable the swap partition

sed -ri 's/.*swap.*/#&/' /etc/fstab   # comment out the swap entry in /etc/fstab
swapoff -a
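
To confirm swap is now off (the Swap line in free should show 0 and swapon -s should print nothing):

free -m
swapon -s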

Synchronize server time

# Install the chrony service (CentOS 7.8 ships with it by default; if it is missing, install it as follows)
yum install -y chrony
systemctl start chronyd
systemctl enable chronyd
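
Optionally, verify that time synchronization is working:

chronyc sources -v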

Install common utilities

yum install -y net-tools iproute lrzsz vim bash-completion wget tree bridge-utils unzip bind-utils git gcc

Edit /etc/hosts to add IP-to-hostname mappings for all nodes

cat >> /etc/hosts << EOF
192.168.221.133 master1
192.168.221.132 node1
192.168.221.131 node2
EOF

Configure yum repositories

mkdir /etc/yum.repos.d/ori
mv /etc/yum.repos.d/CentOS-* /etc/yum.repos.d/ori/
cat > /etc/yum.repos.d/CentOS-Base.repo << "EOF"
# CentOS-Base.repo
#
# The mirror system uses the connecting IP address of the client and the
# update status of each mirror to pick mirrors that are updated to and
# geographically close to the client.  You should use this for CentOS updates
# unless you are manually picking other mirrors.
#
# If the mirrorlist= does not work for you, as a fall back you can try the
# remarked out baseurl= line instead.
#
#

[base]
name=CentOS-$releasever - Base
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/os/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#released updates
[updates]
name=CentOS-$releasever - Updates
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/updates/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/extras/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/centosplus/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus
gpgcheck=1
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
EOF

Install and configure EPEL

yum install -y epel-release

cat > /etc/yum.repos.d/epel.repo <<"EOF"
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7

[epel-debuginfo]
name=Extra Packages for Enterprise Linux 7 - $basearch - Debug
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch/debug
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1

[epel-source]
name=Extra Packages for Enterprise Linux 7 - $basearch - Source
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/SRPMS
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1
EOF

yum clean all

yum makecache

Upgrade the kernel

Reason: the 3.10.x kernel that ships with CentOS 7.x has known bugs that make Docker and Kubernetes unstable.

  1. Add the ELRepo repository
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm

  2. Install the new kernel. After installation, check that the corresponding menuentry in /boot/grub2/grub.cfg contains an initrd16 line; if it does not, run the installation again.

    yum --enablerepo=elrepo-kernel install -y kernel-lt 
    
  3. List the installed kernels

    [root@master1 k8s]# awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
    0 : CentOS Linux (4.4.243-1.el7.elrepo.x86_64) 7 (Core)
    1 : CentOS Linux (3.10.0-1127.el7.x86_64) 7 (Core)
    2 : CentOS Linux (0-rescue-7324391132a64a85a2043ee161564bc5) 7 (Core)
    
  4. Set the default boot kernel to 4.4.243 (entry 0)

    grub2-set-default 0
    sed -i 's/saved/0/g'  /etc/default/grub
    
  5. Disable NUMA and regenerate the grub2 configuration

    sed -i 's/quiet/quiet numa=off/g' /etc/default/grub
    
    grub2-mkconfig -o /boot/grub2/grub.cfg
    
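  6. Reboot so the new kernel takes effect, then confirm the running kernel matches the 4.4.x entry set as the default above:

    reboot
    # after the node is back up:
    uname -r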

Configure IPVS kernel modules

Reason: by default, kube-proxy runs in iptables mode on clusters deployed with kubeadm.

Note that on kernel versions above 4.19 the nf_conntrack_ipv4 module has been removed; Kubernetes recommends loading nf_conntrack instead, otherwise you will get an error that nf_conntrack_ipv4 cannot be found.

yum install -y ipset ipvsadm
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF

chmod +x /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
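
Confirm the modules were loaded:

lsmod | grep -E 'ip_vs|nf_conntrack'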

Adjust kernel parameters

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF

modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
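
modprobe only loads br_netfilter for the current boot; to have it loaded automatically after a reboot as well, a modules-load entry can be added (the file name below is just a convention):

cat > /etc/modules-load.d/br_netfilter.conf <<EOF
br_netfilter
EOF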

Increase the open file descriptor limit

echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
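
The new limits apply to fresh login sessions; after logging in again you can check with:

ulimit -n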

Install Docker

  1. Install the required packages

    sudo yum install -y yum-utils \
      device-mapper-persistent-data \
      lvm2
    
  2. Switch to a China-hosted Docker repository

    sudo yum-config-manager \
        --add-repo \
        https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/docker-ce.repo
    
  3. Configure the Aliyun registry mirror

    [ ! -d /etc/docker ] && mkdir /etc/docker
    cat > /etc/docker/daemon.json <<EOF
    {
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "100m"
      },
      "storage-driver": "overlay2",
      "storage-opts": [
        "overlay2.override_kernel_check=true"
      ],
      "registry-mirrors": ["https://95tfc660.mirror.aliyuncs.com"]
    }
    EOF
    
  4. Install docker-ce

    yum -y install docker-ce-18.06.0.ce
    
  5. Start Docker and enable it at boot

    systemctl start docker && systemctl enable docker
    
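  6. Since daemon.json sets the cgroup driver to systemd, confirm that Docker picked it up (it should print "Cgroup Driver: systemd"):

    docker info | grep -i 'cgroup driver'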

Install kubelet, kubeadm, and kubectl

Reason: kubelet runs on every node of the cluster and is responsible for starting Pods and containers; kubeadm is used to initialize the cluster; kubectl is the Kubernetes command-line tool, used to deploy and manage applications and to view, create, delete, and update all kinds of resources.

  1. Add the Aliyun Kubernetes yum repository

    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    
  2. Install kubelet-1.14.5, kubeadm-1.14.5, and kubectl-1.14.5, matching the Kubernetes version to be installed

    yum install -y kubeadm-1.14.5 kubelet-1.14.5 kubectl-1.14.5
    
  3. Enable shell auto-completion for kubectl

    # Install and configure bash-completion
    yum install -y bash-completion
    echo 'source /usr/share/bash-completion/bash_completion' >> /etc/profile
    source /etc/profile
    echo "source <(kubectl completion bash)" >> ~/.bashrc
    source ~/.bashrc
    
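  4. It is also common practice to enable the kubelet service so it starts at boot; kubeadm init/join will configure and start it later, and until then it is normal for kubelet to keep restarting:

    systemctl enable kubelet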

Initialize master1 (run on 192.168.221.133)

kubeadm init is the key step.

1. kubeadm config print init-defaults prints the default configuration used to initialize a cluster. After making a few changes to it, this file will be passed to kubeadm init.

kubeadm config print init-defaults > kubeadm-config.yaml

Modify it as follows:

[root@k8s-master root]# cd /usr/local/install-k8s/core
[root@k8s-master core]# kubeadm config print init-defaults
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.221.133     # default is 1.2.3.4; change it to this node's internal IP, 192.168.221.133
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master             
  taints:
  - effect: NoSchedule    # the master node is not used for Pod scheduling, i.e. it does not act as a worker node
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers # default is k8s.gcr.io; change it to the Aliyun mirror registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.14.5    # note: the generated file may show a different version than the one we are installing; set it to v1.14.5
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"      # add the Pod network CIDR; the default file has no Pod subnet, so we add it here (otherwise it must be specified at init time)
  serviceSubnet: 10.96.0.0/12               # Service subnet
scheduler: {}
---
# To enable IPVS for kube-proxy at init time, append a separate KubeProxyConfiguration
# document; in the v1beta2 kubeadm config this is not a field of ClusterConfiguration.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

2. Before initializing, you can check which component images kubeadm installs for kubernetes-1.14.5 (optional):

kubeadm config images list --kubernetes-version=1.14.5
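
Optionally, the images can also be pre-pulled using the configuration file generated above (its imageRepository already points at the Aliyun registry):

kubeadm config images pull --config kubeadm-config.yaml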

3. Use the configuration file generated above to initialize the cluster on the master node. The kubeadm-init.log file records the command that worker nodes use to join the cluster; you can also regenerate it at any time with kubeadm token create --print-join-command.

kubeadm init --config=kubeadm-config.yaml | tee kubeadm-init.log

kubeadm-init.log contains the complete initialization output; from it you can see the key steps needed to install a Kubernetes cluster by hand. The important parts follow. (In the run below I did not use the kubeadm-config.yaml file; instead kubeadm init was given a series of flags. It amounts to the same thing: the flags simply replace the corresponding defaults in kubeadm-config.yaml.)

[root@master1 lib]# kubeadm init --apiserver-advertise-address 192.168.221.133 --kubernetes-version="v1.14.5" --pod-network-cidr=10.244.0.0/16 --image-repository=registry.aliyuncs.com/google_containers | tee kubeadm-init.log
[init] Using Kubernetes version: v1.14.5
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master1 localhost] and IPs [192.168.221.133 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master1 localhost] and IPs [192.168.221.133 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.221.133]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.502648 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node master1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: xzg9lr.3pcwp2ca81mi09pr
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.221.133:6443 --token xzg9lr.3pcwp2ca81mi09pr \
    --discovery-token-ca-cert-hash sha256:f66b4383e963b04314fca375da4682404d85bac5df7571513469ddf3dec726b6 

init: initialize with the specified Kubernetes version
preflight: pre-flight checks and pulling of the required Docker images
kubelet-start: generate the kubelet configuration file /var/lib/kubelet/config.yaml; without it kubelet cannot start, which is why kubelet does not start successfully before initialization
certificates: generate the certificates used by Kubernetes, stored under /etc/kubernetes/pki
kubeconfig: generate the kubeconfig files under /etc/kubernetes, which the components use to communicate with each other
control-plane: install the master components from the YAML manifests under /etc/kubernetes/manifests
etcd: install the etcd service from /etc/kubernetes/manifests/etcd.yaml
wait-control-plane: wait for the master components deployed by control-plane to start as static Pods
apiclient: check the health of the master components
uploadconfig: store the configuration used in a ConfigMap
kubelet: configure the kubelet via a ConfigMap
patchnode: record CNI information on the Node object via annotations
mark-control-plane: label the current node with the master role and taint it as unschedulable, so Pods are not scheduled onto the master by default
bootstrap-token: generate and record the token, used later by kubeadm join when adding nodes to the cluster
addons: install the CoreDNS and kube-proxy add-ons

After initialization completes, kubeadm-init.log contains the join command for adding nodes to master1; the other two nodes only need to copy and run it. We will hold off on this step for now, because the flannel network add-on that flattens the Pod network is not installed yet, so the three nodes' Pod network is not connected.

Configure the kubectl command

Whether on the master node or a worker node, the following configuration is required before kubectl commands can be run.

Configuration for the root user

cat << EOF >> ~/.bashrc
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
source ~/.bashrc

Configuration for a regular user

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the cluster nodes

[root@k8s-master1 ~]# kubectl get node
NAME         STATUS     ROLES    AGE     VERSION
master1   NotReady   master   7m46s   v1.14.5

At this point the master node is NotReady, because a Kubernetes cluster needs a flat Pod network and we have not deployed the flannel network plugin yet. Also, since we have not run kubeadm join after kubeadm init, no worker nodes are listed yet.

Install flannel on master1

[root@master1 ~]# mkdir k8s
wget -P k8s/ https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# If this URL is unreachable and the file cannot be downloaded, an Aliyun ECS host can be used to fetch it

sed -i 's#quay.io#quay-mirror.qiniu.com#g' k8s/kube-flannel.yml
# The image quay.io/coreos/flannel:v0.13.0 referenced in kube-flannel.yml cannot be pulled;
# even after switching to the domestic mirror the pull still fails

# Since it is just an image, download flanneld-v0.13.0-amd64.docker from the flannel releases page
# (https://github.com/coreos/flannel/releases) and load it locally:
docker load -i flanneld-v0.13.0-amd64.docker

# Then deploy flannel
kubectl apply -f k8s/kube-flannel.yml

The node status should now be Ready:

[root@master1 k8s]# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
master1   Ready    master   23m   v1.14.5

Check the status of the components on master1 and confirm that each is Healthy:

[root@master1 k8s]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}

Now we need to join the worker nodes to the cluster. There are two ways to get the kubeadm join command and its parameters:

Option 1: find it in the kubeadm-init.log file produced above.

Option 2: run the following command:

kubeadm token create --print-join-command

Run the join command on both node1 and node2 (this is the only command that is executed on the worker nodes):

kubeadm join 192.168.221.133:6443 --token xzg9lr.3pcwp2ca81mi09pr \
    --discovery-token-ca-cert-hash sha256:f66b4383e963b04314fca375da4682404d85bac5df7571513469ddf3dec726b6

On master1, check the nodes and the status of the components running as Pods:

[root@master1 k8s]# kubectl get pod -n kube-system -o wide
NAME                              READY   STATUS    RESTARTS   AGE    IP                NODE      NOMINATED NODE   READINESS GATES
coredns-f7567f89b-nlv5q           1/1     Running   0          3h4m   10.244.0.2        master1   <none>           <none>
coredns-f7567f89b-zvhkc           1/1     Running   0          3h4m   10.244.0.3        master1   <none>           <none>
etcd-master1                      1/1     Running   0          3h3m   192.168.221.133   master1   <none>           <none>
kube-apiserver-master1            1/1     Running   0          3h3m   192.168.221.133   master1   <none>           <none>
kube-controller-manager-master1   1/1     Running   0          3h3m   192.168.221.133   master1   <none>           <none>
kube-flannel-ds-mp9tb             1/1     Running   0          157m   192.168.221.131   node1     <none>           <none>
kube-flannel-ds-s9knn             1/1     Running   0          157m   192.168.221.133   master1   <none>           <none>
kube-flannel-ds-zf8vk             1/1     Running   0          157m   192.168.221.132   node2     <none>           <none>
kube-proxy-dmk7g                  1/1     Running   0          144m   192.168.221.132   node2     <none>           <none>
kube-proxy-kcgvg                  1/1     Running   0          144m   192.168.221.133   master1   <none>           <none>
kube-proxy-wfx92                  1/1     Running   0          144m   192.168.221.131   node1     <none>           <none>
kube-scheduler-master1            1/1     Running   0          3h3m   192.168.221.133   master1   <none>           <none>
[root@master1 k8s]# kubectl get nodes
NAME      STATUS   ROLES    AGE    VERSION
master1   Ready    master   3h5m   v1.14.5
node1     Ready    <none>   3h     v1.14.5
node2     Ready    <none>   3h     v1.14.5

Test DNS

kubectl run curl --image=radial/busyboxplus:curl -it
# Inside the container, test DNS resolution; the default cluster DNS name must resolve here, otherwise service discovery for Pods started later will not work
nslookup kubernetes.default
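
When done, exit the container shell and remove the test workload (on 1.14, kubectl run creates a Deployment named curl):

kubectl delete deployment curl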

Enable IPVS mode for kube-proxy

kubectl get configmap kube-proxy -n kube-system -o yaml > kube-proxy-configmap.yaml
sed -i 's/mode: ""/mode: "ipvs"/' kube-proxy-configmap.yaml
kubectl apply -f kube-proxy-configmap.yaml
rm -f kube-proxy-configmap.yaml
kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'

Alternatively, edit the kube-proxy ConfigMap directly:

kubectl edit configmap kube-proxy -n kube-system
kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'

Check the IPVS configuration

yum install -y ipvsadm
ipvsadm -ln
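
You can also confirm from the kube-proxy logs that it is running in IPVS mode (kubeadm labels the kube-proxy Pods with k8s-app=kube-proxy; the first Pod is picked here just as an example):

kubectl logs -n kube-system $(kubectl get pod -n kube-system -l k8s-app=kube-proxy -o jsonpath='{.items[0].metadata.name}') | grep -i ipvs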

Extra: Kuboard, a graphical management tool for Kubernetes

wget  https://kuboard.cn/install-script/kuboard.yaml
kubectl apply -f kuboard.yaml 
# If you installed Kubernetes following the docs at www.kuboard.cn, run this command on the first master node

# Get the token needed to log in to Kuboard
echo $(kubectl -n kube-system get secret $(kubectl -n kube-system get secret | grep kuboard-user | awk '{print $1}') -o go-template='{{.data.token}}' | base64 -d)

eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJvYXJkLXVzZXItdG9rZW4teHoyeGciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoia3Vib2FyZC11c2VyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZWE2OTdlN2EtMjk4NC0xMWViLWIxYjEtMDAwYzI5YzJkNzM1Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmt1Ym9hcmQtdXNlciJ9.E7ttJCiuI6PR3Kib2Sx0OP5uk2ECqQyXV_yJjFuRFN6pNn2hwcC2Rw0BXoBCjpGuO1YXIIrnA08tOacTLXpLNTtpgP8nI31368CkyiDBQTBHPIICXqpQJodMnLQICBF4FQuO1hbRzlc5AevWxBuCKGhMPgrrdvdztQT_i8f26GgeXoymD3NR_aP6FFZHKeJDYqF30ftxdqaTqmjXpKlvFkPW50mO06SWLikZw_pBKE42RvCap4scnKFwd5S6gfRm2buPl4ufoD-x3NwrAIrjw8mNfwKZds4NtE0Kq9BuDTrIe73XdZ6jMNCT4wS47GuLXL0FG6Uwz7tfGAfPGG8wAA
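
Before opening the dashboard, you can confirm that the Kuboard Pod and its NodePort service are up (the manifest above deploys them into kube-system):

kubectl get pod,svc -n kube-system | grep kuboard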

Dashboard URL: http://192.168.221.133:32567/dashboard
