Kubernetes in Practice (11) - Reinstalling the Kubernetes Cluster

1 Delete all k8s nodes

$ kubectl get nodes 
NAME           STATUS   ROLES                  AGE   VERSION
ops-master-1   Ready    control-plane,master   25h   v1.21.9
ops-worker-1   Ready    <none>                 25h   v1.21.9
ops-worker-2   Ready    <none>                 25h   v1.21.9

kubectl drain safely evicts all pods from a node. --ignore-daemonsets usually needs to be specified because the DaemonSet controller ignores the unschedulable (SchedulingDisabled) mark that kubectl drain automatically applies to the node. If DaemonSet-managed pods were evicted, the controller would immediately recreate them on the same node and the drain would loop forever, so DaemonSet pods are simply ignored here.
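
A general form of the command is sketched below; the extra flags are not used in this walkthrough, but in practice --delete-emptydir-data is needed when pods on the node use emptyDir volumes (their data is lost) and --force is needed for pods that are not managed by a controller:

$ kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data --force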

$ kubectl drain ops-worker-2 --ignore-daemonsets
node/ops-worker-2 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-wmz4r, kube-system/kube-proxy-84gcx
evicting pod kube-system/calico-kube-controllers-6b9fbfff44-rl2mh
pod/calico-kube-controllers-6b9fbfff44-rl2mh evicted
node/ops-worker-2 evicted

The status of ops-worker-2 becomes Ready,SchedulingDisabled.

$ kubectl get nodes 
NAME           STATUS                     ROLES                  AGE   VERSION
ops-master-1   Ready                      control-plane,master   25h   v1.21.9
ops-worker-1   Ready                      <none>                 25h   v1.21.9
ops-worker-2   Ready,SchedulingDisabled   <none>                 25h   v1.21.9
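
If a node has been drained by mistake, or you decide to keep it in the cluster after all, scheduling can simply be re-enabled instead of deleting the node:

$ kubectl uncordon ops-worker-2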

Delete node ops-worker-2.

$ kubectl delete nodes ops-worker-2
node "ops-worker-2" deleted
$ kubectl get nodes 
NAME           STATUS   ROLES                  AGE   VERSION
ops-master-1   Ready    control-plane,master   25h   v1.21.9
ops-worker-1   Ready    <none>                 25h   v1.21.9

Perform the same operations on the remaining nodes.

ops-worker-1:

$ kubectl drain ops-worker-1 --ignore-daemonsets 
node/ops-worker-1 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-g6ntb, kube-system/kube-proxy-nf2rm
evicting pod kube-system/coredns-59d64cd4d4-g27ht
evicting pod kube-system/calico-kube-controllers-5d4b78db86-sgtcb
pod/calico-kube-controllers-5d4b78db86-sgtcb evicted
pod/coredns-59d64cd4d4-g27ht evicted
node/ops-worker-1 evicted
$ kubectl delete nodes ops-worker-1
node "ops-worker-1" deleted
$ kubectl get nodes 
NAME           STATUS   ROLES                  AGE   VERSION
ops-master-1   Ready    control-plane,master   25h   v1.21.9

ops-master-1: 

$ kubectl drain ops-master-1 --ignore-daemonsets
$ kubectl delete nodes ops-master-1
$ kubectl get nodes 
No resources found

2 kubeadm initialization 

$ kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.21.0 --pod-network-cidr=172.25.0.0/16
[init] Using Kubernetes version: v1.21.0
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR Port-6443]: Port 6443 is in use
        [ERROR Port-10259]: Port 10259 is in use
        [ERROR Port-10257]: Port 10257 is in use
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
        [ERROR Port-10250]: Port 10250 is in use
        [ERROR Port-2379]: Port 2379 is in use
        [ERROR Port-2380]: Port 2380 is in use
        [ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Running kubeadm init again at this point fails. Looking at the error messages, you can see that the control-plane ports are still in use and the old configuration files still exist.
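
As an aside, the recurring IsDockerSystemdCheck warning can be addressed by switching Docker to the systemd cgroup driver, which is what the linked guide recommends. A minimal sketch, assuming Docker is configured through /etc/docker/daemon.json (merge with any existing settings rather than overwriting, and restart Docker before running kubeadm init again):

$ echo '{ "exec-opts": ["native.cgroupdriver=systemd"] }' > /etc/docker/daemon.json
$ systemctl restart docker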

When reinitializing the k8s cluster, you need to clear the original settings.

$ kubeadm reset
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1211 21:13:17.763894   26047 reset.go:99] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get node registration: failed to get corresponding node: nodes "ops-master-1" not found
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W1211 21:13:19.642179   26047 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
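
As the output says, kubeadm reset does not touch the CNI configuration, iptables/IPVS rules, or kubeconfig files. For a completely clean slate they can be removed by hand, for example as follows (note that flushing iptables also drops any non-Kubernetes rules on the host):

$ rm -rf /etc/cni/net.d
$ rm -rf $HOME/.kube/config
$ iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
$ ipvsadm --clear    # only relevant if kube-proxy ran in IPVS mode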

Re-initialize kubeadm.

$ kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.21.0 --pod-network-cidr=172.25.0.0/16
[init] Using Kubernetes version: v1.21.0
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local ops-master-1] and IPs [10.96.0.1 10.220.43.203]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost ops-master-1] and IPs [10.220.43.203 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost ops-master-1] and IPs [10.220.43.203 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 11.002252 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node ops-master-1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node ops-master-1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 6viszv.cpokyex9yilcl21q
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.220.43.203:6443 --token 6viszv.cpokyex9yilcl21q \
        --discovery-token-ca-cert-hash sha256:13c82c866ecff39980a9c649f2ace2758c84015e10e44f8bb0a2735cd9d247e5 

Create the kubeconfig directory and configuration file as instructed by the init output.

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
cp: overwrite ‘/root/.kube/config’? y
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
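
Since the old kubeconfig was overwritten (hence the cp overwrite prompt above), it is worth confirming that kubectl now points at the freshly initialized control plane:

$ kubectl cluster-info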

3 Add worker nodes to the k8s cluster

Next, add the other two worker nodes to the k8s cluster.
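
The join command is taken from the kubeadm init output above. If it has been lost, or the bootstrap token has expired (tokens are valid for 24 hours by default), a new join command can be generated on the control-plane node:

$ kubeadm token create --print-join-command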

Add the ops-worker-1 node to the k8s cluster.

$ kubeadm join 10.220.43.203:6443 --token 6viszv.cpokyex9yilcl21q \
--discovery-token-ca-cert-hash sha256:13c82c866ecff39980a9c649f2ace2758c84015e10e44f8bb0a2735cd9d247e5 
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
        [ERROR Port-10250]: Port 10250 is in use
        [ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

When a worker node rejoins the k8s cluster, it also needs to have its original settings cleared first.

$ kubeadm reset
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1211 21:21:31.827043   28238 reset.go:99] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get "https://10.220.43.203:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W1211 21:21:33.947314   28238 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.

Add the ops-worker-1 node to the k8s cluster again; this time the join succeeds and the node shows up in kubectl get nodes.

$ kubeadm join 10.220.43.203:6443 --token 6viszv.cpokyex9yilcl21q         --discovery-token-ca-cert-hash sha256:13c82c866ecff39980a9c649f2ace2758c84015e10e44f8bb0a2735cd9d247e5 
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
$ kubectl get nodes
NAME           STATUS   ROLES                  AGE     VERSION
ops-master-1   Ready    control-plane,master   8m55s   v1.21.9
ops-worker-1   Ready    <none>                 2m35s   v1.21.9

Perform the same operations on the ops-worker-2 node.

$ kubeadm reset
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1211 21:21:55.477393    7815 reset.go:99] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get "https://10.220.43.203:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W1211 21:21:57.848908    7815 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
$ kubeadm join 10.220.43.203:6443 --token 6viszv.cpokyex9yilcl21q         --discovery-token-ca-cert-hash sha256:13c82c866ecff39980a9c649f2ace2758c84015e10e44f8bb0a2735cd9d247e5 
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Check the k8s cluster node status.

$ kubectl get nodes
NAME           STATUS   ROLES                  AGE     VERSION
ops-master-1   Ready    control-plane,master   8m55s   v1.21.9
ops-worker-1   Ready    <none>                 2m35s   v1.21.9
ops-worker-2   Ready    <none>                 2m7s    v1.21.9

4 Install Calico

The k8s cluster had been installed once before, and the Calico plug-in was installed with it. After the reinstall, Calico has not been applied yet, and yet kubectl get nodes already shows the nodes as Ready. This is because kubeadm reset does not remove the old CNI configuration under /etc/cni/net.d (as its output noted above), so the kubelet still finds a network configuration and reports the node as Ready even though no Calico pods are running. Calico therefore needs to be reinstalled.
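
The calico.yaml applied below is the manifest kept from the original installation. If it is no longer on disk, it can be downloaded again from the Calico project (the URL below is the one commonly used for this Kubernetes generation and may have moved in newer releases, so treat it as an assumption and check the Calico documentation). It is also worth verifying that the pool CIDR in the manifest (CALICO_IPV4POOL_CIDR) matches the --pod-network-cidr passed to kubeadm init:

$ curl -LO https://docs.projectcalico.org/manifests/calico.yaml
$ grep -A1 CALICO_IPV4POOL_CIDR calico.yaml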

$ kubectl apply -f calico.yaml 
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
serviceaccount/calico-cni-plugin created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrole.rbac.authorization.k8s.io/calico-cni-plugin created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-cni-plugin created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
$  kubectl get nodes
NAME           STATUS   ROLES                  AGE     VERSION
ops-master-1   Ready    control-plane,master   11m     v1.21.9
ops-worker-1   Ready    <none>                 5m34s   v1.21.9
ops-worker-2   Ready    <none>                 5m6s    v1.21.9
$ kubectl get pods -n kube-system -o wide
NAME                                       READY   STATUS    RESTARTS   AGE     IP              NODE           NOMINATED NODE   READINESS GATES
calico-kube-controllers-5d4b78db86-qspld   1/1     Running   0          63s     172.16.50.129   ops-worker-2   <none>           <none>
calico-node-mm4wq                          1/1     Running   0          63s     10.220.43.204   ops-worker-1   <none>           <none>
calico-node-p8gh8                          1/1     Running   0          63s     10.220.43.205   ops-worker-2   <none>           <none>
calico-node-xg6mf                          1/1     Running   0          63s     10.220.43.203   ops-master-1   <none>           <none>
coredns-59d64cd4d4-rnpb2                   1/1     Running   0          11m     172.16.13.2     ops-master-1   <none>           <none>
coredns-59d64cd4d4-wfbgk                   1/1     Running   0          11m     172.16.13.1     ops-master-1   <none>           <none>
etcd-ops-master-1                          1/1     Running   0          11m     10.220.43.203   ops-master-1   <none>           <none>
kube-apiserver-ops-master-1                1/1     Running   0          11m     10.220.43.203   ops-master-1   <none>           <none>
kube-controller-manager-ops-master-1       1/1     Running   0          11m     10.220.43.203   ops-master-1   <none>           <none>
kube-proxy-dpmpb                           1/1     Running   0          5m17s   10.220.43.205   ops-worker-2   <none>           <none>
kube-proxy-hd2sj                           1/1     Running   0          5m45s   10.220.43.204   ops-worker-1   <none>           <none>
kube-proxy-wxm2z                           1/1     Running   0          11m     10.220.43.203   ops-master-1   <none>           <none>
kube-scheduler-ops-master-1                1/1     Running   0          11m     10.220.43.203   ops-master-1   <none>           <none>
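
Instead of repeatedly polling kubectl get pods, the Calico rollout can also be waited on directly:

$ kubectl -n kube-system rollout status daemonset/calico-node
$ kubectl -n kube-system rollout status deployment/calico-kube-controllers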
