[Reprint] Solution: master stuck in NotReady state after kubeadm init

After kubeadm init, the master stays in NotReady state

https://blog.csdn.net/wangmiaoyan/article/details/101216496

Strangely, my case was not quite the same: on my side this file does exist (see below), but after fixing it up following this article, everything worked.

[root@k8s106 net.d]# pwd
/etc/cni/net.d
[root@k8s106 net.d]# cat 10-flannel.conf
{
  "name": "cbr0",
  "type": "flannel",
  "delegate": {
    "isDefaultGateway": true
  }
}
[root@k8s106 net.d]#
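
As a quick check (not part of the original post), you can look at the CNI config directory and see whether any file already declares a cniVersion; the "config version" error later in this article is triggered when that field is missing:

# Hedged extra check: list the CNI configs and look for a cniVersion field.
ls -l /etc/cni/net.d/
grep -H cniVersion /etc/cni/net.d/* || echo "no cniVersion declared in any CNI config"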

 


When installing Kubernetes with kubeadm and checking the cluster state, the master stays in NotReady:

 

[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady master 32m v1.16.0
node1 NotReady <none> 8m2s v1.16.0
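
A useful next step (not in the original) is to ask the node itself why it is NotReady; the Ready condition shown by kubectl describe usually names the cause directly:

# Extra diagnostic: the node's Ready condition explains the NotReady state.
kubectl describe node master | grep -A 6 Conditions:
# Typically reports: KubeletNotReady ... cni config uninitialized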

 

To find the problem, first check the status of the pods:

[root@master ~]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-58cc8c89f4-pkx82 0/1 Pending 0 34m
kube-system coredns-58cc8c89f4-sddpq 0/1 Pending 0 34m
kube-system etcd-master 1/1 Running 0 34m
kube-system kube-apiserver-master 1/1 Running 0 34m
kube-system kube-controller-manager-master 1/1 Running 0 33m
kube-system kube-proxy-4fj8z 1/1 Running 0 10m
kube-system kube-proxy-v54nh 1/1 Running 0 34m
kube-system kube-scheduler-master 1/1 Running 0 34m
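
The two CoreDNS pods are Pending because no pod network is available yet, so they cannot be placed on any node. To confirm this (an extra step, not in the original), describe one of them and read the events at the bottom; the pod name below is copied from the listing above, so substitute your own:

# Hypothetical pod name taken from the listing above; adjust to match your cluster.
kubectl -n kube-system describe pod coredns-58cc8c89f4-pkx82 | tail -n 15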

 


CoreDNS is stuck in Pending, so look further at the kubelet.service log:

[root@master ~]# journalctl -f -u kubelet.service
-- Logs begin at Mon 2019-09-23 22:55:58 CST. --
Sep 24 01:33:44 master kubelet[6213]: E0924 01:33:44.107180 6213 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Sep 24 01:33:46 master kubelet[6213]: W0924 01:33:46.416805 6213 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
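
"cni config uninitialized" means the kubelet found no usable CNI configuration under /etc/cni/net.d, which is what that directory looks like before any network add-on is installed. A quick check (not in the original):

# Before a CNI add-on is deployed, this directory is typically empty.
ls -l /etc/cni/net.d/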

 



This is a network problem: the flannel CNI add-on has not been deployed yet, so apply it:

[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel unchanged
clusterrolebinding.rbac.authorization.k8s.io/flannel unchanged
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg unchanged
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
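
The "no matches for kind "DaemonSet" in version "extensions/v1beta1"" errors appear because Kubernetes 1.16 removed the extensions/v1beta1 API for DaemonSets; they are only served from apps/v1 now, and this pinned flannel manifest predates that change. One way to confirm which group serves DaemonSets on your cluster (not in the original):

# Shows the API group that serves DaemonSets on this cluster (apps, not extensions).
kubectl api-resources | grep -i daemonset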

 

The solution is to use a different link, the manifest from the flannel master branch, whose DaemonSets use apps/v1:

[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel configured
clusterrolebinding.rbac.authorization.k8s.io/flannel unchanged
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg unchanged
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
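
To verify that the flannel DaemonSet pods actually start (an extra check, not in the original), list them in kube-system; in the coreos manifest they are labeled app=flannel, so adjust the selector if your manifest differs:

# Flannel pods in this manifest carry the label app=flannel; adjust if yours differ.
kubectl -n kube-system get pods -l app=flannel -o wide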

 

Check the node status again: it is still NotReady. View the log:

journalctl -f -u kubelet.service 
-- Logs begin at Mon 2019-09-23 22:55:58 CST. --
Sep 24 14:36:17 master kubelet[6213]: E0924 14:36:17.023967 6213 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Sep 24 14:36:19 master kubelet[6213]: W0924 14:36:19.214442 6213 cni.go:202] Error validating CNI config &{cbr0 false [0xc0006a22a0 0xc0006a2360] [123 10 32 32 34 110 97 109 101 34 58 32 34 99 98 114 48 34 44 10 32 32 34 112 108 117 103 105 110 115 34 58 32 91 10 32 32 32 32 123 10 32 32 32 32 32 32 34 116 121 112 101 34 58 32 34 102 108 97 110 110 101 108 34 44 10 32 32 32 32 32 32 34 100 101 108 101 103 97 116 101 34 58 32 123 10 32 32 32 32 32 32 32 32 34 104 97 105 114 112 105 110 77 111 100 101 34 58 32 116 114 117 101 44 10 32 32 32 32 32 32 32 32 34 105 115 68 101 102 97 117 108 116 71 97 116 101 119 97 121 34 58 32 116 114 117 101 10 32 32 32 32 32 32 125 10 32 32 32 32 125 44 10 32 32 32 32 123 10 32 32 32 32 32 32 34 116 121 112 101 34 58 32 34 112 111 114 116 109 97 112 34 44 10 32 32 32 32 32 32 34 99 97 112 97 98 105 108 105 116 105 101 115 34 58 32 123 10 32 32 32 32 32 32 32 32 34 112 111 114 116 77 97 112 112 105 110 103 115 34 58 32 116 114 117 101 10 32 32 32 32 32 32 125 10 32 32 32 32 125 10 32 32 93 10 125 10]}: [plugin flannel does not support config version ""]
Sep 24 14:36:19 master kubelet[6213]: W0924 14:36:19.214617 6213 cni.go:237] Unable to update cni config: no valid networks found in /etc/cni/net.d

 


The error is now "plugin flannel does not support config version """. Fix it by modifying the configuration file:

vim /etc/cni/net.d/10-flannel.conflist
// add the cniVersion field
// the file then reads as follows
{
  "name": "cbr0",
  "cniVersion": "0.2.0",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
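
A quick sanity check after editing (not in the original): the file must still be valid JSON and must now carry a cniVersion. If jq happens to be installed, for example:

# Validates the JSON and prints the cniVersion field the CNI plugin will see.
jq '.cniVersion' /etc/cni/net.d/10-flannel.conflist
# Expected output: "0.2.0"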

 


After the modification, run:

systemctl daemon-reload 
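
Note (my addition, not stated in the original): if the node does not leave NotReady shortly after the daemon-reload, restarting the kubelet forces it to re-read /etc/cni/net.d immediately:

# Optional extra step on the master: make the kubelet pick up the edited CNI config.
systemctl restart kubelet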

Checking the cluster status now shows the master as normal, in Ready state; but node1 is still NotReady.

[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 14h v1.16.0
node1 NotReady <none> 13h v1.16.0

 

The solution: on node1, the kubelet log shows the same error, "no valid networks found in /etc/cni/net.d". Add the cniVersion field to node1's CNI config in the same way and restart; the cluster status then returns to normal.

[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 14h v1.16.0
node1 Ready <none> 14h v1.16.0
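
If you have SSH access from the master, one way (hypothetical, not in the original) to apply the same fix on node1 is to copy the corrected file over and restart the kubelet there:

# Assumes root SSH access to node1; the path matches the file edited above.
scp /etc/cni/net.d/10-flannel.conflist root@node1:/etc/cni/net.d/10-flannel.conflist
ssh root@node1 "systemctl daemon-reload && systemctl restart kubelet"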

 

References:
kubeadm installs Kubernetes 1.14 best practices: https://www.kubernetes.org.cn/5462.html
Kubernetes installation errors (kube-dns stuck in Pending, master node NotReady): https://blog.csdn.net/u013355826/article/details/82786649
How to fix flannel cni plugin error: [plugin flannel does not support config version ""]: https://stackoverflow.com/questions/58037620/how-to-fix-flannel-cni-plugin-error-plugin-flannel-does-not-support-config-ve
Kubernetes Nodes NotReady solutions: https://blog.csdn.net/qq_21816375/article/details/80222689
----------------
Disclaimer: This is an original article by CSDN blogger "wangmiaoyan" and follows the CC 4.0 BY-SA copyright agreement. Please attach the original source link and this statement when reproducing it.
Original link: https://blog.csdn.net/wangmiaoyan/article/details/101216496


Reprinted from: www.cnblogs.com/jinanxiaolaohu/p/12205357.html