Summary of problems encountered when adding a node to a k8s cluster

    Environment: a k8s cluster deployed with kubeadm

Problems:

    1. Check the versions of kubeadm, kubelet, and kubectl already installed on the existing nodes

[root@k8s-3 ~]# yum list kubeadm kubelet kubectl 
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.nju.edu.cn
 * epel: ftp.riken.jp
 * extras: mirrors.nju.edu.cn
 * updates: mirrors.nju.edu.cn
Installed Packages
kubeadm.x86_64                                                                1.17.4-0                                                                @kubernetes
kubectl.x86_64                                                                1.17.4-0                                                                @kubernetes
kubelet.x86_64                                                                1.17.4-0                                                                @kubernetes
Available Packages
kubeadm.x86_64                                                                1.18.0-0                                                                kubernetes 
kubectl.x86_64                                                                1.18.0-0                                                                kubernetes 
kubelet.x86_64                                                                1.18.0-0                                                                kubernetes

    Install the same versions on the node being added:

yum install -y kubeadm-1.17.4-0 kubectl-1.17.4-0 kubelet-1.17.4-0
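One thing worth noting: if the kubernetes yum repo was configured with `exclude=kube*` (as the official kubeadm install guide suggests for CentOS), the pinned install also needs `--disableexcludes`. A sketch, assuming that repo layout:

```shell
# Pin the exact versions already running in the cluster; --disableexcludes
# is only needed when the repo file sets exclude=kube* (common in the
# official kubeadm setup, to prevent accidental upgrades).
yum install -y kubeadm-1.17.4-0 kubectl-1.17.4-0 kubelet-1.17.4-0 \
    --disableexcludes=kubernetes
```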

    2. The node join command reported an error

I no longer have a screenshot of the error, so roughly: the output suggested checking the kubelet logs. Starting the kubelet service directly failed again with the following:

[root@k8s3-1 kubernetes]# journalctl  -xeu kubelet 
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit kubelet.service has finished starting up.
-- 
-- The start-up result is done.
Apr 05 23:45:52 k8s3-1 kubelet[5469]: F0405 23:45:52.985015    5469 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed 
Apr 05 23:45:52 k8s3-1 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Apr 05 23:45:52 k8s3-1 systemd[1]: Unit kubelet.service entered failed state.
Apr 05 23:45:52 k8s3-1 systemd[1]: kubelet.service failed.
Apr 05 23:46:04 k8s3-1 systemd[1]: kubelet.service holdoff time over, scheduling restart.
Apr 05 23:46:04 k8s3-1 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.

Sure enough, /var/lib/kubelet/config.yaml did not exist. I tried copying the file from another node and restarting the service, but it still failed. After calming down and thinking it through, I remembered that the kubelet service is started automatically during kubeadm join; all I need to do myself is configure the docker and kubelet services to start on boot. Checking my setup, kubelet was indeed not enabled. After enabling it and running the join command again, the kubelet service started successfully.
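The fix described above boils down to enabling the services and re-running the join. Roughly (the token and hash below are placeholders, not values from the original session):

```shell
# Enable docker and kubelet to start on boot; kubeadm join starts kubelet
# itself, so there is no need to start the service by hand first.
systemctl enable docker kubelet

# Then re-run the join command issued on the master, e.g. obtained there
# with `kubeadm token create --print-join-command` (placeholders below):
# kubeadm join 192.168.191.30:6443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:<hash>
```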

Checking the node information from the master:

[root@k8s-3 ~]# kubectl get nodes -n kube-system  -o wide
NAME     STATUS     ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8s-3    Ready      master   15d   v1.17.4   192.168.191.30   <none>        CentOS Linux 7 (Core)   3.10.0-1062.18.1.el7.x86_64   docker://19.3.8
k8s-4    Ready      node     13d   v1.17.4   192.168.191.31   <none>        CentOS Linux 7 (Core)   3.10.0-1062.18.1.el7.x86_64   docker://19.3.8
k8s3-1   NotReady   <none>   10h   v1.17.4   192.168.191.22   <none>        CentOS Linux 7 (Core)   3.10.0-1062.18.1.el7.x86_64   docker://19.3.8

    3. The network pod on the newly joined node fails to start

    The flannel pod on this node stayed stuck in the Init state, while kube-proxy was fine; both flannel and kube-proxy run as DaemonSets.

[root@k8s-3 ~]# kubectl get pods  -n kube-system  -o wide
NAME                            READY   STATUS     RESTARTS   AGE     IP               NODE     NOMINATED NODE   READINESS GATES
coredns-7f9c544f75-hdfjm        1/1     Running    6          15d     10.244.0.15      k8s-3    <none>           <none>
coredns-7f9c544f75-w62rh        1/1     Running    6          15d     10.244.0.14      k8s-3    <none>           <none>
etcd-k8s-3                      1/1     Running    9          12d     192.168.191.30   k8s-3    <none>           <none>
kube-apiserver-k8s-3            1/1     Running    9          15d     192.168.191.30   k8s-3    <none>           <none>
kube-controller-manager-k8s-3   1/1     Running    76         15d     192.168.191.30   k8s-3    <none>           <none>
kube-flannel-ds-amd64-dqv9t     1/1     Running    0          15m     192.168.191.30   k8s-3    <none>           <none>
kube-flannel-ds-amd64-mw6bq     0/1     Init:0/1   0          15m     192.168.191.22   k8s3-1   <none>           <none>
kube-flannel-ds-amd64-rsl68     1/1     Running    0          15m     192.168.191.31   k8s-4    <none>           <none>
kube-proxy-54kv7                1/1     Running    1          10h     192.168.191.22   k8s3-1   <none>           <none>
kube-proxy-7jwmj                1/1     Running    17         15d     192.168.191.30   k8s-3    <none>           <none>
kube-proxy-psrgh                1/1     Running    4          6d22h   192.168.191.31   k8s-4    <none>           <none>
kube-scheduler-k8s-3            1/1     Running    74         15d     192.168.191.30   k8s-3    <none>           <none>


The kubelet logs showed that the flannel configuration file did not exist. I considered copying one over by hand, but that did not feel right; something else had to be wrong. Then it occurred to me that the pod also needs its image to start. Checking docker images confirmed that the flannel image was indeed missing, so I imported one manually.

Apr  6 11:32:58 k8s3-1 kubelet: W0406 11:32:58.346394    2867 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Apr  6 11:33:01 k8s3-1 kubelet: E0406 11:33:01.498881    2867 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Apr  6 11:33:03 k8s3-1 kubelet: W0406 11:33:03.347829    2867 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Apr  6 11:33:06 k8s3-1 kubelet: E0406 11:33:06.530602    2867 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Apr  6 11:33:08 k8s3-1 kubelet: W0406 11:33:08.348352    2867 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Apr  6 11:33:11 k8s3-1 kubelet: E0406 11:33:11.572273    2867 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Apr  6 11:33:13 k8s3-1 kubelet: W0406 11:33:13.350727    2867 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Apr  6 11:33:16 k8s3-1 kubelet: E0406 11:33:16.599437    2867 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Manually import the flannel image:

[root@k8s3-1 ~]# docker images
REPOSITORY                                                       TAG                 IMAGE ID            CREATED             SIZE
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy   v1.17.4             6dec7cfde1e5        3 weeks ago         116MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause        3.1                 da86e6ba6ca1        2 years ago         742kB
[root@k8s3-1 ~]# docker load --input flannel.tar 
256a7af3acb1: Loading layer [==================================================>]  5.844MB/5.844MB
d572e5d9d39b: Loading layer [==================================================>]  10.37MB/10.37MB
57c10be5852f: Loading layer [==================================================>]  2.249MB/2.249MB
7412f8eefb77: Loading layer [==================================================>]  35.26MB/35.26MB
05116c9ff7bf: Loading layer [==================================================>]   5.12kB/5.12kB
Loaded image: quay.io/coreos/flannel:v0.12.0-amd64
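A quick way to check whether a node already has the image a DaemonSet needs, and to fetch it if not (image name taken from the `docker load` output above):

```shell
# List flannel images on the node; if nothing matches, either load a
# tarball as above or pull directly from the registry.
docker images | grep flannel || docker pull quay.io/coreos/flannel:v0.12.0-amd64
```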



Check the node information on the master again:

[root@k8s-3 ~]# kubectl get pods  -n kube-system  -o wide
NAME                            READY   STATUS    RESTARTS   AGE     IP               NODE     NOMINATED NODE   READINESS GATES
coredns-7f9c544f75-hdfjm        1/1     Running   6          15d     10.244.0.15      k8s-3    <none>           <none>
coredns-7f9c544f75-w62rh        1/1     Running   6          15d     10.244.0.14      k8s-3    <none>           <none>
etcd-k8s-3                      1/1     Running   9          12d     192.168.191.30   k8s-3    <none>           <none>
kube-apiserver-k8s-3            1/1     Running   9          15d     192.168.191.30   k8s-3    <none>           <none>
kube-controller-manager-k8s-3   1/1     Running   77         15d     192.168.191.30   k8s-3    <none>           <none>
kube-flannel-ds-amd64-dqv9t     1/1     Running   0          44m     192.168.191.30   k8s-3    <none>           <none>
kube-flannel-ds-amd64-mw6bq     1/1     Running   0          44m     192.168.191.22   k8s3-1   <none>           <none>
kube-flannel-ds-amd64-rsl68     1/1     Running   0          44m     192.168.191.31   k8s-4    <none>           <none>
kube-proxy-54kv7                1/1     Running   3          11h     192.168.191.22   k8s3-1   <none>           <none>
kube-proxy-7jwmj                1/1     Running   17         15d     192.168.191.30   k8s-3    <none>           <none>
kube-proxy-psrgh                1/1     Running   4          6d23h   192.168.191.31   k8s-4    <none>           <none>
kube-scheduler-k8s-3            1/1     Running   75         15d     192.168.191.30   k8s-3    <none>           <none>
[root@k8s-3 ~]# kubectl get nodes -n kube-system  -o wide
NAME     STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8s-3    Ready    master   15d   v1.17.4   192.168.191.30   <none>        CentOS Linux 7 (Core)   3.10.0-1062.18.1.el7.x86_64   docker://19.3.8
k8s-4    Ready    node     13d   v1.17.4   192.168.191.31   <none>        CentOS Linux 7 (Core)   3.10.0-1062.18.1.el7.x86_64   docker://19.3.8
k8s3-1   Ready    node     11h   v1.17.4   192.168.191.22   <none>        CentOS Linux 7 (Core)   3.10.0-1062.18.1.el7.x86_64   docker://19.3.8


In the end I never determined why docker did not pull the flannel image on its own.
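One way to dig into why the image was never pulled is to inspect the pod's events (pod name taken from the listing above):

```shell
# The Events section at the bottom of the output shows ErrImagePull /
# ImagePullBackOff entries, including which registry the node tried to reach.
kubectl -n kube-system describe pod kube-flannel-ds-amd64-mw6bq
```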


Configure kubelet to use a domestic (China-mirror) pause image:
cat >/etc/sysconfig/kubelet<<EOF
KUBELET_EXTRA_ARGS="--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"
EOF
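The extra args only take effect after kubelet is restarted:

```shell
# /etc/sysconfig/kubelet is read when the service starts, so restart it:
systemctl restart kubelet
```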



Reposted from blog.51cto.com/12182612/2485100