kube-controller-manager CIDR allocation failed

Checking the kube-controller-manager logs shows that there are no IP ranges left to allocate:

# kubectl logs -f kube-controller-manager-loc-k8s-master -n kube-system
...
I0517 08:56:08.159382       1 event.go:291] "Event occurred" object="loc-node37" kind="Node" apiVersion="v1" type="Normal" reason="CIDRNotAvailable" message="Node loc-node37 status is now: CIDRNotAvailable"
E0517 08:58:55.524439       1 controller_utils.go:260] Error while processing Node Add/Delete: failed to allocate cidr from cluster cidr at idx:0: CIDR allocation failed; there are no remaining CIDRs left to allocate in the accepted range
...

For kube-controller-manager to assign a podCIDR to each Node, it must be started with --allocate-node-cidrs=true. Used together with --cluster-cidr, this makes the controller-manager carve a per-node pod IP range out of the cluster CIDR and write the result to each Node's podCIDR field. Inspecting this environment's kube-controller-manager configuration reveals the problem: as shown below, --cluster-cidr was set to 172.16.100.0/24 while --node-cidr-mask-size was set to 24. --node-cidr-mask-size is the mask length of the per-node CIDR (default 24), and each per-node range must be carved out of --cluster-cidr, so the number of allocatable ranges is 2^(node-cidr-mask-size − cluster-cidr prefix length). With both set to 24 there is exactly one /24 to hand out; the cluster CIDR cannot satisfy any further nodes, so kube-controller-manager fails to allocate addresses for them.
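The allocation arithmetic can be sketched with plain shell arithmetic (the first line uses this environment's original numbers; the second shows, for comparison, how many nodes a /20 cluster CIDR could serve with the default /24 node mask):

```shell
# Each node receives one block of size --node-cidr-mask-size carved out of
# --cluster-cidr, so the number of nodes that can ever get a podCIDR is
#   2^(node-cidr-mask-size - cluster-cidr prefix length)

echo $(( 1 << (24 - 24) ))   # /24 cluster CIDR, /24 node mask -> 1 node only
echo $(( 1 << (24 - 20) ))   # /20 cluster CIDR, default /24 node mask -> 16 nodes
```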

On the master, edit the kube-controller-manager manifest and adjust the --cluster-cidr and --node-cidr-mask-size parameters:

# vim /etc/kubernetes/manifests/kube-controller-manager.yaml
......
    - kube-controller-manager
    - --allocate-node-cidrs=true
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=127.0.0.1
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --cluster-cidr=172.16.100.0/20                 # changed
    - --cluster-name=kubernetes
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    - --node-cidr-mask-size=20                        # changed
    - --port=0
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=192.168.0.0/24
    - --use-service-account-credentials=true
......

kube-controller-manager runs as a static pod, so the change takes effect automatically a few seconds after the manifest is saved; no manual restart is needed (this environment runs Kubernetes 1.19.20). After the change, the error above no longer appears in the logs.

# ps -ef|grep controller   # the --node-cidr-mask-size in the running command line now shows the new value

# kubectl logs -f kube-controller-manager-loc-master35 -n kube-system
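Besides tailing the logs, the allocation result can be checked directly on the Node objects, since the assigned range is written to each Node's spec.podCIDR:

```shell
# Print every node's assigned pod CIDR; a node with an empty PODCIDR
# column never received an allocation.
kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR
```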

Reference:
https://www.jianshu.com/p/29e9c967f029

Reposted from blog.csdn.net/qq_25854057/article/details/124827366