kubectl get componentstatus ERROR: HTTP probe failed with statuscode: 503

You can check the status of the Kubernetes components with the kubectl command:

[root@wecloud-test-k8s-1 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-2               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}

Here I want to share the fix for a problem I ran into: when I ran the status check several times in a row, some of the etcd members kept showing up as Unhealthy.

[root@wecloud-test-k8s-1 ~]# kubectl get componentstatuses
NAME                 STATUS      MESSAGE                                  ERROR
controller-manager   Healthy     ok                                       
scheduler            Healthy     ok                                       
etcd-0               Healthy     {"health": "true"}
etcd-2               Healthy     {"health": "true"}
etcd-1               Unhealthy   HTTP probe failed with statuscode: 503
[root@wecloud-test-k8s-1 ~]# kubectl get componentstatuses
NAME                 STATUS      MESSAGE                                  ERROR
scheduler            Healthy     ok
controller-manager   Healthy     ok
etcd-0               Healthy     {"health": "true"}
etcd-2               Unhealthy   HTTP probe failed with statuscode: 503
etcd-1               Unhealthy   HTTP probe failed with statuscode: 503

The symptom was that the reported etcd status was very unstable. Checking the logs showed that the heartbeats between the etcd members were failing:

root@zhangchi-ThinkPad-T450s:~# ssh 192.168.99.189
[root@wecloud-test-k8s-2 ~]# systemctl status etcd
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2018-04-09 22:56:31 CST; 1 day 10h ago
     Docs: https://github.com/coreos
 Main PID: 17478 (etcd)
   CGroup: /system.slice/etcd.service
           └─17478 /usr/local/bin/etcd --name infra1 --cert-file=/etc/kubernetes/ssl/kubernetes.pem --key-file=/etc/kubernetes/ssl/kubern...

4月 11 09:33:35 wecloud-test-k8s-2.novalocal etcd[17478]: e23bf6fd185b2dc5 [quorum:2] has received 1 MsgVoteResp votes and 1 vote ...ctions
4月 11 09:33:36 wecloud-test-k8s-2.novalocal etcd[17478]: e23bf6fd185b2dc5 received MsgVoteResp from c9b9711086e865e3 at term 337
4月 11 09:33:36 wecloud-test-k8s-2.novalocal etcd[17478]: e23bf6fd185b2dc5 [quorum:2] has received 2 MsgVoteResp votes and 1 vote ...ctions
4月 11 09:33:36 wecloud-test-k8s-2.novalocal etcd[17478]: e23bf6fd185b2dc5 became leader at term 337
4月 11 09:33:36 wecloud-test-k8s-2.novalocal etcd[17478]: raft.node: e23bf6fd185b2dc5 elected leader e23bf6fd185b2dc5 at term 337
4月 11 09:33:41 wecloud-test-k8s-2.novalocal etcd[17478]: timed out waiting for read index response
4月 11 09:33:46 wecloud-test-k8s-2.novalocal etcd[17478]: failed to send out heartbeat on time (exceeded the 100ms timeout for 401...516ms)
4月 11 09:33:46 wecloud-test-k8s-2.novalocal etcd[17478]: server is likely overloaded
4月 11 09:33:46 wecloud-test-k8s-2.novalocal etcd[17478]: failed to send out heartbeat on time (exceeded the 100ms timeout for 401.80886ms)
4月 11 09:33:46 wecloud-test-k8s-2.novalocal etcd[17478]: server is likely overloaded
Hint: Some lines were ellipsized, use -l to show in full.

The key error message is: failed to send out heartbeat on time (exceeded the 100ms timeout for 401.80886ms)
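To see how often this warning fires, you can grep the etcd journal. A quick sketch, assuming etcd logs to the systemd journal as the output above shows:

# Count heartbeat warnings from the last hour
journalctl -u etcd --since "1 hour ago" | grep -c "failed to send out heartbeat"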

Heartbeat failures like this are mainly related to the following factors (disk speed, CPU performance, and network instability):

etcd uses the Raft algorithm: the leader periodically sends a heartbeat to every follower, and if the leader fails to send a heartbeat for two consecutive heartbeat intervals, etcd logs this warning. Usually the issue is caused by a slow disk. The leader attaches some metadata to the heartbeat and must first persist that data to disk before the heartbeat can be sent; the disk write may be competing with other applications, or the disk may simply be slow because it is virtual or a SATA drive, in which case only better, faster disk hardware will really solve the problem. The wal_fsync_duration_seconds metric that etcd exposes to Prometheus shows the average time spent fsyncing the WAL; normally it should be below 10ms.
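To check this metric directly you can scrape etcd's /metrics endpoint. A minimal sketch, assuming the client endpoint is reachable at https://127.0.0.1:2379; the --cacert/--cert/--key paths below are placeholders, use the client certificates your cluster is actually configured with:

# Inspect the WAL fsync latency that etcd exports to Prometheus
curl -s --cacert /path/to/ca.pem --cert /path/to/client.pem --key /path/to/client-key.pem \
     https://127.0.0.1:2379/metrics | grep wal_fsync_duration_seconds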

The second cause is insufficient CPU. If monitoring shows that CPU utilization really is high, move etcd to a more powerful machine, use cgroups to give the etcd process exclusive use of certain cores, or raise the priority of the etcd process.
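One hedged way to do that with systemd (which manages etcd here) is a unit drop-in; the file path, core numbers, and nice value below are illustrative assumptions, not taken from this cluster:

# /etc/systemd/system/etcd.service.d/cpu.conf  (hypothetical drop-in)
[Service]
# Pin etcd to dedicated CPU cores 0 and 1
CPUAffinity=0 1
# Raise the scheduling priority of the etcd process
Nice=-10

After adding the drop-in, run systemctl daemon-reload and restart etcd for it to take effect.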

The third possible cause is a slow network. If Prometheus shows poor network quality, for example high latency or a high packet-loss rate, moving etcd to a less congested network will solve the problem. But if etcd is deployed across data centers, high latency is unavoidable; in that case adjust heartbeat-interval according to the RTT between the data centers, and set election-timeout to at least 5 times heartbeat-interval.
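To get a rough idea of the RTT between members before tuning, a plain ping from one etcd node to another is enough (the peer address below is the second node of this cluster; adjust for your own):

# Measure round-trip time to another etcd member
ping -c 10 192.168.99.189
# Rule of thumb: heartbeat-interval ≈ worst observed RTT (in ms),
# election-timeout >= 5 x heartbeat-interval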

This experiment ran on OpenStack cloud VMs, where insufficient disk I/O is a known issue, so the fix here is to increase the heartbeat-interval value.

Edit the /etc/etcd/etcd.conf file on each etcd node and add the following:

A 6-second heartbeat interval (with the election timeout set to 30 seconds, 5x the interval):

ETCD_HEARTBEAT_INTERVAL=6000     
ETCD_ELECTION_TIMEOUT=30000
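These environment variables correspond to etcd's --heartbeat-interval and --election-timeout flags (both in milliseconds). If your unit starts etcd with flags instead of an environment file, the equivalent tuning would be, as a sketch:

# Same tuning expressed as etcd command-line flags (values in milliseconds),
# appended to the flags already present in the unit file
--heartbeat-interval=6000 --election-timeout=30000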

Then restart the etcd service:
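On each etcd node (assuming systemd manages etcd, as the systemctl status output above shows):

# Restart etcd so the new heartbeat settings take effect
systemctl restart etcd

# The etcd members should now consistently report Healthy
kubectl get componentstatuses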

Reposted from www.cnblogs.com/wangjq19920210/p/9257156.html