Ceph Block Device Practice: Error Roundup

  1. clock skew detected error
# Check the cluster status:
[cephadm@ceph-master my-cluster]$ ceph status
  cluster:
    id:     ce89b98d-91a5-44b5-a546-6648492b1646
    health: HEALTH_WARN
            clock skew detected on mon.ceph-node02, mon.ceph-node03            # the clock skew detected error
...

Solution for the clock skew detected error:

First, check whether the NTP service is running.
# Check whether the ntpd service is running
systemctl status ntpd
# Start the ntpd service
systemctl start ntpd
# Enable ntpd to start on boot
systemctl enable ntpd
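A running ntpd does not guarantee the clock is actually in sync; a quick check on each node (a minimal sketch, assuming the standard ntpq and timedatectl tools are installed):
# Peer status; the line prefixed with '*' is the currently selected time source
ntpq -p
# Overall clock status as reported by systemd
timedatectl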
Next, raise the clock drift thresholds in Ceph.
# Edit ceph.conf and add the following two settings under [global]
mon clock drift allowed = 2
mon clock drift warn backoff = 30
# Push ceph.conf to the other nodes
ceph-deploy --overwrite-conf config push ceph-node01 ceph-node02 ceph-node03 ceph-master
# Restart the mon service (on every monitor node)
systemctl restart ceph-mon.target
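For reference, a sketch of how the edited [global] section might look; the fsid is the cluster id shown above, and the other ceph-deploy-generated settings are left untouched:
[global]
fsid = ce89b98d-91a5-44b5-a546-6648492b1646
# ... mon_initial_members, mon_host and the rest of the generated settings stay as they are ...
mon clock drift allowed = 2
mon clock drift warn backoff = 30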
Finally, check the cluster status again.
[root@ceph-master cephadm]# ceph -s
  cluster:
    id:     ce89b98d-91a5-44b5-a546-6648492b1646
    health: HEALTH_OK

Success, problem solved!
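If the warning comes back later, the per-monitor skew details can be inspected directly (a standard Ceph command, not specific to this setup):
# Shows which monitor is skewed and by how much
ceph health detail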


  2. 100.000% pgs unknown
# Embarrassing to admit, but I lost a whole afternoon to this one: no disks had been deployed as OSDs yet. That is what shaky fundamentals get you!
# I tend to muddle through and only really understand things after doing them a few times.

# Fix: zap the data disks and create an OSD on each of them, on every node
for dev in /dev/sdb /dev/sdc /dev/sdd
do
ceph-deploy disk zap ceph-master $dev
ceph-deploy osd create ceph-master --data $dev
ceph-deploy disk zap ceph-node01 $dev
ceph-deploy osd create ceph-node01 --data $dev
ceph-deploy disk zap ceph-node02 $dev
ceph-deploy osd create ceph-node02 --data $dev
ceph-deploy disk zap ceph-node03 $dev
ceph-deploy osd create ceph-node03 --data $dev
done
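Before running a destructive loop like the one above, it is worth confirming that the target devices exist and hold nothing you need, since disk zap wipes them. A minimal pre-check, assuming passwordless ssh from the deploy node to each host:
for host in ceph-master ceph-node01 ceph-node02 ceph-node03
do
# List the candidate data disks on each node; they should carry no partitions or mounts
ssh $host lsblk /dev/sdb /dev/sdc /dev/sdd
done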

# Healthy state after the OSDs are created
[root@ceph-master cephadm]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME            STATUS REWEIGHT PRI-AFF
-1       0.57477 root default
-3       0.14369     host ceph-master
 0   hdd 0.04790         osd.0            up  1.00000 1.00000
 1   hdd 0.04790         osd.1            up  1.00000 1.00000
 2   hdd 0.04790         osd.2            up  1.00000 1.00000
-5       0.14369     host ceph-node01
 3   hdd 0.04790         osd.3            up  1.00000 1.00000
 4   hdd 0.04790         osd.4            up  1.00000 1.00000
 5   hdd 0.04790         osd.5            up  1.00000 1.00000
-7       0.14369     host ceph-node02
 6   hdd 0.04790         osd.6            up  1.00000 1.00000
 7   hdd 0.04790         osd.7            up  1.00000 1.00000
 8   hdd 0.04790         osd.8            up  1.00000 1.00000
-9       0.14369     host ceph-node03
 9   hdd 0.04790         osd.9            up  1.00000 1.00000
10   hdd 0.04790         osd.10           up  1.00000 1.00000
11   hdd 0.04790         osd.11           up  1.00000 1.00000
[root@ceph-master cephadm]# ceph -s
  cluster:
    id:     ce89b98d-91a5-44b5-a546-6648492b1646
    health: HEALTH_WARN
            Long heartbeat ping times on back interface seen, longest is 1234.319 msec
            Long heartbeat ping times on front interface seen, longest is 1227.693 msec

  services:
    mon: 4 daemons, quorum ceph-master,ceph-node02,ceph-node03,ceph-node01
    mgr: ceph-master(active), standbys: ceph-node02, ceph-node03, ceph-node01
    osd: 12 osds: 12 up, 12 in

  data:
    pools:   1 pools, 128 pgs
    objects: 0  objects, 0 B
    usage:   12 GiB used, 576 GiB / 588 GiB avail
    pgs:     128 active+clean
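With the OSDs in place, none of the PGs should remain unknown; a quick one-line confirmation (standard Ceph command):
# Compact PG summary; expect all 128 PGs to be active+clean
ceph pg stat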

Reposted from blog.csdn.net/u012720518/article/details/105510564