Summary of ceph -s Cluster Errors

Source: https://blog.csdn.net/qq_32485197/article/details/88892264

Problem 1:

ceph -s

health HEALTH_WARN
            too many PGs per OSD (320 > max 300)
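
The number in the warning is the total count of PG replicas divided by the OSD count. As a rough check (a sketch; the replica size of 2 is an assumption, though it is consistent with the 320 PGs on 2 OSDs this cluster reports below):

# PGs per OSD = (total PGs x replica size) / number of OSDs
#             = (320 x 2) / 2 = 320, which exceeds the default threshold of 300
ceph osd lspools                 # list the pools in the cluster
ceph osd pool get <pool> size    # show the replica size of a given pool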

Check the current warning threshold for the maximum number of PGs per OSD:

[root@k8s-master01 ~]# ceph --show-config  | grep mon_pg_warn_max_per_osd
mon_pg_warn_max_per_osd = 300
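
ceph --show-config prints the built-in defaults merged with the local config file. To confirm the value a specific running monitor is actually using, you can query its admin socket on that host. A sketch (mon.k8s-master01 matches this cluster's monmap; run the command on that node):

[root@k8s-master01 ~]# ceph daemon mon.k8s-master01 config get mon_pg_warn_max_per_osd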

Solution: raise the warning threshold for this option in the cluster configuration. Add the following to ceph.conf (/etc/ceph/ceph.conf) on each mon node:

[root@k8s-master01 ~]# vim /etc/ceph/ceph.conf 
[global]
.......
mon_pg_warn_max_per_osd = 1000
Restart the monitor service:

[root@k8s-master01 ~]# systemctl restart ceph-mon.target
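
On a three-mon cluster like this one, restart the monitors one node at a time and make sure quorum recovers in between. If a restart is inconvenient, the value can also be injected into the running monitors; injected values do not survive a restart, so the ceph.conf change above is still needed for persistence. A sketch:

ceph tell mon.* injectargs '--mon_pg_warn_max_per_osd=1000'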

[root@k8s-master01 ~]# ceph -s
    cluster 794a56bb-db20-433d-a47d-d6bc9dd586d3
     health HEALTH_OK
     monmap e1: 3 mons at {k8s-master01=192.168.0.164:6789/0,k8s-master02=192.168.0.165:6789/0,k8s-master03=192.168.0.166:6789/0}
            election epoch 12, quorum 0,1,2 k8s-master01,k8s-master02,k8s-master03
      fsmap e7: 1/1/1 up {0=k8s-master01=up:active}, 2 up:standby
     osdmap e31: 2 osds: 2 up, 2 in
            flags sortbitwise,require_jewel_osds
      pgmap v184: 320 pgs, 3 pools, 7555 bytes data, 20 objects
            218 MB used, 199 GB / 199 GB avail
                 320 active+clean
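
Note that raising mon_pg_warn_max_per_osd only silences the warning. The usual sizing rule of thumb targets on the order of 100 PGs per OSD, so for this cluster the numbers work out as follows (a sketch based on that rule, not on the original post):

# total pg_num across all pools ≈ (OSDs x 100) / replica size, rounded to a power of 2
#                               = (2 x 100) / 2 = 100, i.e. roughly 128 PGs total
# This cluster carries 320 PGs, so the longer-term fix is adding OSDs, not raising the threshold.

Also, this option applies to Jewel-era releases like this cluster (see the require_jewel_osds flag); on Luminous and later it was replaced by mon_max_pg_per_osd.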
