Some problems encountered while removing a Ceph OSD

1. Delete the OSD

The OSD to be deleted has already been created, with its data and logs on the same disk.

First kick osd.0 out of the cluster: run ceph osd out 0.
Then stop the OSD process: run systemctl stop ceph-osd@0.

Then run ceph osd crush remove osd.0; after this, osd.0 no longer appears in the osd tree.

Run ceph auth del osd.0 and ceph osd rm 0. The OSD is now successfully deleted, but the original data and log directories are still there; that is, the data remains on disk.
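The removal steps above can be collected into one script. A minimal sketch, written as a dry run: the `run` helper only prints each command instead of executing it, so nothing touches a live cluster until you replace its body with `"$@"`.

```shell
#!/bin/sh
# Remove osd.0 from the cluster, step by step (dry-run sketch).
ID=0
OUT=""
run() { OUT="${OUT}+ $*;"; echo "+ $*"; }

run ceph osd out "$ID"              # mark out: data starts migrating off it
run systemctl stop "ceph-osd@$ID"   # stop the daemon on the OSD's host
run ceph osd crush remove "osd.$ID" # remove it from the CRUSH map / osd tree
run ceph auth del "osd.$ID"         # delete its cephx key
run ceph osd rm "$ID"               # remove the OSD id from the cluster
```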


At this point we umount the /dev/sdd disk and then wipe it so the data is completely removed: run umount /dev/sdd, then ceph-disk zap /dev/sdd.

However, the umount failed, and the mount directory could not be removed.


Workaround
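A likely cause is that some process, often the ceph-osd daemon itself or a shell sitting inside the directory, still holds the mount point. A hedged sketch of the usual diagnosis follows; the mount point path is an assumption, since it is not stated above, and the commands are only printed (dry run), not executed.

```shell
#!/bin/sh
# Diagnose a busy mount point before retrying umount (dry-run sketch).
MNT=/var/lib/ceph/osd/ceph-0   # assumed mount point for osd.0
OUT=""
run() { OUT="${OUT}+ $*;"; echo "+ $*"; }

run fuser -mv "$MNT"            # list processes still using the mount
run systemctl stop ceph-osd@0   # make sure the daemon has really exited
run umount "$MNT"               # retry; "umount -l" (lazy) is the last resort
```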


2. Recreate the OSD

ceph-deploy osd create --data /dev/sdd node1

Because the original LVM records are still there, the OSD cannot be recreated; the error points at the leftover LVM record.

Look up the VG recorded for the old OSD:

ceph-volume lvm list


Delete the VG record:

vgremove ceph-7216ab35-9637-4930-9537-afe9f1525efa
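The stale LVM metadata can also be inspected and cleared with the generic LVM tools. A sketch, again as a dry run; the VG name is the one from the ceph-volume output above, and the pvremove step is an extra, optional cleanup not shown in the original.

```shell
#!/bin/sh
# Clear stale LVM state so the disk can be reused (dry-run sketch).
VG=ceph-7216ab35-9637-4930-9537-afe9f1525efa
OUT=""
run() { OUT="${OUT}+ $*;"; echo "+ $*"; }

run vgs                 # confirm the stale VG is still listed
run vgremove -f "$VG"   # remove the VG and its logical volumes in one go
run pvremove /dev/sdd   # optional: clear the LVM label from the disk itself
```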


At this point /dev/sdd is freed up again.


Run the OSD creation again:

ceph-deploy osd create --data /dev/sdd node1


Bring osd.0 back into the cluster:

ceph osd in osd.0
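After recreating the OSD and marking it in, it is worth verifying that osd.0 is really back. A short dry-run sketch of the usual checks:

```shell
#!/bin/sh
# Verify the recreated OSD joined the cluster (dry-run sketch).
OUT=""
run() { OUT="${OUT}+ $*;"; echo "+ $*"; }

run ceph osd tree   # osd.0 should be listed again with status up/in
run ceph -s         # health should return to HEALTH_OK once rebalancing ends
```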


Source: blog.51cto.com/11434894/2437772