1. Delete the OSD
Delete the OSD that was previously created. The OSD data and journal are on the same disk.
Mark osd.0 out of the cluster: run ceph osd out 0
Stop the OSD process: run systemctl stop ceph-osd@0
Then run ceph osd crush remove osd.0; after this, osd.0 no longer appears in the OSD tree.
Run ceph auth del osd.0 and ceph osd rm 0. The OSD is now removed from the cluster, but the original data and journal directories still exist, i.e. the data is still on the disk.
At this point, unmount the /dev/sdd disk and wipe it so the data is completely removed: run umount /dev/sdd, then ceph-disk zap /dev/sdd.
However, I found that the umount failed and the directory could not be removed.
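When umount is refused, it is usually because something is still holding the mount. A minimal troubleshooting sketch, assuming the OSD data directory is mounted at /var/lib/ceph/osd/ceph-0 (the usual mount point for osd.0; adjust to your environment):

findmnt /var/lib/ceph/osd/ceph-0        # confirm what is mounted there and from which device
fuser -vm /var/lib/ceph/osd/ceph-0      # list processes still using the mount
systemctl stop ceph-osd@0               # make sure the OSD daemon is really stopped
umount /var/lib/ceph/osd/ceph-0         # retry the unmount
# umount -l /var/lib/ceph/osd/ceph-0    # last resort: lazy unmount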
Workaround
Create the OSD:
ceph-deploy osd create --data /dev/sdd node1
Because of the leftover LVM records, the OSD cannot be re-created; the command complains about the existing LVM records.
List the LVM records to get the VG name:
ceph-volume lvm list
Remove the recorded VG:
vgremove ceph-7216ab35-9637-4930-9537-afe9f1525efa
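If LVM metadata or the PV label are still left on the disk, creation can keep failing. A rough cleanup sketch using standard LVM and ceph-volume commands (the VG name is the one reported by ceph-volume lvm list; treat this as an assumed sequence and check it against your own output first):

lvs; vgs; pvs                                            # inspect remaining LVs/VGs/PVs
vgremove -f ceph-7216ab35-9637-4930-9537-afe9f1525efa    # force-remove the VG and its LVs
pvremove /dev/sdd                                        # drop the PV label from the disk
# Alternatively, ceph-volume can wipe the device in one step:
# ceph-volume lvm zap /dev/sdd --destroy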
At this point /dev/sdd is released.
Re-run the OSD creation:
ceph-deploy osd create --data /dev/sdd node1
Bring osd.0 back into the cluster:
ceph osd in osd.0
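As a quick sanity check, the standard Ceph status commands can confirm that osd.0 is back and marked up/in:

ceph osd tree   # osd.0 should be listed under node1 and shown as up
ceph -s         # overall cluster health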