2. Using RBD on a ceph-deploy cluster

Overview

Once the Ceph cluster is deployed and has reached the active + clean state, we can practice with the Ceph block device (RBD, the RADOS Block Device): we will create a volume (image) on the Ceph cluster, then initialize and use it from a new client node.

Install Ceph on the ceph-client node

1. From the admin node, install Ceph on the ceph-client node
  1. ceph-deploy install ceph-client
2. From the admin node, push ceph.conf and ceph.client.admin.keyring to the ceph-client node
  1. ceph-deploy admin ceph-client
3. On the ceph-client node, make sure the keyring file is readable
  1. chmod +r /etc/ceph/ceph.client.admin.keyring

On the admin node, create a block device pool

1. On the admin node, create a pool named rbd with 8 placement groups
  1. ceph osd pool create rbd 8
2. On the admin node, initialize the pool
  1. rbd pool init rbd
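The placement-group count of 8 used above suits a tiny test cluster. A common rule of thumb (a sizing heuristic assumed here, not an official Ceph tool; check the placement-group documentation for your release) is roughly (OSDs × 100) / replica count, rounded up to the nearest power of two. A minimal sketch of that calculation:

```shell
#!/bin/sh
# Rule-of-thumb PG count: (osds * 100) / replicas, rounded up to a power of two.
# This is only a sizing heuristic, not an official Ceph command.
pg_count() {
    osds=$1
    replicas=$2
    target=$(( (osds * 100) / replicas ))
    pg=1
    while [ "$pg" -lt "$target" ]; do
        pg=$(( pg * 2 ))
    done
    echo "$pg"
}

pg_count 3 3    # a tiny 3-OSD, 3-replica cluster -> 128
```

For very small demo clusters like the one in this walkthrough, a low value such as 8 keeps overhead down and is fine for experimentation.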

Configure and use block devices on the ceph-client node

1. On the ceph-client node, create a block device image
  1. rbd create foo --size 4096 --image-feature layering [-m {mon-IP}] [-k /path/to/ceph.client.admin.keyring]
2. On the ceph-client node, map the image to the block device
  1. rbd map foo --name client.admin [-m {mon-IP}] [-k /path/to/ceph.client.admin.keyring]
3. On the ceph-client node, create a file system on the block device
  1. mkfs.ext4 -m0 /dev/rbd/rbd/foo
4. On the ceph-client node, mount the block device
  1. mkdir /mnt/ceph-block-device
  2. mount /dev/rbd/rbd/foo /mnt/ceph-block-device
  3. cd /mnt/ceph-block-device
5. Configure block devices to be automatically mapped and mounted on boot (and unmounted/unmapped on shutdown) - see: http://docs.ceph.com/docs/master/man/8/rbdmap/
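One way to wire up step 5, sketched under the assumption that the pool, image, and mount point match the ones created above (see the rbdmap man page linked above for the exact format on your release): list the image in /etc/ceph/rbdmap so the rbdmap service maps it at boot, and give it an fstab entry with noauto so the mount happens only after the device exists.

```
# /etc/ceph/rbdmap -- images to map at boot: pool/image followed by cephx options
rbd/foo id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

# /etc/fstab -- noauto keeps early boot from mounting before the RBD device is mapped
/dev/rbd/rbd/foo /mnt/ceph-block-device ext4 noauto 0 0
```

Then enable the rbdmap service (for example, systemctl enable rbdmap on systemd-based systems) so the image is mapped and mounted on boot and unmounted and unmapped on shutdown.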
