Reprinted from https://my.oschina.net/wangzilong/blog/1595081
Ceph Snapshots and Clones
Ceph is an excellent back-end storage system. It provides block storage, object storage, and a file system; here we discuss block storage, which is the most widely used. The principles and mechanisms of block storage are well known, and this storage also supports snapshots and clones.
1. Snapshots
A Ceph snapshot is a read-only copy of a source image at a point in time; it can later be used to recover the image.
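To make the idea concrete, here is a minimal Python sketch of point-in-time snapshots and rollback. This is a toy model for illustration only, not how librbd is implemented; the `Image` class and all names in it are invented:

```python
# Toy model of point-in-time snapshots (illustration only, not librbd):
# an image is a mapping of block number -> data, and a snapshot is a
# frozen, read-only copy of that mapping taken at a moment in time.

class Image:
    def __init__(self):
        self.blocks = {}       # block number -> bytes
        self.snapshots = {}    # snapshot name -> frozen copy of blocks

    def write(self, block, data):
        self.blocks[block] = data

    def snap_create(self, name):
        # A snapshot preserves the image's state at this point in time.
        self.snapshots[name] = dict(self.blocks)

    def snap_rollback(self, name):
        # Recovery: restore the image to the snapshotted state.
        self.blocks = dict(self.snapshots[name])

img = Image()
img.write(0, b"v1")
img.snap_create("snap1")
img.write(0, b"v2")         # the image diverges from the snapshot
img.snap_rollback("snap1")  # recover the original contents
print(img.blocks[0])        # b'v1'
```

In real RBD the corresponding operations are `rbd snap create` and `rbd snap rollback`.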
```
[root@ceph-admin ceph]# rbd ls test_pool7
testRBD
test_rbd7
test_rbd_clone7
[root@ceph-admin ceph]# rbd info test_pool7/testRBD
rbd image 'testRBD':
        size 1024 MB in 256 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.e3bcda74b0dc51
        format: 2
        features: layering
        flags:
```
2. Clones
This section is mainly about cloning, which creates a new image based on a snapshot. Cloning uses COW, commonly known as "copy-on-write" (or, a bit more precisely, "copy again on write"). When a clone is created from a snapshot, only a logical mapping to the source (here, the snapshot) is created; no real physical space is allocated to the clone yet. While snapshots are read-only, a clone created from a snapshot is both readable and writable. When we write to the cloned image, the system actually allocates physical space for it; after that, the cloned image can be used just like an ordinary image. This is what copy-on-write means. When the clone is only read and never written, reads are served from the snapshot it was cloned from. From this it is clear that a cloned image depends on its snapshot: once the snapshot is deleted, the clone is destroyed as well, so we must protect the snapshot.
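The copy-on-write behavior described above can be sketched as follows. This is a toy model for illustration only, not the librbd implementation; the `Clone` class and its names are invented:

```python
# Toy model of copy-on-write cloning (illustration only, not librbd):
# a clone starts as a logical mapping onto a read-only parent snapshot.
# It uses no space of its own until a block is written; unwritten
# blocks are read from the parent snapshot.

class Clone:
    def __init__(self, parent_snapshot):
        self.parent = parent_snapshot  # read-only dict: block -> data
        self.local = {}                # blocks actually written to the clone

    def read(self, block):
        # Blocks never written to the clone fall through to the parent.
        if block in self.local:
            return self.local[block]
        return self.parent[block]

    def write(self, block, data):
        # Copy-on-write: only now does the clone use its own space.
        self.local[block] = data

snap = {0: b"base0", 1: b"base1"}   # a protected, read-only snapshot
clone = Clone(snap)
print(clone.read(0))   # b'base0'  -- served from the parent snapshot
clone.write(0, b"new0")
print(clone.read(0))   # b'new0'   -- now served from the clone itself
print(snap[0])         # b'base0'  -- the snapshot is untouched
```

This also makes the dependency visible: if `snap` disappeared, every unwritten block of the clone would be lost, which is why RBD requires the parent snapshot to be protected.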
```
# Create a snapshot of the testRBD image
[root@ceph-admin ceph]# rbd snap create test_pool7/testRBD@testRBD-snap

# List the snapshots
[root@ceph-admin ceph]# rbd snap ls test_pool7/testRBD
SNAPID NAME         SIZE
     7 testRBD-snap 1024 MB

# Try to clone the snapshot
[root@ceph-admin ceph]# rbd clone test_pool7/testRBD@testRBD-snap test_pool7/testRBD-snap-clone
2017-12-14 26:05:48.941845 7fe1f4082d80 -1 librbd: parent snapshot must be protected
rbd: clone error: (22) Invalid argument
# The error above tells us the snapshot must be protected before creating a clone

# Protect the snapshot
[root@ceph-admin ceph]# rbd snap protect test_pool7/testRBD@testRBD-snap

# Clone the snapshot again
[root@ceph-admin ceph]# rbd clone test_pool7/testRBD@testRBD-snap test_pool7/testRBD-snap-clone

# View the clone
[root@ceph-admin ceph]# rbd ls test_pool7 | grep clone | grep RBD
testRBD-snap-clone

# View the clone's details
[root@ceph-admin ceph]# rbd info test_pool7/testRBD-snap-clone
rbd image 'testRBD-snap-clone':
        size 1024 MB in 256 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.e3f94a2ae8944a
        format: 2
        features: layering
        flags:
        parent: test_pool7/testRBD@testRBD-snap
        overlap: 1024 MB
```
We can see that the clone was created successfully and that it depends on the snapshot; note the parent and overlap fields.
If you do not want the clone to depend on the snapshot, you need to flatten it, merging the snapshot's data into the clone.
```
# Flatten the clone (merge the parent's data into it)
[root@ceph-admin ceph]# rbd flatten test_pool7/testRBD-snap-clone
Image flatten: 100% complete...done.
[root@ceph-admin ceph]# rbd info test_pool7/testRBD-snap-clone
rbd image 'testRBD-snap-clone':
        size 1024 MB in 256 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.e3f94a2ae8944a
        format: 2
        features: layering
        flags:
```
Now the cloned image no longer depends on the snapshot, and the parent and overlap fields no longer appear.
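Conceptually, flattening copies every block the clone still shares with its parent snapshot into the clone itself, after which the parent reference can be dropped. A minimal Python sketch of that idea (a toy model, not librbd; the `flatten` function and names are invented):

```python
# Toy model of flattening (illustration only, not librbd): copy all of
# the parent snapshot's blocks into the clone, letting blocks already
# written to the clone take precedence, so the clone stands alone.

def flatten(parent, local):
    """Return a standalone block map: parent blocks overlaid by local writes."""
    blocks = dict(parent)   # copy the parent's data into the clone
    blocks.update(local)    # writes made to the clone take precedence
    return blocks

snap = {0: b"base0", 1: b"base1"}   # the parent snapshot
clone_local = {0: b"new0"}          # blocks already written to the clone
standalone = flatten(snap, clone_local)
print(standalone)   # {0: b'new0', 1: b'base1'}
```

Once the clone holds all of its own data, the snapshot is no longer needed, which is why it can now be unprotected and deleted.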
Now we can delete the snapshot
```
# Unprotect the snapshot
[root@ceph-admin ceph]# rbd snap unprotect test_pool7/testRBD@testRBD-snap
# Delete the snapshot
[root@ceph-admin ceph]# rbd snap rm test_pool7/testRBD@testRBD-snap
```