Ceph: creating a pool on specified OSDs

https://my.oschina.net/wangzilong/blog/1549690


A Ceph cluster can mix disk types, for example some OSDs backed by SSDs and others by SATA disks. If some workloads need the fast SSDs while others are fine on SATA, a pool can be created that is restricted to a specified set of OSDs.

    The procedure takes 8 basic steps:

        The test cluster currently has only SATA disks (named stat in the examples below) and no SSDs, but this does not affect the result.

1 Get the CRUSH map

[root@ceph-admin getcrushmap]# ceph osd getcrushmap -o /opt/getcrushmap/crushmap
got crush map from osdmap epoch 2482
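
    The session above works out of /opt/getcrushmap. If that directory does not yet exist on the admin node (an assumption about this particular setup), a minimal preparation would be:

mkdir -p /opt/getcrushmap
cd /opt/getcrushmap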

2 Decompile the CRUSH map

[root@ceph-admin getcrushmap]# crushtool -d crushmap -o decrushmap
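
    Before editing, it can help to locate the existing buckets and rules in the decompiled text file. An optional quick check with standard grep:

# list the bucket and rule definitions in the decompiled map
grep -n -E "^(root|host|rule) " decrushmap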

3 Modify the CRUSH map

    Add the following two buckets after the root default bucket:

root ssd {
	id -5
	alg straw
	hash 0
	item osd.0 weight 0.01
}
root stat {
	id -6
	alg straw
	hash 0
	item osd.1 weight 0.01
}
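
    If each disk type had more than one OSD, the bucket would simply list all of them. A sketch, assuming two additional hypothetical SSD OSDs osd.3 and osd.4 with the same weight:

root ssd {
	id -5
	alg straw
	hash 0
	item osd.0 weight 0.01
	item osd.3 weight 0.01	# hypothetical extra SSD OSD
	item osd.4 weight 0.01	# hypothetical extra SSD OSD
}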

    Add the following two rules in the rules section:

rule ssd {
	ruleset 1
	type replicated
	min_size 1
	max_size 10
	step take ssd
	step chooseleaf firstn 0 type osd
	step emit
}
rule stat {
	ruleset 2
	type replicated
	min_size 1
	max_size 10
	step take stat
	step chooseleaf firstn 0 type osd
	step emit
}

4 Compile the CRUSH map

[root@ceph-admin getcrushmap]# crushtool -c decrushmap -o newcrushmap
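
    Optionally, the compiled map can be sanity-checked before injecting it. crushtool can simulate placements for a given rule; for example, for the ssd rule (ruleset 1) defined above, using one replica because each new root holds only a single OSD in this example:

[root@ceph-admin getcrushmap]# crushtool -i newcrushmap --test --rule 1 --num-rep 1 --show-mappings --min-x 0 --max-x 9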

5 Inject the CRUSH map

[root@ceph-admin getcrushmap]# ceph osd setcrushmap -i /opt/getcrushmap/newcrushmap 
set crush map
[root@ceph-admin getcrushmap]# ceph osd tree
ID WEIGHT  TYPE NAME           UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-6 0.00999 root stat                                             
 1 0.00999     osd.1                up  1.00000          1.00000 
-5 0.00999 root ssd                                              
 0 0.00999     osd.0                up  1.00000          1.00000 
-1 0.58498 root default                                          
-2 0.19499     host ceph-admin                                   
 2 0.19499         osd.2            up  1.00000          1.00000 
-3 0.19499     host ceph-node1                                   
 0 0.19499         osd.0            up  1.00000          1.00000 
-4 0.19499     host ceph-node2                                   
 1 0.19499         osd.1            up  1.00000          1.00000 
# Looking at the osd tree again, the tree has changed: two new buckets named stat and ssd have been added.
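
    The injected rules can also be confirmed directly; an optional check with the standard ceph CLI:

[root@ceph-admin getcrushmap]# ceph osd crush rule ls
[root@ceph-admin getcrushmap]# ceph osd crush rule dump ssd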

6 Create the pools

[root@ceph-admin getcrushmap]# ceph osd pool create ssd_pool 8 8
pool 'ssd_pool' created
[root@ceph-admin getcrushmap]# ceph osd pool create stat_pool 8 8
pool 'stat_pool' created
[root@ceph-admin getcrushmap]# ceph osd dump|grep ssd
pool 28 'ssd_pool' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 2484 flags hashpspool stripe_width 0
[root@ceph-admin getcrushmap]# ceph osd dump|grep stat
pool 29 'stat_pool' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 2486 flags hashpspool stripe_width 0

Note: the crush_ruleset of the newly created ssd_pool and stat_pool is 0; it is changed in the next step.

7 Set the pools' CRUSH rules

[root@ceph-admin getcrushmap]# ceph osd pool set ssd_pool crush_ruleset 1
set pool 28 crush_ruleset to 1
[root@ceph-admin getcrushmap]# ceph osd pool set stat_pool crush_ruleset 2
set pool 29 crush_ruleset to 2
[root@ceph-admin getcrushmap]# ceph osd dump|grep ssd
pool 28 'ssd_pool' replicated size 3 min_size 2 crush_ruleset 1 object_hash rjenkins pg_num 8 pgp_num 8 last_change 2488 flags hashpspool stripe_width 0
[root@ceph-admin getcrushmap]# ceph osd dump|grep stat
pool 29 'stat_pool' replicated size 3 min_size 2 crush_ruleset 2 object_hash rjenkins pg_num 8 pgp_num 8 last_change 2491 flags hashpspool stripe_width 0
# In the Luminous release, the syntax for setting a pool's rule is:
[root@ceph-admin ceph]# ceph osd pool set ssd crush_rule ssd
set pool 2 crush_rule to ssd
[root@ceph-admin ceph]# ceph osd pool set stat crush_rule stat
set pool 1 crush_rule to stat
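
    The rule assignment can also be checked per pool (the key is crush_ruleset on pre-Luminous releases and crush_rule on Luminous and later):

[root@ceph-admin getcrushmap]# ceph osd pool get ssd_pool crush_ruleset
[root@ceph-admin getcrushmap]# ceph osd pool get stat_pool crush_ruleset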

8 Verify

    Before verifying, first check what objects ssd_pool and stat_pool contain.

[root@ceph-admin getcrushmap]# rados ls -p ssd_pool
[root@ceph-admin getcrushmap]# rados ls -p stat_pool
# neither pool contains any objects

    Use the rados command to add an object to each of the two pools:

[root@ceph-admin getcrushmap]# rados -p ssd_pool put test_object1 /etc/hosts
[root@ceph-admin getcrushmap]# rados -p stat_pool put test_object2 /etc/hosts
[root@ceph-admin getcrushmap]# rados ls -p ssd_pool
test_object1
[root@ceph-admin getcrushmap]# rados ls -p stat_pool
test_object2
# the objects were added successfully
[root@ceph-admin getcrushmap]# ceph osd map ssd_pool test_object1
osdmap e2493 pool 'ssd_pool' (28) object 'test_object1' -> pg 28.d5066e42 (28.2) -> up ([0], p0) acting ([0,1,2], p0)
[root@ceph-admin getcrushmap]# ceph osd map stat_pool test_object2
osdmap e2493 pool 'stat_pool' (29) object 'test_object2' -> pg 29.c5cfe5e9 (29.1) -> up ([1], p1) acting ([1,0,2], p1)

As the verification output above shows, test_object1 is stored on osd.0 and test_object2 on osd.1, which is exactly the intended result.
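
    If the test objects are no longer needed, they can be removed again:

[root@ceph-admin getcrushmap]# rados -p ssd_pool rm test_object1
[root@ceph-admin getcrushmap]# rados -p stat_pool rm test_object2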


Origin www.cnblogs.com/wangmo/p/11125697.html