Adding an OSD to a Ceph cluster

Get an ID for the new OSD. Run this on the admin node; ceph osd create returns the next available id:
[root@node-4 osd]# ceph osd create
2

Prepare the disk:
[root@node-4 ~]# parted /dev/sdb mktable gpt
Warning: The existing disk label on /dev/sdb will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? yes
Information: You may need to update /etc/fstab.

[root@node-4 ~]# parted /dev/sdb mkpart osd.2 1 100g
Information: You may need to update /etc/fstab.

[root@node-4 ~]# mkfs -t xfs -f /dev/sdb1
meta-data=/dev/sdb1              isize=256    agcount=4, agsize=6103424 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=24413696, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=11920, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

[root@node-4 ~]# mkdir -p /data/osd.2
[root@node-4 ~]# mkdir -p /var/lib/ceph/osd/ceph-2
[root@node-4 ~]# mount /dev/sdb1 /data/osd.2
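Note that the partition is mounted at /data/osd.2 here, while the ceph-osd steps below use /var/lib/ceph/osd/ceph-2 as the data directory; if the new disk is meant to hold the OSD's data, mount it at that directory instead. Whichever mount point you use, record it in /etc/fstab so it survives a reboot — a sketch, using the device and mount point from this walkthrough:

```
/dev/sdb1    /data/osd.2    xfs    defaults    0 0
```

Using the partition's UUID (from blkid) instead of /dev/sdb1 is safer, since device names can change across reboots.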

Initialize the new OSD's data directory (--mkkey also generates its keyring):
[root@node-4 osd]# ceph-osd -i 2 --mkfs --mkkey
2019-06-14 11:07:37.837706 7f8c33c2c7c0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
2019-06-14 11:07:37.938011 7f8c33c2c7c0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
2019-06-14 11:07:37.941115 7f8c33c2c7c0 -1 filestore(/var/lib/ceph/osd/ceph-2) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2019-06-14 11:07:37.956386 7f8c33c2c7c0 -1 created object store /var/lib/ceph/osd/ceph-2 journal /var/lib/ceph/osd/ceph-2/journal for osd.2 fsid f690764c-d9b6-471c-b4c6-c7deaf72f8ad
2019-06-14 11:07:37.956446 7f8c33c2c7c0 -1 auth: error reading file: /var/lib/ceph/osd/ceph-2/keyring: can't open /var/lib/ceph/osd/ceph-2/keyring: (2) No such file or directory
2019-06-14 11:07:37.956522 7f8c33c2c7c0 -1 created new key in keyring /var/lib/ceph/osd/ceph-2/keyring
Register this OSD's key:
[root@node-4 osd]# ceph auth add osd.2 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-2/keyring
added key for osd.2
[root@node-4 osd]# ceph osd tree
ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.03998 root default
-2 0.01999     host node-5
 0 0.01999         osd.0        up  1.00000          1.00000
-3 0.01999     host node-4
 1 0.01999         osd.1        up  1.00000          1.00000
 2       0         osd.2      down        0          1.00000
Once the OSD is added to the CRUSH map, it can receive data:
[root@node-4 osd]# ceph osd crush add osd.2 0.01999 root=default host=node-4
add item id 2 name 'osd.2' weight 0.01999 at location {host=node-4,root=default} to crush map
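By convention a CRUSH weight is the device's capacity in TiB (1.0 per TiB); the 0.01999 used above mirrors the weight of the existing OSDs. A quick way to compute a weight for a given size — the 100 GiB figure here is only an illustration, not taken from this cluster:

```shell
# CRUSH weight convention: capacity in TiB, i.e. weight = GiB / 1024.
# size_gib is a hypothetical example value.
size_gib=100
weight=$(awk -v g="$size_gib" 'BEGIN { printf "%.5f", g / 1024 }')
echo "$weight"   # 0.09766
```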
[root@node-4 osd]# ceph osd tree
ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.05997 root default
-2 0.01999     host node-5
 0 0.01999         osd.0        up  1.00000          1.00000
-3 0.03998     host node-4
 1 0.01999         osd.1        up  1.00000          1.00000
 2 0.01999         osd.2      down        0          1.00000
Start the OSD process:
[root@node-4 osd]# ceph-osd -i 2
starting osd.2 at :/0 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
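Started by hand like this, osd.2 will not come back after a reboot. On sysvinit-era releases such as the one shown here, a per-OSD section in /etc/ceph/ceph.conf lets the init script manage the daemon — a minimal sketch, using the host from this walkthrough:

```ini
[osd.2]
    host = node-4
```

After which something like service ceph start osd.2 (run on node-4) should bring it up; newer releases manage OSDs with systemd units (ceph-osd@2) instead.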
[root@node-4 osd]# ceph osd tree
ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.05997 root default
-2 0.01999     host node-5
 0 0.01999         osd.0        up  1.00000          1.00000
-3 0.03998     host node-4
 1 0.01999         osd.1        up  1.00000          1.00000
 2 0.01999         osd.2        up  1.00000          1.00000
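With everything up, a quick scripted check over the tree output can flag problem OSDs. A small sketch that prints any OSD whose status column is not "up"; it is fed the intermediate output from above as sample data — on a live cluster you would pipe ceph osd tree straight into awk:

```shell
# Sample data mimicking the intermediate `ceph osd tree` output above;
# on a real cluster: ceph osd tree | awk '$3 ~ /^osd\./ && $4 != "up" ...'
tree_output='-1 0.05997 root default
-2 0.01999     host node-5
 0 0.01999         osd.0        up  1.00000          1.00000
-3 0.03998     host node-4
 1 0.01999         osd.1        up  1.00000          1.00000
 2 0.01999         osd.2      down        0          1.00000'

# Field 3 is the OSD name, field 4 its up/down status.
down=$(echo "$tree_output" | awk '$3 ~ /^osd\./ && $4 != "up" { print $3 " is " $4 }')
echo "$down"   # osd.2 is down
```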

Reposted from www.cnblogs.com/mrwuzs/p/11023085.html