Deploying a Ceph Storage Cluster with ceph-deploy

 

Part 1: Prepare the environment

Create two hosts with the following IP addresses and hostnames:

  192.168.2.100, hostname ceph-1

  192.168.2.101, hostname ceph-2

Attach a new data disk to each host and partition it to suit your needs; in this walkthrough each disk is split into four partitions.

A Ceph deployment needs at least two nodes and at least three data disks or partitions.

The ceph-1 node acts as the deployment node; all ceph-deploy commands below are run from it.

 

Part 2: Deployment steps on the ceph-deploy admin node

1. Add the yum repository

vim /etc/yum.repos.d/ceph.repo

[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-infernalis/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-infernalis/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-infernalis/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
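After saving the repo file, rebuilding the yum cache and checking that the three Ceph repositories show up is a quick sanity check; it is optional, but it catches URL typos early:

yum clean all && yum makecache
yum repolist | grep -i ceph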

2. Hostname resolution

vim /etc/hosts

192.168.2.100   ceph-1
192.168.2.101   ceph-2
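Add the same entries to /etc/hosts on ceph-2 as well, then confirm each name resolves, for example:

getent hosts ceph-1 ceph-2
ping -c 2 ceph-2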

3. NTP time synchronization

vim /etc/chrony.conf

allow 192.168.2.0/24
local stratum 10

systemctl restart chronyd
systemctl enable chronyd

chronyc sources -v
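The allow and local lines above turn ceph-1 into a time source for the subnet; the other node should then sync from it. A minimal client-side sketch for ceph-2, assuming ceph-1 at 192.168.2.100 is reachable:

# on ceph-2: point chrony at ceph-1 instead of the public pool
vim /etc/chrony.conf
server 192.168.2.100 iburst

systemctl restart chronyd
chronyc sources -v   # ceph-1 should now appear as a source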

4. Passwordless SSH login

ssh-keygen
ssh-copy-id ceph-1
ssh-copy-id ceph-2
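A one-line check that key-based login works before handing control to ceph-deploy:

ssh ceph-2 hostname   # should print ceph-2 without prompting for a password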

5. Partition and format the data disks

fdisk /dev/sdb
mkfs.xfs /dev/sdb1
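fdisk is interactive; if you prefer something scriptable, here is a sketch with parted that creates the four partitions mentioned earlier (the device name /dev/sdb and the even 25% split are assumptions, adjust them to your disk):

parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart primary 0% 25%
parted -s /dev/sdb mkpart primary 25% 50%
parted -s /dev/sdb mkpart primary 50% 75%
parted -s /dev/sdb mkpart primary 75% 100%
lsblk /dev/sdb   # verify sdb1 through sdb4 exist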

6. Install ceph-deploy

yum install ceph-deploy -y

7. Create a new cluster and generate the configuration files

mkdir ceph-cluster && cd ceph-cluster
ceph-deploy new ceph-1 ceph-2
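ceph-deploy new writes its output into the current directory; you should now see the generated files there:

ls
# ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring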

8. Edit ceph-cluster/ceph.conf

# Add the public network address
public_network = 192.168.2.0/24

# Replica count (size) is 2 here; the default is 3. The minimum working size defaults to size - (size / 2).
osd pool default size = 2

# Upstream guidance is an average of at least 30 PGs per OSD, i.e. pg_num > osd_num * 30 / 2 (replica count).
osd pool default pg num = 1024
osd pool default pgp num = 1024
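As a worked check of that rule: this cluster ends up with 8 OSDs (four partitions per node) and size = 2, so the recommended floor is 8 * 30 / 2 = 120 PGs; 1024, the value chosen here, is a power of two comfortably above that, leaving headroom for more pools and OSDs.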

9. Install the Ceph packages

ceph-deploy install ceph-1 ceph-2

# Or install on each node individually:
yum -y install ceph ceph-radosgw
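Whichever route you take, confirming the installed release on each node takes one command:

ceph --version   # should report an infernalis (9.x) build, matching the repo above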

10. Push ceph.conf to all nodes

ceph-deploy --overwrite-conf config push ceph-1 ceph-2

11. List each node's disks

ceph-deploy disk list ceph-1 ceph-2

12. Initialize the mon nodes

ceph-deploy mon create-initial

If you hit the error "RuntimeError: config file /etc/ceph/ceph.conf exists with different content; use --overwrite-conf to overwrite", run the following command first, then retry the command above:

ceph-deploy --overwrite-conf mon create ceph-1 ceph-2
ceph -s   # verify that the mons were added successfully

13. Distribute the admin key to every node

ceph-deploy admin ceph-1 ceph-2
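ceph-deploy admin copies ceph.conf and the admin keyring to /etc/ceph on each node. If running ceph -s as a non-root user then fails with a keyring permission error, the upstream quick start suggests relaxing the keyring's read permission:

chmod +r /etc/ceph/ceph.client.admin.keyring   # run on each node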

14. Add the OSDs

ceph-deploy --overwrite-conf osd prepare ceph-1:sdb1 ceph-1:sdb2 ceph-1:sdb3 ceph-1:sdb4 ceph-2:sdc1 ceph-2:sdc2 ceph-2:sdc3 ceph-2:sdc4
 
ceph-deploy --overwrite-conf osd activate ceph-1:sdb1 ceph-1:sdb2 ceph-1:sdb3 ceph-1:sdb4 ceph-2:sdc1 ceph-2:sdc2 ceph-2:sdc3 ceph-2:sdc4
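After activation, check that all eight OSDs came up and joined the cluster:

ceph osd tree   # all 8 OSDs should show "up"
ceph -s         # health should settle once the PGs finish peering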

 

Part 3: Clean up the environment

ceph-deploy purge ceph-1 ceph-2
ceph-deploy purgedata ceph-1 ceph-2
ceph-deploy forgetkeys
rm -f ceph.*
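For reference: purge removes the Ceph packages from the listed nodes, purgedata wipes the Ceph data under /var/lib/ceph on them, forgetkeys deletes the authentication keys cached in the local working directory, and the final rm clears the files ceph-deploy generated there.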





Origin: www.cnblogs.com/chenli90/p/12069100.html