Deploying Ceph

Copyright notice: this is an original article by the author; do not reproduce without permission. https://blog.csdn.net/Hello_NB1/article/details/80982646

This article walks through deploying a Ceph cluster on KVM virtual machines.

I. Install three virtual machines

1. CentOS 7 image:

    CentOS-7-x86_64-Minimal-1804.iso

2. Install qemu-kvm, libvirt-daemon, and libvirt-client

host:
# yum -y install qemu-kvm libvirt-daemon libvirt-client
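Before installing, it is worth confirming that the host CPU exposes hardware virtualization to the kernel (a quick sketch; `lscpu` or `virt-host-validate` give more detail):

```shell
# Look for the Intel VT-x (vmx) or AMD-V (svm) CPU flag in /proc/cpuinfo;
# "none" means KVM will have no hardware acceleration on this host.
virt=$(grep -oE 'vmx|svm' /proc/cpuinfo | head -n1)
echo "virtualization flag: ${virt:-none}"
```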

3. Install three VMs from the CentOS 7 image

VM configuration:
    4 CPUs
    8 GB memory
    50 GB system disk
    3 × 100 GB data disks
    1 virtual NIC (with Internet access)
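One way to create three VMs matching this spec from the command line is virt-install (a sketch, not necessarily how the author created them; the ISO path and the default libvirt network are assumptions). The command is only printed here:

```shell
# Dry run: print a virt-install command matching the spec above for each node.
# Remove the echo to actually create the VMs (requires the virt-install package).
iso=/var/lib/libvirt/images/CentOS-7-x86_64-Minimal-1804.iso
for i in 1 2 3; do
  cmd="virt-install --name node-$i --vcpus 4 --memory 8192 \
    --disk size=50 --disk size=100 --disk size=100 --disk size=100 \
    --network network=default --cdrom $iso"
  echo "$cmd"
done
```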

II. Pre-deployment preparation

1. Set the hostnames:

node-1:
# hostnamectl set-hostname node-1

node-2:
# hostnamectl set-hostname node-2

node-3:
# hostnamectl set-hostname node-3

2. Configure the network:

node-1:
# sed -i 's/ONBOOT=.*/ONBOOT="yes"/g' /etc/sysconfig/network-scripts/ifcfg-eth0

node-2:
# sed -i 's/ONBOOT=.*/ONBOOT="yes"/g' /etc/sysconfig/network-scripts/ifcfg-eth0

node-3:
# sed -i 's/ONBOOT=.*/ONBOOT="yes"/g' /etc/sysconfig/network-scripts/ifcfg-eth0
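The sed edit can be sanity-checked on a scratch copy first (a throwaway file, not the real NIC config):

```shell
# Create a scratch ifcfg file, apply the same substitution, and confirm
# that ONBOOT got flipped to "yes".
cfg=$(mktemp)
printf 'DEVICE=eth0\nONBOOT="no"\nBOOTPROTO=dhcp\n' > "$cfg"
sed -i 's/ONBOOT=.*/ONBOOT="yes"/g' "$cfg"
result=$(grep ONBOOT "$cfg")
echo "$result"
rm -f "$cfg"
```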

3. Set up passwordless SSH:

Assume node-1, node-2, and node-3 have the following IPs:
192.168.122.149, 192.168.122.151, 192.168.122.30

a. Add them to /etc/hosts
node-1:
# echo "192.168.122.149 node-1" >> /etc/hosts
# echo "192.168.122.151 node-2" >> /etc/hosts
# echo "192.168.122.30  node-3" >> /etc/hosts
b. Use the following script to set up passwordless SSH
node-1:
# cat ssh_copy_id.sh
#!/bin/sh

# Print the failed command and abort.
fail()
{
	echo "'$*' failed"
	exit 1
}

# "Run or die": run a command, abort if it returns non-zero.
rod()
{
	"$@" || fail "$@"
}

echo "$@"
for host in "$@"
do
	# Generate an SSH key pair once, if none exists yet.
	if [ ! -e ~/.ssh/id_rsa.pub ]; then
		rod ssh-keygen
	fi
	# Push the public key, this script, and the hosts file to each node.
	rod ssh-copy-id root@${host}
	rod scp ssh_copy_id.sh root@${host}:/root
	rod scp /etc/hosts root@${host}:/etc/hosts
done

echo "Done."

c. Run the script
node-1:
# bash ssh_copy_id.sh node-1 node-2 node-3

node-2:
# bash ssh_copy_id.sh node-1 node-2 node-3

node-3:
# bash ssh_copy_id.sh node-1 node-2 node-3 
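After the script has run everywhere, passwordless access can be verified from node-1; BatchMode makes ssh fail instead of prompting if key auth is broken. Printed here as a dry run, drop the echo to execute:

```shell
# Build the verification commands; when run for real, each should print the
# remote hostname without asking for a password.
checks=$(for host in node-1 node-2 node-3; do
  echo "ssh -o BatchMode=yes root@$host hostname"
done)
echo "$checks"
```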

4. Disable SELinux

node-1:
# setenforce 0
# sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config

node-2:
# setenforce 0
# sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config

node-3:
# setenforce 0
# sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config

5. Disable the firewall

node-1:
# systemctl stop firewalld ; systemctl disable firewalld ; iptables -F

node-2:
# systemctl stop firewalld ; systemctl disable firewalld ; iptables -F

node-3:
# systemctl stop firewalld ; systemctl disable firewalld ; iptables -F

III. Deploy the Ceph cluster

1. Install the ceph-deploy and ceph RPM packages:

a. Prerequisites: the EPEL repo and the deltarpm package
node-1:
# yum -y install epel-release deltarpm

node-2:
# yum -y install epel-release deltarpm

node-3:
# yum -y install epel-release deltarpm

b. Configure the 163 mirror Ceph repo
node-1:
# cat /etc/yum.repos.d/163.repo 
[163-ceph-noarch]
name=163-ceph-noarch
baseurl=http://mirrors.163.com/ceph/rpm-luminous/el7/noarch/
gpgcheck=0

[163-ceph]
name=163-ceph
baseurl=http://mirrors.163.com/ceph/rpm-luminous/el7/x86_64/
gpgcheck=0

Copy 163.repo to node-2 and node-3
node-1:
# scp /etc/yum.repos.d/163.repo node-2:/etc/yum.repos.d/
# scp /etc/yum.repos.d/163.repo node-3:/etc/yum.repos.d/

c. Install ceph-deploy and ceph
node-1:
# yum makecache
# yum -y install ceph-deploy ceph

node-2:
# yum makecache
# yum -y install ceph-deploy ceph

node-3:
# yum makecache
# yum -y install ceph-deploy ceph 

2. NTP service

a. Install the ntp RPM package
node-1:
# yum -y install ntp

node-2:
# yum -y install ntp

node-3:
# yum -y install ntp

b. Sync the clocks and enable the ntpd service
node-1:
# ntpdate 0.centos.pool.ntp.org
# systemctl start ntpd
# systemctl enable ntpd

node-2:
# ntpdate 0.centos.pool.ntp.org
# systemctl start ntpd
# systemctl enable ntpd

node-3:
# ntpdate 0.centos.pool.ntp.org
# systemctl start ntpd
# systemctl enable ntpd

3. Bootstrap the Ceph cluster

node-1:
# mkdir ceph && cd ceph
# ceph-deploy new node-1 node-2 node-3
# ceph-deploy mon create-initial
# ceph-deploy admin node-1 node-2 node-3
# ceph-deploy mgr create node-1 node-2 node-3
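`ceph-deploy new` writes a minimal ceph.conf into the working directory; with the hosts and IPs used here it looks roughly like this (the fsid is whatever UUID gets generated; the cephx lines are the defaults that ceph-deploy emits):

```ini
[global]
fsid = 553dfacf-f127-491f-b687-9df3ba83e7c0
mon_initial_members = node-1, node-2, node-3
mon_host = 192.168.122.149,192.168.122.151,192.168.122.30
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
```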

4. Check the cluster status

node-1:
# ceph -s
  cluster:
    id:     553dfacf-f127-491f-b687-9df3ba83e7c0
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum node-3,node-1,node-2
    mgr: node-1(active), standbys: node-3, node-2
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 bytes
    usage:   0 kB used, 0 kB / 0 kB avail
    pgs:     

The cluster status is now HEALTH_OK.
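Scripted health checks only need the health field; here the awk runs against a snippet of the sample output above (on a real node, pipe `ceph -s` in instead):

```shell
# Pull the value after "health:"; anything other than HEALTH_OK needs attention.
status=$(awk '/health:/ {print $2}' <<'EOF'
  cluster:
    id:     553dfacf-f127-491f-b687-9df3ba83e7c0
    health: HEALTH_OK
EOF
)
echo "$status"
```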

IV. Add OSDs (BlueStore)

1. Deploy the OSDs

node-1:
Add vdb, vdc, and vdd on node-1 as OSDs
# ceph-deploy osd create node-1 --data /dev/vdb
# ceph-deploy osd create node-1 --data /dev/vdc
# ceph-deploy osd create node-1 --data /dev/vdd

node-1:
Add vdb, vdc, and vdd on node-2 as OSDs
# ceph-deploy osd create node-2 --data /dev/vdb
# ceph-deploy osd create node-2 --data /dev/vdc
# ceph-deploy osd create node-2 --data /dev/vdd

node-1:
Add vdb, vdc, and vdd on node-3 as OSDs
# ceph-deploy osd create node-3 --data /dev/vdb
# ceph-deploy osd create node-3 --data /dev/vdc
# ceph-deploy osd create node-3 --data /dev/vdd
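The nine ceph-deploy calls above can be collapsed into a loop (printed here as a dry run; drop the echo and run it from the ceph working directory on node-1 to execute):

```shell
# Generate one "ceph-deploy osd create" command per node/disk pair.
cmds=$(for node in node-1 node-2 node-3; do
  for dev in vdb vdc vdd; do
    echo "ceph-deploy osd create $node --data /dev/$dev"
  done
done)
echo "$cmds"
```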

2. Check the cluster status:

node-1:
# ceph -s
  cluster:
    id:     553dfacf-f127-491f-b687-9df3ba83e7c0
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum node-3,node-1,node-2
    mgr: node-1(active), standbys: node-3, node-2
    osd: 9 osds: 9 up, 9 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 bytes
    usage:   9258 MB used, 890 GB / 899 GB avail
    pgs:     

That completes the cluster deployment!
