Ceph Deployment Tutorial on CentOS 7.2


System

[root@ceph-1 ~]# cat /etc/redhat-release 
CentOS Linux release 7.2.1511 (Core) 

Hosts

hostname   ip            role
ceph-1     10.39.47.63   deploy, mon1, osd1
ceph-2     10.39.47.64   mon2, osd2
ceph-3     10.39.47.65   mon3, osd3
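
If the hostnames have not been set yet, they can be configured on each node with hostnamectl (a minimal sketch; run the matching command on every host):

hostnamectl set-hostname ceph-1    # use ceph-2 / ceph-3 on the other nodes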

Host disks

[root@ceph-1 ~]# lsblk 
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda    253:0    0  20G  0 disk 
└─vda1 253:1    0  20G  0 part /
vdb    253:16   0   4G  0 disk [SWAP]
vdc    253:32   0  80G  0 disk 

Install the wget, ntp, and vim tools

yum -y install wget ntp vim

Add host entries

[root@ceph-1 ~]# cat /etc/hosts
...
10.39.47.63 ceph-1
10.39.47.64 ceph-2
10.39.47.65 ceph-3
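
The same three entries must be present on every node. A minimal way to append them on a host (assuming the lines are not already there):

cat >> /etc/hosts <<'EOF'
10.39.47.63 ceph-1
10.39.47.64 ceph-2
10.39.47.65 ceph-3
EOF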

If a previous installation failed, clean up the environment first

ps aux|grep ceph |awk '{print $2}'|xargs kill -9
ps -ef|grep ceph
# Make sure all ceph processes are stopped at this point!!! If any are still running, run the commands above a few more times.
umount /var/lib/ceph/osd/*
rm -rf /var/lib/ceph/osd/*
rm -rf /var/lib/ceph/mon/*
rm -rf /var/lib/ceph/mds/*
rm -rf /var/lib/ceph/bootstrap-mds/*
rm -rf /var/lib/ceph/bootstrap-osd/*
rm -rf /var/lib/ceph/bootstrap-rgw/*
rm -rf /var/lib/ceph/tmp/*
rm -rf /etc/ceph/*
rm -rf /var/run/ceph/*

The following commands must be run on every host.

Change the yum repositories

yum clean all
curl http://mirrors.aliyun.com/repo/Centos-7.repo >/etc/yum.repos.d/CentOS-Base.repo
curl http://mirrors.aliyun.com/repo/epel-7.repo >/etc/yum.repos.d/epel.repo 
sed -i '/aliyuncs/d' /etc/yum.repos.d/CentOS-Base.repo
sed -i '/aliyuncs/d' /etc/yum.repos.d/epel.repo
yum makecache

Add the Ceph repository

vim /etc/yum.repos.d/ceph.repo
## contents as follows
[ceph]
name=ceph
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0

Install the Ceph packages

yum makecache
yum install ceph ceph-radosgw rdate -y

Disable SELinux and firewalld

sed -i 's/SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
setenforce 0
systemctl stop firewalld 
systemctl disable firewalld
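
To verify the change took effect (getenforce should report Permissive now, or Disabled after a reboot):

getenforce
systemctl is-enabled firewalld    # should print disabled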

Synchronize the time across all nodes

yum -y install rdate
rdate -s time-a.nist.gov
echo rdate -s time-a.nist.gov >> /etc/rc.d/rc.local 
chmod +x /etc/rc.d/rc.local
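
Since the ntp package was installed earlier, an alternative to the one-shot rdate at boot is to keep the clocks continuously synchronized with ntpd; a minimal sketch using the default CentOS configuration:

systemctl enable ntpd
systemctl start ntpd
ntpq -p    # check that upstream time servers are reachable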

Start the deployment

Install ceph-deploy on the deploy node (ceph-1). Below, "deploy node" always refers to ceph-1.

[root@ceph-1 ~]# yum -y install ceph-deploy
[root@ceph-1 ~]# ceph-deploy --version 
1.5.39
[root@ceph-1 ~]# ceph -v 
ceph version 10.2.11 (e4b061b47f07f583c92a050d9e84b1813a35671e)

Set up passwordless SSH login

[root@ceph-1 cluster]# ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
54:f8:9b:25:56:3b:b1:ce:fc:6d:c5:61:b1:55:79:49 root@ceph-1
The key's randomart image is:
+--[ RSA 2048]----+
|         ..   .E=|
|        ..  o  +o|
|        .. . +  =|
|       .  + =  + |
|        S. O ....|
|          o +   o|
|             . ..|
|              . o|
|               . |
+-----------------+
[root@ceph-1 cluster]# ssh-copy-id 10.39.47.63
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
Warning: Permanently added '10.39.47.63' (ECDSA) to the list of known hosts.
[email protected]'s password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '10.39.47.63'"
and check to make sure that only the key(s) you wanted were added.

[root@ceph-1 cluster]# ssh-copy-id 10.39.47.64
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
Warning: Permanently added '10.39.47.64' (ECDSA) to the list of known hosts.
[email protected]'s password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '10.39.47.64'"
and check to make sure that only the key(s) you wanted were added.

[root@ceph-1 cluster]# ssh-copy-id 10.39.47.65
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
Warning: Permanently added '10.39.47.65' (ECDSA) to the list of known hosts.
[email protected]'s password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '10.39.47.65'"
and check to make sure that only the key(s) you wanted were added.

Verify

[root@ceph-1 cluster]# ssh 10.39.47.65
Warning: Permanently added '10.39.47.65' (ECDSA) to the list of known hosts.
Last login: Fri Nov  2 10:06:39 2018 from 10.4.95.63
[root@ceph-3 ~]#

Create a deployment directory on the deploy node and start the deployment

[root@ceph-1 ~]# mkdir cluster
[root@ceph-1 ~]# cd cluster/
[root@ceph-1 cluster]# ceph-deploy new ceph-1 ceph-2 ceph-3

After the command finishes, the following files are generated:

[root@ceph-1 cluster]# ls -l 
total 16
-rw-r--r-- 1 root root  235 Nov  2 10:40 ceph.conf
-rw-r--r-- 1 root root 4879 Nov  2 10:40 ceph-deploy-ceph.log
-rw------- 1 root root   73 Nov  2 10:40 ceph.mon.keyring

Based on your own IP addressing, add public_network to ceph.conf, and slightly increase the allowed clock drift between monitors (default 0.05 s, raised here to 2 s):

[root@ceph-1 cluster]# echo public_network=10.39.47.0/24 >> ceph.conf
[root@ceph-1 cluster]# echo mon_clock_drift_allowed = 2 >> ceph.conf
[root@ceph-1 cluster]# cat ceph.conf 
[global]
fsid = 4a3e86f0-1511-4ad7-9f69-b435ae16dc28
mon_initial_members = ceph-1, ceph-2, ceph-3
mon_host = 10.39.47.63,10.39.47.64,10.39.47.65
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

public_network=10.39.47.0/24
mon_clock_drift_allowed = 2

Deploy the monitors

[root@ceph-1 cluster]# ceph-deploy mon create-initial
# after it completes successfully, the deployment directory contains:
[root@ceph-1 cluster]# ls -l 
total 56
-rw------- 1 root root   113 Nov  2 10:45 ceph.bootstrap-mds.keyring
-rw------- 1 root root    71 Nov  2 10:45 ceph.bootstrap-mgr.keyring
-rw------- 1 root root   113 Nov  2 10:45 ceph.bootstrap-osd.keyring
-rw------- 1 root root   113 Nov  2 10:45 ceph.bootstrap-rgw.keyring
-rw------- 1 root root   129 Nov  2 10:45 ceph.client.admin.keyring
-rw-r--r-- 1 root root   292 Nov  2 10:43 ceph.conf
-rw-r--r-- 1 root root 27974 Nov  2 10:45 ceph-deploy-ceph.log
-rw------- 1 root root    73 Nov  2 10:40 ceph.mon.keyring

Check the cluster status

[root@ceph-1 cluster]# ceph -s
    cluster 4a3e86f0-1511-4ad7-9f69-b435ae16dc28
     health HEALTH_ERR
            no osds
     monmap e1: 3 mons at {ceph-1=10.39.47.63:6789/0,ceph-2=10.39.47.64:6789/0,ceph-3=10.39.47.65:6789/0}
            election epoch 6, quorum 0,1,2 ceph-1,ceph-2,ceph-3
     osdmap e1: 0 osds: 0 up, 0 in
            flags sortbitwise,require_jewel_osds
      pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating

Deploy the OSDs

ceph-deploy --overwrite-conf osd prepare ceph-1:/dev/vdc  ceph-2:/dev/vdc  ceph-3:/dev/vdc  --zap-disk
ceph-deploy --overwrite-conf osd activate ceph-1:/dev/vdc1 ceph-2:/dev/vdc1  ceph-3:/dev/vdc1
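
The prepare step partitions and formats /dev/vdc (under filestore it creates a data partition and a journal partition), and activate starts the OSD daemon on the data partition. A quick sanity check of the result (commands shown as a sketch):

lsblk /dev/vdc                  # the new data/journal partitions should be visible
ceph-deploy disk list ceph-1    # inspect the disks as seen by ceph-deploy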

After the OSDs are deployed, check the cluster status again

[root@ceph-1 cluster]# ceph -s 
    cluster 4a3e86f0-1511-4ad7-9f69-b435ae16dc28
     health HEALTH_OK
     monmap e1: 3 mons at {ceph-1=10.39.47.63:6789/0,ceph-2=10.39.47.64:6789/0,ceph-3=10.39.47.65:6789/0}
            election epoch 6, quorum 0,1,2 ceph-1,ceph-2,ceph-3
     osdmap e14: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v28: 64 pgs, 1 pools, 0 bytes data, 0 objects
            322 MB used, 224 GB / 224 GB avail
                  64 active+clean

View the OSDs

[root@ceph-1 cluster]# ceph osd tree
ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 0.21959 root default                                      
-2 0.07320     host ceph-1                                   
 0 0.07320         osd.0        up  1.00000          1.00000 
-3 0.07320     host ceph-2                                   
 1 0.07320         osd.1        up  1.00000          1.00000 
-4 0.07320     host ceph-3                                   
 2 0.07320         osd.2        up  1.00000          1.00000

There are several ways to list pools.
The rbd pool is created by default:

[root@ceph-1 cluster]# rados lspools
rbd
[root@ceph-1 cluster]# ceph osd lspools
0 rbd,

Create a pool

[root@ceph-1 cluster]# ceph osd pool create testpool 64
pool 'testpool' created
[root@ceph-1 cluster]# ceph osd lspools
0 rbd,1 testpool,
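
The 64 in the create command is the pool's pg_num. To confirm the new pool accepts data, a quick smoke test with the rados CLI (the object name and input file here are arbitrary examples):

rados -p testpool put test-object /etc/hosts    # write a small test object
rados -p testpool ls                            # should list test-object
rados -p testpool stat test-object              # show object size and mtime
rados -p testpool rm test-object                # clean up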
