Distributed Storage with Ceph: Deploying Ceph

Preface:

      Many people want to learn Ceph, but the very first deployment stops beginners in their tracks. Because of problems reaching the upstream (overseas) repositories, downloading and installing the packages simply stalls. Even after a domestic mirror has been configured, ceph-deploy install still goes back to the overseas source, which is very frustrating. This is what halts many people's study of Ceph, so this article was written to show that once you can find and download the correct Ceph RPM packages from a domestic mirror to the local machine, deploying Ceph becomes straightforward.

I. Deployment preparation:

Prepare three machines, all running CentOS 7.6.
(1) Configure static hostname resolution on all Ceph cluster nodes (not including the client):
vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.254.163 dlp
172.16.254.64 node1
172.16.254.65 node2
172.16.254.66 node3
(2) On all cluster nodes (including the client), create the cent user, set its password, and then run the following commands:
useradd cent && echo "123" | passwd --stdin cent
echo -e 'Defaults:cent !requiretty\ncent ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph
chmod 440 /etc/sudoers.d/ceph
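As an optional sanity check (not part of the original procedure), you can confirm that passwordless sudo works for the cent user; the command below should print root:
su - cent -c 'sudo whoami'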
(3) On the deploy node, switch to the cent user and set up passwordless SSH login to every node, including the client node:
cent@dlp:~$ ssh-keygen
cent@dlp:~$ ssh-copy-id node1
cent@dlp:~$ ssh-copy-id node2
cent@dlp:~$ ssh-copy-id node3
(4) On the deploy node, as the cent user and in the cent user's home directory, create the following file: vi ~/.ssh/config   # create new (define all nodes and users)
Host dlp
      Hostname dlp
      User cent
Host node1
      Hostname node1
      User cent
Host node2
      Hostname node2
      User cent
Host node3
      Hostname node3
      User cent

chmod 600 ~/.ssh/config
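Optionally, verify that passwordless login works before continuing; each command below should print the remote hostname without asking for a password:
ssh node1 hostname
ssh node2 hostname
ssh node3 hostname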

II. Configure a domestic (China mirror) Ceph repository on all nodes:

(1) On all nodes (including the client), create ceph-yunwei.repo under /etc/yum.repos.d/:
[ceph-yunwei]
name=ceph-yunwei-install
baseurl=https://mirrors.aliyun.com/centos/7.6.1810/storage/x86_64/ceph-jewel/
enabled=1
gpgcheck=0
Alternatively, the same content can be appended to the existing CentOS-Base.repo.
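After creating the repo file, it can help to refresh the yum cache and confirm that the repository is visible (an optional check, not part of the original procedure):
yum clean all && yum makecache
yum repolist | grep ceph-yunwei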
 
(2) From the domestic mirror https://mirrors.aliyun.com/centos/7.6.1810/storage/x86_64/ceph-jewel/, download the RPM packages listed below (a batch-download sketch follows the list). Note: the ceph-deploy RPM only needs to be installed on the deploy node; to download it, look for the latest ceph-deploy-xxxxx.noarch.rpm at https://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/.
ceph-10.2.11-0.el7.x86_64.rpm
ceph-base-10.2.11-0.el7.x86_64.rpm
ceph-common-10.2.11-0.el7.x86_64.rpm
ceph-deploy-1.5.39-0.noarch.rpm
ceph-devel-compat-10.2.11-0.el7.x86_64.rpm
cephfs-java-10.2.11-0.el7.x86_64.rpm
ceph-fuse-10.2.11-0.el7.x86_64.rpm
ceph-libs-compat-10.2.11-0.el7.x86_64.rpm
ceph-mds-10.2.11-0.el7.x86_64.rpm
ceph-mon-10.2.11-0.el7.x86_64.rpm
ceph-osd-10.2.11-0.el7.x86_64.rpm
ceph-radosgw-10.2.11-0.el7.x86_64.rpm
ceph-resource-agents-10.2.11-0.el7.x86_64.rpm
ceph-selinux-10.2.11-0.el7.x86_64.rpm
ceph-test-10.2.11-0.el7.x86_64.rpm
libcephfs1-10.2.11-0.el7.x86_64.rpm
libcephfs1-devel-10.2.11-0.el7.x86_64.rpm
libcephfs_jni1-10.2.11-0.el7.x86_64.rpm
libcephfs_jni1-devel-10.2.11-0.el7.x86_64.rpm
librados2-10.2.11-0.el7.x86_64.rpm
librados2-devel-10.2.11-0.el7.x86_64.rpm
libradosstriper1-10.2.11-0.el7.x86_64.rpm
libradosstriper1-devel-10.2.11-0.el7.x86_64.rpm
librbd1-10.2.11-0.el7.x86_64.rpm
librbd1-devel-10.2.11-0.el7.x86_64.rpm
librgw2-10.2.11-0.el7.x86_64.rpm
librgw2-devel-10.2.11-0.el7.x86_64.rpm
python-ceph-compat-10.2.11-0.el7.x86_64.rpm
python-cephfs-10.2.11-0.el7.x86_64.rpm
python-rados-10.2.11-0.el7.x86_64.rpm
python-rbd-10.2.11-0.el7.x86_64.rpm
rbd-fuse-10.2.11-0.el7.x86_64.rpm
rbd-mirror-10.2.11-0.el7.x86_64.rpm
rbd-nbd-10.2.11-0.el7.x86_64.rpm
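One possible way to fetch all of the packages above in a single step is to crawl the mirror directory with wget (a sketch only; it assumes wget is installed and downloads into the current directory):
# grab every RPM in the ceph-jewel directory
wget -r -np -nd -A '*.rpm' https://mirrors.aliyun.com/centos/7.6.1810/storage/x86_64/ceph-jewel/
# ceph-deploy comes from the separate noarch repository
wget https://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/ceph-deploy-1.5.39-0.noarch.rpm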
(3) Copy the downloaded RPMs to every node and install them. Note that ceph-deploy-xxxxx.noarch.rpm is only used on the deploy node; the other nodes do not need it, but the deploy node also needs to install all of the remaining RPMs. A simple distribution sketch follows.
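A minimal distribution sketch, assuming the RPMs were saved under ~/cephjrpm on the deploy node (the directory name used in the listing further below):
for h in node1 node2 node3; do
  scp -r ~/cephjrpm cent@$h:~/
done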
 
(4) On the deploy node, install ceph-deploy: as the root user, change into the directory containing the downloaded RPM packages and run:
yum localinstall -y ./*

(or: sudo yum install ceph-deploy)

Create the ceph working directory:
mkdir ceph && cd ceph
Note: if an error is reported here, running yum install python-distribute may also fail. In that case remove the conflicting package first and then install again (for example: yum remove python-setuptools -y).
Note: if you are not installing the RPMs prepared by the method above but are using a network repository instead, every node must run yum install ceph ceph-radosgw -y.
 
(5) On the deploy node (run as the cent user), configure the new cluster:
ceph-deploy new node1 node2 node3
vim ./ceph.conf
Add: osd pool default size = 2
Optional parameters (placed under the [global] section unless a section header is shown) are as follows:
public_network = 192.168.254.0/24
cluster_network = 172.16.254.0/24
osd_pool_default_size = 3
osd_pool_default_min_size = 1
osd_pool_default_pg_num = 8
osd_pool_default_pgp_num = 8
osd_crush_chooseleaf_type = 1
 
[mon]
mon_clock_drift_allowed = 0.5
 
[osd]
osd_mkfs_type = xfs
osd_mkfs_options_xfs = -f
filestore_max_sync_interval = 5
filestore_min_sync_interval = 0.1
filestore_fd_cache_size = 655350
filestore_omap_header_cache_size = 655350
filestore_fd_cache_random = true
osd op threads = 8
osd disk threads = 4
filestore op threads = 8
max_open_files = 655350
 
(6) Install the Ceph packages on all nodes. Every node should have the following packages available locally:
root@node1:~/cephjrpm# ls
ceph-10.2.11-0.el7.x86_64.rpm               ceph-resource-agents-10.2.11-0.el7.x86_64.rpm    librbd1-10.2.11-0.el7.x86_64.rpm
ceph-base-10.2.11-0.el7.x86_64.rpm          ceph-selinux-10.2.11-0.el7.x86_64.rpm            librbd1-devel-10.2.11-0.el7.x86_64.rpm
ceph-common-10.2.11-0.el7.x86_64.rpm        ceph-test-10.2.11-0.el7.x86_64.rpm               librgw2-10.2.11-0.el7.x86_64.rpm
ceph-devel-compat-10.2.11-0.el7.x86_64.rpm  libcephfs1-10.2.11-0.el7.x86_64.rpm              librgw2-devel-10.2.11-0.el7.x86_64.rpm
cephfs-java-10.2.11-0.el7.x86_64.rpm        libcephfs1-devel-10.2.11-0.el7.x86_64.rpm        python-ceph-compat-10.2.11-0.el7.x86_64.rpm
ceph-fuse-10.2.11-0.el7.x86_64.rpm          libcephfs_jni1-10.2.11-0.el7.x86_64.rpm          python-cephfs-10.2.11-0.el7.x86_64.rpm
ceph-libs-compat-10.2.11-0.el7.x86_64.rpm   libcephfs_jni1-devel-10.2.11-0.el7.x86_64.rpm    python-rados-10.2.11-0.el7.x86_64.rpm
ceph-mds-10.2.11-0.el7.x86_64.rpm           librados2-10.2.11-0.el7.x86_64.rpm               python-rbd-10.2.11-0.el7.x86_64.rpm
ceph-mon-10.2.11-0.el7.x86_64.rpm           librados2-devel-10.2.11-0.el7.x86_64.rpm         rbd-fuse-10.2.11-0.el7.x86_64.rpm
ceph-osd-10.2.11-0.el7.x86_64.rpm           libradosstriper1-10.2.11-0.el7.x86_64.rpm        rbd-mirror-10.2.11-0.el7.x86_64.rpm
ceph-radosgw-10.2.11-0.el7.x86_64.rpm       libradosstriper1-devel-10.2.11-0.el7.x86_64.rpm  rbd-nbd-10.2.11-0.el7.x86_64.rpm
Install the above packages on all nodes (including the client):
yum localinstall ./* -y
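Optionally confirm on each node that the packages were installed, for example:
rpm -qa | grep -E 'ceph|rados|rbd'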
 
(7) Run on the deploy node to install the Ceph software on all nodes:
ceph-deploy install dlp node1 node2 node3
 
(8) Initialize the cluster on the deploy node (run as the cent user):
ceph-deploy mon create-initial
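If initialization succeeds, the working directory on the deploy node should now contain the gathered keyrings; typically something like the following (the exact file set may vary slightly by version):
ls ./
# ceph.conf  ceph.mon.keyring  ceph.client.admin.keyring
# ceph.bootstrap-osd.keyring  ceph.bootstrap-mds.keyring  ceph.bootstrap-rgw.keyring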
 
(9) On the OSD nodes, prepare for the Object Storage Daemon:
mkdir /data && chown ceph.ceph /data
 
(10) On each node, partition the second disk, format it as xfs, and mount it at /data:
root@node1:/# fdisk /dev/vdb

root@node1:/# lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda         252:0    0   40G  0 disk
├─vda1      252:1    0  512M  0 part /boot
└─vda2      252:2    0 39.5G  0 part
 ├─cl-root 253:0    0 35.5G  0 lvm  /
 └─cl-swap 253:1    0    4G  0 lvm  [SWAP]
vdb         252:16   0   10G  0 disk
└─vdb1      252:17   0   10G  0 part
 
root@node1:/# mkfs -t xfs /dev/vdb1

root@node1:/# mount /dev/vdb1 /data/

root@node1:/# lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda         252:0    0   40G  0 disk
├─vda1      252:1    0  512M  0 part /boot
└─vda2      252:2    0 39.5G  0 part
 ├─cl-root 253:0    0 35.5G  0 lvm  /
 └─cl-swap 253:1    0    4G  0 lvm  [SWAP]
vdb         252:16   0   10G  0 disk
└─vdb1      252:17   0   10G  0 part /data
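Optionally, to keep this mount across reboots, an fstab entry can be added (not part of the original procedure; adjust the device name to your environment):
echo '/dev/vdb1 /data xfs defaults 0 0' >> /etc/fstab
mount -a    # verify the entry parses and mounts cleanly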
(11) Create the OSD mount directory under /data/:
mkdir /data/osd
chown -R ceph.ceph /data/
chmod 750 /data/osd/
ln -s /data/osd /var/lib/ceph
Note: before preparing, the disk must already carry an xfs filesystem and be reachable at /var/lib/ceph/osd, and the owner and group must both be ceph.
List a node's disks: ceph-deploy disk list node1
Zap (wipe) a node's disk: ceph-deploy disk zap node1:/dev/vdb1
 
(12) Prepare the Object Storage Daemons:
ceph-deploy osd prepare node1:/var/lib/ceph/osd node2:/var/lib/ceph/osd node3:/var/lib/ceph/osd
 
(13) Activate the Object Storage Daemons:
ceph-deploy osd activate node1:/var/lib/ceph/osd node2:/var/lib/ceph/osd node3:/var/lib/ceph/osd
 
(14) On the deploy node, transfer the config files and admin keyring:
ceph-deploy admin dlp node1 node2 node3
sudo chmod 644 /etc/ceph/ceph.client.admin.keyring
 
(15) Check the cluster status from any node in the Ceph cluster:
ceph -s
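Besides ceph -s, a few other standard commands are useful to confirm that the cluster is healthy and all OSDs are up, for example:
ceph health
ceph osd tree
ceph df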
 
 

III. Client setup:

(1) The client also needs the cent user:
useradd cent && echo "123" | passwd --stdin cent
echo -e 'Defaults:cent !requiretty\ncent ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph
chmod 440 /etc/sudoers.d/ceph
Run on the deploy node to install the Ceph client and push its configuration:
ceph-deploy install controller
ceph-deploy admin controller
 
(2) Run on the client:
sudo chmod 644 /etc/ceph/ceph.client.admin.keyring
 
(3) Run on the client to configure an RBD block device:
Create an RBD image: rbd create disk01 --size 10G --image-feature layering
List RBD images: rbd ls -l
Map the RBD image: sudo rbd map disk01
Show mappings: rbd showmapped
Format disk01 as xfs: sudo mkfs.xfs /dev/rbd0
Mount the device: sudo mount /dev/rbd0 /mnt
Verify the mount succeeded: df -hT
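When the block device is no longer needed, it can be released in the reverse order (an optional cleanup sketch):
sudo umount /mnt
sudo rbd unmap /dev/rbd0
rbd rm disk01    # also delete the image itself, if desired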
 
(4) File System (CephFS) configuration:
Run on the deploy node; pick one node on which to create the MDS:
ceph-deploy mds create node1
Run the following on node1:
sudo chmod 644 /etc/ceph/ceph.client.admin.keyring
On the MDS node node1, create the cephfs_data and cephfs_metadata pools:
ceph osd pool create cephfs_data 128
ceph osd pool create cephfs_metadata 128
Create the filesystem on the pools:
ceph fs new cephfs cephfs_metadata cephfs_data
Show the Ceph filesystem:
ceph fs ls
ceph mds stat
 
Run the following on the client; install ceph-fuse:
yum -y install ceph-fuse
Fetch the admin key:
ssh cent@node1 "sudo ceph-authtool -p /etc/ceph/ceph.client.admin.keyring" > admin.key
chmod 600 admin.key
Mount the CephFS:
mount -t ceph node1:6789:/ /mnt -o name=admin,secretfile=admin.key
df -hT
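Since ceph-fuse was installed above, the filesystem can alternatively be mounted through FUSE instead of the kernel client (a sketch; it assumes /etc/ceph/ceph.conf and the admin keyring are already present on the client, e.g. pushed by ceph-deploy admin):
sudo ceph-fuse -m node1:6789 /mnt
df -hT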
 
To stop the ceph-mds service and remove the filesystem:
systemctl stop ceph-mds@node1
ceph mds fail 0
ceph fs rm cephfs --yes-i-really-mean-it
 
ceph osd lspools
The output shows: 0 rbd, 1 cephfs_data, 2 cephfs_metadata
 
ceph osd pool rm cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it
 
To tear down the environment completely:
ceph-deploy purge dlp node1 node2 node3 controller
ceph-deploy purgedata dlp node1 node2 node3 controller
ceph-deploy forgetkeys
rm -rf ceph*

Reposted from www.cnblogs.com/cloudhere/p/10519647.html