Ceph manual deployment analysis (nautilus)

Preparation

4 virtual machines, each with two NICs and two hard disks; one NIC is in NAT mode and the other in host-only mode (used as the cluster network, which only needs intra-cluster connectivity)

Network Planning

Official website minimum hardware requirements
System environment deployment (identical on all 4 machines)

1. Stop the firewall and disable SELinux (in production, keep the firewall enabled and open the required ports instead)

systemctl stop firewalld
systemctl disable firewalld

setenforce 0
sed -i 's/=enforcing/=disabled/' /etc/selinux/config

2. Configure the host name and hosts

echo 'ceph1' > /etc/hostname
# set ceph1 through ceph4 on the respective machines

vi /etc/hosts
192.168.26.131 ceph1
192.168.26.146 ceph2
192.168.26.147 ceph3
192.168.26.148 ceph4

3. NTP configuration

See the NTP setup I wrote about earlier.

If you already have an NTP server, use it directly; otherwise pick one node as the server and have the other three synchronize from it over the internal network.
Ceph is very sensitive to clock skew, so NTP must be configured to avoid hard-to-diagnose problems.
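As a rough sketch of such a setup (assuming chrony on CentOS 7; the upstream server is a placeholder, adjust the addresses to your own network):

```
# On the NTP server node (e.g. ceph1) -- /etc/chrony.conf
server ntp.aliyun.com iburst        # any reachable upstream source
allow 192.168.26.0/24               # let the other nodes sync from here
local stratum 10                    # keep serving time even if upstream is down

# On ceph2-ceph4 -- /etc/chrony.conf
server 192.168.26.131 iburst        # sync from ceph1

# Then on every node:
#   systemctl enable --now chronyd
#   chronyc sources                 # verify the source is reachable
```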

4. SSH without password

Passwordless SSH is not required for manual deployment; see the official documentation.

5. Yum source and command completion installation

vi /etc/yum.repos.d/ceph.repo

[ceph]
name=Ceph packages for $basearch
baseurl=https://download.ceph.com/rpm-nautilus/el7/$basearch
enabled=1
priority=2
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-nautilus/el7/noarch
enabled=1
priority=2
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=https://download.ceph.com/rpm-nautilus/el7/SRPMS
enabled=0
priority=2
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
# add the official ceph nautilus (v14) repo; you can also use the Aliyun mirror by changing the URLs to mirrors.aliyun.com and disabling gpg checking

yum install epel-release -y && yum -y install bash-completion yum-plugin-priorities
yum makecache
# install the EPEL repo, command completion, and the yum priorities plugin

priority=2 keeps yum from preferring EPEL packages over the Ceph repo.
yum-plugin-priorities provides the priority plugin; it is not available on CentOS 8.

6. Install ceph

yum install -y snappy leveldb gdisk python-argparse gperftools-libs
# required dependency packages
yum install ceph -y

[root@ceph1 ~]# ceph -v
ceph version 14.2.12 (2f3caa3b8b3d5c5f2719a1e9d8e7deea5ae1a5c6) nautilus (stable)
# the latest 14.x release; for stability, it is common to pick the release one behind the newest
# reboot the system (the hostname and command completion only take effect after a reboot)

7. Configuration file

vi /etc/ceph/ceph.conf

[global]
fsid = c6c3aaaf-ec5b-4e16-826c-2b3fb41d8de8
# local uuid generated with uuidgen
mon initial members = ceph1,ceph2,ceph3
# quorum members for arbitration; always use an odd number to avoid split-brain
mon host = 192.168.26.131,192.168.26.146,192.168.26.147
# mon hosts
public network = 192.168.26.0/24
cluster network = 192.168.32.0/24
# ceph public and cluster networks
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
# cephx authentication protocol, i.e. authentication is required
osd journal size = 1024
# journal size for new osds; to keep operations transactional, they are written to the journal first and then applied to the filesystem
osd pool default size = 3
osd pool default min size = 2
# replica policy: 3 replicas, at least 2 (when satisfiable)
osd pool default pg num = 256
osd pool default pgp num = 256
# the PG count should be properly calculated; see https://blog.csdn.net/yangshihuz/article/details/107827379
osd crush chooseleaf type = 1
# CRUSH failure domain for replica placement; 1 = per host, 0 = per osd (use 0 for a single-host cluster)
[mon]
mon clock drift allowed = 0.50
# allow 0.5 s of clock drift (default 0.05 s); with heterogeneous PCs in the cluster the drift regularly exceeds 0.05 s

PG calculation

Official calculation tool: https://ceph.com/pgcalc/
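The rule of thumb behind the calculator can be reproduced in a few lines of shell (a hypothetical sketch, not the author's script: target roughly 100 PGs per OSD, divide by the replica count, round up to the next power of two). With the 4 OSDs and 3 replicas used in this guide it lands on the 256 set in ceph.conf:

```shell
# Rule-of-thumb PG estimate: (OSDs * ~100) / replicas, rounded up to a power of 2
osds=4
replicas=3
target_per_osd=100
raw=$(( osds * target_per_osd / replicas ))    # 133 for this cluster
pg=1
while [ "$pg" -lt "$raw" ]; do pg=$(( pg * 2 )); done
echo "$pg"                                     # prints 256
```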

Clock offset error

8. Keyring creation

ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
# create the mon daemon keyring

ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin \
--cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
# create the client.admin user and a keyring granting its access capabilities

ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring \
--gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd'
# generate a bootstrap-osd keyring: create a client.bootstrap-osd user and add it to the keyring

ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
# import the generated keys into ceph.mon.keyring

9. mon setup

monmaptool --create --add ceph1 192.168.26.131 --fsid c6c3aaaf-ec5b-4e16-826c-2b3fb41d8de8 /tmp/monmap
# generate the monmap (the mon mapping information) from the hostname, address and fsid; the fsid must match the one in ceph.conf

sudo -u ceph mkdir /var/lib/ceph/mon/ceph-ceph1
# create the mon data directory

chown ceph.ceph /tmp/ceph.mon.keyring
sudo -u ceph ceph-mon --mkfs -i ceph1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring	
# initialize the mon

ll /var/lib/ceph/mon/ceph-ceph1/ 
total 8
-rw------- 1 ceph ceph  77 Oct 29 16:33 keyring
-rw------- 1 ceph ceph   8 Oct 29 16:33 kv_backend
drwxr-xr-x 2 ceph ceph 112 Oct 29 16:33 store.db

10. Start mon

systemctl start ceph-mon@ceph1
systemctl enable ceph-mon@ceph1

If you initially start only one mon node, change mon initial members and mon host in the configuration to that single node; otherwise the cluster cannot reach quorum, ceph -s hangs, and commands run on the OSD nodes hang as well.
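For example, to bootstrap with ceph1 alone, the relevant lines in /etc/ceph/ceph.conf would be reduced to (a sketch; restore all three entries once the ceph2/ceph3 mons are added):

```
[global]
mon initial members = ceph1
mon host = 192.168.26.131
```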

11. Add a new mon node

# copy the configuration from ceph1 to ceph2 and ceph3
scp /etc/ceph/ceph.* 192.168.26.146:/etc/ceph/
scp /var/lib/ceph/bootstrap-osd/ceph.keyring 192.168.26.146:/var/lib/ceph/bootstrap-osd/
scp /tmp/ceph.mon.keyring 192.168.26.146:/tmp/ceph.mon.keyring
scp /tmp/monmap 192.168.26.146:/tmp/

chown ceph.ceph /tmp/ceph.mon.keyring
sudo -u ceph mkdir /var/lib/ceph/mon/ceph-ceph2
sudo -u ceph ceph-mon --mkfs -i ceph2 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
# initialize the mon; if this errors partway through, delete everything under /var/lib/ceph/mon/ceph-ceph2 first and retry

systemctl start ceph-mon@ceph2
systemctl enable ceph-mon@ceph2

# to add the ceph3 node, repeat the steps with ceph2 replaced by ceph3

12. Add OSD storage

Installation methods

  • Installation based on auxiliary tools: ceph-volume
  • Manual installation

Storage formats

  • filestore: deprecated
  • bluestore: preferred

Manual installation with the bluestore storage format (repeat the same osd steps on every node)

scp /var/lib/ceph/bootstrap-osd/ceph.keyring 192.168.26.146:/var/lib/ceph/bootstrap-osd/
# copy the osd bootstrap keyring from ceph1 to every node that will host an osd
UUID=$(uuidgen)
OSD_SECRET=$(ceph-authtool --gen-print-key)
# capture two values: a fresh uuid and the osd secret key

cp /var/lib/ceph/bootstrap-osd/ceph.keyring  /etc/ceph/
# without this copy the create step below fails; a gotcha
ID=$(echo "{\"cephx_secret\": \"$OSD_SECRET\"}" | \
   ceph osd new $UUID -i - \
   -n client.bootstrap-osd -k /var/lib/ceph/bootstrap-osd/ceph.keyring)
# create the osd (this allocates its id)
mkdir /var/lib/ceph/osd/ceph-$ID
# create the osd data directory
mkfs.xfs /dev/${DEV}
mount /dev/${DEV} /var/lib/ceph/osd/ceph-$ID
# format and mount the osd data device (DEV is the disk, e.g. sdb); for a permanent mount add to /etc/fstab: /dev/sdb /var/lib/ceph/osd/ceph-0 xfs defaults 0 0
ceph-authtool --create-keyring /var/lib/ceph/osd/ceph-$ID/keyring \
     --name osd.$ID --add-key $OSD_SECRET
# generate the osd keyring file
ceph-osd -i $ID --mkfs --osd-uuid $UUID
# initialize the osd; if it errors, just run it again (or check whether files were generated under /var/lib/ceph/osd/ceph-0/)
chown -R ceph:ceph /var/lib/ceph/osd/ceph-$ID
systemctl enable ceph-osd@$ID
systemctl start ceph-osd@$ID
# use the real osd id here (check it with: echo $ID); using the wrong id means the osd service will not come back up after a reboot


Problems after installation

1. "1 monitors have not enabled msgr2"
The v2 messenger protocol needs to be enabled on the mons:

ceph mon enable-msgr2
# run on ceph1

2. "no active mgr"

The main job of ceph-mgr is currently to expose cluster metrics to the outside, i.e. monitoring (it is also used by the ceph dashboard).

sudo -u ceph mkdir /var/lib/ceph/mgr/ceph-ceph1
# create the mgr data directory
ceph auth get-or-create mgr.ceph1 mon 'allow profile mgr' osd 'allow *' mds 'allow *' -o /var/lib/ceph/mgr/ceph-ceph1/keyring
# create the key and export the keyring file
systemctl enable ceph-mgr@ceph1
systemctl start ceph-mgr@ceph1
# start the mgr

The preliminary installation has been completed.
View mon status and the current leader:
ceph mon stat
ceph mon dump

View osd status:
ceph osd tree

Reference: https://docs.ceph.com/en/latest/install/index_manual/


Origin blog.csdn.net/yangshihuz/article/details/109328586