Server preparation: time synchronization, passwordless SSH to the local host, and firewall/SELinux policy configuration (omitted)
Configure the yum sources
(the base, epel, and ceph repositories need to be available)
Configure the ceph source
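A minimal ceph.repo under /etc/yum.repos.d/ might look like the following sketch; the release name (Luminous, to match the dashboard steps later in this guide), OS version (el7), and mirror URL are assumptions — adjust them to your environment:

```ini
; /etc/yum.repos.d/ceph.repo -- assumed Luminous on CentOS 7; change baseurl for your release/mirror
[ceph]
name=Ceph packages
baseurl=https://download.ceph.com/rpm-luminous/el7/$basearch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-luminous/el7/noarch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
```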
# yum clean all && yum makecache
Install the ceph packages
(ceph-deploy >= 2.0.1)
# yum -y install ceph-deploy ceph
Ceph cluster deployment
# cd /etc/ceph/
# ceph-deploy new $HOSTNAME
Modify the configuration file
# vim ceph.conf
osd pool default size = 1
osd pool default min size = 1
mon osd max split count = 1024
mon max pg per osd = 512
mon allow pool delete = true
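After the edits, the [global] section of ceph.conf looks roughly like the sketch below; the fsid, hostname, and monitor address are placeholders for values that `ceph-deploy new` generates:

```ini
[global]
fsid = <generated-by-ceph-deploy-new>
mon_initial_members = <hostname>
mon_host = <mon-ip>
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
; single-node additions from this guide:
osd pool default size = 1
osd pool default min size = 1
mon osd max split count = 1024
mon max pg per osd = 512
mon allow pool delete = true
```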
Deploy the monitor and generate the initial keys
# ceph-deploy mon create-initial
Mgr deployment
# ceph-deploy mgr create $HOSTNAME
Enable the dashboard (optional step)
(the steps below are for the Luminous (L) release)
# ceph mgr module enable dashboard
# ceph config-key set mgr/dashboard/server_addr X.X.X.X
# ceph config-key set mgr/dashboard/server_port 8443
Restart ceph to make the configuration take effect
(this restarts ceph.target so that the dashboard settings above take effect)
# systemctl restart ceph.target
Query the dashboard URL
# ceph mgr services
(open the URL shown by the command in a browser; the Luminous (L) dashboard UI has no password)
Copy the keys
(optional step, for when keys need to be synchronized to multiple nodes)
# ceph-deploy admin $HOSTNAME2 $HOSTNAME3
Creating osd
# ceph-deploy osd create --data /dev/sdb $HOSTNAME
(if the error "GPT headers found, they must be removed on: /dev/sdb" appears, clear it with "# sgdisk --zap-all /dev/sdb")
Delete osd
(replace 0 with the osd number)
# systemctl stop ceph-osd@0
# ceph osd purge osd.0 --yes-i-really-mean-it
Delete the lvm volume
# lvdisplay (view the logical volumes)
# lvremove /dev/ceph-265dddd7-ef18-42f7-869e-58e669638032/osd-data-3fa4b9df-6a59-476a-8aaa-4138b29acce9 (delete the volume)
# ceph-deploy disk zap $HOSTNAME /dev/sdb (format the disk)
Creating a Storage Pool
(set pg and pgp according to the actual OSD count: with a single replica, no more than 100 PGs per OSD; with multiple replicas, OSD count * 100 / replica count)
# ceph osd pool create mytest 256 256
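The sizing rule above can be sanity-checked with a quick shell calculation; the OSD and replica counts below are made-up example values, not from this deployment:

```shell
#!/bin/sh
# Hypothetical example: 8 OSDs with 3 replicas.
osds=8
replicas=3
# Target: at most (osds * 100 / replicas) PGs in total across the pool.
target=$(( osds * 100 / replicas ))
# Round down to the nearest power of two, as is conventional for pg_num.
pg=1
while [ $(( pg * 2 )) -le "$target" ]; do
  pg=$(( pg * 2 ))
done
echo "target=$target pg_num=$pg"
```

For 8 OSDs and 3 replicas this yields a target of 266 and a pg_num of 256, which also stays under the mon max pg per osd = 512 limit set earlier in this guide.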
Set the pool application type
# ceph osd pool application enable mytest rbd
Create Volume
# rbd create -s 100M mytest/rbd-test
Clean up the ceph cluster and uninstall the installed packages
# ceph-deploy purge $HOSTNAME