Installing the Ceph L (Luminous) open-source storage release with ceph-deploy

Prepare the server: configure time synchronization, set up local passwordless SSH, disable the firewall, and set the SELinux policy (details omitted).
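The omitted preparation commands look roughly like the following (a minimal sketch; the time zone and NTP server are assumptions, adjust them to your environment):

# timedatectl set-timezone Asia/Shanghai && yum -y install ntpdate && ntpdate ntp.aliyun.com
# ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa && ssh-copy-id root@$HOSTNAME
# systemctl stop firewalld && systemctl disable firewalld
# setenforce 0 && sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config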

Configure the yum repositories (installing Ceph depends on the base, epel, and ceph repos)

 

Replace the base repo with the Aliyun mirror
# wget -O /etc/yum.repos.d/CentOS-Base.repo  http://mirrors.aliyun.com/repo/Centos-7.repo

Replace the epel repo with the Aliyun mirror

# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

Configure the ceph repo

# cat << EOM > /etc/yum.repos.d/ceph.repo
[ceph-x86_64]
name=Ceph x86_64 packages
baseurl=https://download.ceph.com/rpm-luminous/el7/x86_64/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-luminous/el7/noarch/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
EOM

 

Rebuild the yum metadata cache

# yum clean all && yum makecache
 
Install the ceph packages (requires ceph-deploy >= 2.0.1)
# yum -y install ceph-deploy ceph
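Verify the installed versions (a quick check; ceph-deploy should report at least 2.0.1):
# ceph-deploy --version
# ceph --version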
 
Ceph cluster deployment
# cd /etc/ceph/
# ceph-deploy new $HOSTNAME
 
Modify the configuration file
# vim ceph.conf
osd pool default size = 1
osd pool default min size = 1
mon osd max split count = 1024
mon max pg per osd = 512
mon allow pool delete = true
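After editing, the [global] section of ceph.conf looks roughly like this (a sketch; the fsid, mon_initial_members, and mon_host values are generated by ceph-deploy new and will differ in your environment):
[global]
fsid = <generated by ceph-deploy new>
mon_initial_members = <hostname>
mon_host = <monitor IP>
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 1
osd pool default min size = 1
mon osd max split count = 1024
mon max pg per osd = 512
mon allow pool delete = true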
 
Deploy the monitor and generate the initial keyrings
# ceph-deploy mon create-initial
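This step gathers the keyrings (ceph.client.admin.keyring, the bootstrap keyrings, etc.) into the working directory, which is /etc/ceph here; a quick check:
# ls /etc/ceph/*.keyring
# ceph -s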
 
Mgr deployment
# ceph-deploy mgr create  $HOSTNAME
 
Enable the dashboard (optional step; the commands below are for the L (Luminous) release)
# ceph mgr module enable dashboard
# ceph config-key set mgr/dashboard/server_addr X.X.X.X
# ceph config-key set mgr/dashboard/server_port 8443
 
Restart Ceph so the configuration takes effect (restarting ceph.target applies both the ceph.conf changes and the dashboard settings above)
# systemctl restart ceph.target
 
Query the dashboard URL
# ceph mgr services
( The command prints the dashboard URL; open it in a browser. The L-release UI has no password. )
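The output is roughly of this form (illustrative only; the address and port come from the config-key values set above):
{
    "dashboard": "http://X.X.X.X:8443/"
}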
 
Copy the admin keyring (optional step; only needed when there are additional nodes that must share the keys)
# ceph-deploy admin $HOSTNAME2 $HOSTNAME3 
 
Creating osd
# ceph-deploy osd create --data /dev/sdb $HOSTNAME
( If you hit the error "GPT headers found, they must be removed on: /dev/sdb", run "# sgdisk --zap-all /dev/sdb" to fix it )
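To confirm the OSD came up, the cluster state can be checked (an extra verification step, not part of the original procedure):
# ceph osd tree
# ceph -s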
 
Delete an OSD (replace 0 with the OSD number)
# systemctl stop ceph-osd@0
# ceph osd purge osd.0 --yes-i-really-mean-it
 
Delete the LVM volumes
# lvdisplay    (view the logical volumes)
# lvremove /dev/ceph-265dddd7-ef18-42f7-869e-58e669638032/osd-data-3fa4b9df-6a59-476a-8aaa-4138b29acce9    (remove the volume)
# ceph-deploy disk zap $HOSTNAME /dev/sdb    (wipe the disk)
 
Creating a Storage Pool (choose the pg/pgp numbers based on the actual environment: with a single replica, keep each OSD under 100 PGs; with multiple replicas, use OSD count * 100 / replica count)
# ceph osd pool create mytest 256 256
If the pg number is too large, a warning is triggered due to a known bug ( https://tracker.ceph.com/issues/24687 ); reduce the pg/pgp numbers to resolve it.
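As a worked example of the guideline above: with 3 OSDs and 2 replicas it gives 3 * 100 / 2 = 150 PGs, which is usually rounded to a nearby power of two such as 128.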
 
Set the pool application type
# ceph osd pool application enable mytest rbd 
 
Create Volume
# rbd create -s 100M mytest/rbd-test
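The new volume can be verified with (optional check):
# rbd ls mytest
# rbd info mytest/rbd-test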
 
Remove the Ceph cluster and uninstall the installed packages
# ceph-deploy purge $HOSTNAME
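To also wipe the remaining data and the generated keys (optional, assuming a complete teardown is wanted), ceph-deploy provides:
# ceph-deploy purgedata $HOSTNAME
# ceph-deploy forgetkeys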
