Installing Ceph using ceph-deploy


Manual installation reference: https://www.jianshu.com/p/b8f085ca0307

Execute on all Ceph nodes


1. Configure /etc/hosts


cat << EOF >> /etc/hosts
172.31.240.49 ceph-mon01
EOF
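
A quick sanity check that the name resolves before going further:

getent hosts ceph-mon01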

2. Configure the Ceph yum installation source

cat << EOF > /etc/yum.repos.d/ceph.repo
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/x86_64/
gpgcheck=0
priority=1
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch/
gpgcheck=0
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS/
enabled=0
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
EOF
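
One way to confirm yum picks up the new repo before installing anything (standard yum commands, nothing Ceph-specific):

yum clean all
yum makecache
yum repolist | grep -i ceph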


3. Create the cephd user and grant it sudo privileges

useradd cephd
echo 'CephIl#i42' | passwd --stdin cephd
echo "cephd ALL = (root) NOPASSWD:ALL" | tee /etc/sudoers.d/cephd
chmod 0440 /etc/sudoers.d/cephd
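
To verify the passwordless sudo rule, the following should print root without asking for a password:

su - cephd -c 'sudo whoami'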

4. Prepare the OSD storage space (it must be an unused disk or partition)


On ceph-mon01:
mkfs.xfs /dev/sdb
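
Before formatting, you can confirm the disk exists and carries no partitions or mounts:

lsblk /dev/sdb

If /dev/sdb previously held data or an old OSD, ceph-deploy may refuse it later; once ceph-deploy is installed (step 7), the usual fix is to wipe the disk from the deploy node with ceph-deploy disk zap ceph-mon01 /dev/sdb (ceph-deploy 2.x syntax).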


Execute on the ceph-deploy node

5. Set up passwordless SSH from the ceph-deploy node to the other nodes.


yum -y install expect
su - cephd
# drive ssh-keygen non-interactively, accepting all defaults
expect << EOF
spawn ssh-keygen -t rsa
expect {
"Enter file in which to save the key (/home/cephd/.ssh/id_rsa):" { send "\r"; exp_continue}
"Enter passphrase (empty for no passphrase):" { send "\r"; exp_continue}
"Enter same passphrase again:" { send "\r"; exp_continue}
}
EOF
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 400 ~/.ssh/authorized_keys
for i in ceph-mon01; do ssh-copy-id -i ~/.ssh/id_rsa.pub cephd@$i; done
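
After the key is copied, logging in should no longer prompt for a password:

ssh cephd@ceph-mon01 hostname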

6. Configure a default SSH username so ceph-deploy can run without the --username option


su - cephd
cat << EOF > ~/.ssh/config
Host ceph-mon01
    Hostname ceph-mon01
    User cephd
EOF
chmod 600 ~/.ssh/config
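
With ~/.ssh/config in place, plain ssh (and therefore ceph-deploy) picks the cephd user automatically; this should print cephd:

ssh ceph-mon01 whoami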

7. Install ceph-deploy


sudo yum -y install ceph-deploy
ceph-deploy --version

8. Create a temporary deployment directory


mkdir ~/ceph-cluster

9. Install Ceph on all nodes (--no-adjust-repos keeps the repos configured in step 2)


cd ~/ceph-cluster
ceph-deploy install --no-adjust-repos ceph-mon01
ceph --version

10. Create the monitor cluster, specifying the public network the Ceph nodes use to communicate with each other


cd ~/ceph-cluster
ceph-deploy new --public-network 172.31.240.0/24  ceph-mon01
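
ceph-deploy new writes ceph.conf and ceph.mon.keyring into the working directory. The generated ceph.conf should look roughly like the sketch below (the fsid is a freshly generated UUID, so yours will differ):

[global]
fsid = <generated-uuid>
mon_initial_members = ceph-mon01
mon_host = 172.31.240.49
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network = 172.31.240.0/24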

11. Customize ceph.conf (I did not customize anything, but see the optional single-node tweak below)
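

Although this walkthrough skips customization, a single-node cluster cannot satisfy Ceph's default replica count of 3, so a common lab-only tweak (my addition, not part of the original deployment) is to lower the pool defaults in ~/ceph-cluster/ceph.conf before the monitors are created:

cat << EOF >> ~/ceph-cluster/ceph.conf
# lab only: let pools become healthy with a single OSD
osd pool default size = 1
osd pool default min size = 1
EOF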

12. Initialize the monitor cluster and gather the keys


cd ~/ceph-cluster
ceph-deploy --overwrite-conf mon create-initial

If you hit the error: [ERROR] Some monitors have still not reached quorum
Cause: the monitor node's hostname does not match its entry in /etc/hosts.
Solution: after fixing the hostname, clean up the environment with the commands below, then deploy again.

su - cephd
ceph-deploy purge ceph-mon01
ceph-deploy purgedata ceph-mon01
ceph-deploy forgetkeys
rm -rf ~/ceph-cluster/*

13. Distribute the Ceph configuration and keys to all nodes (both MON and OSD nodes)


cd ~/ceph-cluster
ceph-deploy --overwrite-conf admin  ceph-mon01
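
ceph-deploy admin installs ceph.conf and ceph.client.admin.keyring under /etc/ceph on each target, readable by root only. So that the cephd user can run the ceph commands in step 16, the upstream quick start makes the keyring readable (an extra step assumed here, not in the original):

sudo chmod +r /etc/ceph/ceph.client.admin.keyring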

14. Deploy the OSD node (here the OSD shares the host with the monitor node)


cd ~/ceph-cluster
ceph-deploy osd create ceph-mon01 --data /dev/sdb
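
Under the hood ceph-deploy calls ceph-volume, which turns /dev/sdb into an LVM-backed BlueStore OSD. One way to inspect the result, run on ceph-mon01:

sudo ceph-volume lvm list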

15. Add a mgr to each machine running a monitor


cd ~/ceph-cluster
ceph-deploy mgr create ceph-mon01:ceph-mon01_mgr
systemctl status ceph-mgr@ceph-mon01_mgr

16. Verify the cluster state


ceph -s
ceph daemon osd.0 config get mon_max_pg_per_osd
ceph osd tree
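
On a healthy single-node deployment, ceph -s should report something roughly like the sketch below (an illustration, not captured output; the id and usage figures will differ):

  cluster:
    id:     <fsid>
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum ceph-mon01
    mgr: ceph-mon01_mgr(active)
    osd: 1 osds: 1 up, 1 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   ...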
