Ceph — Use cephadm to build a Ceph cluster


This article shows how to use the cephadm tool to build a simple Ceph Octopus cluster.

Preparation

  1. Servers

Hostname   IP              OS           CPU/Memory   Data disk
mgr-01     192.168.2.15    CentOS 7.7   2C4G         none
node-01    192.168.2.144   CentOS 7.7   2C4G         60G
node-02    192.168.2.230   CentOS 7.7   2C4G         60G
node-03    192.168.2.60    CentOS 7.7   2C4G         60G
  2. Turn off the firewall and disable SELinux
$ systemctl stop firewalld
$ systemctl disable firewalld
$ setenforce 0
$ sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
  3. Install Docker
$ sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
$ sudo yum install docker-ce docker-ce-cli containerd.io -y
$ systemctl start docker
$ systemctl enable docker
$ systemctl status docker
  4. Time synchronization
$ yum -y install chrony
$ systemctl enable chronyd
$ systemctl start chronyd
  5. Install lvm2
$ yum install lvm2 -y
  6. Install python3; if it is already installed, skip this step.
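On CentOS 7, python3 is available from the base repositories; a minimal sketch:
$ yum install python3 -y
$ python3 --version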

  7. Configure hosts

$ cat /etc/hosts
192.168.2.144 node-01
192.168.2.230 node-02
192.168.2.60 node-03
  8. Set up password-free SSH login between nodes
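If key-based login is not yet configured, one common approach is to generate a key on the admin host and copy it to each node (a minimal sketch):
$ ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
$ ssh-copy-id root@node-01
$ ssh-copy-id root@node-02
$ ssh-copy-id root@node-03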

Install cephadm

  1. Download the installation script (from the octopus branch, matching the release we are deploying)
$ curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
  2. Give cephadm execute permissions
$ chmod +x cephadm
  3. Add the cephadm yum source
$ ./cephadm add-repo --release octopus
# Replace with the Aliyun mirror (optional)
$ sed -i 's#download.ceph.com#mirrors.aliyun.com/ceph#' /etc/yum.repos.d/ceph.repo

  4. Install cephadm
# Import the release key
$ rpm --import 'https://download.ceph.com/keys/release.asc'
# Install cephadm
$ ./cephadm install
  5. Install the ceph toolkit
    The ceph toolkit includes commands such as ceph, rbd, and mount.ceph.
    $ cephadm install ceph-common
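    To confirm the tools are in place, a quick version check (a minimal sketch):
    $ ceph -v
    $ cephadm version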

Deploy the cluster

  1. Bootstrap the ceph cluster
$ cephadm bootstrap --mon-ip 192.168.2.15 --skip-monitoring-stack

--mon-ip: the IP address of the first monitor daemon (this host)
--skip-monitoring-stack: skip deploying the monitoring components (prometheus, grafana, alertmanager, node-exporter); remove this flag if you want them installed

This command does the following:

  • Create monitor and manager daemons for the new cluster on the local host.
  • Generate a new SSH key for the Ceph cluster and add it to the root user's /root/.ssh/authorized_keys file.
  • Write the minimal configuration file needed to communicate with the new cluster into /etc/ceph/ceph.conf.
  • Write a copy of the client.admin administrative (privileged!) key to /etc/ceph/ceph.client.admin.keyring.
  • Write a copy of the public key to /etc/ceph/ceph.pub.

When the command succeeds, it prints the dashboard URL along with the initial admin credentials; following those prompts, we can log in to the Ceph dashboard.
2. Add other nodes

$ ssh-copy-id -f -i /etc/ceph/ceph.pub root@node-01
$ ssh-copy-id -f -i /etc/ceph/ceph.pub root@node-02
$ ssh-copy-id -f -i /etc/ceph/ceph.pub root@node-03

$ ceph orch host add node-01
$ ceph orch host add node-02
$ ceph orch host add node-03

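Whether the hosts joined successfully can be verified with the orchestrator (a minimal sketch):
$ ceph orch host ls
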
3. Create OSDs
Method 1: Automatically create OSDs on all available, unused devices

$ ceph orch apply osd --all-available-devices

Method 2: Add OSDs manually

$ ceph orch daemon add osd node-01:/dev/sdb
$ ceph orch daemon add osd node-02:/dev/sdb
$ ceph orch daemon add osd node-03:/dev/sdb

After the commands succeed, check the devices with the command below; if the AVAILABLE column shows No for a device, an OSD has been created on it.
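A minimal sketch, assuming the orchestrator manages the devices:
$ ceph orch device ls
$ ceph osd tree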
Finally, check the deployment status of the cluster again through the dashboard.
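
The same status can be checked from the command line (a minimal sketch):
$ ceph -s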
