This article walks through a simple deployment of a Ceph Octopus cluster using the cephadm tool.
Prepare
- Servers

Host name | IP | OS | CPU/Memory | Data disk
---|---|---|---|---
mgr-01 | 192.168.2.15 | CentOS 7.7 | 2C4G | none
node-01 | 192.168.2.144 | CentOS 7.7 | 2C4G | 60G
node-02 | 192.168.2.230 | CentOS 7.7 | 2C4G | 60G
node-03 | 192.168.2.60 | CentOS 7.7 | 2C4G | 60G
- Turn off the firewall and disable SELinux
$ systemctl stop firewalld
$ systemctl disable firewalld
$ setenforce 0
$ sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
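Whether SELinux is actually off can be confirmed right away; note that `setenforce 0` only affects the running system, while the `sed` edit takes effect on the next boot:

```shell
# Should print Permissive now (Disabled after a reboot with the edited config)
$ getenforce
# Fuller report of the current SELinux state
$ sestatus
```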
- Install Docker
$ sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
$ sudo yum install docker-ce docker-ce-cli containerd.io -y
$ systemctl start docker
$ systemctl enable docker
$ systemctl status docker
- Time synchronization
$ yum -y install chrony
$ systemctl enable chronyd
$ systemctl start chronyd
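To confirm that time synchronization is actually working, chrony's sources can be inspected (a quick check, assuming chronyd is using its default pool servers):

```shell
# List the NTP sources chronyd is tracking; '^*' marks the selected server
$ chronyc sources
# Show the overall sync state (stratum, offset, leap status)
$ chronyc tracking
```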
- Install lvm2
$ yum install lvm2 -y
- Install python3 (skip this step if it is already installed)
- Configure hosts
$ cat /etc/hosts
192.168.2.144 node-01
192.168.2.230 node-02
192.168.2.60 node-03
- Inter-node ssh password-free login
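The original gives no commands for this step; a minimal sketch, assuming the key is generated on mgr-01 and root login is allowed on every node:

```shell
# Generate a key pair on mgr-01 (no passphrase)
$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# Copy the public key to each node so ssh no longer prompts for a password
$ ssh-copy-id root@node-01
$ ssh-copy-id root@node-02
$ ssh-copy-id root@node-03
```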
Install cephadm
- Download the installation script
$ curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
- Give cephadm execute permissions
$ chmod +x cephadm
- Add the cephadm yum repository
$ ./cephadm add-repo --release octopus
# Switch to the Aliyun mirror (optional)
$ sed -i 's#download.ceph.com#mirrors.aliyun.com/ceph#' /etc/yum.repos.d/ceph.repo
- install cephadm
# Import the release key
$ rpm --import 'https://download.ceph.com/keys/release.asc'
# Install cephadm
$ ./cephadm install
- Install the ceph toolkit
The ceph toolkit includes commands such as ceph, rbd, and mount.ceph.
$ cephadm install ceph-common
Deploy the cluster
1. Bootstrap the ceph cluster
$ cephadm bootstrap --mon-ip 192.168.2.15 --skip-monitoring-stack
--mon-ip: the IP address of the first (bootstrap) monitor host
--skip-monitoring-stack: skip deploying the monitoring components (prometheus, grafana, alertmanager, node-exporter); drop this flag if you want them installed
This command does the following
- Create monitor and manager daemons for the new cluster on the local host.
- Generate a new SSH key for the Ceph cluster and add it to the root user's /root/.ssh/authorized_keys file.
- Write the minimal configuration file needed to communicate with the new cluster into /etc/ceph/ceph.conf.
- Write a copy of the administrative (privileged!) client.admin key to /etc/ceph/ceph.client.admin.keyring.
- Write a copy of the public key to /etc/ceph/ceph.pub.
When the command succeeds, it prints the dashboard URL along with the initial admin user and password; follow those prompts to log in to the Ceph dashboard.
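At this point the single-node cluster can also be checked from the shell (a quick sanity check; HEALTH_WARN is expected until OSDs are added):

```shell
# Cluster health plus the running mon/mgr daemons
$ ceph -s
# List the containers cephadm has deployed on this host
$ ceph orch ps
```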
2. Add other nodes
$ ssh-copy-id -f -i /etc/ceph/ceph.pub root@node-01
$ ssh-copy-id -f -i /etc/ceph/ceph.pub root@node-02
$ ssh-copy-id -f -i /etc/ceph/ceph.pub root@node-03
$ ceph orch host add node-01
$ ceph orch host add node-02
$ ceph orch host add node-03
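Whether the three nodes were actually added can be verified with the orchestrator's host list (the hostnames must match the /etc/hosts entries configured earlier):

```shell
# All added hosts should appear with their addresses
$ ceph orch host ls
```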
3. Create OSD
Method 1: automatically create an OSD on every eligible (unused) device
$ ceph orch apply osd --all-available-devices
Method 2: Add OSD manually
$ ceph orch daemon add osd node-01:/dev/sdb
$ ceph orch daemon add osd node-02:/dev/sdb
$ ceph orch daemon add osd node-03:/dev/sdb
After the commands complete, check the result with the command below; a device whose Available column shows no has had an OSD created on it:
$ ceph orch device ls
Finally, check the state of the cluster deployment again through the dashboard.
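The same check can be made from the shell, which shows how the OSDs map onto hosts and how much raw capacity the cluster now has:

```shell
# CRUSH tree: each node should carry one 'up' OSD
$ ceph osd tree
# Raw and per-pool capacity usage
$ ceph df
```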