A Detailed Guide to Building a Ceph Distributed File System Cluster

1. Introduction to Ceph Distributed File System

Ceph is a unified, distributed storage system designed for excellent performance, reliability, and scalability.
It uniquely provides object, block, and file storage in a single unified system.

The basic development goals are:

  1. Scale easily to petabytes of capacity and beyond
  2. High performance across multiple workloads (input/output operations per second [IOPS] and bandwidth)
  3. High reliability

Ceph is not only a file system, but an object storage ecosystem with enterprise-grade features.


2. Ceph distributed file system construction

2.1 Environmental preparation

Prepare three CentOS 7 machines (virtual machines are fine):

IP            Hostname
192.168.1.12  node01
192.168.1.13  node02
192.168.1.14  node03

2.1.1 Turn off the firewall (node01, node02, node03)

# Check the firewall status
firewall-cmd --state

# Stop firewalld
systemctl stop firewalld.service

# Disable firewalld at boot
systemctl disable firewalld.service
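
In a lab environment, disabling the firewall is the simplest route. If you would rather keep firewalld running, a sketch of the alternative is to open only the ports Ceph uses (monitor 6789, OSD/MGR range 6800-7300, RGW 7480, dashboard 8443):

# Keep firewalld and open only the Ceph ports instead
firewall-cmd --permanent --add-port=6789/tcp
firewall-cmd --permanent --add-port=6800-7300/tcp
firewall-cmd --permanent --add-port=7480/tcp
firewall-cmd --permanent --add-port=8443/tcp
firewall-cmd --reload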


2.1.2 Close selinux (node01, node02, node03)

Execute the command vi /etc/selinux/config and change the line to "SELINUX=disabled". The change takes effect after a reboot.
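
To stop SELinux enforcement immediately, without waiting for the reboot, you can also switch it to permissive mode for the current session:

setenforce 0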


2.1.3 Modify hostname (node01, node02, node03)

Execute vi /etc/hostname and set the machine's name (node01 on the first machine, node02 and node03 on the others). Then execute vi /etc/hosts and add the following entries:

192.168.1.12  node01
192.168.1.13  node02
192.168.1.14  node03

After making the changes above, reboot each machine, then run the hostname command to confirm the new machine name.
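
As an aside, hostnamectl changes the hostname immediately, with no reboot needed for that part (run the matching command on each node):

hostnamectl set-hostname node01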

2.1.4 Modify yum source (node01, node02, node03)

Use the Tsinghua mirror to speed up package downloads.
Execute the command: vi /etc/yum.repos.d/ceph.repo and add the following content:

[Ceph]
name=Ceph packages for $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/x86_64/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
[Ceph-noarch]
name=Ceph noarch packages
# Tsinghua mirror
baseurl=https://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/noarch/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
[ceph-source]
name=Ceph source packages
baseurl=https://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/SRPMS/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
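
After saving the repo file, refresh the yum cache so the new mirror is picked up:

yum clean all
yum makecache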


2.1.5 Install ceph and ceph-deploy (node01)

First execute the following two commands (node01, node02, node03):

yum install epel-release -y
yum install lttng-ust -y

Execute the installation command (node01):

yum update && yum -y install ceph ceph-deploy

The whole process is a bit slow; don't worry, be patient!
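
To confirm the installation succeeded, check the versions (the Mimic release used here is 13.2.x):

ceph --version
ceph-deploy --version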

2.1.6 Install NTP time synchronization tool (node01, node02, node03)

1) Execute the command:

yum install ntp ntpdate ntp-doc -y

2) Enable ntpd at boot

systemctl enable ntpd

3) Sync the clock at boot.
Execute the command vi /etc/rc.d/rc.local and add the following:

/usr/sbin/ntpdate ntp1.aliyun.com > /dev/null 2>&1; /sbin/hwclock -w

4) Sync every hour via cron: execute crontab -e and add the following entry

0 */1 * * * ntpdate ntp1.aliyun.com > /dev/null 2>&1; /sbin/hwclock -w
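
Start the service and verify that ntpd is reaching its upstream servers (a peer marked with * is the one being synced against):

systemctl start ntpd
ntpq -p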

2.1.7 Passwordless SSH configuration (node01, node02, node03)

Create a user named cuser with password cuser (node01, node02, node03):

useradd -d /home/cuser -m cuser
passwd cuser

Grant the user passwordless sudo (node01, node02, node03):

echo "cuser ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cuser
sudo chmod 0440 /etc/sudoers.d/cuser


Switch to the cuser user and run ssh-keygen to generate a key pair (node01):

su cuser
ssh-keygen
Distribute the public key to all nodes (node01):

ssh-copy-id cuser@node01
ssh-copy-id cuser@node02
ssh-copy-id cuser@node03
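
A quick check that passwordless login works; each command should print the remote hostname without asking for a password:

ssh cuser@node02 hostname
ssh cuser@node03 hostname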

Switch back to the root user:

su root

Execute mkdir ~/.ssh, then create the file ~/.ssh/config with the following content, so that ssh (and therefore ceph-deploy) logs in to the nodes as cuser:

Host node01
    Hostname node01
    User cuser
Host node02
    Hostname node02
    User cuser
Host node03
    Hostname node03
    User cuser

Modify file permissions:

chmod 600 ~/.ssh/config


2.2 Ceph cluster construction

2.2.1 Create a cluster management directory to hold the Ceph configuration

mkdir -p /usr/local/che/cephcluster
cd /usr/local/che/cephcluster

2.2.2 Create a cluster

ceph-deploy new node01 node02 node03
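
ceph-deploy new writes the initial cluster files into the current directory; it should now contain ceph.conf, ceph.mon.keyring, and the ceph-deploy log:

ls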


2.2.3 Modify the configuration file

Execute the command: vi /usr/local/che/cephcluster/ceph.conf, add the following

# Public (client-facing) network segment
public network = 192.168.1.0/24
# Default number of replicas per pool
osd pool default size = 2
# Tolerate more clock drift between monitors
mon clock drift allowed = 2
mon clock drift warn backoff = 30
# Allow pools to be deleted
mon_allow_pool_delete = true
[mgr]
# Enable the web dashboard
mgr modules = dashboard

Then execute the installation command, which installs the Ceph packages on every node:

ceph-deploy install node01 node02 node03

2.2.4 Initialize Monitor

ceph-deploy mon create-initial
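
On success, ceph-deploy gathers the admin and bootstrap keyrings into the current directory:

ls *.keyring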


2.2.5 Distribute the configuration and admin keyring

ceph-deploy admin node01 node02 node03
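
This copies ceph.conf and the admin keyring to /etc/ceph on each node. To let non-root users run ceph commands, the official quick start also makes the keyring readable:

chmod +r /etc/ceph/ceph.client.admin.keyring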

2.2.6 Install mgr

ceph-deploy mgr create node01 node02 node03


2.2.7 Install rgw

ceph-deploy rgw create node01 node02 node03


2.2.8 Install the mds service

ceph-deploy mds create node01 node02 node03


2.2.9 Install OSDs

Execute fdisk -l to confirm that each node has a spare disk (/dev/sdb here), then create the OSDs (run from node01):

ceph-deploy osd create --data /dev/sdb node01
ceph-deploy osd create --data /dev/sdb node02
ceph-deploy osd create --data /dev/sdb node03
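
If ceph-deploy refuses a disk because it already has a partition table or data on it, wipe it first (this destroys everything on the disk) and retry:

ceph-deploy disk zap node01 /dev/sdb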

Execute ceph -s to view the status of the Ceph cluster; health should report HEALTH_OK once all three OSDs are up and in.

2.2.10 Dashboard installation

1) Enable the dashboard module

ceph mgr module enable dashboard

2) Generate a self-signed certificate

ceph dashboard create-self-signed-cert

3) Create a directory

mkdir -p /usr/local/che/cephcluster/mgr-dashboard

4) Generate a key pair (run inside the mgr-dashboard directory created above)

cd /usr/local/che/cephcluster/mgr-dashboard
openssl req -new -nodes -x509 -subj "/O=IT/CN=ceph-mgr-dashboard" -days 3650 -keyout dashboard.key -out dashboard.crt -extensions v3_ca

5) Restart the dashboard module so it picks up the new certificate

ceph mgr module disable dashboard
ceph mgr module enable dashboard

6) Set the IP and port

ceph config set mgr mgr/dashboard/server_addr 192.168.1.12
ceph config set mgr mgr/dashboard/server_port 8443

7) Turn off HTTPS

ceph config set mgr mgr/dashboard/ssl false

8) View service information

ceph mgr services


9) Set the administrator account password

ceph dashboard set-login-credentials admin admin

10) Visit https://192.168.1.12:8443/ in the browser (the address set in step 6; if you turned HTTPS off in step 7, use http:// instead) and log in with the credentials from the previous step.

11) Verify RGW by visiting http://192.168.1.12:7480/ (7480 is the default RGW port).


2.3 CephFS management

2.3.1 Create two storage pools (one for data, one for metadata)

ceph osd pool create cephfs_data 128
ceph osd pool create cephfs_metadata 64
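
The trailing numbers are placement-group (PG) counts. A common rule of thumb is roughly 100 PGs per OSD divided by the replica count: here (3 OSDs × 100) / 2 replicas = 150 total PGs, split across the pools and rounded to powers of two, which is why 128 for the data pool and 64 for the metadata pool is a reasonable starting point for this cluster.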

View the storage pools:

ceph osd lspools


2.3.2 Create a file system named fs_demo01

ceph fs new fs_demo01 cephfs_metadata cephfs_data

Check the status:

ceph fs ls
ceph mds stat


2.3.3 FUSE mount

# Install ceph-fuse
yum -y install ceph-fuse

# Create the mount point
mkdir -p /usr/local/che/cephfs_directory

# Mount CephFS (6789 is the monitor port on node01)
ceph-fuse -k /etc/ceph/ceph.client.admin.keyring -m 192.168.1.12:6789 /usr/local/che/cephfs_directory

View the mount information:

df -h

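To remount automatically after a reboot, a ceph-fuse entry can be added to /etc/fstab; a sketch following the format in the Ceph documentation, assuming the admin keyring sits in /etc/ceph:

none  /usr/local/che/cephfs_directory  fuse.ceph  ceph.id=admin,_netdev,defaults  0 0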

Source: blog.csdn.net/ytangdigl/article/details/115256447