1. Introduction to Ceph Distributed File System
Ceph is a unified, distributed file system designed for excellent performance, reliability and scalability.
Ceph uniquely provides object, block, and file storage functions in a unified system.
The basic development goals are:
- Can be easily expanded to several petabytes of capacity
- High performance supporting multiple workloads (input/output operations per second [IOPS] and bandwidth)
- Highly reliable
Ceph is not only a file system, but also an object storage ecosystem with enterprise-level functions.
2. Ceph distributed file system construction
2.1 Environmental preparation
Prepare three CentOS 7 machines (virtual machines are fine):
| IP | hostname |
|---|---|
| 192.168.1.12 | node01 |
| 192.168.1.13 | node02 |
| 192.168.1.14 | node03 |
2.1.1 Turn off the firewall (node01, node02, node03)
# Check the firewall status
firewall-cmd --state
# Stop firewalld
systemctl stop firewalld.service
# Disable firewalld at boot
systemctl disable firewalld.service
2.1.2 Disable SELinux (node01, node02, node03)
Execute the command vi /etc/selinux/config and change the setting to "SELINUX=disabled".
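The same edit can be applied non-interactively with sed. A minimal sketch, run here against a local copy of the file (on a real node the target would be /etc/selinux/config):

```shell
# Sketch: switch SELINUX=enforcing to disabled without opening an editor.
# Uses a temp copy here; on a real node point this at /etc/selinux/config.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"
grep '^SELINUX=' "$cfg"   # → SELINUX=disabled
```

Note that the config change only takes full effect after a reboot; setenforce 0 can disable enforcement for the current session.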
2.1.3 Modify hostname (node01, node02, node03)
On each node, execute vi /etc/hostname and set the machine's own name (node01, node02, or node03).
Then execute vi /etc/hosts and add the following entries:
192.168.1.12 node01
192.168.1.13 node02
192.168.1.14 node03
After making these changes, execute reboot to restart the machine, then run the hostname command to verify the new machine name.
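The hosts entries above can also be appended in one pass, skipping any entry that is already present. A sketch against a temp file (on a real node the target is /etc/hosts):

```shell
# Sketch: append the three node entries to a hosts file only if missing,
# so running the snippet twice does not duplicate lines.
hosts=$(mktemp)
for entry in '192.168.1.12 node01' '192.168.1.13 node02' '192.168.1.14 node03'; do
  grep -qxF "$entry" "$hosts" || echo "$entry" >> "$hosts"
done
wc -l < "$hosts"   # → 3
```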
2.1.4 Modify the yum source (node01, node02, node03)
Use the Tsinghua mirror to speed up downloads.
Execute the command: vi /etc/yum.repos.d/ceph.repo and add the following content:
[Ceph]
name=Ceph packages for $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/x86_64/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
[Ceph-noarch]
name=Ceph noarch packages
# Tsinghua mirror
baseurl=https://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/noarch/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
[ceph-source]
name=Ceph source packages
baseurl=https://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/SRPMS/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
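Instead of pasting into an editor, a repo stanza can be written with a heredoc; the quoted delimiter keeps $basearch literal. A sketch for the first stanza, writing a temp file (on a real node use /etc/yum.repos.d/ceph.repo):

```shell
# Sketch: write the [Ceph] stanza with a heredoc instead of an editor.
# <<'EOF' (quoted) prevents the shell from expanding $basearch.
repo=$(mktemp)
cat > "$repo" <<'EOF'
[Ceph]
name=Ceph packages for $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/x86_64/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
EOF
grep -c '^baseurl=' "$repo"   # → 1
```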
2.1.5 Install ceph and ceph-deploy (node01)
First execute the following two commands (node01, node02, node03):
yum install epel-release -y
yum install lttng-ust -y
Execute the installation command (node01):
yum update && yum -y install ceph ceph-deploy
When the command finishes without errors, the installation is complete. The whole process can be quite slow, so be patient.
2.1.6 Install NTP time synchronization tool (node01, node02, node03)
1) Execute the command:
yum install ntp ntpdate ntp-doc -y
2) Enable ntpd at boot
systemctl enable ntpd
3) Sync the clock once at boot.
Execute the command vi /etc/rc.d/rc.local and add the following:
/usr/sbin/ntpdate ntp1.aliyun.com > /dev/null 2>&1; /sbin/hwclock -w
4) Set up an hourly scheduled task: execute crontab -e and add
0 */1 * * * ntpdate ntp1.aliyun.com > /dev/null 2>&1; /sbin/hwclock -w
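The cron entry can also be installed non-interactively and idempotently. A sketch that builds the new crontab content in a temp file (on a real node you would then load it with crontab followed by the file name):

```shell
# Sketch: merge the hourly ntpdate job into the existing crontab content.
# 'crontab -l' fails if no crontab exists yet, hence the '|| true';
# 'sort -u' keeps the line from being added twice on repeated runs.
line='0 */1 * * * ntpdate ntp1.aliyun.com > /dev/null 2>&1; /sbin/hwclock -w'
tab=$(mktemp)
( crontab -l 2>/dev/null || true; echo "$line" ) | sort -u > "$tab"
grep -c 'ntpdate' "$tab"   # → 1
```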
2.1.7 Passwordless SSH configuration (node01, node02, node03)
Create a cuser user whose password is also cuser (node01, node02, node03):
useradd -d /home/cuser -m cuser
passwd cuser
Set permissions (node01, node02, node03)
echo "cuser ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cuser
sudo chmod 0440 /etc/sudoers.d/cuser
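The two commands above can be sketched against a temp file to show what ends up on disk; on a real node the target is /etc/sudoers.d/cuser, and visudo -c can validate the syntax afterwards:

```shell
# Sketch: write the sudoers fragment and verify the required 0440 mode.
# Targets a temp file here; on a real node use /etc/sudoers.d/cuser.
frag=$(mktemp)
echo 'cuser ALL = (root) NOPASSWD:ALL' > "$frag"
chmod 0440 "$frag"
stat -c '%a' "$frag"   # → 440
```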
Switch to the cuser user and execute ssh-keygen to generate a key pair (node01):
su cuser
ssh-keygen
Distribute the key to all nodes: (node01)
ssh-copy-id cuser@node01
ssh-copy-id cuser@node02
ssh-copy-id cuser@node03
Switch to root user
su root
Execute the command mkdir ~/.ssh, then enter the .ssh directory, create a file named config, and add the following content to it:
Host node01
Hostname node01
User cuser
Host node02
Hostname node02
User cuser
Host node03
Hostname node03
User cuser
Modify file permissions:
chmod 600 ~/.ssh/config
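Since the three stanzas above differ only in the node name, the config file can be generated with a loop. A sketch writing a temp file (on a real node the target is ~/.ssh/config):

```shell
# Sketch: generate the per-node ssh config stanzas with a loop
# and apply the 600 mode required by ssh.
sshcfg=$(mktemp)
for node in node01 node02 node03; do
  printf 'Host %s\n  Hostname %s\n  User cuser\n' "$node" "$node" >> "$sshcfg"
done
chmod 600 "$sshcfg"
grep -c '^Host ' "$sshcfg"   # → 3
```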
2.2 Ceph cluster construction
2.2.1 Create a cluster management directory as the ceph configuration information storage directory
mkdir -p /usr/local/che/cephcluster
cd /usr/local/che/cephcluster
2.2.2 Create a cluster
ceph-deploy new node01 node02 node03
2.2.3 Modify the configuration file
Execute the command vi /usr/local/che/cephcluster/ceph.conf and add the following:
# Public network segment exposed to clients
public network = 192.168.1.0/24
# Default number of replicas per pool
osd pool default size = 2
# Tolerate more clock drift
mon clock drift allowed = 2
mon clock drift warn backoff = 30
# Allow pools to be deleted
mon_allow_pool_delete = true
[mgr]
# Enable the web dashboard
mgr modules = dashboard
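The additions above can also be appended with a heredoc. A sketch against a temp file (on the admin node the target is /usr/local/che/cephcluster/ceph.conf):

```shell
# Sketch: append the extra settings to ceph.conf with a heredoc
# instead of an interactive editor session.
conf=$(mktemp)
cat >> "$conf" <<'EOF'
public network = 192.168.1.0/24
osd pool default size = 2
mon clock drift allowed = 2
mon clock drift warn backoff = 30
mon_allow_pool_delete = true
[mgr]
mgr modules = dashboard
EOF
grep -c '^mon' "$conf"   # → 3
```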
Execute the installation command: ceph-deploy install node01 node02 node03
2.2.4 Initialize Monitor
ceph-deploy mon create-initial
2.2.5 Synchronizing management information
ceph-deploy admin node01 node02 node03
2.2.6 Install mgr
ceph-deploy mgr create node01 node02 node03
2.2.7 Install rgw
ceph-deploy rgw create node01 node02 node03
2.2.8 Install mds service
ceph-deploy mds create node01 node02 node03
2.2.9 Install OSD
Execute fdisk -l to view disk information, then
execute the commands to create the OSDs: (node01)
ceph-deploy osd create --data /dev/sdb node01
ceph-deploy osd create --data /dev/sdb node02
ceph-deploy osd create --data /dev/sdb node03
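The three commands above differ only in the node name, so they can be expressed as a loop. This sketch is a dry run that only echoes the commands; remove the echo (and run it from the cephcluster directory on node01) to execute them for real. Each node needs an unused /dev/sdb disk:

```shell
# Sketch: dry-run loop over the per-node ceph-deploy osd commands.
# The commands are printed, not executed, so this is safe anywhere.
cmds=$(for node in node01 node02 node03; do
  echo "ceph-deploy osd create --data /dev/sdb $node"
done)
echo "$cmds"
```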
Execute ceph -s to view the status information of the ceph cluster
2.2.10 Dashboard installation
1) Open the dashboard module
ceph mgr module enable dashboard
2) Generate a self-signed certificate
ceph dashboard create-self-signed-cert
3) Create a directory
mkdir -p /usr/local/che/cephcluster/mgr-dashboard
4) Generate a certificate and key pair in that directory
openssl req -new -nodes -x509 -subj "/O=IT/CN=ceph-mgr-dashboard" -days 3650 -keyout dashboard.key -out dashboard.crt -extensions v3_ca
5) Restart the dashboard module
ceph mgr module disable dashboard
ceph mgr module enable dashboard
6) Set IP and PORT
ceph config set mgr mgr/dashboard/server_addr 192.168.1.12
ceph config set mgr mgr/dashboard/server_port 8443
7) Turn off HTTPS
ceph config set mgr mgr/dashboard/ssl false
8) View service information
ceph mgr services
9) Set the administrator account password
ceph dashboard set-login-credentials admin admin
10) Visit http://192.168.1.12:8443/ in the browser (the address set in step 6; HTTPS was disabled in step 7).
11) RGW visit http://192.168.1.12:7480/
2.3 Cephfs management
2.3.1 Create two storage pools (the trailing numbers are placement-group counts)
ceph osd pool create cephfs_data 128
ceph osd pool create cephfs_metadata 64
View storage pool
ceph osd lspools
2.3.2 Create a filesystem named fs_demo01
ceph fs new fs_demo01 cephfs_metadata cephfs_data
Check status:
ceph fs ls
ceph mds stat
2.3.3 fuse mount
# Install ceph-fuse
yum -y install ceph-fuse
# Create the mount point
mkdir -p /usr/local/che/cephfs_directory
# Mount cephfs
ceph-fuse -k /etc/ceph/ceph.client.admin.keyring -m 192.168.1.12:6789 /usr/local/che/cephfs_directory
View mount information
df -h