Linux: Ceph distributed deployment

Table of contents

1. Overview

Features

1. Unified storage

2. High scalability

3. Strong reliability

4. High performance

Components

1. Monitor

2. OSD

3. MDS

4. Object

5. PG

6. RADOS

7. Librados

8. CRUSH

9. RBD

10. RGW

11. CephFS

Architecture diagram

2. Preparation work

3. Ceph installation

Create cluster directory

Modify configuration file

Install

Initialize monitor

Synchronize management information

Install mgr (management daemon)

Install rgw

Create mds service

OSD installation, create OSD

View cluster status

4. Dashboard installation

Open the dashboard module

Generate signature

Create a directory

Start service

Set access address and port

Turn off https

View ceph address

Set user and password

Access test

Create a storage pool in the ceph file system

View storage pool

Create file system

View file system

View mds status

5. Client mounting

centos1 operation

centos4 operation


1. Overview

        Ceph is a unified distributed storage system designed to provide excellent performance, reliability and scalability.

Features
1. Unified storage

        Although the bottom layer of Ceph is a distributed object store (RADOS), object, block and file interfaces have been developed on top of it, so a single cluster can cover all three kinds of storage. That versatility is why it dominates open-source storage software; whether it can hold that position forever is anyone's guess.

2. High scalability

        Expansion is easy and capacity is huge: a cluster can manage thousands of servers and EB-level storage.

3. Strong reliability

        Supports multiple strongly consistent replicas as well as erasure coding (EC). Replicas can be placed across hosts, racks, rooms and data centers, so data stays safe and reliable. Storage nodes manage and repair themselves automatically, so there is no single point of failure and fault tolerance is strong.

4. High performance

        Because there are multiple replicas, reads and writes can be highly parallelized. In theory, the more nodes in the cluster, the higher its overall IOPS and throughput. Another reason is that the Ceph client reads and writes data by talking directly to the storage devices (OSDs), without going through a central data path.

Components
1. Monitor

A Ceph cluster requires a small quorum made up of several Monitors, which synchronize cluster state through Paxos and store metadata such as the OSD map.
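
As an illustration, once the cluster built later in this article is running, monitor status and quorum can be checked with the commands below (output omitted):

ceph mon stat                             # one-line summary of the monitors
ceph quorum_status --format json-pretty   # which monitors are currently in quorum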

2. OSD

The full name of OSD is Object Storage Device, which is the process responsible for returning specific data in response to client requests. A Ceph cluster generally has many OSDs.

3. MDS

The full name of MDS is Ceph Metadata Server; it is the metadata service that the CephFS service depends on.

4. Object

The lowest-level storage unit in Ceph is the Object; each Object contains metadata and the original data.

5. PG

The full name of PG is Placement Group, which is a logical concept. A PG maps to multiple OSDs. The PG layer was introduced to distribute and locate data more effectively.
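
To see which PG and which OSDs a given object maps to, the cluster can be queried directly. A minimal sketch (the pool cephfs_data is created later in this article; the object name myfile is just an example):

ceph osd map cephfs_data myfile   # prints the PG id and the acting set of OSDs for this object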

6. RADOS

The full name of RADOS is Reliable Autonomic Distributed Object Store; it is the core of a Ceph cluster and provides cluster capabilities such as data distribution and failover.

7. Librados

Librados is the client library for RADOS. Because the RADOS protocol is difficult to access directly, the upper-layer RBD, RGW and CephFS all access the cluster through librados. Bindings currently exist for PHP, Ruby, Java, Python, C and C++.
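
The rados command-line tool is built on librados and is a convenient way to talk to RADOS directly. A minimal sketch, assuming a test pool named mypool and a local file test.txt:

ceph osd pool create mypool 64        # create a small test pool
rados -p mypool put obj1 ./test.txt   # store the local file as object obj1
rados -p mypool ls                    # list objects in the pool
rados -p mypool get obj1 ./out.txt    # read the object back into a file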

8. CRUSH

CRUSH is a data distribution algorithm used by Ceph, which is similar to consistent hashing and allows data to be distributed to expected places.
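
The CRUSH hierarchy and rules a cluster is actually using can be inspected like this (output omitted):

ceph osd crush tree      # show the CRUSH hierarchy (root, host and osd buckets)
ceph osd crush rule ls   # list the configured CRUSH rules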

9. RBD

The full name of RBD is RADOS block device, which is a block device service provided by Ceph.
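
RBD is not used in the rest of this article, but for illustration a block image can be created and mapped roughly as follows (a sketch: rbd_pool and disk01 are example names, and on older el7 kernels some image features usually have to be disabled before mapping):

ceph osd pool create rbd_pool 64
rbd pool init rbd_pool                   # mark the pool for RBD use
rbd create rbd_pool/disk01 --size 1024   # create a 1 GiB image
rbd feature disable rbd_pool/disk01 object-map fast-diff deep-flatten   # features unsupported by older kernels
rbd map rbd_pool/disk01                  # exposes the image as /dev/rbd0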

10. RGW

RGW, the full name of RADOS gateway, is an object storage service provided by Ceph. The interface is compatible with S3 and Swift.
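
Once rgw is deployed later in this article, an S3-style user can be created with radosgw-admin; the printed access and secret keys can then be used with any S3-compatible client (the uid and display name below are example values):

radosgw-admin user create --uid=testuser --display-name="Test User"   # prints access_key and secret_key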

11. CephFS

CephFS, the full name of Ceph File System, is a file system service provided by Ceph to the outside world.

Architecture diagram

1: To upload a file, the client first slices it into N objects (if CephFS is used, the MDS cache can be involved)
2: The sliced objects are stored in Ceph
3: Before an object is stored, the CRUSH algorithm calculates which PG it belongs to
4: A PG is a logical index that partitions the storage range
5: Based on the PG mapping, the object is written to the OSDs on the designated servers

2. Preparation work

centos1    monitor, osd    192.168.100.3
centos2    osd             192.168.100.4
centos3    osd             192.168.100.5
centos4    client          192.168.100.6

1. Turn off the firewall

systemctl stop firewalld.service 
systemctl disable firewalld.service 

2. Close the graphical network manager

systemctl stop NetworkManager
systemctl disable NetworkManager

3. Configure static IP

sed -i "s/ONBOOT=no/ONBOOT=yes/" /etc/sysconfig/network-scripts/ifcfg-ens33
systemctl restart network

4. Close selinux

setenforce 0

5. Modify the host name

hostnamectl set-hostname centos1    # on 192.168.100.3; set centos2/centos3/centos4 on the other hosts accordingly

6. Modify sshd settings (disable DNS lookups to speed up SSH logins)

sed -i "s/#UseDNS yes/UseDNS no/" /etc/ssh/sshd_config
systemctl restart sshd

7. SSH password-free setting

centos1
    ssh-keygen
    for i in 3 4 5 6 ; do ssh-copy-id root@192.168.100.$i; done
centos2
    ssh-keygen
    for i in 3 4 5 6 ; do ssh-copy-id root@192.168.100.$i; done
centos3
    ssh-keygen
    for i in 3 4 5 6 ; do ssh-copy-id root@192.168.100.$i; done
centos4
    ssh-keygen
    for i in 3 4 5 6 ; do ssh-copy-id root@192.168.100.$i; done

8. Modify hosts file

vim /etc/hosts
    192.168.100.3   centos1
    192.168.100.4    centos2
    192.168.100.5    centos3
    192.168.100.6    centos4
for i in 3 4 5 6;do scp /etc/hosts 192.168.100.$i:/etc/;done


9. Time synchronization

yum install -y ntp
vim /etc/ntp.conf
    server 127.127.1.0              # use the local clock as the time source
    fudge  127.127.1.0 stratum 8    # set the stratum level

systemctl start ntpd
systemctl enable ntpd

for i in 4 5 6 ;do ssh 192.168.100.$i  ntpdate 192.168.100.3;done
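
Optionally, the other nodes can be kept in sync on a schedule instead of relying on a one-off ntpdate run. A minimal sketch, assuming the ntpdate package is installed on centos2-centos4 (the 10-minute interval is just an example):

# On centos2, centos3 and centos4: resync from centos1 every 10 minutes
echo "*/10 * * * * /usr/sbin/ntpdate 192.168.100.3 >/dev/null 2>&1" >> /var/spool/cron/root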

10. Add disks and hot-scan the SCSI bus

cd /sys/class/scsi_host

for i in `ls`;do echo "- - -" > $i/scan;done
lsblk    # centos1, centos2 and centos3 all need to do this

11. Disk formatting

mkfs.xfs /dev/sdb

3. Ceph installation

yum install epel-release -y
yum install lttng-ust -y

vim /etc/yum.repos.d/ceph.repo

[Ceph]
name=Ceph packages for $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/x86_64/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[Ceph-noarch]
name=Ceph noarch packages
# Tsinghua mirror
baseurl=https://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/noarch/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=https://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/SRPMS/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

yum -y install ceph ceph-deploy 

Create cluster directory

mkdir -p /usr/local/soft/cephcluster
cd /usr/local/soft/cephcluster

ceph-deploy new centos1 centos2 centos3

Modify configuration file

vim ceph.conf

#Open network segment
public network = 192.168.100.0/24
# Set the default allocation number of the pool pool
osd pool default size = 2
# Tolerate more clock errors
mon clock drift allowed = 2
mon clock drift warn backoff = 30
# Allow deletion of the pool
mon_allow_pool_delete = true
[mgr]
# Enable WEB dashboard
mgr modules = dashboard

Install

ceph-deploy install centos1 centos2 centos3    # requires a good network connection

Initialize monitor

ceph-deploy mon create-initial 

Synchronize management information

ceph-deploy admin  centos1 centos2 centos3

Install mgr (management daemon)

ceph-deploy mgr create  centos1 centos2 centos3

Install rgw

ceph-deploy rgw create centos1 centos2 centos3

Create mds service

ceph-deploy mds create centos1 centos2 centos3

OSD installation, create OSD

ceph-deploy osd create --data /dev/sdb centos1
ceph-deploy osd create --data /dev/sdb centos2
ceph-deploy osd create --data /dev/sdb centos3
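
To confirm that all three OSDs came up and joined the cluster, the following quick checks can be run on centos1 (output omitted):

ceph osd tree   # osd.0, osd.1 and osd.2 should show as "up" under their hosts
ceph df         # raw capacity and per-pool usage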

View cluster status

ceph -s

4. Dashboard installation

Open the dashboard module

ceph mgr module enable dashboard

Generate signature

ceph dashboard create-self-signed-cert

Create a directory

mkdir -p /usr/local/jx/cephcluster/mgr-dashboard

cd /usr/local/jx/cephcluster/mgr-dashboard 

openssl req -new -nodes -x509   -subj "/O=IT/CN=ceph-mgr-dashboard" -days 3650   -keyout dashboard.key -out dashboard.crt -extensions v3_ca

Start service

ceph mgr module disable dashboard
ceph mgr module enable dashboard

Set access address and port

ceph config set mgr mgr/dashboard/server_addr 192.168.100.3
ceph config set mgr mgr/dashboard/server_port 9001

Turn off https

ceph config set mgr mgr/dashboard/ssl false

View ceph address

ceph mgr services

Set user and password

ceph dashboard set-login-credentials jx123 123.com

Access test

http://192.168.100.3:9001    # matches the address and port configured above (https is disabled)

Create storage pools for the Ceph file system

Fewer than 5 OSDs: set pg_num to 128.
Between 5 and 10 OSDs: set pg_num to 512.
Between 10 and 50 OSDs: set pg_num to 4096.
More than 50 OSDs: you need to calculate the value of pg_num yourself (a worked example follows).
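
A common rule of thumb from the Ceph documentation is: total PGs ≈ (number of OSDs × 100) / replica count, rounded to a power of two. A quick sanity check for this cluster (3 OSDs, osd pool default size = 2):

echo $(( 3 * 100 / 2 ))   # 150 -> rounds to the power of two 128, matching cephfs_data below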

cd /usr/local/soft/cephcluster

ceph osd pool create cephfs_data 128

ceph osd pool create cephfs_metadata 64

View storage pool

ceph osd lspools

Create file system

ceph fs new  fs_test  cephfs_metadata cephfs_data

View file system

ceph fs ls

View mds status

ceph mds stat

5. Client mounting

centos1 operation

Install

ceph-deploy install centos4

Synchronize management information

ceph-deploy admin centos4

centos4 operation

yum install -y ceph-fuse

View information

ls  /etc/ceph
ceph.client.admin.keyring

Create the mount directory

mkdir /ceph

Mount the ceph file system

ceph-fuse -k /etc/ceph/ceph.client.admin.keyring -m 192.168.100.3:6789 /ceph
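
To verify the mount, and as an alternative to ceph-fuse, the kernel CephFS client can also be used. A sketch, assuming the admin key is first extracted into /etc/ceph/admin.secret on centos4:

df -hT /ceph                                              # the ceph-fuse mount should appear here
ceph auth get-key client.admin > /etc/ceph/admin.secret   # works on centos4 because the admin keyring was synced
umount /ceph                                              # only if switching from the fuse mount to the kernel client
mount -t ceph 192.168.100.3:6789:/ /ceph -o name=admin,secretfile=/etc/ceph/admin.secret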


Origin: blog.csdn.net/a872182042/article/details/133144741