CentOS 7 OpenStack Installation - 11. Ceph Distributed Storage Architecture Deployment

11.0 Overview

  • As OpenStack has become the standard software stack for open-source cloud computing, Ceph has become the first choice for OpenStack back-end storage. Ceph is a unified, distributed storage system designed for excellent performance, reliability, and scalability.

  • Ceph official documentation: http://docs.ceph.org.cn/

  • Ceph Chinese open-source community: http://ceph.org.cn/

  • Ceph is an open-source distributed storage system. Because it supports both block storage and object storage, it is a natural fit as the storage backend for cloud frameworks such as OpenStack and CloudStack. It can of course also be deployed on its own, for example as an object storage cluster, SAN storage, or NAS storage.

1) Storage types Ceph supports

  1. Object storage: i.e. radosgw, with an S3-compatible interface. Files are uploaded and downloaded through a REST API.

  2. File system: a POSIX interface. The Ceph cluster can be mounted locally and used as a shared file system.

  3. Block storage: i.e. RBD. It can be used through either the kernel rbd module or librbd, and it supports snapshots and clones. To the host, an RBD device has the same usage and purpose as a local hard disk. For example, in OpenStack, Ceph block devices can serve as the backend storage for OpenStack (see the RBD sketch after this list).
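A minimal sketch of basic RBD usage, just to illustrate the block interface (the pool name rbd-test and image name test-image are hypothetical; this guide itself only deploys CephFS):

# Create a pool for RBD images, then create and list a 1 GiB image
ceph osd pool create rbd-test 128
rbd create test-image --size 1024 --pool rbd-test
rbd ls rbd-test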

2) What advantages does Ceph have over other distributed storage systems?

  1. Unified storage: although the bottom layer of Ceph is a distributed file system, object and block interfaces have been developed on top of it, so a single open-source storage product can cover all three storage types. Whether it will stay dominant in the long run, nobody knows.

  2. High scalability: easy to expand, with large capacity. It can manage thousands of servers and EB-level capacity.

  3. High reliability: supports multiple replicas with strong consistency as well as erasure coding (EC). Replicas can be placed across hosts, racks, server rooms, and data centers, so data is stored safely and reliably. Storage nodes are self-managing and self-healing, with no single point of failure and good fault tolerance.

  4. High performance: because there are multiple replicas, read and write operations can be highly parallelized. In theory, the more nodes, the higher the IOPS and throughput of the whole cluster. In addition, Ceph clients read and write data by interacting directly with the storage devices (OSDs).

3) Introduction to Ceph components

  • Ceph OSDs: the Ceph OSD daemon (Ceph OSD) stores data, handles data replication, recovery, backfilling, and rebalancing, and provides monitoring information to the Ceph Monitors by checking the heartbeats of other OSD daemons. When a Ceph storage cluster is configured with two replicas, at least two OSD daemons are needed for the cluster to reach the active+clean state (Ceph defaults to three replicas, but the number can be adjusted).
  • Monitors: a Ceph Monitor maintains the maps that describe the cluster state, including the monitor map, the OSD map, the placement group (PG) map, and the CRUSH map. Ceph keeps a history (called an epoch) of every state change that occurs on the Monitors, OSDs, and PGs.
  • MDSs: the Ceph Metadata Server (MDS) stores metadata for the Ceph file system (Ceph block storage and Ceph object storage do not use MDS). The metadata server lets POSIX file system users run basic commands such as ls and find without putting a burden on the Ceph storage cluster. (Each of these components can be inspected with the status commands sketched after this list.)
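Once the cluster from the deployment below is running, each of these components can be checked with standard status commands; a quick reference (assuming the admin keyring is available on the node where they are run):

ceph mon stat     # monitor quorum status
ceph osd tree     # OSD layout and up/down, in/out state
ceph mds stat     # metadata server state
ceph pg stat      # placement group summary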

10.1, Ceph cluster deployment experiment

1) Disable SELinux and turn off the firewall

# Disable SELinux
setenforce  0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

systemctl stop firewalld.service
systemctl disable firewalld.service

2) Prepare the hosts (edit the hosts file)

controller    192.168.182.143   admin,osd,mon    # management and monitoring node
compute1      192.168.182.142   osd,mds
compute2      192.168.182.128   osd,mds

# controller serves as the admin, osd, and mon node
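A sketch of the matching /etc/hosts entries, to be appended on all three nodes (hostnames and IPs as listed above):

cat >> /etc/hosts << EOF
192.168.182.143    controller
192.168.182.142    compute1
192.168.182.128    compute2
EOF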

3) Add a new disk to each of the three experiment servers, create the directories /var/local/osd{0,1,2}, and mount the disks

# controller
mkfs.xfs  /dev/sdc
mkdir -p /var/local/osd0
mount /dev/sdc  /var/local/osd0/

# compute1
mkfs.xfs  /dev/sdb
mkdir -p /var/local/osd1
mount /dev/sdb  /var/local/osd1/

# compute2
mkfs.xfs  /dev/sdb
mkdir -p /var/local/osd2
mount /dev/sdb  /var/local/osd2/

If the new disk device is not visible, reboot the virtual machine, or run the following command to rescan the SCSI bus. First check how many hostX directories exist under /sys/class/scsi_host/.

for host in /sys/class/scsi_host/host*; do echo '- - -' > "$host/scan"; done
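Note that the mounts above do not survive a reboot. A sketch of the /etc/fstab entries that would make them persistent, assuming the device names stay the same (add only the matching line on each host):

# controller
/dev/sdc    /var/local/osd0    xfs    defaults    0 0
# compute1
/dev/sdb    /var/local/osd1    xfs    defaults    0 0
# compute2
/dev/sdb    /var/local/osd2    xfs    defaults    0 0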

4) Passwordless SSH login

# On the management node, use ssh-keygen to generate an SSH key and distribute it to each node

# controller
ssh-keygen
ssh-copy-id controller
ssh-copy-id compute1
ssh-copy-id compute2
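A quick check that passwordless login works from the management node (hostnames as defined in the hosts file above):

for node in controller compute1 compute2; do ssh $node hostname; done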

5) Install the ceph-deploy tool on the management node

# Add the yum repo file (every node needs this yum source)

cat << 'EOF' > /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/x86_64
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc
priority=1
 
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc
priority=1
 
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc
priority=1
EOF
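Since every node needs this yum source, one way to distribute the repo file from the management node is a simple scp loop; a sketch assuming the passwordless SSH set up in step 4:

for node in compute1 compute2; do
    scp /etc/yum.repos.d/ceph.repo $node:/etc/yum.repos.d/
done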

# Refresh the yum cache and install the ceph-deploy management tool

yum clean all && yum makecache
yum -y install ceph-deploy

6) Create a monitor service

# controller
mkdir /etc/ceph && cd /etc/ceph

# Create a new cluster with the mon on the controller node

ceph-deploy new controller

# This generates configuration files in the current directory

[root@controller ceph]# ls
ceph.conf  ceph.log  ceph.mon.keyring

# A Ceph configuration file, a monitor keyring, and a log file

7) Modify the number of replicas

# Change the default number of replicas in the configuration file from 3 to 2, so that the cluster can reach the active+clean state with only two OSDs. Add this line to the [global] section (optional).

$ vim ceph.conf
[global]
fsid = 92f5581d-79d2-4c9f-a523-4965eedc846b
mon_initial_members = controller
mon_host = 192.168.182.143
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_pool_default_size = 2

# Install ceph on all nodes

ceph-deploy install controller compute1 compute2

# Create the ceph monitor

ceph-deploy mon create controller

# Gather the keyring files from the node

ceph-deploy  gatherkeys controller

8) Deploy the osd service

# Add osd nodes (run on all osd nodes)

# The directories /var/local/osd{id} were already created above for this experiment

8.1) Create and activate the osd

# Create osd

# controller
ceph-deploy osd prepare controller:/var/local/osd0 compute1:/var/local/osd1 compute2:/var/local/osd2

# Activate osd 

ceph-deploy osd activate controller:/var/local/osd0 compute1:/var/local/osd1 compute2:/var/local/osd2

# If activation reports a permission error, the solution is to add permissions on each node:

chmod 777 -R /var/local/osd0/
chmod 777 -R /var/local/osd1/
chmod 777 -R /var/local/osd2/

# View status

ceph-deploy osd list  controller compute1 compute2

9) Unified configuration

# Use ceph-deploy to copy the configuration file and admin key to all nodes, so that ceph commands can be run on any node without specifying the monitor address or ceph.client.admin.keyring

ceph-deploy admin controller compute1 compute2

# On each node, adjust the permissions of ceph.client.admin.keyring

chmod +r /etc/ceph/ceph.client.admin.keyring

# View osd state

ceph health    # or: ceph -s
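Besides ceph health and ceph -s, a couple of other useful checks to confirm that all three OSDs are up and in:

ceph osd tree    # shows each OSD and its up/down, in/out state
ceph df          # cluster-wide and per-pool space usage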

10) Deploy the mds service

ceph-deploy mds create compute1 compute2

# View status

ceph mds stat

# Cluster status

ceph -s

11) Create a ceph file system

# View

ceph fs ls

# Create a storage pool

ceph osd pool create cephfs_data <pg_num> 
ceph osd pool create cephfs_metadata <pg_num>

Where <pg_num> = 128 in this deployment.
Specifying pg_num when creating a storage pool is mandatory, because it cannot be calculated automatically. Some commonly used values:
  * fewer than 5 OSDs: set pg_num to 128
  * 5 to 10 OSDs: set pg_num to 512
  * 10 to 50 OSDs: set pg_num to 4096
  * more than 50 OSDs: you need to understand the trade-offs and calculate a pg_num value yourself
  * the pgcalc tool can help you calculate a pg_num value
As the number of OSDs grows, choosing the right pg_num becomes more important, because it significantly affects the behavior of the cluster as well as data durability when things go wrong (i.e., the probability that a catastrophic event causes data loss).
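With only three OSDs in this experiment (fewer than 5), pg_num = 128 is the suggested value, so the two commands above become:

ceph osd pool create cephfs_data 128
ceph osd pool create cephfs_metadata 128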

11.1, Create a file system

# After the storage pools are created, you can create a file system with the fs new command, where <fs_name> = cephfs (customizable)

ceph fs new <fs_name> cephfs_metadata cephfs_data 
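With <fs_name> = cephfs, the concrete command is:

ceph fs new cephfs cephfs_metadata cephfs_data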

# View the cephfs after it has been created

ceph fs ls

# View the mds node status: one is active, the other is in hot-standby state

ceph mds stat

11.2, Mount the Ceph file system (on compute1)

A: Mount the Ceph file system with the kernel driver

# Create a mount point

mkdir /opt
# Save the key (if ceph-deploy did not copy the ceph configuration files from the management node)
# cat /etc/ceph/ceph.client.admin.keyring
# Copy the value of the key and save it to the file /etc/ceph/admin.secret (see the sketch below).
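A minimal sketch of extracting just the key value into /etc/ceph/admin.secret (assuming the default client.admin entry in the keyring):

# Print the third field of the "key = ..." line and store it as the secret file
awk '/key = /{print $3}' /etc/ceph/ceph.client.admin.keyring > /etc/ceph/admin.secret
chmod 600 /etc/ceph/admin.secret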
# Mount
mount -t ceph 192.168.182.143:6789:/ /opt -o name=admin,secretfile=/etc/ceph/admin.secret

# Unmount

umount /opt
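If the kernel mount should come back after a reboot, a sketch of the matching /etc/fstab line (same monitor address and secret file as above; _netdev ensures the network is up before mounting):

192.168.182.143:6789:/    /opt    ceph    name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev    0 0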

B: Mount the Ceph file system in user space (FUSE)

# Install ceph-fuse

yum install -y ceph-fuse

# Mount

ceph-fuse -m 192.168.182.143:6789 /opt

# Unmount

fusermount -u /opt

Ceph is quite popular in the open-source community, and it is mostly used as back-end storage for clouds. Most companies that use Ceph in production have a dedicated team doing secondary development on it, and Ceph operation and maintenance is also relatively difficult. But after reasonable tuning, Ceph's performance and stability are well worth it.

12) Other notes

Clean up the Ceph configuration on a machine:

  • Stop all Ceph processes: stop ceph-all

  • Uninstall all Ceph programs: ceph-deploy uninstall [{ceph-node}]

  • Remove Ceph and its related packages: ceph-deploy purge {ceph-node} [{ceph-data}]

  • Remove Ceph-related configuration and data: ceph-deploy purgedata {ceph-node} [{ceph-data}]

  • Delete the keys: ceph-deploy forgetkeys

  • Uninstall the ceph-deploy management tool: yum -y remove ceph-deploy

~ ~ ~ ~ ~ ~ Ceph distributed storage deployment is complete

 
