[Ceph] An introduction to basic concepts, principles, and architecture

1. Ceph architecture

 

 

 

1.1 Ceph Interface

Ceph supports three interfaces:

  • Object: has a native API and is also compatible with the Swift and S3 APIs
  • Block: supports thin provisioning, snapshots, and clones
  • File: provides a POSIX-compatible interface and supports snapshots

 

1.2 Ceph core components and concepts

  • Monitor: A Ceph cluster requires multiple Monitors, which form a small cluster of their own and synchronize data via Paxos; they store cluster metadata such as the OSD map.
  • OSD: short for Object Storage Device; OSDs respond to client requests and return the requested data. A Ceph cluster generally has many OSDs.
  • CRUSH: the data-distribution algorithm Ceph uses; similar in spirit to consistent hashing, it computes where each piece of data should be placed.
  • PG: short for Placement Group, a logical concept; one PG maps to multiple OSDs. The PG layer was introduced to distribute and locate data more effectively.
  • Object: the lowest-level storage unit in Ceph; each Object contains metadata and raw data.
  • RADOS: handles data distribution, failover, and other cluster operations.
  • Librados: the library that provides access to RADOS. Because RADOS is a protocol that is hard to use directly, the upper layers RBD, RGW, and CephFS all go through librados; bindings currently exist for PHP, Ruby, Java, Python, C, and C++.
  • MDS: short for Ceph Metadata Server; the metadata service that CephFS depends on.
  • RBD: short for RADOS Block Device; the block-device service Ceph provides externally.
  • RGW: short for RADOS Gateway; the object-storage interface Ceph provides externally, compatible with S3 and Swift.
  • CephFS: short for Ceph File System; the file-system service Ceph provides externally.
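The core of the glossary above is the two-stage mapping: an object is hashed to a PG, and the PG is mapped by CRUSH to an ordered list of OSDs (the first being the primary). The sketch below illustrates that two-stage idea only; the `pg_to_osds` function is a deterministic toy stand-in, not the real hierarchical CRUSH algorithm, and all names (`pg_num`, `osd.N`) are illustrative.

```python
import hashlib

def object_to_pg(object_name: str, pg_num: int) -> int:
    """Stage 1: hash the object name to a placement group (stable hash mod pg_num)."""
    h = int(hashlib.md5(object_name.encode()).hexdigest(), 16)
    return h % pg_num

def pg_to_osds(pg_id: int, osds: list, replicas: int = 3) -> list:
    """Stage 2 (toy stand-in for CRUSH): deterministically pick `replicas`
    distinct OSDs for a PG. Real CRUSH is a hierarchical pseudo-random
    placement algorithm; only the shape of the mapping is shown here."""
    start = pg_id % len(osds)
    return [osds[(start + i) % len(osds)] for i in range(replicas)]

pg_num = 128                                  # illustrative pool setting
osds = [f"osd.{i}" for i in range(6)]         # illustrative 6-OSD cluster
pg = object_to_pg("my_object", pg_num)
acting_set = pg_to_osds(pg, osds)
# acting_set[0] plays the role of the primary OSD; the rest are replicas.
```

Because both stages are pure functions of their inputs, any client holding the same cluster map computes the same placement without asking a central server, which is the property CRUSH is designed around.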

 

2. The three storage types

 

Block storage: mainly maps raw disk space to a host, similar to SAN storage; typical usage scenarios include file storage, log storage, and virtual machine image files.

File storage: typical examples are FTP and NFS. File storage exists to overcome the problem that block storage cannot be shared across hosts.

Object storage: combines the high-speed reads and writes of block storage with the sharing characteristics of file storage, and is accessed through a RESTful API; it is generally suitable for storing images and streaming media.
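The practical difference between the three types is how data is addressed: block storage by byte offset on a raw device, file storage by hierarchical path, and object storage by a flat key. A minimal in-memory illustration (all data and names are made up for the example):

```python
# Block: read by (offset, length) on a raw device image.
block_device = bytearray(b"0123456789")
def block_read(dev, offset, length):
    return bytes(dev[offset:offset + length])

# File: read by hierarchical path through a filesystem namespace.
filesystem = {"/logs/app.log": b"hello"}
def file_read(fs, path):
    return fs[path]

# Object: read by flat key, as a RESTful API would address it.
object_store = {"bucket/img001.png": b"\x89PNG"}
def object_get(store, key):
    return store[key]
```

The flat key space is what lets object stores scale and be shared trivially over HTTP, at the cost of in-place partial updates that block devices offer.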

 

2.1 Ceph IO processes and data distribution

 

Steps:

  1. The client creates a cluster handle.
  2. The client reads the configuration file.
  3. The client connects to a Monitor and obtains the cluster map.
  4. For a write, the client uses the CRUSH map to compute the primary OSD for the data and sends the I/O request to it.
  5. The primary OSD simultaneously writes the data to the two replica nodes.
  6. The primary waits for the write to complete on itself and on the two replica nodes.
  7. Once the primary and both replicas report success, the result is returned to the client and the write I/O is complete.
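The key property of steps 5–7 is that the client is acknowledged only after all replicas hold the data. A minimal sketch of that fan-out-and-ack pattern (the node names and the `store` dict are illustrative, not a real OSD interface):

```python
def replicated_write(primary, replicas, data, store):
    """Sketch of steps 5-7: the primary writes locally, fans the data out to
    its replicas, and acknowledges only after every copy has succeeded."""
    store[primary] = data                 # step 5: primary writes its copy
    for r in replicas:                    # step 5: fan out to replica nodes
        store[r] = data
    # step 6: wait for / verify all copies
    all_ok = all(store.get(n) == data for n in [primary] + replicas)
    return "ack" if all_ok else "retry"   # step 7: ack the client

store = {}
result = replicated_write("osd.1", ["osd.2", "osd.3"], b"payload", store)
```

This is why a Ceph write's latency is bounded by the slowest replica: strong consistency is traded for write latency.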

 

I/O flow with a newly added primary OSD

Description:

If a newly added OSD1 replaces the original OSD4 as the primary OSD, the PG has not yet been created on OSD1 and no data exists there, so I/O on that PG cannot be served by it directly. How does this work?

 

 

 

 

Steps:

(1) The client connects to a Monitor and obtains the cluster map.

(2) Since the new primary OSD1 has no data for the PG, it proactively reports this to the Monitor, and OSD2 temporarily takes over as primary.

(3) The temporary primary OSD2 synchronizes the full data set to the new primary OSD1.

(4) The client's read and write I/O goes directly to the temporary primary OSD2.

(5) When OSD2 receives a write I/O, it simultaneously writes to the other two replica nodes.

(6) OSD2 waits for the writes on itself and on the two replicas to succeed.

(7) After all three copies are written successfully, OSD2 returns to the client; the client's write I/O is now complete.

(8) Once OSD1's data synchronization finishes, the temporary primary OSD2 hands the primary role back.

(9) OSD1 becomes the primary, and OSD2 becomes a replica.
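The takeover-and-handback sequence above can be sketched as a tiny state machine. This toy `PG` class and the node names are illustrative only; the real process (peering and backfill) is considerably more involved:

```python
class PG:
    """Toy placement group: acting[0] is the primary, the rest are replicas."""
    def __init__(self, acting):
        self.acting = list(acting)
        self.data = {n: {} for n in acting}

    def write(self, key, value):
        # Steps (4)-(7): client I/O goes to the current primary, which fans
        # out to every replica and acks only when all copies succeeded.
        for node in self.acting:
            self.data[node][key] = value
        return all(self.data[n].get(key) == value for n in self.acting)

def promote_new_primary(pg, new_osd):
    # Steps (2)-(3): the new OSD has no PG data, so the old primary serves
    # as temporary primary and backfills the full data set to it.
    temp_primary = pg.acting[0]
    pg.data[new_osd] = dict(pg.data[temp_primary])
    # Steps (8)-(9): once the sync completes, the new OSD takes over as
    # primary and the temporary primary drops back to being a replica.
    pg.acting = [new_osd] + pg.acting
    return temp_primary

pg = PG(["osd.2", "osd.3"])          # osd.2 is the temporary primary
pg.write("obj", b"v1")               # client I/O served during the backfill
temp = promote_new_primary(pg, "osd.1")
```

The point of the temporary-primary mechanism is visible here: client writes never stall while the new OSD is being backfilled.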

 

3. How Ceph accesses data

Here is an easy-to-understand article on this topic: http://www.xuxiaopang.com/2016/11/08/easy-ceph-CRUSH/

 


Origin www.cnblogs.com/hukey/p/11899710.html