[Introduction to the distributed file system Ceph]

Ceph - a scalable distributed storage system

Linux continues to make inroads into the scalable computing space, especially scalable storage. Ceph, a distributed filesystem that adds replication and fault tolerance while maintaining POSIX compatibility, recently joined the impressive list of filesystem alternatives in Linux.


Its name comes from the mascot of UCSC (Ceph's birthplace): "Sammy", a banana-colored slug, a shell-less mollusk. The many tentacles of a cephalopod serve as a metaphor for a highly parallel distributed file system.

Ceph is a unified storage system that supports three interfaces.

Object: has a native API, and is also compatible with Swift and S3 APIs

Block: supports thin provisioning, snapshots, clones

File: POSIX interface, supports snapshots
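To make the block-interface terms concrete, here is a minimal Python sketch (not Ceph's actual RBD code; all names are invented for illustration) of what thin provisioning, snapshots, and clones mean: space is consumed only when a block is written, and a snapshot is a cheap point-in-time copy that a clone can branch from.

```python
class ThinBlockDevice:
    """Toy thin-provisioned block device with snapshots and clones."""

    def __init__(self, size_blocks):
        self.size_blocks = size_blocks
        self.blocks = {}     # lazily allocated: index -> bytes
        self.snapshots = {}  # name -> frozen copy of the block map

    def write(self, index, data):
        if not 0 <= index < self.size_blocks:
            raise IndexError("block out of range")
        self.blocks[index] = data  # storage is consumed only on write

    def read(self, index):
        # Unwritten blocks read back as zeros, as on a sparse device.
        return self.blocks.get(index, b"\x00")

    def snapshot(self, name):
        self.snapshots[name] = dict(self.blocks)  # point-in-time copy

    def clone(self, name):
        # A writable device branched from a snapshot.
        child = ThinBlockDevice(self.size_blocks)
        child.blocks = dict(self.snapshots[name])
        return child
```

For example, a device created with a million blocks allocates nothing until the first `write`, and a clone taken from a snapshot keeps reading the old data even after the parent is overwritten.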

Ceph is also a distributed storage system, and its characteristics are:

High scalability: built from commodity x86 servers; clusters scale from roughly 10 to 1,000 nodes and from terabytes to petabytes of capacity.

High reliability: no single point of failure; multiple replicas of each piece of data; automatic management and self-repair.

High performance: balanced data distribution and a high degree of parallelism. For object storage and block storage, no metadata server is required.
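The "no metadata server" point deserves a sketch. In Ceph, clients compute an object's location with the CRUSH algorithm instead of asking a central server. The snippet below is not CRUSH — it is rendezvous (highest-random-weight) hashing, a simpler scheme with the same key property: any client can deterministically derive an object's OSDs from its name alone, and removing an OSD moves only the objects that lived on it.

```python
import hashlib

def place(obj_name, osds, replicas=3):
    """Rank OSDs by a hash of (object, osd) and keep the top ones.

    Deterministic: every client computes the same placement with no
    lookup against a central metadata service.
    """
    ranked = sorted(
        osds,
        key=lambda osd: hashlib.sha256(f"{obj_name}:{osd}".encode()).hexdigest(),
    )
    return ranked[:replicas]
```

Because the ranking of surviving OSDs never changes when one is removed, an object only migrates if one of its own replicas disappears — the property that lets a cluster rebalance incrementally.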


System Architecture

The Ceph ecosystem architecture can be divided into four parts:

1. Clients: the data users

2. cmds: metadata server cluster (caches and synchronizes the distributed metadata)

3. cosd: object storage cluster (stores data and metadata as objects and performs other key functions)

4. cmon: cluster monitors (perform monitoring functions)
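To see how these parts cooperate on a file read, here is a toy simulation (the object naming, layout fields, and dict-based "services" are invented for illustration, not Ceph's real protocol): the client contacts the metadata cluster (cmds) only to learn the file's inode and size, then computes the object names itself and fetches the data directly from the object storage cluster (cosd), keeping metadata traffic off the data path.

```python
OBJECT_SIZE = 4  # bytes per object; tiny so the example is readable

# cosd: object storage cluster, modeled as a flat object namespace.
osd_store = {"ino1.0": b"hell", "ino1.1": b"o wo", "ino1.2": b"rld"}

# cmds: metadata server cluster, holding only metadata, never file data.
mds = {"/greeting.txt": {"ino": "ino1", "size": 11}}

def read_file(path):
    meta = mds[path]  # 1. one metadata round trip to cmds
    num_objects = -(-meta["size"] // OBJECT_SIZE)  # ceil division
    # 2. data is fetched directly from the object store, in parallel
    #    in a real cluster; sequentially here for clarity.
    data = b"".join(
        osd_store[f"{meta['ino']}.{i}"] for i in range(num_objects)
    )
    return data[: meta["size"]]
```

The monitors (cmon) do not appear on this path at all: they track cluster membership and health rather than serving reads and writes.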
