File Storage Solution Comparison
Requirements
Store massive numbers of files (images, documents, etc.) and share them across systems.

* Data safety: data must be stored redundantly, avoiding any single point of failure.
* Linear scalability: as data grows to TB or even PB scale, the storage solution must scale out linearly.
* High availability: the failure of any single storage service must not affect the availability of the solution as a whole.
* Performance: performance must meet application requirements.
Open-Source Candidates
Ceph
Ceph is an open-source distributed storage system that provides object storage, block storage, and file storage in one platform.
Ceph support was merged into Linux kernel 2.6.34, and Red Hat ships a commercial product based on it, Red Hat Ceph Storage.
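To make the object-storage interface concrete, here is a minimal sketch using Ceph's python-rados binding; the config path and the pool name `demo-pool` are illustrative assumptions, not details from this article.

```python
# Minimal sketch: write and read one object through Ceph's RADOS layer.
# Assumes a reachable cluster config at /etc/ceph/ceph.conf and an existing
# pool named "demo-pool" (both assumptions for illustration).
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('demo-pool')  # pool name is a placeholder
    try:
        ioctx.write_full('hello-object', b'stored via librados')
        print(ioctx.read('hello-object'))    # -> b'stored via librados'
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```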


Ceph architecture

OpenStack Swift
OpenStack's storage project. It provides an elastic, scalable, highly available distributed object storage service, well suited to storing large volumes of unstructured data.

OpenStack Swift is deployed commercially by many companies as a stable, highly available open-source object store; for example, Sina's App Engine offers an object storage service built on Swift, as does Korea Telecom's Ucloud Storage.
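Since Swift is consumed over its HTTP object API, a hedged sketch with the python-swiftclient library may help; the auth URL, credentials, and container name below are placeholders, not values from the article.

```python
# Sketch: store and fetch an object in Swift via python-swiftclient
# (pip install python-swiftclient). All endpoint/credential values are
# placeholders for illustration.
from swiftclient import client

conn = client.Connection(
    authurl='http://swift.example.com:8080/auth/v1.0',  # placeholder
    user='test:tester',                                 # placeholder
    key='testing',                                      # placeholder
)
conn.put_container('photos')
conn.put_object('photos', 'cat.jpg',
                contents=b'...jpeg bytes...', content_type='image/jpeg')
headers, body = conn.get_object('photos', 'cat.jpg')
print(headers.get('content-type'), len(body))
conn.close()
```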

OpenStack Swift: principles, architecture, and API
OpenStack Swift features

HBase/HDFS
HDFS, the Hadoop Distributed File System, is a distributed file system written in Java. It scales very well, supporting 1 billion+ files, hundreds of PB of data, and clusters of thousands of nodes.
HDFS is designed for ==batch processing of massive data sets==, not for interactive use by end users.
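The write-once-read-many, streaming style of access can be sketched with the third-party `hdfs` package (a WebHDFS client); the NameNode URL, user, and paths are assumptions for illustration.

```python
# Sketch: batch-style HDFS access over WebHDFS using the third-party "hdfs"
# package (pip install hdfs). NameNode URL, user, and paths are placeholders.
from hdfs import InsecureClient

client = InsecureClient('http://namenode.example.com:9870', user='hadoop')

# Write once: create the file in a single pass.
with client.write('/data/logs/app.log', overwrite=True) as writer:
    writer.write(b'line 1\nline 2\n')

# Read many: stream the file back in chunks instead of loading it whole,
# matching HDFS's high-throughput, non-interactive access pattern.
with client.read('/data/logs/app.log') as reader:
    for chunk in iter(lambda: reader.read(64 * 1024), b''):
        pass  # placeholder for real batch processing of each chunk
```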
Disadvantages
* It had a single point of failure until the recent versions of HDFS 
* It isn’t POSIX compliant 
* It stores at least 3 copies of data 
* It has a centralized name server resulting in scalability challenges

Assumptions and Goals
Hardware Failure 
Hardware failure is the norm rather than the exception. An HDFS instance may consist of hundreds or thousands of server machines, each storing part of the file system’s data. The fact that there are a huge number of components and that each component has a non-trivial probability of failure means that some component of HDFS is always non-functional. Therefore, detection of faults and quick, automatic recovery from them is a core architectural goal of HDFS.

Streaming Data Access 
Applications that run on HDFS need streaming access to their data sets. They are not general purpose applications that typically run on general purpose file systems. HDFS is designed more for batch processing rather than interactive use by users. The emphasis is on high throughput of data access rather than low latency of data access. POSIX imposes many hard requirements that are not needed for applications that are targeted for HDFS. POSIX semantics in a few key areas has been traded to increase data throughput rates.

Large Data Sets 
Applications that run on HDFS have large data sets. A typical file in HDFS is gigabytes to terabytes in size. Thus, HDFS is tuned to support large files. It should provide high aggregate data bandwidth and scale to hundreds of nodes in a single cluster. It should support tens of millions of files in a single instance.

Simple Coherency Model 
HDFS applications need a write-once-read-many access model for files. A file once created, written, and closed need not be changed. This assumption simplifies data coherency issues and enables high throughput data access. A MapReduce application or a web crawler application fits perfectly with this model. There is a plan to support appending-writes to files in the future.

“Moving Computation is Cheaper than Moving Data” 
A computation requested by an application is much more efficient if it is executed near the data it operates on. This is especially true when the size of the data set is huge. This minimizes network congestion and increases the overall throughput of the system. The assumption is that it is often better to migrate the computation closer to where the data is located rather than moving the data to where the application is running. HDFS provides interfaces for applications to move themselves closer to where the data is located.

Portability Across Heterogeneous Hardware and Software Platforms 
HDFS has been designed to be easily portable from one platform to another. This facilitates widespread adoption of HDFS as a platform of choice for a large set of applications.

GlusterFS
GlusterFS is an open-source distributed file system that supports PB-scale data volumes and thousands of clients, with no metadata server.
Red Hat acquired Gluster in 2011 for $136 million and released a commercial storage system based on GlusterFS.


GlusterFS architecture and maintenance
Installing the GlusterFS client

FastDFS
FastDFS is a personal project by Yu Qing of Alibaba. It is used at some Chinese internet startups, but it has no official website, is not very active, and has only two contributors.

TFS
TFS is a distributed storage system open-sourced by Taobao, designed for storing massive numbers of small files. Its design is similar to HDFS: it consists of two NameServers and multiple DataServers, and merges many small files into one large file (a Block, 64 MB by default).
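The small-files-merged-into-a-block idea can be illustrated with a toy sketch; the `Block` class and its in-memory index below are invented for illustration and do not reflect TFS's actual on-disk format.

```python
# Toy illustration of TFS's idea of packing many small files into one large
# block file. The Block class and index layout are invented for this sketch;
# TFS's real format differs. 64 MB matches the default block size above.
BLOCK_SIZE = 64 * 1024 * 1024

class Block:
    def __init__(self, path):
        self.path = path
        self.index = {}   # file_id -> (offset, length)
        self.offset = 0

    def append(self, file_id, data):
        if self.offset + len(data) > BLOCK_SIZE:
            raise IOError('block full: allocate a new block')
        with open(self.path, 'ab') as f:
            f.write(data)
        self.index[file_id] = (self.offset, len(data))
        self.offset += len(data)

    def read(self, file_id):
        offset, length = self.index[file_id]
        with open(self.path, 'rb') as f:
            f.seek(offset)
            return f.read(length)

blk = Block('/tmp/block_0001.dat')
blk.append('img_1', b'small file one')
blk.append('img_2', b'small file two')
assert blk.read('img_2') == b'small file two'
```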


The most recent commit on GitHub was four years ago, the most recent update on Alibaba's open-source site was three years ago, and documentation is scarce.

MinIO
MinIO is a distributed object storage system written in Go that provides an Amazon S3-compatible API. What sets it apart from other distributed storage systems is that it is simple, lightweight, and developer-friendly: its position is that storage should be a development problem, not an operations problem.
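A minimal sketch with the official MinIO Python SDK shows how little code the S3-compatible API requires; the endpoint, credentials, bucket name, and file paths are placeholders.

```python
# Sketch: upload/download through MinIO's S3-compatible API using the
# official Python SDK (pip install minio). Endpoint, credentials, bucket
# name, and file paths are placeholders for illustration.
from minio import Minio

client = Minio(
    'minio.example.com:9000',     # placeholder endpoint
    access_key='YOUR-ACCESSKEY',  # placeholder
    secret_key='YOUR-SECRETKEY',  # placeholder
    secure=False,
)
if not client.bucket_exists('documents'):
    client.make_bucket('documents')

client.fput_object('documents', 'report.pdf', '/tmp/report.pdf')
client.fget_object('documents', 'report.pdf', '/tmp/report_copy.pdf')
```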

MinIO founder: Anand Babu Periasamy

Comparison

| Feature | Ceph | MinIO | Swift | HBase/HDFS | GlusterFS | FastDFS |
| --- | --- | --- | --- | --- | --- | --- |
| Language | C | Go | Python | Java | C | C |
| Data redundancy | replicas, erasure coding | Reed-Solomon erasure code | replicas | replicas | replicas | replicas |
| Consistency | strong | strong | eventual | eventual | ? | ? |
| Dynamic scaling | hash-based (CRUSH) | no dynamic node addition | consistent hashing | ? | ? | ? |
| Performance | ? | ? | ? | ? | ? | ? |
| Central node | none for object storage; CephFS has a metadata server | none | none | NameNode is a single point | ? | ? |
| Storage type | block, file, object | object (chunked) | object | file | ? | ? |
| Activity | high, though the Chinese community is not very active | high, no Chinese community | high | high | medium | medium |
| Maturity | high | medium | high | high | ? | ? |
| Operating systems | Linux 3.10.0+ | Linux, Windows | ? | any OS with Java | ? | ? |
| Backing filesystem | EXT4, XFS | EXT4, XFS | ? | ? | ? | ? |
| Clients | C, Python, S3 | Java, S3 | Java, RESTful | Java, RESTful | ? | ? |
| Resumable transfer | S3-compatible: multipart upload, ranged download | S3-compatible: multipart upload, ranged download | not supported | not supported | ? | ? |
| Learning curve | high | medium | ? | medium | ? | ? |
| Outlook (score /10) | 10 | 8 | 9 | 9 | 7 | 5 |
| License | LGPL v2.1 | Apache 2.0 | Apache 2.0 | Apache 2.0 | ? | ? |
| Management tools | ceph-admin, ceph-mgr, Zabbix plugin, web UI | mc command-line tool | ? | ? | ? | ? |
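The "resumable transfer" row refers to S3-style multipart upload: a large file is split into parts that can be uploaded and retried independently, then assembled. Here is a hedged sketch with boto3 against any S3-compatible endpoint (Ceph RGW or MinIO); the endpoint, credentials, bucket, key, and file path are placeholders.

```python
# Sketch: S3 multipart upload with boto3 against an S3-compatible endpoint
# (e.g. Ceph RGW or MinIO). All endpoint/credential/bucket/path values are
# placeholders. Each part can be retried on its own, which is what makes
# large transfers resumable.
import boto3

s3 = boto3.client(
    's3',
    endpoint_url='http://s3.example.com:9000',  # placeholder
    aws_access_key_id='YOUR-ACCESSKEY',         # placeholder
    aws_secret_access_key='YOUR-SECRETKEY',     # placeholder
)

upload = s3.create_multipart_upload(Bucket='backups', Key='huge.bin')
parts, part_number = [], 1
with open('/tmp/huge.bin', 'rb') as f:
    while True:
        chunk = f.read(8 * 1024 * 1024)  # 8 MB parts (S3 minimum is 5 MB)
        if not chunk:
            break
        resp = s3.upload_part(Bucket='backups', Key='huge.bin',
                              PartNumber=part_number,
                              UploadId=upload['UploadId'], Body=chunk)
        parts.append({'ETag': resp['ETag'], 'PartNumber': part_number})
        part_number += 1

s3.complete_multipart_upload(Bucket='backups', Key='huge.bin',
                             UploadId=upload['UploadId'],
                             MultipartUpload={'Parts': parts})
```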
Ceph vs MinIO
Based on the comparison above, the file storage selection comes down to Ceph versus MinIO.

Ceph pros and cons
Pros
Mature

Effectively adopted by Red Hat; the Ceph founder has joined Red Hat.
There is a so-called Ceph China community, a private organization; it is not active, its documentation lags behind, and there is no sign of it being brought up to date.
Judging by the committers on Git, programmers from several Chinese companies submit code, including XSKY (星辰天合) and EasyStack; Tencent and Alibaba build cloud storage on Ceph but are not active in the open-source community, though an Alibaba engineer named liupan does participate.
Powerful

Supports thousands of nodes.
Supports adding nodes dynamically, with automatic rebalancing of the data distribution. (TODO: how long does rebalancing take, and can the cluster keep running uninterrupted while a node is added?)
Highly configurable and can be tuned for different scenarios.
Cons
Steep learning curve; installation and operations are complex. (Or rather, this is our shortcoming, not Ceph's.)
MinIO pros and cons
Pros
Low learning curve; installation and operations are simple; works out of the box.
The MinIO forum is actively run, and every question gets an answer.
Java and JavaScript clients are available.
Cons

The community is not yet mature, and there is little industry experience to draw on.
Does not support adding nodes dynamically. The MinIO founder's design philosophy is that dynamic node addition is too complex, and that scaling out will be supported by other means:
Dynamic addition and removal of nodes are essential when all the storage nodes are managed by the Minio server. Such a design is too complex and restrictive when it comes to cloud native applications. The old design is to give all the resources to the storage system and let it manage them efficiently between the tenants. Minio is different by design. It is designed to solve all the needs of a single tenant. Spinning up Minio per tenant is the job of the external orchestration layer. Any addition or removal means one has to rebalance the nodes. When Minio does it internally, it behaves like a black box. It also adds significant complexity to Minio. Minio is designed to be deployed once and forgotten. We don't even want users to be replacing failed drives and nodes. Erasure code has enough redundancy built in. By the time half the nodes or drives are gone, it is time to refresh all the hardware. If the user still requires rebalancing, one can always start a new Minio server on the same system on a different port and simply migrate the data over. It is essentially what Minio would do internally. Doing it externally means more control and visibility.
We are planning to integrate bucket-name-based routing inside the Minio server itself. This means you can have 16 servers handle a rack full of drives (say a few petabytes). Minio will schedule buckets to free 16 drives and route all operations appropriately.

References
Storage architecture
Amazon S3 documentation

Ceph committers
Blog of liupan (Alibaba)
Alibaba Ceph slides
Ceph Day Beijing 2016
Ceph Day Beijing 2017
A chat on Ceph's development and current status in China
BlueStore, a new Ceph storage backend
Ceph: placement groups
Introduction to Ceph
XSKY (星辰天合) developers on the official Ceph site

MinIO official documentation
MinIO community
Interview with the MinIO author

Source: https://blog.csdn.net/dingjs520/article/details/78655556

Reposted from: blog.csdn.net/dragonpeng2008/article/details/89311297