Design Principles of the Distributed File System FastDFS

FastDFS is an open-source, lightweight distributed file system consisting of three parts: tracker servers, storage servers, and clients. It mainly solves the problem of mass data storage and is especially suitable for online services whose payloads are small and medium-sized files (recommended range: 4KB < file_size < 500MB).

Storage server

Storage servers (hereinafter referred to as storage) are organized into groups (also called volumes). A group contains multiple storage machines whose data are backups of each other. The usable capacity of a group is that of the smallest storage within it, so the storages in a group should be configured as identically as possible to avoid wasting storage space.

Organizing storage into groups facilitates application isolation, load balancing, and customizing the number of replicas (the number of storage servers in a group is that group's replica count). For example, storing different applications' data in different groups isolates them, and applications can be assigned to different groups according to their access characteristics to balance load. The disadvantages are that a group's capacity is limited by the storage capacity of a single machine, and that when a machine in the group fails, data recovery can only rely on the other machines within the group, so recovery can take a very long time.

Each storage in a group relies on the local file system for its data. A storage can be configured with multiple data storage directories; for example, if a machine has 10 disks mounted at /data/disk1 through /data/disk10, all 10 directories can be configured as the storage's data directories.

When a storage receives a file write request, it selects one of its storage directories according to the configured rules (described later). To avoid putting too many files in a single directory, when a storage starts for the first time it creates two levels of subdirectories under each data storage directory, 256 per level, for a total of 65536 subdirectories. A newly written file is routed to one of these subdirectories by hashing, and the file data is then stored directly in that directory as an ordinary local file.
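
A minimal sketch of this routing scheme, assuming the file identifier is hashed to pick both levels (the actual FastDFS hash function may differ):

```python
import hashlib

LEVELS = 256  # 256 subdirectories per level, 256 * 256 = 65536 in total

def route_to_subdir(store_path: str, fileid: str) -> str:
    """Route a file to one of the two-level subdirectories under a
    data storage directory by hashing its fileid (illustrative hash)."""
    digest = hashlib.md5(fileid.encode()).digest()
    level1 = digest[0] % LEVELS  # first-level subdirectory
    level2 = digest[1] % LEVELS  # second-level subdirectory
    return f"{store_path}/{level1:02X}/{level2:02X}"

print(route_to_subdir("/data/disk1", "wKgBym5e3kCAXv1fAAAHnDq4gWc521"))
# e.g. /data/disk1/3A/7F -- the file data is then written into this
# directory as an ordinary local file.
```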

Tracker server

The tracker is the coordinator of FastDFS, responsible for managing all storage servers and groups. After each storage starts, it connects to the tracker, reports which group it belongs to along with other information, and maintains a periodic heartbeat. From the storages' heartbeat information the tracker builds a mapping table of group ==> [storage server list].

The tracker manages very little meta information, all of which is kept in memory. Moreover, the meta information on the tracker is generated from what the storages report, so the tracker never needs to persist any data. This makes the tracker very easy to scale out: simply adding machines yields a tracker cluster. Every tracker in the cluster is completely equal; all trackers accept heartbeat information from the storages, generate metadata, and provide read and write services.
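
A rough sketch of the in-memory group ==> [storage server list] mapping rebuilt from heartbeats; the field names and structures are illustrative, not FastDFS's actual ones:

```python
import time
from dataclasses import dataclass, field

@dataclass
class StorageInfo:
    ip: str
    free_space_mb: int
    last_heartbeat: float = field(default_factory=time.time)

class Tracker:
    """Holds only soft state rebuilt from storage heartbeats,
    so nothing ever needs to be persisted."""
    def __init__(self):
        self.groups: dict[str, dict[str, StorageInfo]] = {}

    def on_heartbeat(self, group: str, ip: str, free_space_mb: int):
        # A heartbeat both registers a storage and refreshes its state.
        self.groups.setdefault(group, {})[ip] = StorageInfo(ip, free_space_mb)

tracker = Tracker()
tracker.on_heartbeat("group1", "192.168.0.11", 120_000)
tracker.on_heartbeat("group1", "192.168.0.12", 118_000)
print([s.ip for s in tracker.groups["group1"].values()])
```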

Upload file

FastDFS provides basic file access interfaces such as upload, download, append, and delete, exposed to users in the form of client libraries.


Select tracker server

When there is more than one tracker server in the cluster, the client can choose any tracker when uploading a file, since the trackers are completely equal.

Select the storage group

When the tracker receives an upload file request, it assigns the file a group that can store it. The following group selection rules are supported (see the sketch below):

1. Round robin: rotate among all groups
2. Specified group: always use a designated group
3. Load balance: the group with the most free storage space has priority
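
A minimal sketch of these three rules; the rule names, data structures, and free-space bookkeeping are illustrative:

```python
import itertools

def make_group_selector(rule: str, groups: dict[str, int],
                        fixed_group: str | None = None):
    """Return a function that picks a group per the configured rule.
    `groups` maps group name -> free space in MB (illustrative)."""
    rr = itertools.cycle(sorted(groups))  # round-robin cursor
    if rule == "round_robin":
        return lambda: next(rr)
    if rule == "specified":
        return lambda: fixed_group                  # always the designated group
    if rule == "load_balance":
        return lambda: max(groups, key=groups.get)  # most free space wins
    raise ValueError(rule)

pick = make_group_selector("load_balance", {"group1": 120_000, "group2": 340_000})
print(pick())  # group2
```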

Select the storage server

After the group is chosen, the tracker selects a storage server within the group for the client. The following storage selection rules are supported (sketched below):

1. Round robin: rotate among all storages in the group
2. First server ordered by ip: the first storage when sorted by IP address
3. First server ordered by priority: the first storage when sorted by priority (the priority is configured on the storage)
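
A small sketch of these rules; whether a lower priority value wins is an assumption here:

```python
import itertools

_rr = itertools.count()  # persistent round-robin cursor

def select_storage(rule: str, storages: list[dict]) -> dict:
    """Pick one storage from a group; each dict carries 'ip' and
    'priority' (lower value assumed to take precedence)."""
    if rule == "round_robin":
        return storages[next(_rr) % len(storages)]
    if rule == "first_by_ip":
        return min(storages, key=lambda s: s["ip"])
    if rule == "first_by_priority":
        return min(storages, key=lambda s: s["priority"])
    raise ValueError(rule)

group = [{"ip": "192.168.0.12", "priority": 10},
         {"ip": "192.168.0.11", "priority": 20}]
print(select_storage("first_by_ip", group)["ip"])  # 192.168.0.11
```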

Select the storage path

After the storage server is assigned, the client sends a write request to it, and the storage assigns the file one of its data storage directories. The following rules are supported (sketched below):

1. Round robin: rotate among the storage directories
2. The directory with the most free storage space has priority
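
A sketch of the two path selection rules, using os.statvfs to measure free space (the paths and rule names are illustrative):

```python
import itertools
import os

STORE_PATHS = ["/data/disk1", "/data/disk2"]  # configured data directories
_rr = itertools.cycle(STORE_PATHS)

def select_store_path(rule: str) -> str:
    if rule == "round_robin":
        return next(_rr)
    if rule == "most_free_space":
        def free_bytes(path: str) -> int:
            st = os.statvfs(path)         # free blocks * fragment size
            return st.f_bavail * st.f_frsize
        return max(STORE_PATHS, key=free_bytes)
    raise ValueError(rule)
```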

Generate the fileid

After the storage directory is selected, the storage generates a fileid for the file by concatenating the storage server's IP, the file creation timestamp, the file size, the file's crc32 checksum, and a random number; this binary string is then base64-encoded into a printable string.
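
A rough sketch of that construction; the field order, widths, and base64 alphabet here are assumptions, not FastDFS's exact layout:

```python
import base64
import os
import socket
import struct
import time
import zlib

def make_fileid(storage_ip: str, data: bytes) -> str:
    """Pack ip + creation time + size + crc32 + random number,
    then base64-encode into a printable string (layout illustrative)."""
    packed = struct.pack(
        "!IIIII",
        struct.unpack("!I", socket.inet_aton(storage_ip))[0],  # storage ip
        int(time.time()),                                      # creation time
        len(data),                                             # file size
        zlib.crc32(data),                                      # crc32 of data
        int.from_bytes(os.urandom(4), "big"),                  # random number
    )
    return base64.urlsafe_b64encode(packed).rstrip(b"=").decode()

print(make_fileid("192.168.0.11", b"hello fastdfs"))
```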

Select the two-level directory

After the storage directory is selected, the storage assigns the file its fileid. Each storage directory contains two levels of 256*256 subdirectories; the storage hashes the fileid (twice, presumably) to route the file to one of these subdirectories, and then stores the file in that subdirectory with the fileid as its file name.

Generate the file name

Once the file has been stored in a subdirectory, the write is considered successful, and a file name is generated for it by concatenating the group, the storage directory, the two-level subdirectory, the fileid, and the file extension (specified by the client, mainly to distinguish file types).
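
A sketch of assembling such a name in the familiar group1/M00/... style; the M-prefix index notation for the storage directory is assumed from common FastDFS URLs:

```python
def make_file_name(group: str, path_index: int, level1: int, level2: int,
                   fileid: str, ext: str) -> str:
    """Concatenate group, storage directory (M00 = data directory 0),
    two-level subdirectory, fileid, and client-supplied extension."""
    return f"{group}/M{path_index:02d}/{level1:02X}/{level2:02X}/{fileid}.{ext}"

print(make_file_name("group1", 0, 0x02, 0x44,
                     "wKgBym5e3kCAXv1fAAAHnDq4gWc521", "jpg"))
# group1/M00/02/44/wKgBym5e3kCAXv1fAAAHnDq4gWc521.jpg
```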


File synchronization

When writing a file, the write is considered successful as soon as the client has written it to one storage server in the group; after that storage server finishes writing, a background thread synchronizes the file to the other storage servers in the same group.

After writing a file, each storage also writes a binlog entry. The binlog contains no file data, only meta information such as the file name, and is used for background synchronization. Each storage records its synchronization progress toward the other storages in the group so that it can resume from where it left off after a restart. Progress is recorded as a timestamp, so it is best to keep the clocks of all servers in the cluster synchronized.

Each storage's synchronization progress is reported to the tracker as part of its metadata, and the tracker uses this progress as a reference when choosing a storage to serve reads.

For example, suppose a group contains three storage servers A, B, and C. A has synchronized to C up to timestamp T1 (all files written before T1 have been synchronized to C), and B has synchronized to C up to timestamp T2 (T2 > T1). When the tracker receives this progress information, it takes the minimum as C's synchronization timestamp, here T1 (meaning all data written before T1 has definitely been synchronized to C). By the same rule, the tracker derives a synchronization timestamp for A and B.
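
The min-over-sources rule is easy to state in code; this is a minimal sketch with illustrative data structures:

```python
def synced_timestamp(target: str, progress: dict[tuple[str, str], int]) -> int:
    """progress maps (source, target) -> the timestamp up to which
    `source` has synchronized its files to `target`. The tracker takes
    the minimum over all sources: everything written before that
    moment is guaranteed to already be on `target`."""
    return min(ts for (src, dst), ts in progress.items() if dst == target)

T1, T2 = 1_700_000_000, 1_700_000_600        # T2 > T1
progress = {("A", "C"): T1, ("B", "C"): T2}
print(synced_timestamp("C", progress) == T1)  # True: C's timestamp is T1
```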

Download file

After a successful upload, the client receives the file name generated by the storage, and can then use this file name to access the file.


As with uploading, the client can choose any tracker server when downloading a file.

The client sends a download request, which must carry the file name, to a tracker. The tracker parses the file's group, size, creation time, and other information out of the file name, and then selects a storage to serve the read. Since file synchronization within a group happens asynchronously in the background, a file may not yet have been synchronized to some storage servers at read time. To avoid hitting such a storage as much as possible, the tracker selects a readable storage within the group according to the following rules, sketched in code after the list.

1. The source storage to which the file was uploaded: as long as the source storage is alive, it definitely holds the file; its address is encoded in the file name.
2. File creation timestamp == the storage's synchronized timestamp, and (current time - file creation timestamp) > maximum file synchronization time (e.g. 5 minutes): once the maximum synchronization time has elapsed since creation, the file is assumed to have been synchronized to the other storages.
3. File creation timestamp < the storage's synchronized timestamp: files written before the synchronized timestamp have definitely been synchronized.
4. (current time - file creation timestamp) > synchronization delay threshold (e.g. one day): once the delay threshold has elapsed, the file is assumed to have been synchronized.
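
These four checks translate directly into a predicate; the threshold values below are the examples from the text, everything else is illustrative:

```python
MAX_SYNC_TIME = 5 * 60          # e.g. 5 minutes
SYNC_DELAY_THRESHOLD = 86_400   # e.g. one day

def is_readable(storage_ip: str, source_ip: str, file_create_ts: int,
                storage_synced_ts: int, now: int) -> bool:
    """True if this storage can safely serve the file, per rules 1-4."""
    if storage_ip == source_ip:                         # rule 1: source storage
        return True
    if (file_create_ts == storage_synced_ts
            and now - file_create_ts > MAX_SYNC_TIME):  # rule 2
        return True
    if file_create_ts < storage_synced_ts:              # rule 3
        return True
    if now - file_create_ts > SYNC_DELAY_THRESHOLD:     # rule 4
        return True
    return False
```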

Merged storage of small files

Merged storage of small files mainly addresses the following problems:

1. The local file system has a limited number of inodes, which limits the number of small files that can be stored.
2. Multi-level directories with many files per directory make file access expensive (possibly requiring many IOs).
3. Storing files individually makes backup and recovery inefficient.

FastDFS introduced a merged-storage mechanism for small files in version V3.0, storing multiple small files inside one large file (a trunk file). To support this mechanism, the fileid generated by FastDFS grows by an extra 16 bytes, encoding:

1. The trunk file id
2. The file's offset within the trunk file
3. The storage space occupied by the file (because of byte alignment and reuse of deleted space, the occupied space >= the file size)

Each trunk file is uniquely identified by an id. Trunk files are created by the group's trunk server (elected by the tracker) and synchronized to the other storages in the group. Once a file has been merged into a trunk file, it can be read back from the trunk file using its offset, as sketched below.
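
A minimal sketch of that read path, with an assumed trunk-file naming scheme:

```python
import os

def read_from_trunk(trunk_dir: str, trunk_id: int, offset: int,
                    file_size: int) -> bytes:
    """Read one merged small file out of a trunk file. The trunk id,
    offset, and size are the extra fields encoded in the fileid
    (the path layout here is illustrative)."""
    path = os.path.join(trunk_dir, f"trunk_{trunk_id:06d}")
    with open(path, "rb") as f:
        f.seek(offset)            # position encoded in the file name
        return f.read(file_size)  # allocated space may exceed file size
```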

Because the file's offset within the trunk file is encoded into the file name, its position inside the trunk file can never change, so the space of deleted files cannot be reclaimed by compacting the trunk file. However, when a file inside a trunk file is deleted, its space can be reused: for example, if a 100KB file is deleted, a 99KB file stored afterwards can directly reuse the freed space.

HTTP access support

Both the tracker and the storage in FastDFS have built-in HTTP support, so clients can download files over the HTTP protocol. When the tracker receives a request, it uses HTTP's redirect mechanism to redirect the request to the storage where the file resides. Besides the built-in HTTP support, FastDFS also supports downloading files through Apache or nginx extension modules.
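
FastDFS's built-in server is native code; this is only a toy sketch of the 302-redirect flow, with a made-up group lookup table:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

STORAGE_OF = {"group1": "192.168.0.11"}  # illustrative lookup table

class TrackerRedirect(BaseHTTPRequestHandler):
    def do_GET(self):
        # /group1/M00/... -> redirect to a storage that holds the file
        group = self.path.strip("/").split("/", 1)[0]
        self.send_response(302)
        self.send_header("Location", f"http://{STORAGE_OF[group]}{self.path}")
        self.end_headers()

# HTTPServer(("", 8080), TrackerRedirect).serve_forever()
```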


Other features

FastDFS provides interfaces for setting and getting extended file attributes (setmeta/getmeta). Extended attributes are stored on the storage as key-value pairs in a file with the same name plus a special prefix or suffix; for example, if /group/M00/00/01/some_file is the original file, its extended attributes would be stored in /group/M00/00/01/.some_file.meta (the actual layout may differ, but the mechanism is similar). This way the file storing the extended attributes can be located directly from the file name.
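
A one-function sketch of that lookup, using the dot-prefix/.meta-suffix convention the text guesses at:

```python
import os

def meta_path(file_path: str) -> str:
    """Derive the companion file holding the key-value extended
    attributes (the real naming convention may differ)."""
    directory, name = os.path.split(file_path)
    return os.path.join(directory, f".{name}.meta")

print(meta_path("/group/M00/00/01/some_file"))
# /group/M00/00/01/.some_file.meta
```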

The author does not recommend using these two interfaces: the extra meta files further amplify the problem of storing massive numbers of small files, and because meta files are tiny their storage space utilization is poor; for example, a 100-byte meta file still occupies 4KB (one block_size) of storage.

FastDFS also supports appender files, stored via the upload_appender_file interface. An appender file can be appended to after creation. Appender files are actually stored the same way as ordinary files; the difference is that an appender file cannot be merged into a trunk file.

Problem discussion

Looking at the overall design of FastDFS, simplicity is the guiding principle. For example, backing up data in whole-machine units simplifies the tracker's management; having storage keep files as-is on the local file system simplifies storage management; and declaring a write successful once a single copy is on storage, with synchronization done in the background, simplifies the write path. However, the problems that simple solutions can solve are usually limited, and FastDFS currently has the following issues (discussion welcome).

Data security

  • Single-copy write success: within the time window between the source storage finishing a write and the file being synchronized to the other storages in the group, a failure of the source storage can lose user data, and data loss is usually unacceptable for a storage system.
  • No automatic recovery mechanism: when a disk in a storage fails, the only option is to replace the disk and recover the data manually. Because backup is done in whole-machine units, an automatic recovery mechanism seems impossible unless hot spare disks are prepared in advance; the lack of automated recovery increases operations and maintenance work.
  • Low data recovery efficiency: data can only be recovered by reading from the other storages in the group, and since small-file access is inefficient, file-by-file recovery is also very slow. Low recovery efficiency means the data stays in an unsafe state for longer.
  • No multi-datacenter disaster recovery support: at present, multi-datacenter disaster recovery can only be achieved with additional tools that synchronize data to a backup cluster; there is no built-in automation.

Storage space utilization

  • The number of files stored on a single machine is limited by the number of inodes.
  • Each file corresponds to one file in the storage's local file system, so on average each file wastes block_size/2 of storage space.
  • Merged storage effectively addresses the two problems above, but since merged storage has no space reclamation mechanism, the space of deleted files is not guaranteed to be reused, so space can still be wasted.

Load balancing

  • The group mechanism itself can be used for load balancing, but only as a static mechanism that requires knowing the application's access characteristics in advance; at the same time, the group mechanism makes it impossible to migrate data between groups for dynamic load balancing.

http://blog.chinaunix.net/uid-20196318-id-4058561.html
