MFS Distributed File System

1. MFS enterprise application scenarios

> Multiple web servers share a single storage back end over NFS. This meets the business need, but falls short on performance and capacity as requirements grow: the NFS server becomes overloaded and starts timing out, and it is also a single point of failure. rsync can replicate the data to a second server as a backup for the NFS service, but that does nothing to improve overall system performance. NFS can be tuned, or other solutions tried, but tuning cannot satisfy the performance demands of an ever-growing number of clients.
The solution is a distributed file system. With one in place, data access between servers is no longer one-to-many but many-to-many (multiple web servers against multiple file servers), which raises performance substantially.
MFS, i.e. MooseFS, is a distributed file system that provides PB-scale shared storage and can be built without expensive specialized server hardware. It is a highly available, scalable distributed file system for massive data sets, with redundancy and fault tolerance to keep data safe.
MFS spreads data across many servers, yet the user sees only a single source.

2. How the MFS distributed file system works

In a distributed file system, the physical storage resources the file system manages are not necessarily attached to the local node; they are connected to it over a computer network. In effect, scattered shared folders (spread across the computers of a LAN) are gathered into one folder (a virtual shared folder). To the user, accessing these shares means simply opening that virtual folder and seeing every shared file linked into it; the user never notices that the underlying folders are scattered across many machines.
  • Benefits: centralized access, simplified operations, data disaster recovery, better file-access performance, and online capacity expansion.
  • MFS is a network-based, fault-tolerant distributed file system. It spreads data across multiple physical servers yet presents the user with a single unified resource, which can simply be mounted.
Components and their roles:
  • master (metadata server): manages the file system for the entire cluster and maintains its metadata.
  • metalogger (metadata logging server): backs up the master's changelog files (named changelog_ml.*.mfs). If the master's data is lost or corrupted, it can be recovered from this server's log files.
  • chunk server (data storage server): the servers that actually store the data. A stored file is split into chunks, which are replicated between chunk servers; the more chunk servers there are, the more usable capacity, the higher the reliability, and the better the performance.
  • client: mounts the MFS file system just as it would an NFS share and operates on it the same way.
  • How MFS reads data
    The client sends a read request to the metadata server (master) -> the master tells the client where the required data is stored (the chunk server IP addresses and chunk IDs) -> the client requests the data from the indicated chunk server -> the chunk server sends the data to the client.

  • How MFS writes data
    The client sends a write request to the metadata server (master) -> the master interacts with the chunk servers (this interaction happens only when new chunks are needed): it has chunk servers create new chunks, and they report success back to the master -> the master tells the client which chunks on which chunk servers it may write to -> the client writes the data to the specified chunk server -> that chunk server synchronizes the data with the other chunk servers, and once done tells the client the write succeeded -> the client informs the master that this change is complete.
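Once the cluster built below is running, the client consumes all of this through a single mount point. A minimal sketch of the client-side step (not covered in the walkthrough below): `mfsmount` and its `-H` (master host) option ship with the MFS client build, while the mount point path here is an assumption.

```shell
# Sketch: print the client-side mount commands for this article's
# setup. The master IP matches the environment table below; the
# mount point /mnt/mfs is an assumed example. Run the printed
# commands on the client node itself.
master=192.168.2.11
mountpoint=/mnt/mfs
echo "mkdir -p $mountpoint"
echo "mfsmount $mountpoint -H $master"
```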

3. Building an MFS file system in a lab setup

Server          IP              Prepared in advance
master          192.168.2.11    mfs-1.6.27
metalogger      192.168.2.12    mfs-1.6.27
chunk server1   192.168.2.13    mfs-1.6.27, one extra 5 GB disk
chunk server2   192.168.2.14    mfs-1.6.27, one extra 5 GB disk
chunk server3   192.168.2.15    mfs-1.6.27, one extra 5 GB disk
client          192.168.2.16    mfs-1.6.27, fuse-2.9.2

It was not convenient to blog while actually doing the build; this was written up after a successful build. To follow along, you can copy this part into a script and execute it, provided you have prepared the environment above and downloaded the packages yourself (search for them online, and do not agonize over exact versions).

  1. Configure the master
useradd -M -s /sbin/nologin mfs
yum -y install zlib-devel
tar -xf mfs-1.6.27-5.tar.gz -C /usr/src/
cd /usr/src/mfs-1.6.27/
./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfschunkserver --disable-mfsmount && make && make install

cd /usr/local/mfs/etc/mfs/
ls
mfsexports.cfg.dist    mfsmetalogger.cfg.dist
mfsmaster.cfg.dist     mfstopology.cfg.dist
# exports file: which directories may be mounted, and with what permissions
cp mfsexports.cfg.dist mfsexports.cfg
# main configuration file
cp mfsmaster.cfg.dist mfsmaster.cfg
cp mfstopology.cfg.dist mfstopology.cfg
cd /usr/local/mfs/var/mfs/
cp metadata.mfs.empty metadata.mfs
/usr/local/mfs/sbin/mfsmaster start
# verify that it started
ps aux |grep mfs|grep -v grep
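Beyond checking the process list, the master can be probed on its listening ports. A sketch, assuming the MooseFS 1.6 defaults set in mfsmaster.cfg (9419 for metaloggers, 9420 for chunk servers, 9421 for clients); the `check_port` helper here is a stand-in that only prints, so swap in a real `ss`/`netstat` probe on the actual server.

```shell
# Hypothetical health-check sketch. The port numbers are the MooseFS
# defaults from mfsmaster.cfg: 9419 metalogger, 9420 chunk server,
# 9421 client connections.
check_port() {
    # stub for illustration; on the master use: ss -tnl | grep -q ":$1 "
    echo "checking port $1"
}
for p in 9419 9420 9421; do
    check_port "$p"
done
```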
  2. Build the metalogger server (switch to the corresponding virtual machine; mind the IPs, do not mix them up)
useradd -M -s /sbin/nologin mfs
yum -y install zlib-devel
tar xf mfs-1.6.27-5.tar.gz -C /usr/src/
cd /usr/src/mfs-1.6.27/
./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfschunkserver --disable-mfsmount && make && make install
cd /usr/local/mfs/etc/mfs/
cp mfsmetalogger.cfg.dist mfsmetalogger.cfg

vim mfsmetalogger.cfg
MASTER_HOST = 192.168.2.11    # IP address of the master (metadata server)
*This could also be changed with sed; think it through, but it is safer to open the file and edit it by hand to avoid mistakes.*

ll -d /usr/local/mfs/var/mfs/
/usr/local/mfs/sbin/mfsmetalogger start
ps aux |grep mfs |grep -v grep
  3. Build the three chunk servers (same procedure on all three)
fdisk -l|grep /dev/
fdisk /dev/sdb
# The next steps partition and format /dev/sdb: create one partition, press Enter to accept the default size, then w to save
partx -a /dev/sdb
mkfs.ext4 /dev/sdb1
mkdir /data
mount /dev/sdb1 /data/
chown -R mfs.mfs /data/
df -hT

man partx
The partx is not an fdisk program -- adding and removing partitions does not change the disk, it just tells the kernel about the presence and numbering of on-disk partitions.
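The mount above does not survive a reboot; for that, /data needs an /etc/fstab entry. A sketch that only prints the line to append (so nothing is modified by accident); the device name matches the partition created above.

```shell
# Build the fstab entry for the chunk-server data partition created
# above; append the printed line to /etc/fstab by hand.
device=/dev/sdb1
mountpoint=/data
printf '%s  %s  ext4  defaults  0 0\n' "$device" "$mountpoint"
```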

useradd -M -s /sbin/nologin mfs
yum -y install zlib-devel
tar -xf mfs-1.6.27-5.tar.gz -C /usr/src/
cd /usr/src/mfs-1.6.27/
./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfsmaster --disable-mfsmount && make && make install
cd /usr/local/mfs/etc/mfs/
ls
mfschunkserver.cfg.dist  mfshdd.cfg.dist
cp mfschunkserver.cfg.dist mfschunkserver.cfg
cp mfshdd.cfg.dist mfshdd.cfg
vim mfschunkserver.cfg
MASTER_HOST = 192.168.2.11
vim mfshdd.cfg
/data    # add this line; /data is the partition given over to mfs. In production it is best to mount a dedicated partition or separate disk at this directory

/usr/local/mfs/sbin/mfschunkserver start
ps aux | grep mfs | grep -v grep

Since the three servers are configured identically, a shell script can do the work and save time.
Another good approach is ansible: add these IPs to the hosts inventory and it goes very quickly. By the time I wrote this part I was getting impatient, so I hope you will try the ansible route. If you do not know ansible yet, no matter; see my other blog posts for reference.
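The shell-script route can be sketched as a loop over the chunk-server IPs. The setup script name and passwordless SSH as root are assumptions; the real invocation is left commented out so the sketch only prints what it would do.

```shell
#!/bin/sh
# Sketch: drive the identical chunk-server setup on all three nodes.
# Assumes passwordless SSH as root and a hypothetical
# /root/chunkserver-setup.sh holding the commands from the section above.
for ip in 192.168.2.13 192.168.2.14 192.168.2.15; do
    echo "deploying to $ip"
    # ssh "root@$ip" 'sh /root/chunkserver-setup.sh'   # real invocation
done
```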


Origin www.cnblogs.com/liuwei-xd/p/11110606.html