MooseFS distributed storage

1. MooseFS introduction

  MooseFS mainly consists of a management server (master), a metadata log server (metalogger), and data storage servers (chunkservers).

Management server (master): manages the data storage servers, controls file reads and writes, and handles data replication between nodes and space management.

Metadata log server (metalogger): backs up the master's changelogs so that the system can be brought back to work when the master server has problems.

Data storage server (chunkserver): follows the master's scheduling, provides storage space, and receives or transmits client data.

 

(Figure: MooseFS read process; diagram not reproduced here.)

Summary: MooseFS has a simple structure and is a good way for beginners to understand how a distributed file system works. However, MooseFS has a single point of failure: once the master stops working, the entire distributed file system stops working. The master server therefore needs to be made highly available (for example with heartbeat + DRBD; below, pacemaker + corosync with shared iSCSI storage is used).

 

   

2. Cluster deployment

Host environment: RHEL 6.5, with SELinux and iptables disabled

Master (HA): 172.25.10.2, 172.25.10.3

VIP: 172.25.10.100

Metalogger: 192.168.0.77

Chunkservers: 172.25.10.6, 172.25.10.7, 172.25.10.8

Client: 172.25.10.4

iSCSI target: 172.25.10.5

 

Generate RPM packages for ease of deployment:

# yum install gcc make rpm-build fuse-devel zlib-devel -y

# rpmbuild -tb mfs-1.6.27.tar.gz

# ls ~/rpmbuild/RPMS/x86_64

mfs-cgi-1.6.27-4.x86_64.rpm

mfs-cgiserv-1.6.27-4.x86_64.rpm

mfs-chunkserver-1.6.27-4.x86_64.rpm

mfs-client-1.6.27-4.x86_64.rpm

mfs-master-1.6.27-4.x86_64.rpm

...

Master (metadata server) installation

yum install -y mfs-cgi-1.6.27-4.x86_64.rpm mfs-cgiserv-1.6.27-4.x86_64.rpm mfs-master-1.6.27-4.x86_64.rpm

# cd /etc/mfs/

# cp mfsmaster.cfg.dist mfsmaster.cfg

# cp mfsexports.cfg.dist mfsexports.cfg

# vi mfsexports.cfg

172.25.10.0/24 / rw,alldirs,maproot=0

Each entry in this file consists of three parts:

Part 1: the client IP address or range

Part 2: the exported directory

Part 3: the client's access options
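For example, two illustrative entries (the addresses and options here are assumptions for demonstration, not part of this deployment):

192.168.1.0/24 / ro
172.25.10.4 / rw,alldirs,maproot=0

The first would give an entire subnet read-only access to the whole tree; the second gives a single host read-write access, lets it mount subdirectories, and maps root to uid 0.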

# cd /var/lib/mfs

# cp metadata.mfs.empty metadata.mfs

# chown -R nobody /var/lib/mfs

 

Modify the /etc/hosts file and add the following line:

172.25.10.2 mfsmaster

# mfsmaster start    # start the master server

 

# mfscgiserv    # start the CGI monitoring service

lockfile created and locked

starting simple cgi server (host: any , port: 9425 , rootpath: /usr/share/mfscgi)

# cd /usr/share/mfscgi/

# chmod +x chart.cgi mfs.cgi

Enter http://172.25.10.2:9425 in the browser address bar to view the running state of the master.
Chunkserver (storage server) installation

# yum localinstall -y mfs-chunkserver-1.6.27-4.x86_64.rpm

# cd /etc/mfs

# cp mfschunkserver.cfg.dist mfschunkserver.cfg

# cp mfshdd.cfg.dist mfshdd.cfg

# vi mfshdd.cfg    # define the storage location shared to mfs

/mnt/mfschunks1

# chown -R nobody:nobody /mnt/mfschunks1

Modify the /etc/hosts file and add the following line:

172.25.10.2 mfsmaster

 

mkdir /var/lib/mfs

chown nobody /var/lib/mfs
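The chunkserver service itself still has to be started; presumably with the same style of command used for the master (a sketch, assuming the mfs-chunkserver package installs the mfschunkserver binary):

# mfschunkserver start

Repeat the chunkserver steps on 172.25.10.7 and 172.25.10.8.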

 

Now visiting http://172.25.10.2:9425/ in the browser should show all of the MFS system's information, including the master (metadata management) and the chunkservers (storage services).
Client installation

# yum localinstall -y mfs-client-1.6.27-4.x86_64.rpm

# cd /etc/mfs

# cp mfsmount.cfg.dist mfsmount.cfg

# vi mfsmount.cfg    # define the client's default mount point

mfsmaster=mfsmaster

/mnt/mfs
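The mount point itself has to exist before mounting; a minimal preparatory step (assuming no further client-side setup is needed):

# mkdir -p /mnt/mfs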

 

# mfsmount

# df -h

...

mfsmaster:9421    2729728    0    2729728    0%    /mnt/mfs
MFS testing

Create two directories under the MFS mount point and set the number of copies (goal) for the files stored in them:

# cd /mnt/mfs

# mkdir dir1 dir2

# mfssetgoal -r 2 dir2/    # files stored in dir2 are kept as two copies; the default is one copy

dir2/:

inodes with goal changed: 1

inodes with goal not changed: 0

inodes with permission denied: 0

When a "goal" is set on a directory, newly created files and subdirectories in that directory inherit the directory's setting, but the copy count of files and directories that already exist is not changed. With the -r option, however, the copy count of existing files is changed as well.
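The effective goal can be double-checked with mfsgetgoal (a quick verification sketch; the exact output format may differ between versions):

# mfsgetgoal dir1       # expected to report goal 1, the default
# mfsgetgoal -r dir2    # expected to report goal 2 for everything under dir2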

Copy the same file into both directories:

# cp /etc/passwd dir1
# cp /etc/passwd dir2

View file information

# mfsfileinfo dir1/passwd

dir1/passwd:

chunk 0: 0000000000000001_00000001 / (id:1 ver:1)

copy 1: 172.25.10.6:9422

# mfsfileinfo dir2/passwd

dir2/passwd:

chunk 0: 0000000000000002_00000001 / (id:2 ver:1)

copy 1: 172.25.10.6:9422

copy 2: 172.25.10.7:9422

 

Stop the chunkserver holding the only copy of dir1/passwd (172.25.10.6 above) and view the file information again:

# mfsfileinfo dir1/passwd

dir1/passwd:

chunk 0: 0000000000000001_00000001 / (id:1 ver:1)

no valid copies !!!

# mfsfileinfo dir2/passwd

dir2/passwd:

chunk 0: 0000000000000002_00000001 / (id:2 ver:1)

copy 1: 172.25.10.7:9422

After starting that chunkserver again, the file returns to normal.

Recover accidentally deleted files

# rm -f dir1/passwd

# mfsgettrashtime dir1/

dir1/: 86400

A deleted file is kept in the "trash" for a period called the quarantine (trash) time. This time can be viewed with the mfsgettrashtime command and set with the mfssettrashtime command, in seconds; the default is 86400 seconds (24 hours).
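For example, to keep deleted files under dir1 for only one hour (an illustrative value, not taken from the original setup):

# mfssettrashtime -r 3600 dir1/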

# mkdir /mnt/mfsmeta

# mfsmount -m /mnt/mfsmeta/ -H mfsmaster

The mounted MFSMETA file system contains the directory trash (deleted files there still carry the information needed to restore them) and trash/undel (used to get files back). To restore a deleted file, move it from trash into trash/undel.

# cd /mnt/mfsmeta/trash

# mv 00000004\|dir1\|passwd undel/

The passwd file can now be seen restored in the dir1 directory.

In the MFSMETA file system, besides the trash and trash/undel directories, there is a third directory, reserved. It contains files that were deleted but were still held open by other users. Once those users close the files, the entries in reserved are removed and the file data is deleted as well. This directory cannot be operated on manually.
MFS high-availability deployment
iSCSI configuration

Add an unformatted virtual disk (vdb) on the iSCSI server (172.25.10.5):

yum install scsi-target-utils.x86_64 -y

vim /etc/tgt/targets.conf

<target iqn.2016-03.com.example:server.target9>
    backing-store /dev/vdb1
    initiator-address 172.25.10.2
    initiator-address 172.25.10.3
</target>

/etc/init.d/tgtd start && chkconfig tgtd on
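The exported target can then be checked on the iSCSI server (a hedged verification step using the standard tgt tooling):

# tgt-admin --show      # should list iqn.2016-03.com.example:server.target9 with /dev/vdb1 as its backing store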

On the master nodes (172.25.10.2 and 172.25.10.3), install iscsi-initiator-utils.x86_64, then discover and log in to the target:

iscsiadm -m discovery -t st -p 172.25.10.5

iscsiadm -m node -l

Partition the disk and format the partition as ext4:

fdisk /dev/sda    # create one partition, /dev/sda1

mkfs.ext4 /dev/sda1

Copy all of the data in /var/lib/mfs/ to the network disk /dev/sda1, then mount the disk on /var/lib/mfs:

mount /dev/sda1 /mnt/

cp -p /var/lib/mfs/* /mnt/
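Before starting the master, the shared disk presumably has to be remounted on /var/lib/mfs itself and handed back to the mfs user; a sketch of that step (ownership follows the earlier chown):

umount /mnt
mount /dev/sda1 /var/lib/mfs
chown -R nobody:nobody /var/lib/mfs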

mfsmaster start

Pacemaker installation (172.25.10.2 and 172.25.10.3)

Configure the yum repositories. The default yum repository only provides the basic Server packages, while the installation media also contains:

ResilientStorage/

HighAvailability/

LoadBalancer/

Packages/

images/

...

The pacemaker package comes from the HighAvailability repository.
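A repository entry for HighAvailability might look like the following (a sketch; the baseurl points to a hypothetical local mirror of the installation media and must be adapted):

# /etc/yum.repos.d/rhel-ha.repo
[HighAvailability]
name=RHEL 6.5 HighAvailability
baseurl=http://172.25.10.250/rhel6.5/HighAvailability
gpgcheck=0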

yum install pacemaker -y

Pacemaker is configured through the crmsh (crm shell) interface. Early pacemaker versions shipped with crmsh, but in newer versions it is an independent package and is no longer part of pacemaker. crmsh in turn depends on the pssh package, so both components need to be installed.

# yum install crmsh-1.2.6-0.rc2.2.1.x86_64.rpm pssh-2.3.1-2.1.x86_64.rpm

Installing pacemaker with yum pulls in many dependencies, including corosync, so corosync does not need to be installed separately; just edit its configuration file /etc/corosync/corosync.conf.

cd /etc/corosync/

#cp corosync.conf.example corosync.conf

vim corosync.conf

# in the totem { interface { ... } } section:
bindnetaddr: 172.25.10.0
mcastaddr: 226.94.1.1

# add a service block so corosync starts pacemaker:
service {
    name: pacemaker
    ver: 0
}

/etc/init.d/corosync start && chkconfig corosync on
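Once corosync is running on both masters, the ring status can be checked (a hedged verification using the standard corosync tool):

# corosync-cfgtool -s      # should report the ring as active with no faults on each node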
Fence installation (on the physical host)

An external fence is deployed. Fencing has a client/server structure; the following three packages need to be installed on the fence server (the physical host running the virtual machines):

fence-virtd.x86_64

fence-virtd-libvirt.x86_64

fence-virtd-multicast.x86_64

After installation, run fence_virtd -c to build the configuration file through an interactive interface. When configuring, take care to select the network interface (interface) that the host uses to communicate with the virtual machines.

mkdir /etc/cluster    # this directory does not exist by default

The server and clients communicate through a key file. The key file does not exist by default; it has to be generated manually and copied to every client node. The /etc/cluster directory does not exist on the nodes by default either and has to be created by hand.

# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1    # generate a 128-byte key

# for i in {2,3} ;do scp /etc/cluster/fence_xvm.key master$i.example.com:/etc/cluster ; done

systemctl start fence_virtd.service

systemctl enable fence_virtd.service

On the clients (172.25.10.2 and 172.25.10.3), install the fence-virt tool:

# yum install fence-virt.x86_64 -y
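From either master the fence device can be tested (a hedged check; it requires the key copied above and a running fence_virtd on the host):

# fence_xvm -o list       # should list the virtual machine domains known to fence_virtd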
Metadata server high availability

The crmsh shell installed earlier provides a command-line and interactive interface for managing the pacemaker cluster; it is powerful and easy to use, and any configuration made through it is synchronized to every cluster node. Below, the individual services of the metadata server are handed over to cluster management.

a. First, hand the fence service over to the cluster. Since the external fence only recognizes virtual machine domain names, the domain names have to be mapped to the hostnames, and the resource is monitored every 60s.

# crm(live)configure# primitive vmfence stonith:fence_xvm params pcmk_host_map="master1.example.com:vm2;master2.example.com:vm3" op monitor interval=60s

b. Before the MFS services are handed over to the cluster, a virtual IP (VIP) is needed. The VIP is what the outside world sees as the master node; when the active master node goes down, the cluster migrates the VIP resource (together with the services) to the other master node, and the client notices nothing.

# crm(live)configure# primitive vip ocf:heartbeat:IPaddr2 params ip="172.25.10.100" cidr_netmask="24" op monitor interval="30s"

c. Hand the MFS services themselves over to cluster management.

# crm(live)configure# property no-quorum-policy="ignore"    # with only two nodes the cluster loses quorum as soon as one fails, so quorum loss is ignored

# crm(live)configure# primitive mfsdata ocf:heartbeat:Filesystem params device="/dev/sda1" directory="/var/lib/mfs" fstype="ext4" op monitor interval="60s"

# crm(live)configure# primitive mfs lsb:mfs op monitor interval="60s"

# crm(live)configure# group mfsgroup vip mfs mfsdata

# crm(live)configure# order mfs-after-mfsdata inf: mfsdata mfs
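The configuration then has to be committed before it takes effect, and the resources can be checked from any node (a sketch; the resource names follow the definitions above):

# crm(live)configure# commit
# crm(live)configure# quit
# crm_mon -1              # vip, mfsdata and mfs should be running together on one master, with vmfence started as well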

Finally, add the hosts entry on the client and chunkserver sides so that mfsmaster resolves to the VIP:

172.25.10.100 mfsmaster
----------------

Original link: https://blog.csdn.net/gew25/article/details/51924952

 
