GlusterFS Distributed File System Cluster

GlusterFS Overview

GlusterFS is an open-source distributed file system with strong scalability for data storage: by adding nodes it can support capacities up to several petabytes (PB).

Case implementation

The required experimental environment is as follows; I will demonstrate on the node1 server, and the other nodes follow the same pattern.
(Figure: environment table for node1 through node4)
In addition, there is a client (192.168.100.106, hostname arbitrary) with no CD-ROM mounted; it will be used for testing.

As shown in the figure, the VMs all use the VMnet1 network and are connected to via Xshell.

1) Disk partitioning

I demonstrate here only on the node1 server (192.168.100.102); afterwards, repeat on the other hosts, changing the hostname accordingly.

First, with the virtual machine powered off, add four new SCSI hard disks, each 20 GB in size, then boot.

Be sure to add the disks in the powered-off state, then boot.

The disk partitioning commands are shown in the figure below.
(Figure: fdisk commands partitioning the new disks)
In the next figure, the partitions are formatted, mount directories are created, and the partitions are mounted.
(Figure: mkfs, mkdir, and mount commands)
Continue the same operations for the remaining disks in the table above; do not forget that the mkfs command does the formatting. It should be done as shown here.
(Figure: formatting and mounting repeated for each disk)
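As a sketch of the per-disk steps (assuming the four new disks appear as /dev/sdb through /dev/sde and map to the mount points /b3, /c4, /d5, and /e6 used as brick paths later; the device names and the ext4 filesystem are assumptions):

[root@node1 ~]# fdisk /dev/sdb          (n for a new primary partition, accept the defaults, w to write)
[root@node1 ~]# mkfs -t ext4 /dev/sdb1  (format the partition; do not skip this)
[root@node1 ~]# mkdir /b3
[root@node1 ~]# mount /dev/sdb1 /b3

Repeat for /dev/sdc with /c4, /dev/sdd with /d5, and /dev/sde with /e6.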
Then configure automatic mounting at boot; the hostname still has to be changed later.
[root@CentOS7-02 ~]# vim /etc/fstab (as shown below)
(Figure: fstab entries for the four new partitions)
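As a sketch, assuming the partition-to-directory mapping above, the added entries would look something like:

/dev/sdb1  /b3  ext4  defaults  0 0
/dev/sdc1  /c4  ext4  defaults  0 0
/dev/sdd1  /d5  ext4  defaults  0 0
/dev/sde1  /e6  ext4  defaults  0 0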

Then, following the table above, partition, format, mount, and configure boot-time automatic mounting on my other three servers as well.

2) Change the host name, and configure the hosts file

Still on node1 through node4, modify each hostname accordingly; the hosts file must be changed as well, to the following. I use node1 (192.168.100.102) as the example.

[root@CentOS7-02 ~]# vim /etc/hostname (delete the original content)
node1
[root@CentOS7-02 ~]# vim /etc/hosts (add the following)
192.168.100.102 node1
192.168.100.103 node2
192.168.100.104 node3
192.168.100.105 node4

[root@CentOS7-02 ~]# reboot (be sure to reboot so the new hostname takes effect)
[root@node1 ~]# df -hT (the disks you just mounted should all be there)

To reiterate: both of these steps must be performed on all four nodes.
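Since the hosts file ends up identical on all four nodes, one shortcut is to edit it once and copy it out; this sketch assumes root SSH access between the machines:

[root@node1 ~]# for ip in 103 104 105; do scp /etc/hosts root@192.168.100.$ip:/etc/hosts; done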

3) Install the software on node1 through node4, and start the service

[root@node1 ~]# mkdir /www
[root@node1 ~]# cd /www
Then use Xftp (the small green icon in the toolbar) to transfer the package sources in; do not drag them in directly, or not all of the files will make it.
(Figure: transferring the source packages with Xftp)
[root@node1 www]# vim /etc/yum.repos.d/centOS7.repo (content as follows)
[local]
name=centos7
baseurl=file:///www
enabled=1
gpgcheck=0
[root@node1 www]# yum -y install glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma
[root@node1 www]# systemctl start glusterd
[root@node1 www]# systemctl enable glusterd

All four nodes must install the packages and start the service.
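To confirm the daemon really is running on a node, systemctl is-active prints "active" for a running service:

[root@node1 ~]# systemctl is-active glusterd
active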

4) Add the nodes, node1 through node4

This operation is performed on a single host; here, node1.

[root@node1 ~]# gluster peer probe node1
peer probe: success. Probe on localhost not needed
[root@node1 ~]# gluster peer probe node2
peer probe: success.
[root@node1 ~]# gluster peer probe node3
peer probe: success.
[root@node1 ~]# gluster peer probe node4
peer probe: success.
(after each command, the output should look like mine above; if it does not, check the /etc/hosts file)

5) Check cluster status

The status can be viewed on any of the nodes; here, node1:
[root@node1 ~]# gluster peer status
Number of Peers: 3

Hostname: node2
Uuid: a6632a96-2820-4608-aec0-ee70b876c007
State: Peer in Cluster (Connected)

Hostname: node3
Uuid: 80b90aa5-6f46-4dd9-9f37-a32e03621d6c
State: Peer in Cluster (Connected)

Hostname: node4
Uuid: 8581f87a-48fd-40b4-9c44-6cd481e3b534
State: Peer in Cluster (Connected)

Whichever node you view from will not show itself. If the output is not the same as mine above, check the hosts file.
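If a probe went to the wrong host (a stale hosts entry, for example), the peer can be dropped and probed again; gluster peer detach is the standard command for this:

[root@node1 ~]# gluster peer detach node2
[root@node1 ~]# gluster peer probe node2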

Creating the volumes

1) Create a distributed volume

This could be done on other nodes too; here I operate on node1 (192.168.100.102).
[root@node1 ~]# gluster volume create dis-volume node1:/e6 node2:/e6 force
volume create: dis-volume: success: please start the volume to access data
(no type was specified, so the default, a distributed volume, is created)
[root@node1 ~]# gluster volume info dis-volume (view the volume)

Volume Name: dis-volume
Type: Distribute
Volume ID: 7440507d-8df6-4769-acf2-6575e0206df8
Status: Created
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: node1:/e6 (node1, mount point /e6)
Brick2: node2:/e6 (node2, mount point /e6)
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
[root@node1 ~]# gluster volume start dis-volume (be sure to start the volume, or there will be problems later)
volume start: dis-volume: success

2) Create a striped volume

[root@node1 ~]# gluster volume create stripe-volume stripe 2 node1:/d5 node2:/d5 force
volume create: stripe-volume: success: please start the volume to access data
(type stripe with a count of 2, followed by two bricks, giving a striped volume)
[root@node1 ~]# gluster volume info stripe-volume
(from here on I show only the key fields; the full output is longer)
Volume Name: stripe-volume
Type: Stripe

Bricks:
Brick1: node1:/d5
Brick2: node2:/d5

[root@node1 ~]# gluster volume start stripe-volume
volume start: stripe-volume: success

3) Create a replicated volume

[root@node1 ~]# gluster volume create rep-volume replica 2 node3:/d5 node4:/d5 force
volume create: rep-volume: success: please start the volume to access data
(type replica, a replicated volume, with a count of 2, followed by two bricks)
[root@node1 ~]# gluster volume info rep-volume

Volume Name: rep-volume
Type: Replicate

Bricks:
Brick1: node3:/d5
Brick2: node4:/d5

[root@node1 ~]# gluster volume start rep-volume
volume start: rep-volume: success

4) Create a distributed striped volume

[root@node1 ~]# gluster volume create dis-stripe stripe 2 node1:/b3 node2:/b3 node3:/b3 node4:/b3 force
volume create: dis-stripe: success: please start the volume to access data
(type stripe with a count of 2; at least four bricks, a multiple of 2, so the volume is distributed as well as striped)
[root@node1 ~]# gluster volume info dis-stripe
Volume Name: dis-stripe
Type: Distributed-Stripe

Bricks:
Brick1: node1:/b3
Brick2: node2:/b3
Brick3: node3:/b3
Brick4: node4:/b3

[root@node1 ~]# gluster volume start dis-stripe
volume start: dis-stripe: success

5) Create a distributed replicated volume

[root@node1 ~]# gluster volume create dis-rep replica 2 node1:/c4 node2:/c4 node3:/c4 node4:/c4 force
volume create: dis-rep: success: please start the volume to access data
(type replica with a count of 2; four bricks, a multiple of 2, so the volume is distributed as well as replicated)
[root@node1 ~]# gluster volume info dis-rep

Volume Name: dis-rep
Type: Distributed-Replicate

Bricks:
Brick1: node1:/c4
Brick2: node2:/c4
Brick3: node3:/c4
Brick4: node4:/c4

[root@node1 ~]# gluster volume start dis-rep
volume start: dis-rep: success

Gluster client deployment

Now switch to the client mentioned earlier, 192.168.100.106.

1) Install the client software

Configure the YUM source the same way as on node1 above, but the installation uses the following command instead:

[root@centos7-06 www]# yum -y install glusterfs glusterfs-fuse

2) Create the mount directories

[root@centos7-06 www]# mkdir -p /test/{dis,stripe,rep,dis_and_stripe,dis_and_rep}
[root@centos7-06 www]# ls /test
dis dis_and_rep dis_and_stripe rep stripe

3) Modify the hosts file

[root@centos7-06 www]# vim /etc/hosts (add the following)
192.168.100.102 node1
192.168.100.103 node2
192.168.100.104 node3
192.168.100.105 node4

4) Mount the Gluster file systems

[root@centos7-06 www]# mount -t glusterfs node1:dis-volume /test/dis
[root@centos7-06 www]# mount -t glusterfs node1:stripe-volume /test/stripe
[root@centos7-06 www]# mount -t glusterfs node1:rep-volume /test/rep
[root@centos7-06 www]# mount -t glusterfs node1:dis-stripe /test/dis_and_stripe/
[root@centos7-06 www]# mount -t glusterfs node1:dis-rep /test/dis_and_rep/

(Figure: df -hT on the client showing the five mounted volumes)
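A quick way to verify the mounts is to filter df by mount type; GlusterFS FUSE mounts report the type fuse.glusterfs:

[root@centos7-06 www]# df -hT | grep fuse.glusterfs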

5) Mount automatically at boot (optional in a lab environment)

This is done on the client.

[root@centos7-06 ~]# vim /etc/fstab (add the following)
(Figure: fstab entries for the five GlusterFS mounts)
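As a sketch, the entries would mirror the manual mounts above; the _netdev option (which delays mounting until the network is up) is my assumption, not from the original figure:

node1:dis-volume      /test/dis              glusterfs  defaults,_netdev  0 0
node1:stripe-volume   /test/stripe           glusterfs  defaults,_netdev  0 0
node1:rep-volume      /test/rep              glusterfs  defaults,_netdev  0 0
node1:dis-stripe      /test/dis_and_stripe   glusterfs  defaults,_netdev  0 0
node1:dis-rep         /test/dis_and_rep      glusterfs  defaults,_netdev  0 0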

Gluster File System Test

We continue operating on the client, 192.168.100.106: write files here, then look behind the scenes at how they are distributed.

1) Write files to the volumes

Find any txt file and change its suffix to .log; you can download one from the internet, and bigger is better.
[root@centos7-06 ~]# cd /root/

Rename the file into five copies and drag them into Xshell.
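If you would rather fabricate the test files than hunt for one, here is a sketch; the sizes are approximate, and the leading q1111111 line (which the head tests later check for) matches the author's files:

[root@centos7-06 ~]# echo q1111111 > demo1.log
[root@centos7-06 ~]# dd if=/dev/zero bs=1M count=21 >> demo1.log
[root@centos7-06 ~]# for i in 2 3 4 5; do cp demo1.log demo$i.log; done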
[root@centos7-06 ~]# ll -h demo*
-rw-r--r-- 1 root root 21M Nov 11 21:04 demo1.log
-rw-r--r-- 1 root root 21M Nov 11 21:04 demo2.log
-rw-r--r-- 1 root root 21M Nov 11 21:04 demo3.log
-rw-r--r-- 1 root root 21M Nov 11 21:04 demo4.log
-rw-r--r-- 1 root root 21M Nov 11 21:04 demo5.log

2) View the file distribution

Copy the files on the client (192.168.100.106).
Just copy them into the mounted directories; although the mounts are used locally, the data actually lives on the back-end storage servers.

[root@centos7-06 ~]# cp demo* /test/dis
[root@centos7-06 ~]# cp demo* /test/stripe/
[root@centos7-06 ~]# cp demo* /test/rep/
[root@centos7-06 ~]# cp demo* /test/dis_and_stripe/
[root@centos7-06 ~]# cp demo* /test/dis_and_rep/

Now view the results on nodes node1 through node4.
Note the hostname at the front of each prompt; I will not repeat which node each listing is from.

A note on why we look at these directories: the GFS volumes were created using them as bricks, which is why the listings below look the way they do.

1. View the distributed volume's file distribution

[root@node1 ~]# ll -h /e6
total 81M  # file sizes unchanged; demo5.log is not here
-rw-r--r-- 2 root root 21M Nov 12 05:14 demo1.log
-rw-r--r-- 2 root root 21M Nov 12 05:14 demo2.log
-rw-r--r-- 2 root root 21M Nov 12 05:14 demo3.log
-rw-r--r-- 2 root root 21M Nov 12 05:14 demo4.log

[root@node2 ~]# ll -h /e6
total 21M  # and here it is: distribution succeeded
-rw-r--r-- 2 root root 21M Nov 12 05:14 demo5.log

2. View the striped volume's file distribution

[root@node1 ~]# ll -h /d5
total 51M  # the data is smaller: the files are split into stripes, but all 5 appear
-rw-r--r-- 2 root root 11M Nov 12 05:14 demo1.log
-rw-r--r-- 2 root root 11M Nov 12 05:14 demo2.log
-rw-r--r-- 2 root root 11M Nov 12 05:14 demo3.log
-rw-r--r-- 2 root root 11M Nov 12 05:14 demo4.log
-rw-r--r-- 2 root root 11M Nov 12 05:14 demo5.log

[root@node2 ~]# ll -h /d5
total 50M  # same as above
-rw-r--r-- 2 root root 10M Nov 12 05:14 demo1.log
-rw-r--r-- 2 root root 10M Nov 12 05:14 demo2.log
-rw-r--r-- 2 root root 10M Nov 12 05:14 demo3.log
-rw-r--r-- 2 root root 10M Nov 12 05:14 demo4.log
-rw-r--r-- 2 root root 10M Nov 12 05:14 demo5.log

3. View the replicated volume's file distribution

[root@node3 ~]# ll -h /d5
total 101M  # sizes unchanged, no striping: this is the replicated volume
-rw-r--r-- 2 root root 21M Nov 12 05:15 demo1.log
-rw-r--r-- 2 root root 21M Nov 12 05:15 demo2.log
-rw-r--r-- 2 root root 21M Nov 12 05:15 demo3.log
-rw-r--r-- 2 root root 21M Nov 12 05:15 demo4.log
-rw-r--r-- 2 root root 21M Nov 12 05:15 demo5.log

[root@node4 ~]# ll -h /d5
total 101M  # sizes unchanged, no striping: a redundant copy of the above
-rw-r--r-- 2 root root 21M Nov 12 05:15 demo1.log
-rw-r--r-- 2 root root 21M Nov 12 05:15 demo2.log
-rw-r--r-- 2 root root 21M Nov 12 05:15 demo3.log
-rw-r--r-- 2 root root 21M Nov 12 05:15 demo4.log
-rw-r--r-- 2 root root 21M Nov 12 05:15 demo5.log

4. View the distributed striped volume's file distribution

[root@node1 ~]# ll -h /b3
total 41M  # striped, and also distributed: demo5.log is further down
-rw-r--r-- 2 root root 11M Nov 12 05:15 demo1.log
-rw-r--r-- 2 root root 11M Nov 12 05:15 demo2.log
-rw-r--r-- 2 root root 11M Nov 12 05:15 demo3.log
-rw-r--r-- 2 root root 11M Nov 12 05:15 demo4.log

[root@node2 ~]# ll -h /b3
total 40M  # the other half of the stripes
-rw-r--r-- 2 root root 10M Nov 12 05:15 demo1.log
-rw-r--r-- 2 root root 10M Nov 12 05:15 demo2.log
-rw-r--r-- 2 root root 10M Nov 12 05:15 demo3.log
-rw-r--r-- 2 root root 10M Nov 12 05:15 demo4.log

[root@node3 ~]# ll -h /b3
total 11M  # distributed here, and striped
-rw-r--r-- 2 root root 11M Nov 12 05:15 demo5.log

[root@node4 ~]# ll -h /b3
total 10M  # distributed here, and striped
-rw-r--r-- 2 root root 10M Nov 12 05:15 demo5.log

5. View the distributed replicated volume's file distribution

[root@node1 ~]# ll -h /c4
total 81M  # sizes unchanged; demo5.log was distributed elsewhere
-rw-r--r-- 2 root root 21M Nov 12 05:16 demo1.log
-rw-r--r-- 2 root root 21M Nov 12 05:16 demo2.log
-rw-r--r-- 2 root root 21M Nov 12 05:16 demo3.log
-rw-r--r-- 2 root root 21M Nov 12 05:16 demo4.log

[root@node2 ~]# ll -h /c4
total 81M  # sizes unchanged; a redundant copy of demo1 to demo4 above
-rw-r--r-- 2 root root 21M Nov 12 05:16 demo1.log
-rw-r--r-- 2 root root 21M Nov 12 05:16 demo2.log
-rw-r--r-- 2 root root 21M Nov 12 05:16 demo3.log
-rw-r--r-- 2 root root 21M Nov 12 05:16 demo4.log

[root@node3 ~]# ll -h /c4
total 21M  # demo5.log distributed here, size unchanged
-rw-r--r-- 2 root root 21M Nov 12 05:16 demo5.log

[root@node4 ~]# ll -h /c4
total 21M  # a redundant copy of demo5.log above
-rw-r--r-- 2 root root 21M Nov 12 05:16 demo5.log

3) Failure testing

Suspend the node2 VM, then test the files from the client (192.168.100.106).
1. Test the distributed volume and the striped volume; see the figure below.

(Figure: with node2 down, demo5.log is missing from /test/dis, and the files in /test/stripe can no longer be read)

2. Test the distributed striped volume and the distributed replicated volume; see the figure below.

(Figure: with node2 down, demo1 through demo4 are inaccessible in /test/dis_and_stripe, while everything in /test/dis_and_rep still reads correctly)

Now suspend the node4 node as well; node2 stays suspended as before.

3. Test the replicated volume's data

[root@centos7-06 ~]# head -1 /test/rep/demo1.log
q1111111
[root@centos7-06 ~]# head -1 /test/rep/demo2.log
q1111111
[root@centos7-06 ~]# head -1 /test/rep/demo3.log
q1111111
[root@centos7-06 ~]# head -1 /test/rep/demo4.log
q1111111
[root@centos7-06 ~]# head -1 /test/rep/demo5.log
q1111111

Because the replicated volume's data on node4 also has a copy on node3, nothing is lost: there is redundancy.

4. Test whether the distributed striped volume's data is accessible

[root@centos7-06 ~]# head -1 /test/dis_and_stripe/demo1.log
head: error reading '/test/dis_and_stripe/demo1.log': No such file or directory
[root@centos7-06 ~]# head -1 /test/dis_and_stripe/demo2.log
head: error reading '/test/dis_and_stripe/demo2.log': No such file or directory
[root@centos7-06 ~]# head -1 /test/dis_and_stripe/demo3.log
head: error reading '/test/dis_and_stripe/demo3.log': No such file or directory
[root@centos7-06 ~]# head -1 /test/dis_and_stripe/demo4.log
head: error reading '/test/dis_and_stripe/demo4.log': No such file or directory
[root@centos7-06 ~]# head -1 /test/dis_and_stripe/demo5.log
q1111111
## (demo5.log is actually gone too, but the system still has it cached; re-enter the mount directory and it is no longer there)
[root@centos7-06 ~]# cd /test/dis_and_stripe/
[root@centos7-06 dis_and_stripe]# ls

Explanation: the other half of demo5.log lives on node4, so with node4 suspended, demo5.log cannot be accessed either; striping has no redundancy.

5. Test the distributed replicated volume's data

[root@centos7-06 ~]# head -1 /test/dis_and_rep/demo1.log
q1111111
[root@centos7-06 ~]# head -1 /test/dis_and_rep/demo2.log
q1111111
[root@centos7-06 ~]# head -1 /test/dis_and_rep/demo3.log
q1111111
[root@centos7-06 ~]# head -1 /test/dis_and_rep/demo4.log
q1111111
[root@centos7-06 ~]# head -1 /test/dis_and_rep/demo5.log
q1111111

## Although node2 and node4 are both down, node2's data has a copy on node1, and node4's data has a copy on node3, so there is redundancy.

Other maintenance commands

First resume node2 and node4 from their suspended state.
The following commands can be used on any of node1 through node4; this example uses node1.

1) View GlusterFS volumes

1. Review the list of volumes

[root@node1 ~]# gluster volume list
dis-rep
dis-stripe
dis-volume
rep-volume
stripe-volume

2. View information for all volumes

[root@node1 ~]# gluster volume info
(Figure: gluster volume info output for all five volumes)

3. Check the status of the volume

[root@node1 ~]# gluster volume status
(Figure: gluster volume status output)

2) Stop and delete a volume

[root@node1 ~]# gluster volume stop dis-volume (a volume must be stopped before it can be deleted)
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: dis-volume: success
[root@node1 ~]# gluster volume delete dis-volume (delete the volume)
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: dis-volume: success

3) Set access control on a volume

Allow only clients on the 192.168.100.0 subnet to access the dis-rep volume:

[root@node1 ~]# gluster volume set dis-rep auth.allow 192.168.100.*
volume set: success
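To lift the restriction later, the option can be returned to its default with gluster volume reset:

[root@node1 ~]# gluster volume reset dis-rep auth.allow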

Possible failure troubleshooting

If one of the node virtual machines gets restarted, you may find the client can no longer mount the volumes.

First run the following command on all of the nodes; the service should be running on every one of them.

[root@node1 ~]# systemctl status glusterd

If it is not running, start the service.
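For example, with the same commands used during installation:

[root@node1 ~]# systemctl start glusterd
[root@node1 ~]# systemctl enable glusterd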

Then, on any one node, run the following commands to stop and then start the volume; after that it can be mounted again.

Every affected volume must be stopped and started again before all of the mounts will succeed.

[root@node1 ~]# gluster volume stop dis-rep
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: dis-rep: success
[root@node1 ~]# gluster volume start dis-rep
volume start: dis-rep: success
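If several volumes are affected, they can all be bounced in one pass; this sketch uses the gluster CLI's --mode=script flag to suppress the interactive y/n prompts:

[root@node1 ~]# for v in $(gluster volume list); do gluster --mode=script volume stop $v; gluster volume start $v; done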

Also, sometimes no virtual machine was restarted and the mount still fails.

First check the client's hosts file; if that is fine, usually some volume was simply not started on the nodes, and you can start it.

This command shows the status of each volume; then start whichever volume needs it:
[root@node1 ~]# gluster volume info

[root@node1 ~]# gluster volume start dis-rep
volume start: dis-rep: success

The experiment is complete!
