Build a GlusterFS distributed file system

1 Introduction to GlusterFS

GlusterFS (Gluster File System) is free software, developed mainly by the company Z RESEARCH with a team of more than a dozen developers who have been very active recently. The documentation is relatively complete, so it is not difficult to get started.

It is mainly used in cluster systems and scales well. The software is well designed, easy to extend and configure, and its modules can be combined flexibly to build targeted solutions. It can address the following needs: network storage, unified storage (aggregating the storage space of multiple nodes), redundant backup, and load balancing of large files (via striping). Because some key features are still missing and its reliability has not been proven over a long period, it is not suitable for production environments that must provide uninterrupted 24-hour service; it is better suited to offline applications with large volumes of data.
Thanks to its sound software design and backing by a dedicated company, development is rapid; significant improvements can be expected within months to a year, which is well worth looking forward to.
GlusterFS uses InfiniBand RDMA or TCP/IP to interconnect many inexpensive x86 hosts over the network into a parallel network file system.

2 Building the GlusterFS file system

System nodes:

IP address        Hostname     Mount path
192.168.200.138   GlusterFS1   /export/brick1/gv0
192.168.200.140   GlusterFS2   /export/brick1/gv0
2.1 Configure the YUM repository
[root@localhost ~]# cat /etc/yum.repos.d/local.repo
[glusterfs]
name=glusterfs
baseurl=https://mirrors.aliyun.com/centos/7/storage/x86_64/gluster-7/
gpgcheck=0
enabled=1
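
Before installing, you can verify that the repository resolves correctly (a quick sanity check; the exact output depends on the mirror state):
[root@GlusterFS1 ~]# yum clean all && yum makecache
[root@GlusterFS1 ~]# yum repolist | grep glusterfs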

2.2 Install the packages required by GlusterFS on both nodes
[root@GlusterFS1 ~]# yum -y install glusterfs-server xfsprogs

[root@GlusterFS2 ~]# yum -y install glusterfs-server xfsprogs

After installation, start the service and enable it at boot (run on both nodes):
systemctl start glusterd
systemctl enable glusterd
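
Optionally verify that the management daemon is up; glusterd listens on TCP port 24007:
# systemctl status glusterd
# ss -ltn | grep 24007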

2.3 Add nodes to the GlusterFS cluster
[root@GlusterFS1 ~]# gluster peer probe 192.168.200.138
peer probe: success. Probe on localhost not needed
[root@GlusterFS1 ~]# gluster peer probe 192.168.200.140
peer probe: failed: Probe returned with Transport endpoint is not connected

If the probe fails, the firewall is the usual cause; turn it off and probe again:
[root@GlusterFS1 ~]# gluster peer probe 192.168.200.140
peer probe: success.
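
In a lab setup the simplest fix is to stop firewalld outright; in production you would instead open the GlusterFS ports (24007-24008 for management, plus one port per brick starting at 49152):
# systemctl stop firewalld
# systemctl disable firewalld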

2.4 Query cluster status
[root@GlusterFS1 ~]# gluster peer status
Number of Peers: 1

Hostname: 192.168.200.140
Uuid: aa20dc09-50b7-4acd-913d-b187f4acf018
State: Peer in Cluster (Connected)

2.5 Create the data storage directories
Create the data storage partition and directories (run on both nodes). First create a new partition with fdisk:
# fdisk /dev/sda
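A typical interactive fdisk sequence for a 10 GB partition is: n (new partition), p (primary), accept the default partition number and first sector, enter +10G for the last sector, then w to write the table; run partprobe if the kernel does not pick up the new partition immediately. Verify the layout with lsblk: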
[root@GlusterFS1 ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0  100G  0 disk 
├─sda1            8:1    0  500M  0 part /boot
├─sda2            8:2    0   52G  0 part 
│ ├─centos-root 253:0    0   50G  0 lvm  /
│ └─centos-swap 253:1    0    2G  0 lvm  [SWAP]
└─sda3            8:3    0   10G  0 part
[root@GlusterFS2 ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0  100G  0 disk 
├─sda1            8:1    0  500M  0 part /boot
├─sda2            8:2    0   52G  0 part 
│ ├─centos-root 253:0    0   50G  0 lvm  /
│ └─centos-swap 253:1    0    2G  0 lvm  [SWAP]
└─sda3            8:3    0   10G  0 part
Format the partition with the XFS file system:
# mkfs.xfs /dev/sda3
Create the mount point:
# mkdir -p /export/brick1
Mount the partition:
# mount /dev/sda3 /export/brick1/
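To make the mount persistent across reboots, an entry can also be added to /etc/fstab (a sketch; referencing the partition by its UUID from blkid is more robust than the device name):
/dev/sda3  /export/brick1  xfs  defaults  0 0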
Check the mount on both nodes:
[root@GlusterFS1 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 475M     0  475M   0% /dev
tmpfs                    487M     0  487M   0% /dev/shm
tmpfs                    487M  7.6M  479M   2% /run
tmpfs                    487M     0  487M   0% /sys/fs/cgroup
/dev/mapper/centos-root   50G  1.3G   49G   3% /
/dev/sda1                497M  130M  367M  27% /boot
tmpfs                     98M     0   98M   0% /run/user/0
/dev/sda3                 10G   33M   10G   1% /export/brick1

[root@GlusterFS2 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 475M     0  475M   0% /dev
tmpfs                    487M     0  487M   0% /dev/shm
tmpfs                    487M  7.5M  479M   2% /run
tmpfs                    487M     0  487M   0% /sys/fs/cgroup
/dev/mapper/centos-root   50G  1.3G   49G   3% /
/dev/sda1                497M  130M  367M  27% /boot
tmpfs                     98M     0   98M   0% /run/user/0
/dev/sda3                 10G   33M   10G   1% /export/brick1

Create the brick directory inside the mounted file system (on both nodes). Using a subdirectory of the mount point, rather than the mount point itself, lets the volume fail cleanly if the partition is ever left unmounted:
# mkdir /export/brick1/gv0

2.6 Create the disk volume
Create the volume gv0 (a replicated volume):
[root@GlusterFS1 ~]# gluster volume create gv0 replica 2 192.168.200.138:/export/brick1/gv0 192.168.200.140:/export/brick1/gv0
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue?
 (y/n) y
volume create: gv0: success: please start the volume to access data

Start the volume gv0:
[root@GlusterFS1 ~]# gluster volume start gv0
volume start: gv0: success

View the volume information:
[root@GlusterFS1 ~]# gluster volume info
 
Volume Name: gv0
Type: Replicate
Volume ID: f56b6326-b698-42d9-ac22-4bec7dc444b4
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.200.138:/export/brick1/gv0
Brick2: 192.168.200.140:/export/brick1/gv0
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off

Mount the file system
Install the client and mount the GlusterFS volume. Here the GlusterFS2 node doubles as the client, and the volume is mounted on it:
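On a dedicated client machine, the native FUSE client packages would be installed first (GlusterFS2 already has them as dependencies of glusterfs-server):
# yum -y install glusterfs glusterfs-fuse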

[root@GlusterFS2 ~]# mount -t glusterfs 192.168.200.138:/gv0 /mnt/
[root@GlusterFS2 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 475M     0  475M   0% /dev
tmpfs                    487M     0  487M   0% /dev/shm
tmpfs                    487M  7.6M  479M   2% /run
tmpfs                    487M     0  487M   0% /sys/fs/cgroup
/dev/mapper/centos-root   50G  1.3G   49G   3% /
/dev/sda1                497M  130M  367M  27% /boot
tmpfs                     98M     0   98M   0% /run/user/0
/dev/sda3                 10G   33M   10G   1% /export/brick1
192.168.200.138:/gv0      10G  135M  9.9G   2% /mnt

The mount is verified: the replicated volume gv0 is 10 GB in size because, with a replica count of 2, half of the total storage is used for redundancy.
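
A quick way to check the replication (test-file is a hypothetical name) is to write a file through the mount and confirm it appears in the brick directory on both nodes:
[root@GlusterFS2 ~]# touch /mnt/test-file
[root@GlusterFS2 ~]# ls /export/brick1/gv0/
test-file
[root@GlusterFS1 ~]# ls /export/brick1/gv0/
test-file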

3 Operation and maintenance of the GlusterFS file system

3.1 Common maintenance operations
Add a node (add SERVER_IP to the storage pool):
# gluster peer probe SERVER_IP

Remove a node:
# gluster peer detach SERVER_IP
Note: a node can only be detached from the storage pool if it holds no bricks; if it does, remove its bricks first.

View volume information:
# gluster volume info
View volume status:
# gluster volume status
Start or stop a volume:
# gluster volume start/stop VOLUME
Delete a volume (a volume must be stopped before it can be deleted):
# gluster volume delete VOLUME

Heal a volume:
# gluster volume heal VOLUME        # heal only the files that need repair
# gluster volume heal VOLUME full   # heal all files
# gluster volume heal VOLUME info   # show self-heal details
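
For replicated volumes it is also useful to list files whose replicas disagree (split-brain):
# gluster volume heal VOLUME info split-brain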

3.2 Brick management
Add a brick
First add the node to the storage pool, then add its brick to the volume:
# gluster peer probe SERVER_IP
# gluster volume add-brick gv0 SERVER_IP:/export/brick1/gv0

Note: when adding bricks to gv0, a replicated volume requires that the number of bricks added at a time be a multiple of the replica count; striped volumes have the same requirement (see the sketch below).
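
For the replica 2 volume gv0 that means adding bricks in pairs. A sketch with two hypothetical new nodes, followed by a rebalance to spread existing data onto them:
# gluster volume add-brick gv0 192.168.200.141:/export/brick1/gv0 192.168.200.142:/export/brick1/gv0
# gluster volume rebalance gv0 start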
Remove a brick
# gluster volume remove-brick gv0 SERVER_IP:/export/brick1/gv0 start
Note: for a replicated volume, bricks must be removed in multiples of the replica count (a pair at a time for replica 2); striped volumes have the same requirement. The start subcommand migrates the data on the brick to the remaining nodes.

While the removal runs, the task status can be checked with the status subcommand:
# gluster volume remove-brick gv0 SERVER_IP:/export/brick1/gv0 status

Using commit without a prior start removes the brick immediately with no data migration, which suits users who do not need the data migrated:
# gluster volume remove-brick gv0 SERVER_IP:/export/brick1/gv0 commit
