1.1 Distributed file system
1.1.1 What is a distributed file system
Compared to a local file system, a distributed file system (DFS), also called a network file system, is a file system that allows files to be shared across multiple hosts over a network, letting multiple users on multiple machines share files and storage space.
In such a file system, clients do not access the underlying storage blocks directly; instead, they communicate with the server over the network using a specific protocol. Through the design of that protocol, both the client and the server can restrict access to the file system according to access control lists or authorization.
1.1.2 What is glusterfs
Gluster is a distributed file system. It aggregates multiple storage servers, interconnected over Ethernet or InfiniBand with Remote Direct Memory Access (RDMA), into one large parallel network file system.
It has a variety of applications, including cloud computing, biomedical science, and document storage. Gluster is free software licensed under the AGPL. Gluster, Inc. was its primary commercial sponsor, providing commercial products as well as Gluster-based solutions.
1.2 Rapid deployment of GlusterFS
1.2.1 Environmental description
Note: each node needs at least two hard drives (a system disk plus a dedicated disk for the brick)
System environment description
glusterfs01 info
[root@glusterfs01 ~]# hostname
glusterfs01
[root@glusterfs01 ~]# uname -r
3.10.0-693.el7.x86_64
[root@glusterfs01 ~]# sestatus
SELinux status:                 disabled
[root@glusterfs01 ~]# systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
[root@glusterfs01 ~]# hostname -I
10.0.0.120 172.16.1.120
glusterfs02 info
[root@glusterfs02 ~]# uname -r
3.10.0-693.el7.x86_64
[root@glusterfs02 ~]# sestatus
SELinux status:                 disabled
[root@glusterfs02 ~]# systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
[root@glusterfs02 ~]# hostname -I
10.0.0.121 172.16.1.121
Note: configure hosts resolution on both nodes so that each hostname resolves
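For example, entries like the following could be appended to /etc/hosts on each server (the IPs are the 10.0.0.0/24 addresses from the environment description above):

```shell
# Run as root on both nodes: map each hostname to its address
cat >> /etc/hosts <<'EOF'
10.0.0.120  glusterfs01
10.0.0.121  glusterfs02
EOF
```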
1.2.2 Preliminary preparation
Format and mount the disk on the glusterfs01 host
[root@glusterfs01 ~]# mkfs.xfs /dev/sdb
[root@glusterfs01 ~]# mkdir -p /data/brick1
[root@glusterfs01 ~]# echo '/dev/sdb /data/brick1 xfs defaults 0 0' >> /etc/fstab
[root@glusterfs01 ~]# mount -a && mount
Format and mount the disk on the glusterfs02 host
[root@glusterfs02 ~]# mkfs.xfs /dev/sdb
[root@glusterfs02 ~]# mkdir -p /data/brick1
[root@glusterfs02 ~]# echo '/dev/sdb /data/brick1 xfs defaults 0 0' >> /etc/fstab
[root@glusterfs02 ~]# mount -a && mount
1.3 Deploy GlusterFS
1.3.1 Install the software
Operate on both nodes
yum install centos-release-gluster -y
# Switch to a faster mirror
sed -i 's#http://mirror.centos.org#https://mirrors.shuosc.org#g' /etc/yum.repos.d/CentOS-Gluster-3.12.repo
yum install -y glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma
Software version
[root@glusterfs01 ~]# rpm -qa glusterfs
glusterfs-3.12.5-2.el7.x86_64
1.3.2 Start GlusterFS
Operate on both nodes
[root@glusterfs01 ~]# systemctl start glusterd.service
[root@glusterfs01 ~]# systemctl status glusterd.service
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2018-02-07 21:02:44 CST; 2s ago
  Process: 1923 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 1924 (glusterd)
   CGroup: /system.slice/glusterd.service
           └─1924 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
Feb 07 21:02:44 glusterfs01 systemd[1]: Starting GlusterFS, a clustered file-system server...
Feb 07 21:02:44 glusterfs01 systemd[1]: Started GlusterFS, a clustered file-system server.
Hint: Some lines were ellipsized, use -l to show in full.
1.3.3 Configuring mutual trust (trusted pool)
Operate on glusterfs01
[root@glusterfs01 ~]# gluster peer probe glusterfs02
peer probe: success.
Operate on glusterfs02
[root@glusterfs02 ~]# gluster peer probe glusterfs01
peer probe: success.
Note: once this pool is established, only trusted members may probe new servers into it. A new server cannot probe its way into the pool; it must be probed from a server that is already a member.
1.3.4 Checking Peer Status
[root@glusterfs01 ~]# gluster peer status
Number of Peers: 1

Hostname: 10.0.0.121
Uuid: 61d043b0-5582-4354-b475-2626c88bc576
State: Peer in Cluster (Connected)
Other names:
glusterfs02
Note: each peer's UUID should be different.
[root@glusterfs02 ~]# gluster peer status
Number of Peers: 1

Hostname: glusterfs01
Uuid: e2a9367c-fe96-446d-a631-194970c18750
State: Peer in Cluster (Connected)
1.3.5 Create a GlusterFS volume
Operate on both nodes
mkdir -p /data/brick1/gv0
Execute on any node
[root@glusterfs01 ~]# gluster volume create gv0 replica 2 glusterfs01:/data/brick1/gv0 glusterfs02:/data/brick1/gv0
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue?
 (y/n) y
volume create: gv0: success: please start the volume to access data
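As the warning says, replica 2 volumes are prone to split-brain. A hedged sketch of the arbiter variant it suggests, assuming a hypothetical third node glusterfs03 with a brick prepared the same way as the other two (the arbiter brick stores only metadata, so it needs far less space):

```shell
# Sketch only: requires a third probed peer named glusterfs03 (hypothetical here)
gluster volume create gv0 replica 3 arbiter 1 \
  glusterfs01:/data/brick1/gv0 \
  glusterfs02:/data/brick1/gv0 \
  glusterfs03:/data/brick1/gv0
```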
Start the storage volume
[root@glusterfs01 ~]# gluster volume start gv0
volume start: gv0: success
View information
[root@glusterfs01 ~]# gluster volume info

Volume Name: gv0
Type: Replicate
Volume ID: 865899b9-1e5a-416a-8374-63f7df93e4f5
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: glusterfs01:/data/brick1/gv0
Brick2: glusterfs02:/data/brick1/gv0
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
At this point, the server-side configuration is complete
1.4 Client Test
1.4.1 Installing Client Tools
Mount test
[root@clsn6 ~]# yum install centos-release-gluster -y
[root@clsn6 ~]# yum install -y glusterfs glusterfs-fuse
Note: configure the hosts file on the client as well, otherwise the connection will fail
[root@clsn6 ~]# mount.glusterfs glusterfs01:/gv0 /mnt
[root@clsn6 ~]# df -h
Filesystem        Size  Used Avail Use% Mounted on
/dev/sda3          19G  2.2G   16G  13% /
tmpfs             238M     0  238M   0% /dev/shm
/dev/sda1         190M   40M  141M  22% /boot
glusterfs01:/gv0  100G   33M  100G   1% /mnt
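To make the client mount survive reboots, an /etc/fstab entry can be used (a sketch; the _netdev option delays the mount until the network is up):

```
glusterfs01:/gv0  /mnt  glusterfs  defaults,_netdev  0 0
```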
1.4.2 Copy file test
[root@clsn6 ~]# for i in `seq -w 1 100`; do cp -rp /var/log/messages /mnt/copy-test-$i; done
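The loop above relies on seq -w to zero-pad the counter, so all one hundred file names have the same width and sort consistently. A quick illustration:

```shell
# seq -w pads every number to the width of the largest value (100 -> 3 digits)
seq -w 1 100 | head -n 1   # first name suffix: 001
seq -w 1 100 | tail -n 1   # last name suffix: 100
seq -w 1 100 | wc -l       # 100 suffixes in total
```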
Client check file
[root@clsn6 ~]# ls -lA /mnt/copy* | wc -l
100
Check the files on server node glusterfs01
[root@glusterfs01 ~]# ls -lA /data/brick1/gv0/copy* | wc -l
100
Check the files on server node glusterfs02
[root@glusterfs02 ~]# ls -lA /data/brick1/gv0/copy* | wc -l
100
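Because gv0 is a two-way replicated volume, each brick holds a full copy of every file, so brick contents should be byte-identical to what the client wrote. A local simulation of that checksum check (temporary directories stand in for the client mount and a brick; the paths are hypothetical, not real GlusterFS paths):

```shell
# Stand-ins for the client mount point and one brick directory
mnt=$(mktemp -d)
brick=$(mktemp -d)

# The client writes a file; the replica translator stores a full copy on each brick
echo "test log line" > "$mnt/copy-test-001"
cp -p "$mnt/copy-test-001" "$brick/copy-test-001"

# The checksum of the client copy and the brick copy should match
md5sum "$mnt/copy-test-001" "$brick/copy-test-001"
```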
At this point, the basic GlusterFS configuration is complete