Installation and basic operation of GlusterFS file storage

1. Introduction to GlusterFS

GlusterFS is an open-source distributed file system with the following features:

  1. High availability: GlusterFS keeps data durable and reliable by replicating and distributing it across nodes.

  2. Scalability: storage capacity and node count can be grown like building blocks, so the cluster can be sized to match different scale and capacity requirements.

  3. Cross-platform support: GlusterFS can be used from multiple operating systems, such as Linux, Unix, and Windows.

  4. Data consistency: GlusterFS uses a consistent-hashing algorithm to keep data placement consistent across the cluster, which improves data reliability and integrity.

  5. Fast recovery: when a node fails, GlusterFS can migrate data automatically and quickly restore the storage system as a whole.

  6. Pluggability: GlusterFS provides a plug-in mechanism that makes it easy to integrate different storage engines and protocols as needs change.

2. Installation

yum install -y epel-release
yum install -y centos-release-gluster
yum install glusterfs-server -y
systemctl start glusterd
# After installation succeeds, check the installed version
glusterfs -V
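
If glusterd should come back up after a reboot and the nodes run firewalld, something along these lines can also be applied (a minimal sketch; the brick port range is an assumption and may differ between GlusterFS versions, so check your release's documentation):

# Optional: enable glusterd at boot and open the management and brick ports (assumed ranges)
systemctl enable glusterd
firewall-cmd --permanent --add-port=24007-24008/tcp
firewall-cmd --permanent --add-port=49152-49251/tcp
firewall-cmd --reload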

3. Basic operations

# On each node, create a brick directory on the chosen storage disk
mkdir -p /vdb/brick1
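
If each node has a dedicated data disk for the brick (assumed here to be /dev/vdb, mounted at /vdb to match the brick paths used later), the disk would be formatted and mounted before the brick directory above is created, roughly as follows:

# Assumption: /dev/vdb is a spare data disk on every node
mkfs.xfs -f /dev/vdb
mkdir -p /vdb
mount /dev/vdb /vdb
# Make the disk mount persistent across reboots
echo '/dev/vdb /vdb xfs defaults 0 0' >> /etc/fstab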

# Check the trusted storage pool (peer status)
gluster peer status


# Add nodes to the trusted storage pool
gluster peer probe ip1

gluster peer probe ip2

gluster peer probe ip3
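
Once the probes succeed, pool membership can be verified from any node:

# List pool members and confirm each peer shows "Peer in Cluster (Connected)"
gluster pool list
gluster peer status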


# On any node: create a replica-3 replicated volume named gv0
gluster volume create gv0 replica 3  ip1:/vdb/brick1 ip2:/vdb/brick1  ip3:/vdb/brick1 force

# On any node: start the created volume
gluster volume start gv0
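
The brick processes and the ports they listen on can then be checked with:

# Confirm every brick reports Online: Y
gluster volume status gv0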

Check the volume information:
[root@ip1 ~]# gluster volume info all

Volume Name: gv0
Type: Replicate
Volume ID: 5fee666d-dd65-4d61-9975-9def0728ad98
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: ip1:/vdb/brick1
Brick2: ip2:/vdb/brick1
Brick3: ip3:/vdb/brick1
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
storage.fips-mode-rchecksum: on
cluster.granular-entry-heal: on

# Mount the volume locally
mount.glusterfs ip1:/gv0 /data/test/

mount -t glusterfs ip1:/gv0 /data/salt

mount -t glusterfs -o backupvolfile-server=ip1,use-readdirp=no,log-level=WARNING,log-file=/var/log/gluster.log ip2:/gv0 /data/salt/
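
To remount automatically at boot, an /etc/fstab entry along these lines can be used (a sketch; _netdev defers the mount until networking is up, and backupvolfile-server gives the client a fallback node for fetching the volume file):

# Persistent mount entry for the gv0 volume
echo 'ip1:/gv0  /data/salt  glusterfs  defaults,_netdev,backupvolfile-server=ip2  0 0' >> /etc/fstab
mount -a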

# Shrink the volume (remove a brick)
Before removing bricks, consider whether data will be lost afterwards; removing a brick from a Replicate-type volume does not lose data, because each remaining brick still holds a full copy.
gluster volume remove-brick gv0 replica 2 ip1:/vdb/brick1 force
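
Here force is only safe because the removed brick is one replica of data that still exists on the remaining bricks. On a Distribute-type volume, where each brick holds unique files, the data should be migrated off first with the start/status/commit sequence (a sketch, using a hypothetical distributed volume name dist-vol):

# Migrate data away before removal on a Distribute-type volume
gluster volume remove-brick dist-vol ip1:/vdb/brick1 start
gluster volume remove-brick dist-vol ip1:/vdb/brick1 status
gluster volume remove-brick dist-vol ip1:/vdb/brick1 commit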

# Detach a node from the trusted storage pool
gluster peer detach ip1



# Add the node back to the trusted storage pool
gluster peer probe ip1

# Expand the volume (add the brick back)
gluster volume add-brick gv0 replica 3 ip1:/vdb/brick1
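
After the brick is added back, a full self-heal can be triggered so that the new replica catches up, and its progress checked:

# Populate the re-added brick and watch the heal queue
gluster volume heal gv0 full
gluster volume heal gv0 info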

4. Exception handling

When lsyncd is used to synchronize a locally mounted GlusterFS directory, file changes may go undetected, because changes made by other clients do not generate inotify events on the FUSE mount; the original source directory therefore has to be monitored directly.
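
One way to confirm this behavior is to compare inotify events on the FUSE mount with events on the underlying brick directory (a sketch assuming inotify-tools is installed and the paths used earlier in this article):

# Changes made by other clients normally produce no events on the FUSE mount
inotifywait -m -r /data/salt &
# The brick directory on the server does see the files as they are written locally
inotifywait -m -r /vdb/brick1 &
# Now create a file through the mount on another node and compare the two watchers' output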
