Setting Up a GlusterFS Distributed File System Cluster on CentOS 7.5

Environment:

OS version: CentOS Linux release 7.5.1804 (Core)

GlusterFS: 3.6.9

userspace-rcu: master branch

Hosts:

10.200.22.152 GlusterFS-master (referred to as 152 below)

10.200.22.151 GlusterFS-slave (referred to as 151 below)

I. Install dependencies (on both 152 and 151)

yum install -y flex bison openssl openssl-devel acl libacl libacl-devel sqlite-devel libxml2-devel python-devel make cmake gcc gcc-c++ autoconf automake libtool unzip zip wget

II. Install userspace-rcu-master (on both 152 and 151)

1) Change to /usr/local/src and download userspace-rcu-master.zip:

cd /usr/local/src && wget https://github.com/urcu/userspace-rcu/archive/master.zip

2) Unzip, build, and install:

unzip /usr/local/src/master.zip -d /usr/local/
cd /usr/local/userspace-rcu-master/
./bootstrap
./configure
make && make install
ldconfig
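Optionally, confirm the library is now visible to the dynamic linker (a quick sanity check, not part of the original procedure):

ldconfig -p | grep liburcu   # should list the freshly installed liburcu shared libraries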

III. Install GlusterFS (on both 152 and 151)

1) Change to /usr/local/src and download glusterfs-3.6.9.tar.gz:

cd /usr/local/src && wget https://download.gluster.org/pub/gluster/glusterfs/old-releases/3.6/3.6.9/glusterfs-3.6.9.tar.gz

2) Extract, build, and install:

tar -zxvf /usr/local/src/glusterfs-3.6.9.tar.gz -C /usr/local/
cd /usr/local/glusterfs-3.6.9/
./configure --prefix=/usr/local/glusterfs
make && make install
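Before moving on, it is worth checking that the build produced a working binary (a simple sanity check; the path follows from the --prefix above):

/usr/local/glusterfs/sbin/glusterfs --version   # should report glusterfs 3.6.9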

3) Add environment variables

vi /etc/profile

# add the following at the top of the file
export GLUSTERFS_HOME=/usr/local/glusterfs
export PATH=$PATH:$GLUSTERFS_HOME/sbin

source /etc/profile   # reload the profile so the change takes effect
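To confirm the PATH change took effect:

which gluster   # should print /usr/local/glusterfs/sbin/gluster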

4) Start glusterd

/usr/local/glusterfs/sbin/glusterd
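glusterd daemonizes itself, so the prompt returns immediately. To verify it is actually running (netstat assumes net-tools is installed; ss -ntlp works as well):

ps -ef | grep -v grep | grep glusterd
netstat -ntlp | grep 24007   # 24007 is glusterd's management port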

5) Stop the firewall

systemctl stop firewalld.service
systemctl disable firewalld.service
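Disabling the firewall entirely is fine for a lab. If you would rather keep firewalld running, a sketch of opening just the GlusterFS ports instead (24007-24008 for management; bricks use one port each starting at 49152 in releases since 3.4):

firewall-cmd --permanent --add-port=24007-24008/tcp
firewall-cmd --permanent --add-port=49152-49251/tcp
firewall-cmd --reload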

Appendix: installing GlusterFS from a YUM repository

rpm -ivh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-8.noarch.rpm
wget -P /etc/yum.repos.d https://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.19/CentOS/glusterfs-epel.repo   # pick the version that suits your needs

yum -y install glusterfs glusterfs-fuse glusterfs-server
systemctl start glusterd.service
systemctl enable glusterd.service

IV. Build the cluster (run on 152)

1) Add node 151 to the trusted pool:

gluster peer probe 10.200.22.151

2) Check the peer status:

[root@GlusterFS-master ~]# gluster peer status
Number of Peers: 1

Hostname: 10.200.22.151
Uuid: d2426768-81e9-486c-808b-d4716b1cd8ec
State: Peer in Cluster (Connected)

3) Check the volume info:

[root@GlusterFS-master ~]# gluster volume info
No volumes present

4) Create the brick directory (on both 152 and 151; every node in the cluster needs it):

mkdir -p /data

5) Create the replicated volume models on the directory just created. replica 2 means each file is stored as two copies; the brick paths follow the replica count. force is needed because the bricks sit on the root partition, which Gluster warns against:

[root@GlusterFS-master ~]# gluster volume create models replica 2 10.200.22.152:/data 10.200.22.151:/data force
volume create: models: success: please start the volume to access data

Notes on the GlusterFS volume types (the official guide has clearer diagrams: https://docs.gluster.org/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/):

Default mode, DHT, also called a distributed volume: each file is placed on a single server node chosen by a hash algorithm.
Command format: gluster volume create test-volume server1:/exp1 server2:/exp2
Replicated mode, AFR: pass replica x at creation time to store x copies of each file. A 3-node arbiter setup is now recommended, because 2-node replication is prone to split-brain.
Command format: gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2
gluster volume create test-volume replica 3 arbiter 1 transport tcp server1:/exp1 server2:/exp2 server3:/exp3
Distributed-replicated mode: requires at least 4 nodes.
Command format: gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
Dispersed mode: requires at least 3 nodes.
Command format: gluster volume create test-volume disperse 3 server{1..3}:/bricks/test-volume
Distributed-dispersed mode: the disperse keyword and <count> are mandatory, and the number of bricks given on the command line must be a multiple of the disperse count.
Command format: gluster volume create <volname> disperse 3 server1:/brick{1..6}
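A point that trips people up with distributed-replicated volumes: bricks are grouped into replica sets in the order they appear on the command line. For the 4-brick replica 2 example above:

# replica 2 with 4 bricks = 2 replica sets, paired in listed order:
#   set 1: server1:/exp1 + server2:/exp2  (mirrors of each other)
#   set 2: server3:/exp3 + server4:/exp4
# so adjacent bricks on the command line should sit on different servers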

6) Check the volume info again:

[root@GlusterFS-master ~]# gluster volume info
 
Volume Name: models
Type: Replicate
Volume ID: f2792167-cbab-4279-9d6d-77dc6559afa7
Status: Created
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.200.22.152:/data
Brick2: 10.200.22.151:/data

7) Start models

[root@GlusterFS-master ~]# gluster volume start models
volume start: models: success

8) GlusterFS performance tuning

Enable quota on the volume:
gluster volume quota models enable

Cap usage of the volume root at 10GB (adjust to your actual disk size):
gluster volume quota models limit-usage / 10GB
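To check that the limit is in place (a quick verification step, not in the original write-up):

gluster volume quota models list   # shows the configured limit and current usage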


Set the cache size (128MB is not absolute; adjust it to your environment):
gluster volume set models performance.cache-size 128MB

Enable flush-behind (asynchronous, background flushing):
gluster volume set models performance.flush-behind on

Set the IO thread count to 32:
gluster volume set models performance.io-thread-count 32

Enable write-behind (writes land in the cache first, then are flushed to disk):
gluster volume set models performance.write-behind on

9) Check the volume info after tuning:

[root@GlusterFS-master ~]# gluster volume info
 
Volume Name: models
Type: Replicate
Volume ID: f2792167-cbab-4279-9d6d-77dc6559afa7
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.200.22.152:/data
Brick2: 10.200.22.151:/data
Options Reconfigured:
performance.write-behind: on
performance.io-thread-count: 32
performance.flush-behind: on
performance.cache-size: 128MB
features.quota: on

V. Deploy the client and mount the GlusterFS volume

1) Install the GlusterFS client packages:

yum install -y glusterfs glusterfs-fuse

2) Create the mount point:

mkdir -p /opt/gfsmount

3) Mount the volume:

mount -t glusterfs 10.200.22.151:models /opt/gfsmount/
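To make the mount survive a client reboot, an /etc/fstab entry along these lines can be added (a sketch; _netdev delays the mount until the network is up):

echo "10.200.22.151:/models /opt/gfsmount glusterfs defaults,_netdev 0 0" >> /etc/fstab
mount -a   # confirm the entry mounts cleanly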


4) Verify the mount:

df -h

5) Run a write test:

time dd if=/dev/zero of=/opt/gfsmount/hello bs=10M count=1


6) Check how the data landed on the bricks (run on both 152 and 151). Because models is a replica-2 volume, the file should appear in the brick directory on both nodes:

cd /data && ll

VI. Managing GlusterFS

1) Delete a volume

gluster volume stop models 
gluster volume delete models

2) Remove machines from the cluster (detach takes one host at a time):

gluster peer detach glusterfs3
gluster peer detach glusterfs4

3) Expand a volume (with the replica count set to 2, bricks must be added in multiples of 2: 2, 4, 6, ...)

gluster peer probe glusterfs3   # add the node
gluster peer probe glusterfs4   # add the node
gluster volume add-brick models glusterfs3:/data/brick1/models glusterfs4:/data/brick1/models force   # add the bricks to the volume

4) Rebalance the volume

gluster volume rebalance models start
gluster volume rebalance models status
gluster volume rebalance models stop

5) Shrink a volume (Gluster first migrates the data off the bricks being removed)

gluster volume remove-brick models glusterfs3:/data/brick1/models glusterfs4:/data/brick1/models start    # start the migration
gluster volume remove-brick models glusterfs3:/data/brick1/models glusterfs4:/data/brick1/models status   # check migration status
gluster volume remove-brick models glusterfs3:/data/brick1/models glusterfs4:/data/brick1/models commit   # commit once migration is done

6) Migrate a brick

gluster peer probe glusterfs5   # to migrate glusterfs3's data to glusterfs5, first add glusterfs5 to the cluster
gluster volume replace-brick models glusterfs3:/data/brick1/models glusterfs5:/data/brick1/models start    # start the migration
gluster volume replace-brick models glusterfs3:/data/brick1/models glusterfs5:/data/brick1/models status   # check migration status
gluster volume replace-brick models glusterfs3:/data/brick1/models glusterfs5:/data/brick1/models commit   # commit once the data migration is done
gluster volume replace-brick models glusterfs3:/data/brick1/models glusterfs5:/data/brick1/models commit force   # if glusterfs3 has failed and can no longer run, force the commit
gluster volume heal models full   # heal (resync) the whole volume

7) Restrict client access:

gluster volume set models auth.allow 10.200.*
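auth.allow also accepts a comma-separated list of addresses and wildcards, for example (hypothetical addresses):

gluster volume set models auth.allow 10.200.22.*,10.200.23.100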

VII. Common GlusterFS commands

List all volumes:

gluster volume list

Show status and info for all volumes:

gluster volume status
gluster volume info

Start a volume:  gluster volume start models    # start the volume named models
Stop a volume:   gluster volume stop models     # stop the volume named models
Delete a volume: gluster volume delete models   # delete the volume named models

List the cluster nodes:
gluster pool list

VIII. After a GlusterFS server reboot

1. The glusterd service must be started
2. The models volume must be started
3. The /opt/gfsmount/ directory must be remounted
4. After remounting, /opt/gfsmount/ must be re-entered

systemctl stop firewalld.service
/usr/local/glusterfs/sbin/glusterd   # start the management daemon
gluster volume start models
mount -t glusterfs 10.200.22.151:models /opt/gfsmount/
cd /opt/gfsmount/
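Since glusterd was built from source, no systemd unit is installed, which is why the steps above are manual. A minimal unit sketch to start glusterd at boot (assuming the /usr/local/glusterfs prefix used earlier):

cat > /etc/systemd/system/glusterd.service <<'EOF'
[Unit]
Description=GlusterFS management daemon
After=network.target

[Service]
Type=forking
ExecStart=/usr/local/glusterfs/sbin/glusterd
KillMode=process

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable glusterd.service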


Reposted from www.cnblogs.com/EikiXu/p/10494892.html