Experiment: super-detailed GFS (GlusterFS) distributed file system!

Device list:
node1: 20.0.0.3
node2: 20.0.0.5
node3: 20.0.0.6
node4: 20.0.0.7
client: 20.0.0.8

  • Required setup at the start, otherwise there will be problems later
All four servers must do this!!!
[root@node1 ~]# systemctl stop firewalld         ### stop the firewall
[root@node1 ~]# systemctl disable firewalld    ### disable the firewall at boot
[root@node1 ~]# vim /etc/selinux/config         ### disable SELinux (core protection)
SELINUX=disabled        ### set it to disabled; there are three modes in total.

Enforcing: enforcing mode. SELinux is running and actively enforces the domain/type access checks.
Permissive: permissive mode. SELinux is running but does not enforce the domain/type checks; even if a check fails, the process can still operate on the file, although a warning is issued.
Disabled: SELinux is not actually running.
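
A quick way to check and change the running mode without a reboot (assumed extra commands, not part of the original steps):

getenforce                ### print the current SELinux mode
setenforce 0              ### switch the running system to permissive immediately
### the SELINUX=disabled setting in /etc/selinux/config only takes full effect after a reboot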
  • Add hard drives
Add four hard drives to each of the four servers; counting the existing system disk, each server then has five drives.

[root@localhost ~]# fdisk -l   ### check that the newly added drives are visible
[root@localhost ~]# vim disk.sh   ### write a script that partitions and formats the new drives
#!/bin/bash
echo "the disks exist list:"
fdisk -l | grep '磁盘 /dev/sd[a-z]'    # '磁盘' matches fdisk output under a Chinese locale; use 'Disk' on an English locale
echo "=========================================="
PS3="chose which disk you want to create:"
select VAR in `ls /dev/sd* |grep -o 'sd[b-z]'|uniq` quit
do
    case $VAR in
    sda)
        fdisk -l /dev/sda
        break ;;
    sd[b-z])
        # create one primary partition spanning the whole disk;
        # the blank lines accept fdisk's defaults for partition number,
        # first sector and last sector
        echo "n
p



w" | fdisk /dev/$VAR

        # make the filesystem
        mkfs.xfs -i size=512 /dev/${VAR}1 &> /dev/null
        # mount the filesystem
        mkdir -p /data/${VAR}1 &> /dev/null
        echo "/dev/${VAR}1 /data/${VAR}1 xfs defaults 0 0" >> /etc/fstab
        mount -a &> /dev/null
        break ;;
    quit)
        break;;
    *)
        echo "wrong disk,please check again";;
    esac
done
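
For reference, here is a minimal non-interactive sketch that does the same work for all four data disks in one pass (an assumed batch variant; the interactive disk.sh above is what the experiment actually uses):

#!/bin/bash
# hypothetical batch variant of disk.sh: partition, format and mount sdb..sde
for DISK in sdb sdc sdd sde; do
    echo "n
p



w" | fdisk /dev/$DISK                                    # one primary partition over the whole disk
    mkfs.xfs -i size=512 /dev/${DISK}1 &> /dev/null          # format the new partition
    mkdir -p /data/${DISK}1                                  # create the mount point
    grep -q "^/dev/${DISK}1 " /etc/fstab || \
        echo "/dev/${DISK}1 /data/${DISK}1 xfs defaults 0 0" >> /etc/fstab
done
mount -a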

  • Give the script execute permission
[root@localhost ~]# chmod +x disk.sh   ### give the script execute permission
  • Copy it via scp as the root user to /root on the 20.0.0.5, 6 and 7 hosts
[root@localhost ~]# scp disk.sh [email protected]:/root   
Then type yes and enter the password Abc123 (the root password)
[root@localhost ~]# scp disk.sh [email protected]:/root 
Then type yes and enter the password Abc123 (the root password)
[root@localhost ~]# scp disk.sh [email protected]:/root 
Then type yes and enter the password Abc123 (the root password)

  • Format the hard drive partitions

[root@localhost ~]# ./disk.sh   ### then run this script on all four servers
1) sdb
2) sdc
3) sdd
4) sde
5) quit
choose which disk you want to create:           // enter 1 here, then run the script again and enter 2, and so on
Note: leave a short pause between runs, otherwise a run may fail

[root@localhost ~]# df -Th   ### check that all four drives were set up properly


---------------------------------If a single drive was not set up correctly, fix it as follows----------------------------------
[root@localhost ~]# mkfs -t xfs /dev/sdb1   ### format sdb1
[root@localhost ~]# mount  -a                      ### remount
[root@localhost ~]# df -Th                            ### check again
  • Set the host names of the four servers so that host names can be mapped to IP addresses.
[root@localhost ~]# hostnamectl  set-hostname node1
[root@localhost ~]# hostnamectl  set-hostname node2
[root@localhost ~]# hostnamectl  set-hostname node3
[root@localhost ~]# hostnamectl  set-hostname node4
[root@localhost ~]# su        ### run su on each server to refresh the shell; not repeated here for the others
  • Set the hosts mapping file; all four servers must have the same mapping. Using host names simplifies the later steps.
[root@node1 ~]# vim /etc/hosts
20.0.0.3 node1
20.0.0.5 node2
20.0.0.6 node3
20.0.0.7 node4
[root@node1 ~]# ping node4            ### test it: ping a host name, it should get a reply
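
A quick loop to confirm that all four names resolve and respond (an assumed convenience check, not part of the original steps):

for h in node1 node2 node3 node4; do ping -c 1 $h > /dev/null && echo "$h OK"; done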
  • The yum repository is on my Windows computer and needs to be shared and mounted for use

Find the local yum source and share it
1. Share the folder with the Everyone user and grant it read permission

2. Set the local security policy (secpol.msc)
Press Win+R and enter secpol.msc to open the Local Security Policy console

3. In Network and Sharing Center > Sharing Options, set all sharing options to allow (share without password protection)

  • Mount the host's yum source
[root@node1 ~]# smbclient -L //192.168.10.2       ### list the host machine's shared directories
--- press Enter directly, no password needed ---
[root@node1 ~]# mkdir /abc                                   ### create an /abc directory on node1
[root@node1 ~]# mount.cifs //192.168.10.2/gfsrepo  /abc     ### then mount the shared yum source onto it
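
If the share should survive a reboot, an fstab entry can be used instead of mounting by hand (an assumed entry; guest works here because password-protected sharing was turned off):

echo '//192.168.10.2/gfsrepo /abc cifs defaults,_netdev,guest 0 0' >> /etc/fstab
mount -a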
  • Configure the yum repository; all four servers must have the same configuration!!!
[root@node1 abc]# cd /etc/yum.repos.d/                     ### go to the /etc/yum.repos.d/ directory
[root@node1 yum.repos.d]# mkdir  backup                 ### create a backup directory
[root@node1 yum.repos.d]# mv CentOS-*   backup/   ### move the CentOS repo files into backup/
[root@node1 yum.repos.d]# vim GLFS.repo
[GLFS]
name=glfs
baseurl=file:///abc         ### mind this path; the online alternative is http://mirror.centos.org/centos/$releasever/storage/$basearch/gluster-3.12/
gpgcheck=0
enabled=1
[root@node1 yum.repos.d]# yum clean all         ### clear the cache
[root@node1 yum.repos.d]# yum list                 ### reload the package lists
---------------------------If that does not work, use the method below------------------------------
Use the online repository!!!
[root@node1 yum.repos.d]# vim GLFS.repo
Change baseurl=file:///abc to ==>> http://mirror.centos.org/centos/$releasever/storage/$basearch/gluster-3.12/
[root@node1 yum.repos.d]# yum clean all
[root@node1 yum.repos.d]# yum makecache
[root@node1 yum.repos.d]# yum -y install glusterfs-server glusterfs glusterfs-fuse glusterfs-rdma
  • Install the environment packages; all four servers must install them
[root@node1 ~]# yum -y install glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma

[root@node1 yum.repos.d]# systemctl start glusterd.service    ### start the service
[root@node1 yum.repos.d]# systemctl enable glusterd             ### enable it at boot
[root@node1 yum.repos.d]# systemctl status  glusterd             ### check its status
  • Time synchronization; all four servers need to do this
[root@node1 yum.repos.d]# ntpdate ntp1.aliyun.com
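
To keep the clocks in sync afterwards, a root cron job can rerun ntpdate periodically (an assumed schedule, not part of the original steps):

echo '*/30 * * * * /usr/sbin/ntpdate ntp1.aliyun.com &> /dev/null' >> /var/spool/cron/root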
  • On any one host, add the other nodes to the storage trusted pool
[root@node1 yum.repos.d]# gluster peer probe node2 
[root@node1 yum.repos.d]# gluster peer probe node3
[root@node1 yum.repos.d]# gluster peer probe node4
  • View all nodes
[root@node1 yum.repos.d]# gluster peer status
Number of Peers: 3

Hostname: node2
Uuid: 63f568a6-9f1a-47f7-8667-0893186ef99e
State: Peer in Cluster (Connected)

Hostname: node3
Uuid: b69de245-b692-46bc-8848-8db471f304b8
State: Peer in Cluster (Connected)

Hostname: node4
Uuid: 9f0decde-ba47-4537-a0f0-50464962d182
State: Peer in Cluster (Connected)
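
Optionally (an assumed extra check), the trusted pool can also be listed in a compact form that includes the local node:

gluster pool list        ### UUID, hostname and connection state for every pool member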

  • Distributed volume
Distributed volume
Files are not split into blocks
Hash values are stored in extended file attributes
Supported underlying file systems include ext3, ext4, ZFS, XFS, etc.

### Characteristics of a distributed volume ###
Files are distributed across different servers; there is no redundancy
It is easier and cheaper to expand the volume size
A single point of failure causes data loss; data protection depends on the underlying layer

[root@node4 yum.repos.d]# gluster volume create dis-vol node1:/data/sdb1 node2:/data/sdb1 force  ### create a distributed volume named dis-vol from node1:/data/sdb1 and node2:/data/sdb1

[root@node4 yum.repos.d]# gluster volume info dis-vol    ### view details and status
Volume Name: dis-vol          ### volume name
Type: Distribute                     ### distributed volume type
Volume ID: 5b75e4bd-d830-4e3f-9714-456261c276be  ### unique ID
Status: Created                      ### in the Created state the volume cannot be used yet
Snapshot Count: 0        
Xlator 1: BD
Capability 1: thin
Capability 2: offload_copy
Capability 3: offload_snapshot
Number of Bricks: 2               ### made up of two bricks
Transport-type: tcp                ### TCP transport
Bricks:
Brick1: node1:/data/sdb1      ### sdb1 on node1
Brick1 VG: 
Brick2: node2:/data/sdb1      ### sdb1 on node2
Brick2 VG: 
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
 
[root@node4 yum.repos.d]# gluster volume start dis-vol      ### start the volume
[root@node4 yum.repos.d]# gluster volume info dis-vol       ### check again; the status is now Started
Status: Started
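
An additional check (assumed, not in the original run) to confirm the brick processes are actually online and serving:

gluster volume status dis-vol        ### lists each brick with its port, PID and online state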
  • Striped volume
Striped volume
Files are split into N chunks by offset (N stripe nodes) and stored round-robin on the Brick Server nodes; performance is especially good when storing large files
No redundancy, similar to RAID 0

### Characteristics of a striped volume ###
Data is split into smaller chunks and distributed across different stripe regions on the brick servers
The distribution reduces load, and the smaller chunks speed up access
There is no data redundancy

[root@node4 yum.repos.d]# gluster volume create stripe-vol stripe 2 node1:/data/sdc1 node2:/data/sdc1 force
[root@node4 yum.repos.d]# gluster volume info stripe-vol    ### view details and status
[root@node4 yum.repos.d]# gluster volume start stripe-vol    ### start the volume
[root@node4 yum.repos.d]# gluster volume info stripe-vol     ### check the details and status again
  • Replicated volume
Replicated volume
One or more copies of the same file are kept
Because replication stores extra copies, disk utilization is lower
If the storage sizes of the nodes differ, the smallest node's capacity becomes the volume's total capacity (bucket effect)

### Characteristics ###
Every server in the volume keeps a complete copy
The number of replicas is chosen by the client when the volume is created; at least two brick servers are required
Provides redundancy

[root@node4 yum.repos.d]# gluster volume create rep-vol replica 2 node3:/data/sdb1 node4:/data/sdb1 force
[root@node4 yum.repos.d]# gluster volume info rep-vol   ### view details and status
[root@node4 yum.repos.d]# gluster volume start rep-vol     ### start the volume
[root@node4 yum.repos.d]# gluster volume info rep-vol      ### check the details and status again
  • Distributed striped volume
Distributed striped volume
Combines the functions of distributed and striped volumes; mainly used for large-file access
Requires at least 4 servers
[root@node4 yum.repos.d]# gluster volume create dis-stripe stripe 2 node1:/data/sdd1 node2:/data/sdd1 node3:/data/sdd1 node4:/data/sdd1 force

[root@node4 yum.repos.d]# gluster volume info dis-stripe    ### view details and status
[root@node4 yum.repos.d]# gluster volume start dis-stripe     ### start the volume
[root@node4 yum.repos.d]# gluster volume info dis-stripe      ### check the details and status again
  • Distributed replicated volume
Distributed replicated volume
Combines the functions of distributed and replicated volumes
Used when redundancy is needed

[root@node4 yum.repos.d]# gluster volume create dis-rep replica 2 node1:/data/sde1 node2:/data/sde1 node3:/data/sde1 node4:/data/sde1 force

[root@node4 yum.repos.d]# gluster volume info  dis-rep    ### view details and status
[root@node4 yum.repos.d]# gluster volume start  dis-rep     ### start the volume
[root@node4 yum.repos.d]# gluster volume info  dis-rep      ### check the details and status again


[root@node4 yum.repos.d]# gluster volume  list  ### list how many volumes there are
dis-rep
dis-stripe
dis-vol
rep-vol
stripe-vol
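
With no volume name, gluster volume info prints the details of all five volumes at once, which is a handy final check before moving on to the client (an assumed extra command):

gluster volume info        ### details of every volume in one listing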
  • Client configuration
######### Required initial setup, otherwise problems are guaranteed ##########
[root@client ~]# systemctl stop firewalld         ### stop the firewall
[root@client ~]# systemctl disable firewalld    ### disable the firewall at boot
[root@client ~]# vim /etc/selinux/config         ### disable SELinux (core protection)
  • Mount the host's yum source
[root@client ~]# smbclient -L //192.168.10.2       ### list the host machine's shared directories
--- press Enter directly, no password needed ---
[root@client ~]# mkdir /abc                                   ### create an /abc directory on the client
[root@client ~]# mount.cifs //192.168.10.2/gfsrepo  /abc     ### then mount the shared yum source onto it

  • Configure the yum repository on the client in the same way
[root@client abc]# cd /etc/yum.repos.d/                     ### go to the /etc/yum.repos.d/ directory
[root@client yum.repos.d]# mkdir  backup                 ### create a backup directory
[root@client yum.repos.d]# mv CentOS-*   backup/   ### move the CentOS repo files into backup/
[root@client yum.repos.d]# vim GLFS.repo
[GLFS]
name=glfs
baseurl=file:///abc         ### mind this path
gpgcheck=0
enabled=1
[root@client yum.repos.d]# yum clean all         ### clear the cache
[root@client yum.repos.d]# yum list                 ### reload the package lists


[root@client yum.repos.d]# yum -y install glusterfs glusterfs-fuse    
  • Set up hosts mapping
[root@client yum.repos.d]#  vim /etc/hosts
20.0.0.3 node1
20.0.0.5 node2
20.0.0.6 node3
20.0.0.7 node4
  • Temporary mounts
[root@client yum.repos.d]# mkdir -p /test/dis         ### mount the distributed volume
[root@client yum.repos.d]# mount.glusterfs node1:dis-vol /test/dis/

[root@client yum.repos.d]# mkdir -p /test/stripe
[root@client yum.repos.d]# mount.glusterfs node1:stripe-vol /test/stripe/    ### mount the striped volume

[root@client yum.repos.d]# mkdir -p /test/rep
[root@client yum.repos.d]# mount.glusterfs node1:rep-vol /test/rep/    ### mount the replicated volume

[root@client yum.repos.d]# mkdir -p /test/dis-stripe
[root@client yum.repos.d]# mount.glusterfs node1:dis-stripe /test/dis-stripe/   ### mount the distributed striped volume

[root@client yum.repos.d]# mkdir -p /test/dis-rep
[root@client yum.repos.d]# mount.glusterfs node1:dis-rep /test/dis-rep/    ### mount the distributed replicated volume

[root@client yum.repos.d]# df -Th      #### check that everything mounted correctly
node1:dis-vol           fuse.glusterfs   40G   65M   40G   1% /test/dis
node1:stripe-vol        fuse.glusterfs   40G   65M   40G   1% /test/stripe
node1:rep-vol           fuse.glusterfs   20G   33M   20G   1% /test/rep
node1:dis-stripe        fuse.glusterfs   80G  130M   80G   1% /test/dis-stripe
node1:dis-rep           fuse.glusterfs   40G   65M   40G   1% /test/dis-rep
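
The mounts above are temporary and disappear after a reboot. To make them permanent, fstab entries of the following form can be appended (assumed entries mirroring the temporary mounts; _netdev delays mounting until the network is up):

cat >> /etc/fstab << 'EOF'
node1:dis-vol      /test/dis        glusterfs defaults,_netdev 0 0
node1:stripe-vol   /test/stripe     glusterfs defaults,_netdev 0 0
node1:rep-vol      /test/rep        glusterfs defaults,_netdev 0 0
node1:dis-stripe   /test/dis-stripe glusterfs defaults,_netdev 0 0
node1:dis-rep      /test/dis-rep    glusterfs defaults,_netdev 0 0
EOF
mount -a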
  • Test
### create five 40 MB files
[root@client yum.repos.d]# dd if=/dev/zero of=/demo1.log bs=1M count=40
[root@client yum.repos.d]# dd if=/dev/zero of=/demo2.log bs=1M count=40
[root@client yum.repos.d]# dd if=/dev/zero of=/demo3.log bs=1M count=40
[root@client yum.repos.d]# dd if=/dev/zero of=/demo4.log bs=1M count=40
[root@client yum.repos.d]# dd if=/dev/zero of=/demo5.log bs=1M count=40


### copy the five files to the different volumes
[root@client yum.repos.d]# cp /demo* /test/dis
[root@client yum.repos.d]# cp /demo* /test/stripe/
[root@client yum.repos.d]# cp /demo* /test/rep/
[root@client yum.repos.d]# cp /demo* /test/dis-stripe/
[root@client yum.repos.d]# cp /demo* /test/dis-rep/

### view the distributed volume
[root@node1 yum.repos.d]# ls -h /data/sdb1/
demo1.log  demo2.log  demo3.log  demo4.log

[root@node2 ~]# ls -h /data/sdb1/
demo5.log

### view the striped volume
[root@node1 yum.repos.d]# ls -lh /data/sdc1
total 100M
-rw-r--r-- 2 root root 20M Oct 27 06:29 demo1.log
-rw-r--r-- 2 root root 20M Oct 27 06:29 demo2.log
-rw-r--r-- 2 root root 20M Oct 27 06:29 demo3.log
-rw-r--r-- 2 root root 20M Oct 27 06:29 demo4.log
-rw-r--r-- 2 root root 20M Oct 27 06:29 demo5.log

[root@node2 ~]# ls -lh /data/sdc1
total 100M
-rw-r--r-- 2 root root 20M Oct 27 18:29 demo1.log
-rw-r--r-- 2 root root 20M Oct 27 18:29 demo2.log
-rw-r--r-- 2 root root 20M Oct 27 18:29 demo3.log
-rw-r--r-- 2 root root 20M Oct 27 18:29 demo4.log
-rw-r--r-- 2 root root 20M Oct 27 18:29 demo5.log

### view the replicated volume
[root@node3 yum.repos.d]# ll -lh /data/sdb1
total 200M
-rw-r--r-- 2 root root 40M Oct 27 06:30 demo1.log
-rw-r--r-- 2 root root 40M Oct 27 06:30 demo2.log
-rw-r--r-- 2 root root 40M Oct 27 06:30 demo3.log
-rw-r--r-- 2 root root 40M Oct 27 06:30 demo4.log
-rw-r--r-- 2 root root 40M Oct 27 06:30 demo5.log

[root@node4 yum.repos.d]# ll -lh /data/sdb1
total 200M
-rw-r--r--. 2 root root 40M Oct 27 06:30 demo1.log
-rw-r--r--. 2 root root 40M Oct 27 06:30 demo2.log
-rw-r--r--. 2 root root 40M Oct 27 06:30 demo3.log
-rw-r--r--. 2 root root 40M Oct 27 06:30 demo4.log
-rw-r--r--. 2 root root 40M Oct 27 06:30 demo5.log

### view the distributed replicated volume
[root@node1 yum.repos.d]# ls -lh /data/sde1
total 160M
-rw-r--r-- 2 root root 40M Oct 27 06:30 demo1.log
-rw-r--r-- 2 root root 40M Oct 27 06:30 demo2.log
-rw-r--r-- 2 root root 40M Oct 27 06:30 demo3.log
-rw-r--r-- 2 root root 40M Oct 27 06:30 demo4.log

[root@node2 ~]# ls -lh /data/sde1
total 160M
-rw-r--r-- 2 root root 40M Oct 27 18:30 demo1.log
-rw-r--r-- 2 root root 40M Oct 27 18:30 demo2.log
-rw-r--r-- 2 root root 40M Oct 27 18:30 demo3.log
-rw-r--r-- 2 root root 40M Oct 27 18:30 demo4.log
 

### view the distributed striped volume
[root@node1 yum.repos.d]# ls -lh /data/sdd1
total 80M
-rw-r--r-- 2 root root 20M Oct 27 06:30 demo1.log
-rw-r--r-- 2 root root 20M Oct 27 06:30 demo2.log
-rw-r--r-- 2 root root 20M Oct 27 06:30 demo3.log
-rw-r--r-- 2 root root 20M Oct 27 06:30 demo4.log

[root@node2 ~]# ls -lh /data/sdd1
total 80M
-rw-r--r-- 2 root root 20M Oct 27 18:30 demo1.log
-rw-r--r-- 2 root root 20M Oct 27 18:30 demo2.log
-rw-r--r-- 2 root root 20M Oct 27 18:30 demo3.log
-rw-r--r-- 2 root root 20M Oct 27 18:30 demo4.log
  • Destruction test
Shut down the node1 server and observe the results
[root@node1 yum.repos.d]# init 0

-------On the client---------
[root@client ~]# cd /test/
[root@client test]# ls           
dis  dis-rep  dis-stripe  rep  stripe

[root@client test]# ls dis    ### check the distributed volume; only demo5 is left
demo5.log

[root@client test]# ls stripe  ### check the striped volume; it can no longer be accessed
ls: cannot access stripe: Transport endpoint is not connected


[root@client test]# ls dis-stripe/  ### check the distributed striped volume; only demo5 is visible
demo5.log

[root@client test]# ls dis-rep/   ### check the distributed replicated volume; still complete
demo1.log  demo2.log  demo3.log  demo4.log  demo5.log
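
After powering node1 back on, the cluster state and the replicated volume's self-heal status can be checked before continuing (assumed follow-up commands, not part of the original run):

gluster peer status                    ### node1 should show as Connected again
gluster volume heal rep-vol info       ### lists files still pending heal on the replicated volume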





-----------------------------------------Delete a volume----------------------------------------
[root@node3 yum.repos.d]# gluster volume stop rep-vol      ### the volume must be stopped first
Stopping volume will make its data inaccessible. Do you want to continue? (y/n)y
volume stop: rep-vol: success

[root@node3 yum.repos.d]# gluster volume list
dis-rep
dis-stripe
dis-vol
rep-vol
stripe-vol

// Note: when deleting a volume, no host in the trusted pool may be down, otherwise the deletion fails (volumes can only be deleted while all hosts are up)
[root@node3 yum.repos.d]# gluster volume delete rep-vol
Deleting volume will erase all information about the volume. Do you want to continue?(y/n)y
volume delete: rep-vol: success

[root@node3 yum.repos.d]# gluster volume list
dis-rep
dis-stripe
dis-vol
stripe-vol




-----------------------------------------Access control----------------------------------------
// deny only this client
[root@node1 yum.repos.d]# gluster volume set dis-vol auth.reject 192.168.10.20
volume set: success

// allow only this client
[root@node1 yum.repos.d]# gluster volume set dis-vol auth.allow 192.168.10.3
volume set: success
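
To undo these restrictions later, the options can be reset or widened again (assumed commands, not part of the original run):

gluster volume reset dis-vol auth.reject          ### drop the reject rule
gluster volume set dis-vol auth.allow '*'         ### allow all clients again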
