Hands-On: Building the GFS (GlusterFS) Distributed File System

1. GlusterFS overview:

GFS is a scalable distributed file system designed for large-scale, distributed applications that access large amounts of data. It runs on inexpensive commodity hardware, provides fault tolerance, and can deliver high aggregate performance to a large number of users.

It is an open-source distributed file system composed of storage servers, clients, and NFS/Samba storage gateways.
(1) GlusterFS features:

scalability and high performance;
high availability;
a global unified namespace;
elastic volume management;
based on standard protocols.
(2) Modular, stackable architecture:

1. A modular, stacked structure;
2. Complex functions are implemented by combining modules.
3. GlusterFS workflow:
4. The elastic HASH algorithm:

(1) A 32-bit integer is obtained with a HASH algorithm;
(2) The integer space is divided into N contiguous subspaces, each corresponding to one Brick.
Advantages of the elastic HASH algorithm:
(1) It ensures the data is evenly distributed across the Bricks;
(2) It removes the dependence on a metadata server, thereby eliminating the single point of failure and the service-access bottleneck.
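To make the subspace idea concrete, here is a toy sketch, not GlusterFS's real DHT hash: it derives a 32-bit value from each file name with cksum (an assumption made purely for illustration) and maps it onto N Bricks with a modulo.

```shell
#!/bin/sh
# Toy illustration of hash-based placement (not GlusterFS's actual hash):
# derive a 32-bit value from the file name and map it onto N Bricks.
N=4   # number of Bricks (demo assumption)
for name in demo1.log demo2.log demo3.log demo4.log demo5.log
do
    # cksum prints a 32-bit CRC; use it as a stand-in hash value
    hash=$(printf '%s' "$name" | cksum | cut -d' ' -f1)
    brick=$((hash % N))
    echo "$name -> Brick $brick"
done
```

Because the hash is computed from the name alone, any client can locate a file without asking a metadata server, which is the point of item (2) above.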

2. GlusterFS volume types:

(1) Distributed volume:

(1) Files are not split into blocks;
(2) HASH values are stored as extended file attributes;
(3) The underlying file system can be ext3, ext4, ZFS, XFS, etc.
Features:

(1) Files are distributed across different servers, with no redundancy;
(2) The volume size is easier and cheaper to expand;
(3) A single point of failure causes data loss;
(4) Data protection relies on the underlying layer.
(2) Striped volume:

(1) Based on offset, a file is divided into N chunks (N stripe nodes) and stored round-robin on each Brick Server node;
(2) Performance is especially good when storing large files;
(3) No redundancy; similar to RAID 0.
Features:

(1) Data is split into smaller chunks and distributed across different stripe areas in the block-server cluster;
(2) The distribution reduces load, and the smaller chunks speed up access;
(3) No data redundancy.
(3) Replicated volume:

(1) The same file is kept as one or more replicas;
(2) Because replicas must be stored, disk utilization is low;
(3) If the storage capacities of the nodes differ, the bucket effect applies and the capacity of the smallest node becomes the total capacity of the volume.
Features:

(1) Every server in the volume keeps a complete replica;
(2) The number of replicas can be decided by the client at creation time;
(3) At least two block servers (or more) are required;
(4) It provides disaster tolerance.
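The bucket-effect rule in point (3) can be checked with a tiny sketch; the brick sizes below are made-up demo numbers, and the usable capacity of the replicated volume is simply the smallest of them.

```shell
#!/bin/sh
# Bucket effect: the usable capacity of a replicated volume is the
# capacity of the smallest Brick. Sizes (in GB) are demo values.
min=""
for size in 20 18 25
do
    if [ -z "$min" ] || [ "$size" -lt "$min" ]; then
        min=$size
    fi
done
echo "replicated volume capacity: ${min}GB"
```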
(4) Distributed striped volume:

(1) Combines the functions of the distributed and striped volumes;
(2) Mainly used for large-file access;
(3) Requires at least 4 servers.
(5) Distributed replicated volume:

(1) Combines the functions of the distributed and replicated volumes;
(2) Used when redundancy is required.

3. GlusterFS hands-on:

Five virtual machines: one as the client and the other four as nodes; add 4 new disks (20 GB each) to each node VM.
1. First partition, format, and mount each disk; the following script does it in one step:

vim disk.sh   // disk-mounting script, one-click operation

#!/bin/bash
echo "the disks exist list:"
fdisk -l | grep 'Disk /dev/sd[a-z]'
echo "=================================================="
PS3="chose which disk you want to create:"
select VAR in `ls /dev/sd* | grep -o 'sd[b-z]' | uniq` quit
do
    case $VAR in
    sda)
        fdisk -l /dev/sda
        break ;;
    sd[b-z])
        # create a single primary partition, accepting the defaults
        echo "n
p



w" | fdisk /dev/$VAR

        # make the filesystem
        mkfs.xfs -i size=512 /dev/${VAR}1 &> /dev/null
        # mount the filesystem
        mkdir -p /data/${VAR}1 &> /dev/null
        echo "/dev/${VAR}1 /data/${VAR}1 xfs defaults 0 0" >> /etc/fstab
        mount -a &> /dev/null
        break ;;
    quit)
        break ;;
    *)
        echo "wrong disk, please check again" ;;
    esac
done

2. Operations on the four node servers
(1) Change the hostnames (node1, node2, node3, node4) and disable the firewall, etc.

(2) Edit the hosts file and add the hostnames and IP addresses. (When a user enters a URL in the browser, the system first looks for the corresponding IP address in the hosts file; if it is found, the page is opened immediately; if not, the URL is submitted to a DNS server for resolution.)

vim   /etc/hosts

192.168.220.172 node1
192.168.220.131 node2
192.168.220.140 node3
192.168.220.136 node4

(3) Set up the yum repository and install GlusterFS:

cd /opt/
mkdir /abc
mount.cifs //192.168.10.157/MHA /abc   // mount the remote share locally
cd /etc/yum.repos.d/
mkdir bak
mv Cent* bak/   // move the original repo files into the new folder

vim GLFS.repo   // create a new repo file
[GLFS]
name=glfs
baseurl=file:///abc/gfsrepo
gpgcheck=0
enabled=1
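The same repo file can be written non-interactively instead of through vim; this sketch writes it to a local demo path (on a real node the target would be /etc/yum.repos.d/GLFS.repo).

```shell
#!/bin/sh
# Write the GLFS repo file in one step with a here-document.
# REPO is a demo path; on a real node use /etc/yum.repos.d/GLFS.repo.
REPO=./GLFS.repo
cat > "$REPO" <<'EOF'
[GLFS]
name=glfs
baseurl=file:///abc/gfsrepo
gpgcheck=0
enabled=1
EOF
```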

(4) Install the packages

yum -y install glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma

(5) Start the service

systemctl start glusterd
systemctl status glusterd

(6) Check the status
3. Time synchronization; required on every node

ntpdate ntp1.aliyun.com   // synchronize the time

Add the nodes to the trusted storage pool; probing the other three nodes from a single host is enough.
On node1:

gluster peer probe node2
gluster peer probe node3
gluster peer probe node4

gluster peer status   // check the status of all nodes


4. Creating the volumes

1. Create a distributed volume

gluster volume create dis-vol node1:/data/sdb1 node2:/data/sdb1 force
  // built from one disk on node1 and one on node2; dis-vol is the volume name; force means force creation

gluster volume start dis-vol    // start it
gluster volume info dis-vol     // check its status

2. Create a striped volume

gluster volume create stripe-vol stripe 2 node1:/data/sdc1 node2:/data/sdc1 force

gluster volume start stripe-vol
gluster volume info stripe-vol

3. Create a replicated volume

gluster volume create rep-vol replica 2 node3:/data/sdb1 node4:/data/sdb1 force

gluster volume start rep-vol
gluster volume info rep-vol

4. Create a distributed striped volume

gluster volume create dis-stripe stripe 2 node1:/data/sdd1 node2:/data/sdd1 node3:/data/sdd1 node4:/data/sdd1 force

gluster volume start dis-stripe
gluster volume info dis-stripe

5. Create a distributed replicated volume

gluster volume create dis-rep replica 2 node1:/data/sde1 node2:/data/sde1 node3:/data/sde1 node4:/data/sde1 force

gluster volume start dis-rep
gluster volume info dis-rep

6. Client configuration
(1) Disable the firewall

(2) Set up the GFS repo:

cd /opt/
mkdir /abc
mount.cifs //192.168.10.157/MHA /abc   // mount the remote share locally
cd /etc/yum.repos.d/

vim GLFS.repo   // create a new repo file
[GLFS]
name=glfs
baseurl=file:///abc/gfsrepo
gpgcheck=0
enabled=1

(3) Install the packages

yum -y install glusterfs glusterfs-fuse  

(4) Edit the hosts file:

vim /etc/hosts

192.168.220.172 node1
192.168.220.131 node2
192.168.220.140 node3
192.168.220.136 node4

(5) Create temporary mount points:

mkdir -p /text/dis   // recursively create a mount point
mount.glusterfs node1:dis-vol /text/dis/         // mount the distributed volume

mkdir /text/strip
mount.glusterfs node1:stripe-vol /text/strip/     // mount the striped volume

mkdir /text/rep
mount.glusterfs node3:rep-vol /text/rep/          // mount the replicated volume

mkdir /text/dis-str
mount.glusterfs node2:dis-stripe /text/dis-str/    // mount the distributed striped volume

mkdir /text/dis-rep
mount.glusterfs node4:dis-rep /text/dis-rep/        // mount the distributed replicated volume
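These mount.glusterfs mounts are temporary and disappear on reboot. If they should persist, entries like the following could be added to the client's /etc/fstab (a sketch using the same mount points; _netdev delays mounting until the network is up):

```
node1:dis-vol      /text/dis      glusterfs defaults,_netdev 0 0
node1:stripe-vol   /text/strip    glusterfs defaults,_netdev 0 0
node3:rep-vol      /text/rep      glusterfs defaults,_netdev 0 0
node2:dis-stripe   /text/dis-str  glusterfs defaults,_netdev 0 0
node4:dis-rep      /text/dis-rep  glusterfs defaults,_netdev 0 0
```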

(6) df -hT: check the mount info:

5. Testing the volumes

(1) Create five 40 MB files:

dd if=/dev/zero of=/demo1.log bs=1M count=40
dd if=/dev/zero of=/demo2.log bs=1M count=40
dd if=/dev/zero of=/demo3.log bs=1M count=40
dd if=/dev/zero of=/demo4.log bs=1M count=40
dd if=/dev/zero of=/demo5.log bs=1M count=40
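The five dd commands can equally be written as one loop. In this sketch DIR is set to the current directory so it is safe to try anywhere; the walkthrough above writes to / directly.

```shell
#!/bin/sh
# Create five 40 MB files filled with zeros.
DIR=.          # demo target; the walkthrough above uses / directly
for i in 1 2 3 4 5
do
    dd if=/dev/zero of="$DIR/demo$i.log" bs=1M count=40 2> /dev/null
done
ls -lh "$DIR"/demo*.log
```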

(2) Copy the five files to the different volumes:

cp /demo* /text/dis
cp /demo* /text/strip
cp /demo* /text/rep/
cp /demo* /text/dis-str
cp /demo* /text/dis-rep

(3) Check how the files are distributed on the volumes: ll -h /data/sdb1
1. Distributed volume:
Each file can be seen to be complete.
2. Striped volume:
Every file is split in half and stored across the nodes.
3. Replicated volume:
Every file is copied in full and stored again.
4. Distributed striped volume:
5. Distributed replicated volume:

(4) Destructive fault testing:
Now shut down the node2 server to simulate an outage, then check each volume on the client.
Summary:

1. In the distributed volume, all the files are still there;
2. In the replicated volume, all the files are still there;
3. At the distributed striped volume mount, only the file demo5.log remains; the other 4 are lost;
4. At the distributed replicated volume mount, all the files are still there;
5. In the striped volume, all the files are lost.
(5) Other operations:

1. Delete a volume (stop it first, then delete it):

gluster volume stop <volume-name>
gluster volume delete <volume-name>

2. Blacklist and whitelist settings:

gluster volume set <volume-name> auth.reject 192.168.220.100     // deny this host from mounting

gluster volume set <volume-name> auth.allow 192.168.220.100      // allow this host to mount


Origin blog.51cto.com/14475593/2460580