GlusterFS: Quickly Build Distributed Storage

Preparing the Environment

IP addresses used:
172.30.200.240 GlusterFS server1
172.30.200.241 GlusterFS server2
172.30.200.242 GlusterFS server3
172.30.200.243 GlusterFS client

Step 1: Ensure there are at least three servers

  • Named "server1", "server2", and "server3"
  • Network connectivity between them
  • At least two disks, one of which is dedicated to GlusterFS storage, e.g. /dev/sdb
  • NTP time synchronization configured

Configure the hostnames

### 172.30.200.240
 echo "server1" >/etc/hostname 
### 172.30.200.241
 echo "server2" >/etc/hostname 
### 172.30.200.242
 echo "server3" >/etc/hostname

Reboot each of the servers above so that the new hostname takes effect.

Configure host name resolution (on every server)

echo "172.30.200.240 server1" >>/etc/hosts
echo "172.30.200.241 server2" >>/etc/hosts
echo "172.30.200.242 server3" >>/etc/hosts

Configure NTP client time synchronization

yum install -y ntpdate
### Sync time against Aliyun's NTP service
[root@linux-node2 ~]# ntpdate ntp2.aliyun.com
 7 Nov 13:27:06 ntpdate[1656]: adjust time server 203.107.6.88 offset -0.000860 sec
 
### Add a crontab entry to keep the time in sync
[root@linux-node2 ~]# crontab -l
*/5 * * * * /usr/sbin/ntpdate ntp2.aliyun.com >/dev/null 2>&1
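
On CentOS 7, chronyd is the stock time-sync daemon, so as an alternative to the ntpdate cron job you can simply run it as a service:

yum install -y chrony
systemctl start chronyd      # start the daemon now
systemctl enable chronyd     # and have it come back after a reboot
chronyc tracking             # verify the clock is being disciplined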

Step 2: Format and configure the disk

All three servers need this configuration.
1. Partition the new disk /dev/sdb, as follows:

[root@server2 ~]# fdisk /dev/sdb

Welcome to fdisk (util-linux 2.34).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x10c33a14.

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 
First sector (2048-41943039, default 2048): 
Last sector, +/-sectors or +size{K,M,G,T,P} (2048-41943039, default 41943039): 

Created a new partition 1 of type 'Linux' and of size 20 GiB.

Command (m for help): p
Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk model: Virtual disk
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x10c33a14

Device     Boot Start      End  Sectors Size Id Type
/dev/sdb1        2048 41943039 41940992  20G 83 Linux

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
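
If you prefer to script this step rather than answer fdisk's prompts, a non-interactive equivalent (a sketch, assuming the whole disk becomes one partition) is:

parted -s /dev/sdb mklabel msdos                  # create a DOS partition table
parted -s /dev/sdb mkpart primary xfs 1MiB 100%   # one partition spanning the disk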

2. Format and mount /dev/sdb1:

mkfs.xfs -i size=512 /dev/sdb1
mkdir -p /data/brick1
echo '/dev/sdb1 /data/brick1 xfs defaults 1 2' >> /etc/fstab
mount -a && mount
df -h

Step 3: Install GlusterFS

Install the software

yum install glusterfs-server
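
On a stock CentOS install the glusterfs-server package may not be in the default repositories; if the install fails, enabling the CentOS Storage SIG repository first usually fixes it:

yum install -y centos-release-gluster    # enables the Storage SIG GlusterFS repo
yum install -y glusterfs-server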

Start the GlusterFS management daemon

CentOS 6
# service glusterd start
# service glusterd status
CentOS 7
# systemctl start glusterd.service
# systemctl status glusterd.service
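
To make sure glusterd also comes back after a reboot, enable it at boot (CentOS 7 shown):

# systemctl enable glusterd.service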

Step 4: Configure the firewall

iptables -I INPUT -p all -s 172.30.200.240 -j ACCEPT
iptables -I INPUT -p all -s 172.30.200.241 -j ACCEPT
iptables -I INPUT -p all -s 172.30.200.242 -j ACCEPT
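
Note that rules added with iptables -I do not survive a reboot. On CentOS 7 with firewalld, a persistent alternative (a sketch, assuming glusterd's default management ports and a brick port range sized for up to 100 bricks) is:

firewall-cmd --permanent --add-port=24007-24008/tcp   # glusterd management ports
firewall-cmd --permanent --add-port=49152-49251/tcp   # brick ports, one per brick
firewall-cmd --reload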

For simplicity, you can also just turn off the firewall:

systemctl stop firewalld.service 

Step 5: Configure the trusted storage pool

On server1, run the following:

# gluster peer probe server2
# gluster peer probe server3

Then, still on server1, check the peer status:

# gluster peer status

You should see output like the following (the UUIDs will differ):

Number of Peers: 2

Hostname: server2
Uuid: f7b97263-1da0-4572-8340-3be3182f9db3
State: Peer in Cluster (Connected)

Hostname: server3
Uuid: a89c3006-1b66-44af-bebd-bafa367d69e1
State: Peer in Cluster (Connected)

Step 6: Create a GlusterFS volume

On all servers, execute the following command:

# mkdir -p /data/brick1/gv0

On any one server, execute the following commands:

# gluster volume create gv0 replica 3 server1:/data/brick1/gv0 server2:/data/brick1/gv0 server3:/data/brick1/gv0
volume create: gv0: success: please start the volume to access data
# gluster volume start gv0
volume start: gv0: success
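
You can also confirm that all brick processes are online:

# gluster volume status gv0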

Confirm that the volume information looks correct:

# gluster volume info

You should see output like the following (the Volume ID will differ):

Volume Name: gv0
Type: Replicate
Volume ID: 53e05780-146d-41ca-bdfc-b2152fafb2a0
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: server1:/data/brick1/gv0
Brick2: server2:/data/brick1/gv0
Brick3: server3:/data/brick1/gv0
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off

Note: if the Status is not "Started", check /var/log/glusterfs/glusterd.log to diagnose the error.

Step 7: Test that the GlusterFS volume works

Here the test client is the server with IP 172.30.200.243.

Install the client

yum install -y glusterfs glusterfs-fuse

Configure host resolution

[root@localhost ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.30.200.240 server1
172.30.200.241 server2
172.30.200.242 server3

Create a mount point and mount the GlusterFS volume

# mkdir -p /data
# mount -t glusterfs server1:/gv0 /data
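
To make the mount persistent across reboots, you can add an fstab entry (the _netdev option defers mounting until the network is up):

# echo 'server1:/gv0 /data glusterfs defaults,_netdev 0 0' >> /etc/fstab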

Test writing to the volume:

for i in `seq -w 1 100`; do cp -rp /var/log/messages /data/copy-test-$i; done
[root@localhost data]# ls -lA /data/copy* | wc -l
100

You should see 100 files here.

Now check each server in the GlusterFS cluster. Because gv0 is a three-way replicated volume, every server holds all 100 files:

[root@server1 ~]#  ls -lA /data/brick1/gv0/copy* |wc -l
100
[root@server2 ~]# ls -lA /data/brick1/gv0/copy* |wc -l
100
[root@server3 ~]# ls -lA /data/brick1/gv0/copy* |wc -l
100
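
As a further spot check (a minimal sketch; copy-test-001 is just one of the files created above), the replicated copies should be byte-identical on every brick:

# run on each server; all three checksums should match
md5sum /data/brick1/gv0/copy-test-001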

Origin: www.cnblogs.com/zhangshengdong/p/11812780.html