MFS High Availability Implementation

MFS high availability configuration
Lab environment: every node must be able to resolve the others' hostnames.
redhat 7.3

server1 172.25.26.1   mfsmaster node
server2 172.25.26.2   chunkserver node, i.e. a node that actually stores the data
server3 172.25.26.3   same as server2
server4 172.25.26.4   backup mfsmaster node for high availability
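A minimal /etc/hosts sketch of what that resolution could look like on every node (an assumption spelled out for completeness; the mfsmaster alias starts on server1 and is later switched to the VIP in the iSCSI section below):

172.25.26.1    server1 mfsmaster
172.25.26.2    server2
172.25.26.3    server3
172.25.26.4    server4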

1. Install the master on server4, edit the hostname resolution, and start the service

[root@server4 3.0.103]# yum install -y moosefs-master-3.0.103-1.rhsystemd.x86_64.rpm

[root@server4 3.0.103]# vim /etc/hosts
172.25.26.1    server1 mfsmaster

2. Configure the HighAvailability yum repositories on server1 and server4

[root@server1 ~]# vim /etc/yum.repos.d/yum.repo 
[rhel7.3]
name=rhel7.3
baseurl=http://172.25.26.250/rhel7.3/x86_64/dvd
gpgcheck=0

[HighAvailability]
name=HighAvailability
baseurl=http://172.25.26.250/rhel7.3/x86_64/dvd/addons/HighAvailability
gpgcheck=0

[ResilientStorage]
name=ResilientStorage
baseurl=http://172.25.26.250/rhel7.3/x86_64/dvd/addons/ResilientStorage
gpgcheck=0

[root@server1 ~]# scp /etc/yum.repos.d/yum.repo 172.25.26.4:/etc/yum.repos.d/yum.repo
[email protected]'s password: 
yum.repo   

3. Install the cluster components on server1 and server4

[root@server1 ~]# yum install -y pacemaker corosync
[root@server4 ~]# yum install -y pacemaker corosync

4. Set up passwordless SSH between server1 and server4

[root@server1 ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
d7:92:3e:97:0a:d9:30:62:d5:00:52:b1:e6:23:ef:75 root@server1
The key's randomart image is:
+--[ RSA 2048]----+
|    ..+o.        |
|     . . o       |
|      o . .      |
|     o .   o     |
|    . = S + .    |
|     + o B . .   |
|      . + E o    |
|     . . o +     |
|      .   .      |
+-----------------+

[root@server1 ~]# ssh-copy-id server4
[root@server1 ~]# ssh-copy-id server1
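A quick sanity check that the key works (the remote hostname should print without a password prompt):

[root@server1 ~]# ssh server4 hostname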

5. Install the cluster resource management tool pcs

[root@server1 ~]# yum install -y pcs
[root@server4 ~]# yum install -y pcs
[root@server1 ~]# systemctl start pcsd
[root@server4 ~]# systemctl start pcsd
[root@server1 ~]# systemctl enable pcsd
[root@server4 ~]# systemctl enable pcsd
[root@server1 ~]# passwd hacluster		##set the same password for hacluster on both nodes
[root@server4 ~]# passwd hacluster

6. Create the cluster and start it

[root@server1 ~]# pcs cluster auth server1 server4
[root@server1 ~]# pcs cluster setup --name mycluster server1 server4 
[root@server1 ~]# pcs cluster start --all

7. Check the status

[root@server1 ~]# corosync-cfgtool -s
Printing ring status.
Local node ID 1
RING ID 0
	id	= 172.25.26.1
	status	= ring 0 active with no faults
[root@server1 ~]# pcs status corosync

Membership information
----------------------
    Nodeid      Votes Name
         1          1 server1 (local)
         2          1 server4


8. Disable STONITH (no fence device is configured yet) and validate the configuration

[root@server1 ~]# pcs property set stonith-enabled=false
[root@server1 ~]# crm_verify -L -V		##should report no errors once stonith is disabled
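In production STONITH would stay enabled with a real fence agent. A sketch for a libvirt-based lab using fence_xvm (the domain names vm1 and vm4 are assumptions, not from the original setup):

[root@server1 ~]# pcs stonith create vmfence fence_xvm pcmk_host_map="server1:vm1;server4:vm4" op monitor interval=60s
[root@server1 ~]# pcs property set stonith-enabled=true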

9. Create the VIP resource

[root@server1 ~]# pcs resource create vip ocf:heartbeat:IPaddr2 ip=172.25.26.100 cidr_netmask=32 op monitor interval=30s
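The resource can be tuned later without recreating it; for example, widening the monitor interval (a usage sketch, not part of the original run):

[root@server1 ~]# pcs resource update vip op monitor interval=60s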

[root@server1 ~]# pcs resource show
 vip	(ocf::heartbeat:IPaddr2):	Started server1
 
[root@server1 ~]# pcs status 
[root@server4 ~]# crm_mon		##watch the cluster status

10. Test the failover
Check that the virtual IP currently sits on server1:

[root@server1 ~]# ip addr

[root@server1 ~]# pcs cluster stop server1 
[root@server4 ~]# ip addr
Watching crm_mon shows that only server4 is now online.
Start server1 again; the VIP does not fail back:
[root@server1 ~]# pcs cluster start server1  
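Whether a resource migrates back when a node rejoins is governed by placement scores; to make the no-failback behavior explicit, a default stickiness can be set (the value 100 is an arbitrary choice, not from the original):

[root@server1 ~]# pcs resource defaults resource-stickiness=100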

Check that the VIP has floated over to server4:

Implementing shared data storage (iSCSI)

1. Clean up the previous environment

[root@foundation14 ~]# umount /mnt/mfsmeta

[root@server1 ~]# systemctl stop moosefs-master
[root@server2 chunk1]# systemctl stop moosefs-chunkserver
[root@server3 3.0.103]# systemctl stop moosefs-chunkserver

2. On all nodes (the host machine and server1 through server4), update the resolution so that mfsmaster now points at the VIP

[root@foundation14 ~]# vim /etc/hosts
172.25.26.100  mfsmaster
172.25.26.1    server1

3. Install targetcli on server2 and configure it

[root@server2 ~]# yum install -y targetcli

[root@server2 ~]# umount /mnt/chunk1
[root@server2 ~]# targetcli
Warning: Could not load preferences file /root/.targetcli/prefs.bin.
targetcli shell version 2.1.fb41
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/> ls
o- / ...................................................................................... [...]
  o- backstores ........................................................................... [...]
  | o- block ............................................................... [Storage Objects: 0]
  | o- fileio .............................................................. [Storage Objects: 0]
  | o- pscsi ............................................................... [Storage Objects: 0]
  | o- ramdisk ............................................................. [Storage Objects: 0]
  o- iscsi ......................................................................... [Targets: 0]
  o- loopback ...................................................................... [Targets: 0]
/> cd backstores/block 
/backstores/block> create my_disk1 /dev/vda
Created block storage object my_disk1 using /dev/vda.

/iscsi> create iqn.2019-05.com.example:server2

/iscsi/iqn.20...er2/tpg1/luns> create /backstores/block/my_disk1

/iscsi/iqn.20...er2/tpg1/acls> create iqn.2019-05.com.example:client

/iscsi/iqn.20.../tpg1/portals> create 172.25.26.2
Using default IP port 3260
Created network portal 172.25.26.2:3260.
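targetcli saves its configuration when you exit; doing it explicitly, plus enabling the target service so the export survives a reboot (an addition, not part of the original session):

/> saveconfig
/> exit
[root@server2 ~]# systemctl enable target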

4. On server1, install the iSCSI initiator and log in to the target

[root@server1 ~]# yum install -y iscsi-*

[root@server1 ~]# cat /etc/iscsi/initiatorname.iscsi 
InitiatorName=iqn.2019-05.com.example:client

[root@server1 ~]# iscsiadm -m discovery -t st -p 172.25.26.2
172.25.26.2:3260,1 iqn.2019-05.com.example:server2
[root@server1 ~]# iscsiadm -m node -l
Logging in to [iface: default, target: iqn.2019-05.com.example:server2, portal: 172.25.26.2,3260] (multiple)
Login to [iface: default, target: iqn.2019-05.com.example:server2, portal: 172.25.26.2,3260] successful.
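To keep the session across reboots, node.startup must be automatic; this is already the RHEL 7 default, shown here only to make the assumption explicit:

[root@server1 ~]# iscsiadm -m node -T iqn.2019-05.com.example:server2 -p 172.25.26.2 -o update -n node.startup -v automatic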

##the disk is now shared successfully

[root@server1 ~]# fdisk -l
Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors 
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

##create a partition and format it

[root@server1 ~]# fdisk /dev/sdb    
[root@server1 ~]# mkfs.xfs /dev/sdb1
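The interactive fdisk dialog is omitted above; an equivalent non-interactive sketch creating one primary partition across the whole shared LUN (assuming /dev/sdb is the iSCSI disk, as in the fdisk -l output):

[root@server1 ~]# parted -s /dev/sdb mklabel msdos mkpart primary xfs 1MiB 100%
[root@server1 ~]# mkfs.xfs /dev/sdb1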

在这里插入图片描述
##mount it and copy in the metadata files to be shared

[root@server1 ~]# mount /dev/sdb1 /mnt/
[root@server1 ~]# cd /var/lib/mfs/
[root@server1 mfs]# ls
changelog.1.mfs  metadata.crc  metadata.mfs.back.1  stats.mfs
changelog.2.mfs  metadata.mfs  metadata.mfs.empty
[root@server1 mfs]# cp -p * /mnt
[root@server1 mfs]# cd /mnt
[root@server1 mnt]# ls
changelog.1.mfs  metadata.crc  metadata.mfs.back.1  stats.mfs
changelog.2.mfs  metadata.mfs  metadata.mfs.empty
[root@server1 mnt]# chown mfs.mfs /mnt/		##the mfs user must own the metadata directory
[root@server1 ~]# systemctl start moosefs-master		##verify the master starts from the shared metadata
[root@server1 ~]# systemctl stop moosefs-master

5. On server4, repeat the steps from server1 (unmount /mnt on server1 first; an XFS filesystem must never be mounted on two nodes at once)

[root@server4 ~]# yum install -y iscsi-*
[root@server4 ~]# cat /etc/iscsi/initiatorname.iscsi 
InitiatorName=iqn.2019-05.com.example:client
[root@server4 ~]# iscsiadm -m discovery -t st -p 172.25.26.2
172.25.26.2:3260,1 iqn.2019-05.com.example:server2
[root@server4 ~]# iscsiadm -m node -l
[root@server4 ~]# ls /mnt/
[root@server4 ~]# mount /dev/sdb1 /mnt
[root@server4 ~]# ls /mnt/				##the shared files are visible
changelog.1.mfs  metadata.crc  metadata.mfs.back.1  stats.mfs
changelog.2.mfs  metadata.mfs  metadata.mfs.empty


Reposted from blog.csdn.net/qwqq233/article/details/90322337