Enterprise Operations in Practice: Configuring High Availability for MFS (MooseFS)

Lab environment:
RHEL 7.3
server1    172.25.10.1    mfsmaster (management node)
server2    172.25.10.2    chunkserver (slave node)
server3    172.25.10.3    chunkserver (slave node)
server4    172.25.10.4    high-availability (backup master) node
foundation10 (host machine)    172.25.10.250    client

1. Install the master on server4, edit the name resolution, and start the service

[root@server4 ~]# yum install moosefs-master-3.0.103-1.rhsystemd.x86_64.rpm -y

[root@server4 ~]# vim /etc/hosts
172.25.10.1    server1 mfsmaster

[root@server4 ~]# vim /usr/lib/systemd/system/moosefs-master.service 
 ExecStart=/usr/sbin/mfsmaster -a
[root@server4 ~]# systemctl daemon-reload
[root@server4 ~]# systemctl start moosefs-master
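
A quick sanity check that the master actually came up (a minimal sketch; 9419-9421 are the default MooseFS master ports, adjust if your build differs):

[root@server4 ~]# systemctl status moosefs-master    ## should report active (running)
[root@server4 ~]# ss -tlnp | grep mfsmaster          ## the master listens on 9419/9420/9421 by default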

2. Configure the high-availability yum repositories on server1 and server4

[root@server1 ~]# vim /etc/yum.repos.d/yum.repo 
[rhel7]
name=rhel7.3
baseurl=http://172.25.10.250/rhel7.3
gpgcheck=0

[HighAvailability]
name=HighAvailability
baseurl=http://172.25.10.250/rhel7.3/addons/HighAvailability
gpgcheck=0

[ResilientStorage]
name=ResilientStorage
baseurl=http://172.25.10.250/rhel7.3/addons/ResilientStorage
gpgcheck=0

[root@server1 mnt]# scp /etc/yum.repos.d/yum.repo server4:/etc/yum.repos.d/yum.repo 
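
Before installing anything, it is worth confirming that both nodes can actually see the new repositories (assuming the HTTP share on 172.25.10.250 is reachable):

[root@server1 mnt]# yum clean all && yum repolist    ## HighAvailability and ResilientStorage should be listed
[root@server4 ~]# yum repolist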

3. Install the cluster components on server1 and server4

[root@server1 mnt]# yum install -y pacemaker corosync
[root@server4 ~]# yum install -y pacemaker corosync

4. Set up passwordless SSH between server1 and server4

[root@server1 mnt]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
8a:e4:31:e5:7b:07:70:24:94:f1:bf:b5:74:7d:0c:8f root@server1
The key's randomart image is:
+--[ RSA 2048]----+
|     .+o.        |
|      .+         |
|      o o     .  |
|     o o .     * |
|    + . S . o E =|
|   o + o . + o  .|
|    o o . o .    |
|       . .       |
|                 |
+-----------------+
[root@server1 mnt]# ssh-copy-id server4
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@server4's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'server4'"
and check to make sure that only the key(s) you wanted were added.

[root@server1 mnt]# ssh-copy-id server1
The authenticity of host 'server1 (172.25.10.1)' can't be established.
ECDSA key fingerprint is a1:32:13:3a:f7:04:db:99:66:54:5f:eb:3a:2c:a3:5e.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@server1's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'server1'"
and check to make sure that only the key(s) you wanted were added.
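
A quick check that the key-based login works (each command should print the hostname without a password prompt):

[root@server1 mnt]# ssh server4 hostname
[root@server1 mnt]# ssh server1 hostname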

5. Install the cluster management tool pcs

[root@server1 ~]# yum install -y pcs
[root@server4 ~]# yum install -y pcs
[root@server1 ~]# systemctl start pcsd
[root@server4 ~]# systemctl start pcsd
[root@server1 ~]# systemctl enable pcsd
[root@server4 ~]# systemctl enable pcsd
[root@server1 ~]# passwd hacluster
[root@server4 ~]# passwd hacluster
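
passwd prompts interactively, and both nodes need the same hacluster password. A non-interactive alternative on RHEL is the --stdin option (the password "redhat" here is only a placeholder):

[root@server1 ~]# echo redhat | passwd --stdin hacluster
[root@server4 ~]# echo redhat | passwd --stdin hacluster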

6. Create the cluster and start it

[root@server1 ~]# pcs cluster auth server1 server4
[root@server1 ~]# pcs cluster setup --name mycluster server1 server4 
[root@server1 ~]# pcs cluster start --all
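
pcs cluster auth prompts for the hacluster user and the password set above. Optionally, the cluster services can also be made to start at boot; it was not done here, which is why the pcs status output below still shows corosync and pacemaker as active/disabled:

[root@server1 ~]# pcs cluster enable --all    ## optional: start corosync and pacemaker automatically on boot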

7. Check the cluster status

[root@server1 mnt]# corosync-cfgtool -s
Printing ring status.
Local node ID 1
RING ID 0
	id	= 172.25.10.1
	status	= ring 0 active with no faults
[root@server1 mnt]# pcs status corosync

Membership information
----------------------
    Nodeid      Votes Name
         1          1 server1 (local)
         2          1 server4


[root@server1 mnt]# crm_verify -L -V
   error: unpack_resources:	Resource start-up disabled since no STONITH resources have been defined
   error: unpack_resources:	Either configure some or disable STONITH with the stonith-enabled option
   error: unpack_resources:	NOTE: Clusters with shared data need STONITH to ensure data integrity
Errors found during check: config not valid
[root@server1 mnt]# pcs property set stonith-enabled=false
[root@server1 mnt]# crm_verify -L -V
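
To confirm the property change took effect, the cluster properties can be listed:

[root@server1 mnt]# pcs property list    ## should include stonith-enabled: false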

8. Create the VIP resource

[root@server1 mnt]# pcs resource create vip ocf:heartbeat:IPaddr2 ip=172.25.10.100 cidr_netmask=32 op monitor interval=30s
[root@server1 mnt]# pcs resource show
     vip	(ocf::heartbeat:IPaddr2):	Started server1
[root@server1 mnt]# pcs status 
Cluster name: mycluster
Stack: corosync
Current DC: server4 (version 1.1.15-11.el7-e174ec8) - partition with quorum
Last updated: Sat May 18 15:18:08 2019		Last change: Sat May 18 15:16:26 2019 by root via cibadmin on server1

2 nodes and 1 resource configured

Online: [ server1 server4 ]

Full list of resources:

 vip	(ocf::heartbeat:IPaddr2):	Started server1

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled

[root@server4 ~]# crm_mon

9. Test the failover
At this point the virtual IP is on server1:

[root@server1 mnt]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:da:c2:92 brd ff:ff:ff:ff:ff:ff
    inet 172.25.10.1/24 brd 172.25.10.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 172.25.10.100/32 brd 172.25.10.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:feda:c292/64 scope link 
       valid_lft forever preferred_lft forever

Stop the cluster services on server1:

[root@server1 mnt]# pcs cluster stop server1

[root@server1 mnt]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:da:c2:92 brd ff:ff:ff:ff:ff:ff
    inet 172.25.10.1/24 brd 172.25.10.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:feda:c292/64 scope link 
       valid_lft forever preferred_lft forever

The virtual IP has migrated to server4:

[root@server4 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:ce:9c:07 brd ff:ff:ff:ff:ff:ff
    inet 172.25.10.4/24 brd 172.25.10.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 172.25.10.100/32 brd 172.25.10.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fece:9c07/64 scope link 
       valid_lft forever preferred_lft forever

Start the cluster services on server1 again; the VIP does not migrate back:

[root@server1 mnt]# pcs cluster start server1
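
In this setup the VIP stays on server4 after server1 rejoins. To make the no-failback behaviour explicit rather than relying on defaults, a stickiness value can be set (optional sketch):

[root@server1 mnt]# pcs resource defaults resource-stickiness=100    ## discourage moving resources back after a node recovers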

Setting up shared data storage

1. Clean up the previous environment

[root@foundation10 mnt]# umount /mnt/mfsmeta/

[root@server1 ~]# systemctl stop moosefs-master
[root@server2 chunk1]# systemctl stop moosefs-chunkserver
[root@server3 3.0.103]# systemctl stop moosefs-chunkserver

2. On all nodes, add the following name resolution (mfsmaster now points to the VIP)

[root@foundation10 ~]# vim /etc/hosts
[root@foundation10 ~]# cat /etc/hosts
172.25.10.100  mfsmaster
172.25.10.1    server1
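
A quick check on each node that mfsmaster now resolves to the floating VIP:

[root@foundation10 ~]# getent hosts mfsmaster    ## should return 172.25.10.100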

3. Add a virtual disk to server2

[root@server2 mfs]# fdisk -l

Disk /dev/vda: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

4. Install targetcli and configure the iSCSI target

[root@server2 ~]# yum install -y targetcli

[root@server2 ~]# targetcli
Warning: Could not load preferences file /root/.targetcli/prefs.bin.
targetcli shell version 2.1.fb41
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/> ls
o- / ............................................................................................. [...]
  o- backstores .................................................................................. [...]
  | o- block ...................................................................... [Storage Objects: 0]
  | o- fileio ..................................................................... [Storage Objects: 0]
  | o- pscsi ...................................................................... [Storage Objects: 0]
  | o- ramdisk .................................................................... [Storage Objects: 0]
  o- iscsi ................................................................................ [Targets: 0]
  o- loopback ............................................................................. [Targets: 0]
/> cd backstores/block 
/backstores/block> create my_disk1 /dev/vdb
Created block storage object my_disk1 using /dev/vdb.
/backstores/block> cd /iscsi 
/iscsi> create iqn.2019-04.com.example:server2
Created target iqn.2019-04.com.example:server2.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (0.0.0.0), port 3260.
/iscsi> cd iqn.2019-04.com.example:server2/tpg1/luns 
/iscsi/iqn.20...er2/tpg1/luns> create /backstores/block/my_disk1 
Created LUN 0.
/iscsi/iqn.20...er2/tpg1/luns> create iqn.2019-04.com.example:client
storage object or path not valid
/iscsi/iqn.20...er2/tpg1/luns> cd ..
/iscsi/iqn.20...:server2/tpg1> ls
o- tpg1 ......................................................................... [no-gen-acls, no-auth]
  o- acls .................................................................................... [ACLs: 0]
  o- luns .................................................................................... [LUNs: 1]
  | o- lun0 ................................................................ [block/my_disk1 (/dev/vdb)]
  o- portals .............................................................................. [Portals: 1]
    o- 0.0.0.0:3260 ............................................................................... [OK]
/iscsi/iqn.20...:server2/tpg1> cd acls 
/iscsi/iqn.20...er2/tpg1/acls> create iqn.2019-04.com.example:client
Created Node ACL for iqn.2019-04.com.example:client
Created mapped LUN 0.
/iscsi/iqn.20...er2/tpg1/acls> exit
Global pref auto_save_on_exit=true
Last 10 configs saved in /etc/target/backup.
Configuration saved to /etc/target/saveconfig.json
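
To make the exported target survive a reboot and to confirm the portal is listening, something like the following can be used (on RHEL 7 the target service is what restores /etc/target/saveconfig.json at boot; treat this as a sketch):

[root@server2 ~]# systemctl enable target
[root@server2 ~]# systemctl start target
[root@server2 ~]# ss -tln | grep 3260    ## the iSCSI portal should be listening on port 3260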

5. Install the iSCSI initiator on server1
[root@server1 mnt]# yum install -y iscsi-*
[root@server1 mnt]# cat /etc/iscsi/initiatorname.iscsi    ## edit this file so the name matches the ACL created on server2
InitiatorName=iqn.2019-04.com.example:client

[root@server1 ~]# iscsiadm -m discovery -t st -p 172.25.10.2
172.25.10.2:3260,1 iqn.2019-04.com.example:server2
[root@server1 ~]# iscsiadm -m node -l
Logging in to [iface: default, target: iqn.2019-04.com.example:server2, portal: 172.25.10.2,3260] (multiple)
Login to [iface: default, target: iqn.2019-04.com.example:server2, portal: 172.25.10.2,3260] successful.

6. Partition the shared disk and migrate the master data

[root@server1 ~]# fdisk -l
Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors 
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

## Create a partition and format it
[root@server1 ~]# fdisk /dev/sdb    
[root@server1 ~]# mkfs.xfs /dev/sdb1
[root@server1 ~]# mount /dev/sdb1 /mnt/
[root@server1 ~]# cd /var/lib/mfs/
[root@server1 mfs]# ls
changelog.10.mfs  changelog.6.mfs  changelog.9.mfs  metadata.mfs.back.1
changelog.1.mfs   changelog.7.mfs  metadata.crc     metadata.mfs.empty
changelog.3.mfs   changelog.8.mfs  metadata.mfs     stats.mfs
[root@server1 mfs]# cp -p * /mnt
[root@server1 mfs]# cd /mnt
[root@server1 mnt]# ls
changelog.10.mfs  changelog.6.mfs  changelog.9.mfs  metadata.mfs.back.1
changelog.1.mfs   changelog.7.mfs  metadata.crc     metadata.mfs.empty
changelog.3.mfs   changelog.8.mfs  metadata.mfs     stats.mfs

[root@server1 mnt]# chown mfs.mfs /mnt/
[root@server1 mnt]# cd
[root@server1 ~]# umount /mnt
[root@server1 ~]# mount /dev/sdb1 /var/lib/mfs
[root@server1 ~]# systemctl start moosefs-master
[root@server1 ~]# systemctl stop moosefs-master
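
Before repeating the same steps on server4, unmount the filesystem on server1: xfs is not a cluster filesystem, so it must never be mounted on both nodes at once, and from here on pacemaker's Filesystem resource will handle the mounting:

[root@server1 ~]# umount /var/lib/mfs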

On server4, do the same as on server1:

[root@server4 ~]# yum install -y iscsi-*
[root@server4 ~]# cat /etc/iscsi/initiatorname.iscsi    ## same initiator name as server1, matching the ACL on server2
InitiatorName=iqn.2019-04.com.example:client
[root@server4 ~]# iscsiadm -m discovery -t st -p 172.25.10.2
172.25.10.2:3260,1 iqn.2019-04.com.example:server2
[root@server4 ~]# iscsiadm -m node -l
[root@server4 ~]# mount /dev/sdb1 /var/lib/mfs
[root@server4 ~]# systemctl start moosefs-master
[root@server4 ~]# systemctl stop moosefs-master
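
As on server1, unmount the filesystem again so that only pacemaker mounts it from now on:

[root@server4 ~]# umount /var/lib/mfs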

7. Create the filesystem and master resources and group them with the VIP

[root@server1 ~]# pcs resource create mfsdata ocf:heartbeat:Filesystem device=/dev/sdb1 directory=/var/lib/mfs fstype=xfs op monitor interval=30s
[root@server1 ~]# pcs resource create mfsd systemd:moosefs-master op monitor interval=1min
[root@server1 ~]# pcs resource group add mfsgroup vip mfsdata mfsd
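
Because the three resources are in one group, pacemaker keeps them on the same node and starts them in the listed order (vip, then the filesystem, then the master). A quick check:

[root@server1 ~]# pcs resource show    ## vip, mfsdata and mfsd should all be Started on the same node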

8. Test: shut down server4, and server1 takes over

When server4 comes back up, the resources do not fail back to it:

[root@server1 ~]# pcs cluster start server4  

Configuring fencing

1. Install fence-virt on server1 and server4

[root@server1 mfs]# yum install -y  fence-virt
[root@server4 mfs]# yum install -y  fence-virt

[root@server1 mfs]# pcs stonith list
fence_virt - Fence agent for virtual machines
fence_xvm - Fence agent for virtual machines

2. Install fence-virtd on the client (the host machine) and configure it

[root@foundation10 mnt]# yum install -y fence-virtd    ## the fence-virtd-libvirt and fence-virtd-multicast plug-ins are also needed for the libvirt backend and multicast listener

[root@foundation10 mnt]# mkdir /etc/cluster

[root@foundation10 mnt]# cd /etc/cluster
[root@foundation10 cluster]# fence_virtd -c
			Interface [virbr0]: br0 
[root@foundation10 cluster]# dd if=/dev/urandom of=fence_xvm.key bs=128 count=1
1+0 records in
1+0 records out
128 bytes (128 B) copied, 0.000713627 s, 179 kB/s
[root@foundation10 cluster]# ls
fence_xvm.key
[root@foundation10 cluster]# scp fence_xvm.key server1:
[root@foundation10 cluster]# scp fence_xvm.key server4:

[root@server1 mfs]# mkdir /etc/cluster
[root@server4 ~]# mkdir /etc/cluster
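
fence_xvm looks for the key at /etc/cluster/fence_xvm.key by default, and the scp above dropped it in root's home directory, so it still has to be moved into place on both nodes (paths assumed from the defaults):

[root@server1 ~]# mv /root/fence_xvm.key /etc/cluster/
[root@server4 ~]# mv /root/fence_xvm.key /etc/cluster/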

[root@foundation10 cluster]# systemctl start fence_virtd
[root@foundation10 cluster]#  netstat -anulp | grep :1229
udp        0      0 0.0.0.0:1229            0.0.0.0:*                           16927/fence_virtd

3. On server1, create the fence (stonith) resource; pcmk_host_map maps each cluster node name to its libvirt domain name (here they are assumed to be identical)

[root@server1 mfs]# cd /etc/cluster
[root@server1 cluster]# pcs stonith create vmfence fence_xvm pcmk_host_map="server1:server1;server4:server4" op monitor interval=1min

[root@server1 cluster]# pcs property set stonith-enabled=true
[root@server1 cluster]# crm_verify -L -V    ## no errors reported
[root@server1 cluster]# fence_xvm -H server4

[root@server4 ~]# pcs cluster start server4

Crash server1: it will be fenced, reboot automatically, and server4 takes over:

[root@server1 ~]# echo c > /proc/sysrq-trigger
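
Watching from server4 while server1 is being fenced shows the whole group (vip, mfsdata, mfsd) moving over and server1 rejoining after its reboot:

[root@server4 ~]# crm_mon    ## live view of the failover; press Ctrl+C to exit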


Reposted from blog.csdn.net/weixin_44321029/article/details/90317693