Linux Enterprise Practice: High-Availability Load Balancing with HAProxy and Pacemaker

Note: for the related HAProxy and fence configuration, see my earlier blog posts >_< ~~~

1. Introduction

  • Pacemaker is a cluster resource manager. It uses the messaging and membership capabilities provided by a cluster infrastructure layer (OpenAIS, Heartbeat, or Corosync) to detect node- and resource-level failures and recover from them, maximizing the availability of cluster services (also called resources).
  • Corosync is part of a cluster management suite and is usually combined with a resource manager; a simple configuration file defines how messages are passed and which protocols are used. It is relatively new, released in 2008, but not entirely new software: the OpenAIS project, started in 2002, grew too large and was split into two sub-projects, and the part that implements HA heartbeat message transport became Corosync, with roughly 60% of its code coming from OpenAIS. Corosync provides complete HA functionality on its own; for more complex features OpenAIS is still needed. Corosync is the direction of future development and is generally the choice for new projects. For management, hb_gui offers a good graphical HA interface; other graphical options include the RHCS suite (luci + ricci) and the Java-based LCMC cluster management tool.

2. Experiment Environment

  • HAProxy and Pacemaker servers:

    • server1: 172.25.2.1/24
    • server2: 172.25.2.2/24
  • Backend servers:

    • server3: 172.25.2.3/24
    • server4: 172.25.2.4/24
  • Physical host: 172.25.2.250/24

Installation packages used in this experiment: https://pan.baidu.com/s/1nCyPkqyomRDHjWG__X0lcw  password: wmxq
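The machines are addressed by hostname throughout. If no DNS covers them, an /etc/hosts along these lines should exist on every machine; this is a sketch assembled from the addresses above, with the physical host's name taken from its prompt later in the post:

# /etc/hosts on every machine (sketch)
172.25.2.1      server1
172.25.2.2      server2
172.25.2.3      server3
172.25.2.4      server4
172.25.2.250    foundation2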

3. Experiment

3.1 Pacemaker + Corosync configuration:

The environment on server2 should be identical to server1; the HAProxy configuration parameters are covered in the previous post.

3.1.1 Configure the same HAProxy environment on server2 as on server1
[root@server2 x86_64]# rpm -ivh haproxy-1.6.11-1.x86_64.rpm   
Preparing...                ########################################### [100%]
   1:haproxy                ########################################### [100%]
[root@server1 ~]# scp /etc/haproxy/haproxy.cfg server2:/etc/haproxy/haproxy.cfg 
root@server2's password: 
haproxy.cfg                                                                       100% 1897     
[root@server2 x86_64]# /etc/init.d/haproxy start
Starting haproxy:                                          [  OK  ]
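
Before starting, the copied configuration can also be checked for syntax errors (an optional step, not part of the original transcript):

[root@server2 ~]# haproxy -c -f /etc/haproxy/haproxy.cfg   // parse the configuration and report any errors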
3.1.2 Install the pacemaker and corosync packages
[root@server2 ~]# yum install -y pacemaker corosync
[root@server1 ~]# cd /etc/corosync/
[root@server1 corosync]# ls
corosync.conf.example  corosync.conf.example.udpu  service.d  uidgid.d
[root@server1 corosync]# cp corosync.conf.example corosync.conf 
// copy the example configuration file
[root@server1 corosync]# vim corosync.conf

[Screenshots: edits to corosync.conf]
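The screenshots showed the edits made to corosync.conf. For this plugin-based stack (crm later reports "classic openais (with plugin)"), the edits are typically the totem interface settings plus a service stanza that makes corosync start pacemaker itself. The sketch below assumes the multicast address and port; only the 172.25.2.0 network comes from the lab:

# /etc/corosync/corosync.conf (relevant parts; mcast values are assumptions)
totem {
        version: 2
        secauth: off
        interface {
                ringnumber: 0
                bindnetaddr: 172.25.2.0    # network used for cluster traffic
                mcastaddr: 226.94.1.1      # multicast address (assumed)
                mcastport: 5405            # multicast port (assumed)
                ttl: 1
        }
}

service {
        name: pacemaker
        ver: 0                             # ver: 0 -> corosync starts pacemaker itself
}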

[root@server1 corosync]# scp corosync.conf server2:/etc/corosync/  // copy the configuration to server2
root@server2's password: 
corosync.conf                                                    100%  480     0.5KB/s   00:00    
[root@server1 ~]# yum install crmsh-1.2.6-0.rc2.2.1.x86_64.rpm  pssh-2.3.1-2.1.x86_64.rpm  -y         // install the crm management tools
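
One step the transcript glosses over: corosync must be started on both nodes before crm can manage the cluster (the same init script appears later in the post):

[root@server1 ~]# /etc/init.d/corosync start
Starting Corosync Cluster Engine (corosync):               [  OK  ]
[root@server2 ~]# /etc/init.d/corosync start
Starting Corosync Cluster Engine (corosync):               [  OK  ]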
3.1.3 View the Pacemaker configuration
[root@server1 ~]# crm      // enter the management shell
crm(live)# configure 
crm(live)configure# show    // show the default configuration
node server1
node server2
property $id="cib-bootstrap-options" \
    dc-version="1.1.10-14.el6-368c726" \
    cluster-infrastructure="classic openais (with plugin)" \
    expected-quorum-votes="2"
crm(live)configure# 

We can also bring up the live monitor in another terminal to watch the cluster:

[root@server1 ~]# crm_mon   // start the monitor
Last updated: Sat Aug  4 15:07:13 2018
Last change: Sat Aug  4 15:00:04 2018 via crmd on server1
Stack: classic openais (with plugin)
Current DC: server1 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
0 Resources configured
// press Ctrl+C to exit the monitor
3.1.4 Disable STONITH
[root@server1 ~]# crm      // enter the management shell
crm(live)# configure 
crm(live)configure# property stonith-enabled=false
// corosync enables stonith by default, but the cluster has no stonith device yet, so disable it for now
crm(live)configure# commit   // save

Note: every policy change must be committed, otherwise it does not take effect.

3.1.5 Add the VIP
[root@server2 rpmbuild]# crm_verify -VL  // check syntax
[root@server2 ~]# crm      // enter the management shell
crm(live)# configure 
crm(live)configure# primitive vip ocf:heartbeat:IPaddr2 params ip=172.25.2.100 cidr_netmask=24 op monitor interval=1min  
// add the VIP resource
crm(live)configure# commit    // save

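As a quick check, the address should now be visible on the node where the monitor below shows the VIP started (server2 here); the interface name eth0 is an assumption:

[root@server2 ~]# ip addr show eth0   // 172.25.2.100/24 should be listed as a secondary address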

// crm_mon output:
Last updated: Sat Aug  4 15:26:06 2018
Last change: Sat Aug  4 15:25:34 2018 via cibadmin on server1
Stack: classic openais (with plugin)
Current DC: server1 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
1 Resources configured


Online: [ server1 server2 ]

vip     (ocf::heartbeat:IPaddr2):   Started server2  // the VIP is now running on server2

[root@server2 ~]# /etc/init.d/corosync stop   // stop the service on server2

Server1:
Last updated: Sat Aug  4 15:28:31 2018
Last change: Sat Aug  4 15:25:34 2018 via cibadmin on server1
Stack: classic openais (with plugin)
Current DC: server1 - partition WITHOUT quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
1 Resources configured


Online: [ server1 ]
OFFLINE: [ server2 ]

vip     (ocf::heartbeat:IPaddr2):   Started server1
[root@server2 ~]# /etc/init.d/corosync start  // start the service again
Starting Corosync Cluster Engine (corosync):               [  OK  ]


Server1:
Last updated: Sat Aug  4 15:31:27 2018
Last change: Sat Aug  4 15:25:34 2018 via cibadmin on server1
Stack: classic openais (with plugin)
Current DC: server1 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
1 Resources configured


Online: [ server1 server2 ]   //server2 online

vip     (ocf::heartbeat:IPaddr2):   Started server1
3.1.6 Set the no-quorum policy

In a two-node cluster the surviving node can never hold quorum once its peer fails, and the default policy would stop all resources in that case, so we tell the cluster to keep running without quorum.
[root@server2 x86_64]# crm
crm(live)# configure 
crm(live)configure# show 
node server1
node server2
primitive vip ocf:heartbeat:IPaddr2 \
    params ip="172.25.2.100" cidr_netmask="24" \
    op monitor interval="1min"
property $id="cib-bootstrap-options" \
    dc-version="1.1.10-14.el6-368c726" \
    cluster-infrastructure="classic openais (with plugin)" \
    expected-quorum-votes="2" \
    stonith-enabled="false"
crm(live)configure# property no-quorum-policy=ignore  // keep resources running when quorum is lost
crm(live)configure# verify   // check syntax
crm(live)configure# commit  // save
3.1.7 Add the haproxy resource:
[root@server1 ~]# crm 
crm(live)# configure 
crm(live)configure# primitive haproxy lsb:haproxy op monitor interval=1min   // manage haproxy through its LSB init script
crm(live)configure# commit 

// crm_mon output:
Last updated: Sat Aug  4 15:45:04 2018
Last change: Sat Aug  4 15:44:58 2018 via cibadmin on server1
Stack: classic openais (with plugin)
Current DC: server2 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
2 Resources configured


Online: [ server1 server2 ]

vip     (ocf::heartbeat:IPaddr2):   Started server2
haproxy (lsb:haproxy):  Started server1 // haproxy runs on server1 while the VIP is on server2; the group in the next step ties them together
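
Since Pacemaker now drives haproxy through its init script, the service must not also be started outside the cluster. If it was started by hand earlier (as on server2 in 3.1.1), stop it and keep it from starting at boot; a precaution the transcript does not show:

[root@server2 ~]# /etc/init.d/haproxy stop   // let pacemaker own the service
[root@server2 ~]# chkconfig haproxy off      // don't start it outside the cluster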
3.1.8 Create a group combining the VIP and haproxy
[root@server1 ~]# crm 
crm(live)# configure 
crm(live)configure# group hagroup vip haproxy 
crm(live)configure# commit 

// crm_mon output:
Last updated: Sat Aug  4 15:46:21 2018
Last change: Sat Aug  4 15:46:02 2018 via cibadmin on server1
Stack: classic openais (with plugin)
Current DC: server2 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
2 Resources configured


Online: [ server1 server2 ]

 Resource Group: hagroup   // the new resource group
     vip        (ocf::heartbeat:IPaddr2):   Started server2
     haproxy    (lsb:haproxy):  Started server1

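A group is shorthand for colocation plus ordering: its members run on the same node and start in the listed order (vip first, then haproxy). The same effect could be written as explicit constraints; a sketch, not run in this lab:

crm(live)configure# colocation haproxy-with-vip inf: haproxy vip   // keep haproxy on the node holding the VIP
crm(live)configure# order vip-before-haproxy inf: vip haproxy      // bring up the VIP before haproxy
crm(live)configure# commit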

3.1.9 Testing

Put server1 into standby and watch the resources migrate to server2:
[root@server1 ~]# crm node standby   // put the current node into standby; its resources migrate away

Last updated: Sat Aug  4 16:05:30 2018
Last change: Sat Aug  4 16:05:26 2018 via crm_attribute on server1
Stack: classic openais (with plugin)
Current DC: server2 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
2 Resources configured


Node server1: standby
Online: [ server2 ]     // only server2 is online now

 Resource Group: hagroup
     vip        (ocf::heartbeat:IPaddr2):   Started server2 
     haproxy    (lsb:haproxy):  Started server2  // resources have moved to server2

[root@server1 ~]# crm node online   // bring the node back online

Last updated: Sat Aug  4 16:15:27 2018
Last change: Sat Aug  4 16:15:11 2018 via crm_attribute on server1
Stack: classic openais (with plugin)
Current DC: server2 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
2 Resources configured


[root@server2 ~]# crm_mon
Online: [ server1 server2 ]    // both nodes are online again

 Resource Group: hagroup
     vip        (ocf::heartbeat:IPaddr2):   Started server2
     haproxy    (lsb:haproxy):  Started server2  // resources stayed on server2 after server1 returned

3.2 Configure fencing:

1. On server1 and server2

[root@server1 ~]# yum install fence-virt-0.2.3-15.el6.x86_64 -y  // install the fence agent (on both nodes)

Physical host:

[root@foundation2 cluster]# scp fence_xvm.key server1:/etc/cluster/   // distribute the key (/etc/cluster must already exist on the node)
root@server1's password: 
fence_xvm.key                                                                     100%  128     0.1

2. Enable STONITH for fencing

[root@server1 ~]# crm
crm(live)# configure 
crm(live)configure# show 
node server1 \
    attributes standby="off"
node server2
primitive haproxy lsb:haproxy \
    op monitor interval="1min"
primitive vip ocf:heartbeat:IPaddr2 \
    params ip="172.25.2.100" cidr_netmask="24" \
    op monitor interval="1min"
group hagroup vip haproxy
property $id="cib-bootstrap-options" \
    dc-version="1.1.10-14.el6-368c726" \
    cluster-infrastructure="classic openais (with plugin)" \
    expected-quorum-votes="2" \
    stonith-enabled="false" \
    no-quorum-policy="ignore"
crm(live)configure# property stonith-enabled=true  // turn fencing back on
crm(live)configure# commit

3. Create the vmfence resource

[root@server2 ~]# crm 
crm(live)# configure 
crm(live)configure# primitive vmfence     // Tab completion shows the available classes, agents, and parameters:
lsb:      ocf:      service:  stonith:  
crm(live)configure# primitive vmfence stonith:fence_
fence_legacy   fence_pcmk     fence_virt     fence_xvm      
crm(live)configure# primitive vmfence stonith:fence_xvm 
meta     op       params   
crm(live)configure# primitive vmfence stonith:fence_xvm params 
action=                ipport=                pcmk_list_action=      pcmk_off_retries=      pcmk_status_timeout=
auth=                  key_file=              pcmk_list_retries=     pcmk_off_timeout=      port=
debug=                 multicast_address=     pcmk_list_timeout=     pcmk_reboot_action=    priority=
delay=                 pcmk_host_argument=    pcmk_monitor_action=   pcmk_reboot_retries=   retrans=
domain=                pcmk_host_check=       pcmk_monitor_retries=  pcmk_reboot_timeout=   stonith-timeout=
hash=                  pcmk_host_list=        pcmk_monitor_timeout=  pcmk_status_action=    timeout=
ip_family=             pcmk_host_map=         pcmk_off_action=       pcmk_status_retries=   use_uuid=

crm(live)configure# primitive vmfence stonith:fence_xvm params pcmk_host_map="server1:westos1;server2:westos2;" op monitor interval=1min
// each hostname in pcmk_host_map must map to the VM's name as known on the physical host
crm(live)configure# commit 
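
Before trusting the agent, it is worth confirming that each node can actually reach fence_virtd on the physical host (this assumes fence_virtd was already set up there, as referenced at the top of the post):

[root@server1 ~]# fence_xvm -o list   // should list the VMs (westos1, westos2) known to fence_virtd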

Monitor output:

Last updated: Sat Aug  4 16:36:52 2018
Last change: Sat Aug  4 16:36:00 2018 via cibadmin on server2
Stack: classic openais (with plugin)
Current DC: server2 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
3 Resources configured


Online: [ server1 server2 ]

 Resource Group: hagroup
     vip        (ocf::heartbeat:IPaddr2):   Started server2
     haproxy    (lsb:haproxy):  Started server2
vmfence (stonith:fence_xvm):    Started server1  // the fence resource has been added

3.3 Testing

[root@server2 ~]# echo c >/proc/sysrq-trigger  // crash the kernel on server2


Last updated: Sat Aug  4 16:39:20 2018
Last change: Sat Aug  4 16:36:00 2018 via cibadmin on server2
Stack: classic openais (with plugin)
Current DC: server2 - partition WITHOUT quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
3 Resources configured


Online: [ server2 ]
OFFLINE: [ server1 ]

 Resource Group: hagroup
     vip        (ocf::heartbeat:IPaddr2):   Started server2
     haproxy    (lsb:haproxy):  Started server2
vmfence (stonith:fence_xvm):    Started server2  // vmfence migrated to server2

server1 is fenced and reboots.

[root@server1 ~]# /etc/init.d/corosync start  // start the service again after the reboot
Starting Corosync Cluster Engine (corosync):               [  OK  ]


Last updated: Sat Aug  4 16:40:58 2018
Last change: Sat Aug  4 16:36:00 2018 via cibadmin on server2
Stack: classic openais (with plugin)
Current DC: server2 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
3 Resources configured


Online: [ server1 server2 ]  

 Resource Group: hagroup
     vip        (ocf::heartbeat:IPaddr2):   Started server2
     haproxy    (lsb:haproxy):  Started server2
vmfence (stonith:fence_xvm):    Started server1  // vmfence moves back to server1

4. Access the VIP
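Assuming the pages on server3 and server4 identify themselves (as configured for the HAProxy backend in the earlier post), repeated requests to the VIP from the physical host should alternate between the two backends:

[root@foundation2 ~]# curl 172.25.2.100   // answered by one backend, e.g. server3
[root@foundation2 ~]# curl 172.25.2.100   // the next request goes to the other backend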


Reposted from blog.csdn.net/yifan850399167/article/details/81452934