Copyright notice: this is an original post by the author and may not be reproduced without permission. https://blog.csdn.net/EVISWANG/article/details/77966619
PXC nodes: 10.194.41.231, 10.194.41.228, 10.194.41.227
HA nodes: 10.194.41.220 (primary), 10.194.41.221 (backup)
VIP: 10.194.41.199
一. Install keepalived
1. yum install keepalived
2. Edit the configuration
On the primary node (4):
vi /etc/keepalived/keepalived.conf
vrrp_script chk_haproxy {
    script "killall -0 haproxy"   # verify that a haproxy process exists
    interval 2                    # check every 2 seconds
    weight -2                     # subtract 2 from priority when the check fails
}
vrrp_instance PXC_3306 {
    interface eth0                # interface to monitor
    state MASTER
    virtual_router_id 51          # must be identical on both nodes
    priority 101                  # 101 on the primary, 90 on the backup
    nopreempt
    debug
    virtual_ipaddress {
        10.194.41.199/24 dev eth0 label eth0:0
    }
    track_script {
        chk_haproxy
    }
    notify_master /etc/keepalived/scripts/start_haproxy.sh   # run on transition to MASTER
    notify_fault /etc/keepalived/scripts/stop_keepalived.sh  # run on entering the FAULT state
    notify_stop /etc/keepalived/scripts/stop_haproxy.sh      # run before keepalived stops
}
Note: on the backup node (5), /etc/keepalived/keepalived.conf is identical except for:
state BACKUP
priority 90
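The chk_haproxy script hinges on `killall -0`, which sends signal 0: the kernel performs only the existence/permission check and delivers nothing, so the exit status alone tells keepalived whether a haproxy process is running. A minimal standalone sketch of the same pattern using `kill -0` (a demo only, not part of the keepalived config):

```shell
#!/bin/bash
# Signal 0 delivers nothing; the exit status reports whether the PID exists.
# keepalived runs "killall -0 haproxy" every 2s and docks the node's priority
# by 2 (weight -2) whenever a check of this kind fails.
if kill -0 "$$" 2>/dev/null; then
    echo "process $$ is alive"    # our own shell PID always exists
fi

# A PID that does not exist makes the same check fail:
kill -0 4194304 2>/dev/null || echo "no such process"
```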
3. Notify scripts
3.1 Script run on transition to MASTER
vi /etc/keepalived/scripts/start_haproxy.sh
#!/bin/bash
sleep 5
get=`ip addr | grep 10.194.41.199 | wc -l`
echo $get >> /etc/keepalived/scripts/start_ha.log
if [ "$get" -eq 1 ]
then
    echo "`date +%c` success to get vip" >> /etc/keepalived/scripts/start_ha.log
    /usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/conf/haproxy.cfg   # adjust to your actual haproxy install path
else
    echo "`date +%c` can not get vip" >> /etc/keepalived/scripts/start_ha.log
fi
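start_haproxy.sh only launches haproxy after confirming that the VIP actually landed on this host, by counting matching lines in `ip addr`. The same check can be exercised without a live interface by stubbing the `ip addr` output (the `fake_ip_addr` function and its sample output below are hypothetical):

```shell
#!/bin/bash
# Hypothetical stand-in for "ip addr" on a node that holds the VIP
fake_ip_addr() {
    cat <<'EOF'
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet 10.194.41.220/24 brd 10.194.41.255 scope global eth0
    inet 10.194.41.199/24 scope global secondary eth0:0
EOF
}

# Same logic as start_haproxy.sh: exactly one matching line means we own the VIP
get=$(fake_ip_addr | grep -c 10.194.41.199)
if [ "$get" -eq 1 ]; then
    echo "success to get vip"    # safe to start haproxy, which binds 10.194.41.199:3307
else
    echo "can not get vip"
fi
```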
3.2 Script run on entering the FAULT state
vi /etc/keepalived/scripts/stop_keepalived.sh
#!/bin/bash
pid=`pidof keepalived`
if [ -z "$pid" ]
then
    echo "`date +%c` no keepalived process id" >> /etc/keepalived/scripts/stop_keep.log
else
    echo "`date +%c` will stop keepalived" >> /etc/keepalived/scripts/stop_keep.log
    /etc/init.d/keepalived stop
fi
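One detail worth calling out: testing an empty variable with `[ $pid == "" ]` is fragile, because when `pidof` finds nothing the unquoted `$pid` expands to nothing and the test collapses to the malformed `[ == "" ]`. The `-z`/`-n` operators are the safe forms, as a quick sketch shows:

```shell
#!/bin/bash
pid=""    # simulate "pidof keepalived" finding no process

# [ $pid == "" ] would expand to [ == "" ] here and fail with a syntax error;
# -z tests for the empty string safely.
if [ -z "$pid" ]; then
    echo "no keepalived process id"
fi

pid="1234"    # simulate a found PID
if [ -n "$pid" ]; then
    echo "will stop keepalived ($pid)"
fi
```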
3.3 Script run before keepalived stops (notify_stop)
vi /etc/keepalived/scripts/stop_haproxy.sh
#!/bin/bash
pid=`pidof haproxy`
echo "`date +%c` stop haproxy" >> /etc/keepalived/scripts/stop_ha.log
[ -n "$pid" ] && kill -9 $pid
二. Install haproxy
wget http://www.haproxy.org/download/1.4/src/haproxy-1.4.27.tar.gz
tar xvfz haproxy-1.4.27.tar.gz -C /tmp/
cd /tmp/haproxy-1.4.27
make TARGET=linux26 PREFIX=/usr/local/haproxy
make install PREFIX=/usr/local/haproxy
cd /usr/local/haproxy/
mkdir conf logs
vim conf/haproxy.cfg
global
    maxconn 51200
    #uid 99
    #gid 99
    chroot /usr/local/haproxy
    daemon
    #quiet
    nbproc 1
    pidfile /usr/local/haproxy/logs/haproxy.pid
defaults
    mode tcp
    option redispatch
    option abortonclose
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
    log 127.0.0.1 local0
    balance roundrobin
listen proxy
    bind 10.194.41.199:3307
    mode tcp
    option httpchk
    server db1 10.194.41.231:3306 weight 1 check port 9200 inter 12000 rise 3 fall 3
    server db2 10.194.41.228:3306 weight 1 check port 9200 inter 12000 rise 3 fall 3
    server db3 10.194.41.227:3306 weight 1 check port 9200 inter 12000 rise 3 fall 3
listen haproxy_stats
    mode http
    bind 10.194.41.220:8888
    option httplog
    stats refresh 5s
    stats uri /status
    stats realm Haproxy\ Manager
    stats auth admin:evis123
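The server lines control failure-detection timing: `inter 12000` probes every 12000 ms, `fall 3` marks a node DOWN after 3 consecutive failed checks, and `rise 3` brings it back after 3 consecutive successes. So with these values a crashed PXC node stops receiving traffic after roughly 36 seconds, which this small arithmetic sketch spells out:

```shell
#!/bin/bash
# Values from the server lines in haproxy.cfg above
inter_ms=12000   # interval between health checks, in milliseconds
fall=3           # consecutive failures before a server is marked DOWN
rise=3           # consecutive successes before it is marked UP again

down_after_s=$(( inter_ms * fall / 1000 ))
up_after_s=$(( inter_ms * rise / 1000 ))
echo "dead node removed after ~${down_after_s}s, restored after ~${up_after_s}s"
```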
Note: haproxy is installed and configured identically on both nodes 4 and 5.
On nodes 4 and 5, add the following kernel parameters to /etc/sysctl.conf:
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
Then run sysctl -p to apply them.
三. Database nodes
Install the mysql health-check script used by haproxy on every PXC node.
1) Copy the scripts
# cp /usr/local/mysql/bin/clustercheck /usr/bin/
# cp /usr/local/mysql/xinetd.d/mysqlchk /etc/xinetd.d/
# cp /usr/local/mysql/bin/mysql /usr/bin/
PS: clustercheck and mysqlchk are used here with their default values, unmodified.
Note: if you do not use the default credentials built into clustercheck, edit the MYSQL_USERNAME and MYSQL_PASSWORD values in the script. The defaults are:
clustercheckuser
clustercheckpassword!
Create the check user on the cluster:
CREATE USER 'clustercheckuser'@'%' IDENTIFIED BY 'clustercheckpassword!';
GRANT ALL ON *.* TO 'clustercheckuser'@'%';
FLUSH PRIVILEGES;
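Under the hood, clustercheck queries the node's `wsrep_local_state` status variable and answers HTTP 200 only when the state is 4 (Synced); any other state yields 503, so haproxy stops routing to that node. A sketch of that decision with the mysql query stubbed out (`fake_state_query` is hypothetical; the real script shells out to the mysql client):

```shell
#!/bin/bash
# Hypothetical stub for: mysql -e "SHOW STATUS LIKE 'wsrep_local_state'"
# wsrep_local_state 4 means Synced; other values (Joining, Donor, ...) mean
# the node should not receive traffic.
fake_state_query() { echo "4"; }

state=$(fake_state_query)
if [ "$state" -eq 4 ]; then
    echo "HTTP/1.1 200 OK"
    echo "Percona XtraDB Cluster Node is synced."
else
    echo "HTTP/1.1 503 Service Unavailable"
    echo "Percona XtraDB Cluster Node is not synced."
fi
```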
2) Register the mysqlchk service port in /etc/services:
# echo 'mysqlchk 9200/tcp # mysqlchk' >> /etc/services
3) Install xinetd, whose daemon manages the mysql health-check script as a network service
# yum -y install xinetd
# /etc/init.d/xinetd restart
Stopping xinetd: [FAILED]
Starting xinetd: [ OK ]
# chkconfig --level 2345 xinetd on
# chkconfig --list |grep xinetd
xinetd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
After installation, on systemd hosts enable xinetd at boot:
systemctl enable xinetd.service
Enable the telnet socket at boot:
systemctl enable telnet.socket
Finally, start both services:
systemctl start telnet.socket
systemctl start xinetd    (or: service xinetd start)
Test the check script:
# clustercheck
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: close
Content-Length: 40
Percona XtraDB Cluster Node is synced.
Note: the check must return status 200. Anything else means the check failed, usually because mysql is down or misconfigured, and haproxy will then be unable to use that mysql node.
This is how haproxy detects whether each MySQL server is alive: an HTTP check against port 9200 tells HAProxy the state of the PXC node.
Repeat the steps above on every cluster node and confirm that each one returns 200:
curl -I 10.194.41.231:9200
curl -I 10.194.41.228:9200
curl -I 10.194.41.227:9200
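The `Content-Length: 40` in the sample response is consistent with the 38-byte body plus a trailing CRLF (assuming, as the stock clustercheck script does, that the body is terminated with `\r\n`):

```shell
#!/bin/bash
body="Percona XtraDB Cluster Node is synced."
body_len=${#body}                     # 38 bytes of visible text
content_length=$(( body_len + 2 ))    # plus the trailing \r\n
echo "body: ${body_len} bytes, Content-Length: ${content_length}"
```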
四. Final startup and testing
1. Start keepalived
/etc/init.d/keepalived start
On systemd hosts:
systemctl daemon-reload                  # reload unit files
systemctl enable keepalived.service      # enable at boot
systemctl disable keepalived.service     # disable at boot
systemctl start keepalived.service       # start
systemctl stop keepalived.service        # stop
2. Start haproxy
/usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/conf/haproxy.cfg
Then open the stats page: http://10.194.41.220:8888/status