MySQL high availability cluster architecture: MHA

MHA overview:

(1) Introduction

MHA is currently a relatively mature solution for MySQL high availability. It was developed by youshimaton of the Japanese company DeNA (now working at Facebook) and is excellent high-availability software for failover and master promotion in MySQL high-availability environments. During a MySQL failover, MHA can automatically complete the database failover within 0 to 30 seconds, and while performing the failover it preserves data consistency to the greatest extent possible, in order to achieve high availability in the true sense.

(2) The software consists of two parts:

MHA Manager (the management node) and MHA Node (the data node). MHA Manager can be deployed on a separate machine to manage multiple master-slave clusters, or it can be deployed on one of the slave nodes. MHA Node runs on every MySQL server. MHA Manager periodically probes the master node in the cluster; when the master fails, it automatically promotes the slave with the most recent data to be the new master and then redirects all the other slaves to the new master. The entire failover process is completely transparent to the application.

(3) How it works:

1. During automatic failover, MHA attempts to save the binary log from the crashed master so that, to the greatest extent possible, no data is lost, but this is not always feasible. For example, if the master's hardware has failed or it cannot be reached over SSH, MHA cannot save the binary log, and the failover loses the most recent data. With the semi-synchronous replication available since MySQL 5.5, the risk of data loss can be greatly reduced, and MHA can be combined with semi-synchronous replication. If even a single slave has received the latest binary log, MHA can apply it to all the other slaves, thereby guaranteeing data consistency across all nodes.

2. Failover sequence:

① Save the binary log events (binlog events) from the crashed master;
② identify the slave containing the most recent updates;
③ apply the differential relay log to the other slaves;
④ apply the binlog events saved from the master;
⑤ promote one slave to be the new master;
⑥ point the other slaves at the new master and resume replication.
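The candidate-selection step (②) amounts to comparing binlog coordinates across the surviving slaves. A minimal sketch of that comparison, using hypothetical host/file/position data rather than anything queried from a live cluster:

```shell
#!/bin/sh
# Pick the slave with the most recent binlog coordinates (step 2 above).
# The host/file/position triples are hypothetical sample data.
candidates="slave1 master-bin.000001 1213
slave2 master-bin.000001 980"

# Sort by binlog file name, then numerically by position; the last line wins.
latest=$(printf '%s\n' "$candidates" | sort -k2,2 -k3,3n | tail -n 1 | awk '{print $1}')
echo "most up-to-date slave: $latest"
```

In a real failover, MHA derives these coordinates itself from each slave's relay log; the sketch only illustrates the ordering rule.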

Lab environment

master  (192.168.13.129)   mha4mysql-node
slave1  (192.168.13.130)   mha4mysql-node
slave2  (192.168.13.131)   mha4mysql-node
manager (192.168.13.128)   mha4mysql-manager, mha4mysql-node

1. Install MySQL on all three database servers (the master and both slaves)

# Install the build dependencies
[root@localhost ~] yum -y install gcc gcc-c++ ncurses ncurses-devel bison perl-Module-Install cmake
[root@localhost ~] mount.cifs //192.168.100.3/mha /mnt     ## mount the package share
Password for root@//192.168.100.3/mha:    
[root@localhost ~] cd /mnt
[root@localhost mnt] tar zxvf cmake-2.8.6.tar.gz -C /opt   ## unpack the cmake build tool
[root@localhost mnt] cd /opt/cmake-2.8.6/
[root@localhost cmake-2.8.6] ./configure   ## configure
[root@localhost cmake-2.8.6] gmake && gmake install   ## build and install
# Install the MySQL database
[root@localhost cmake-2.8.6]# cd /mnt
[root@localhost mnt]# tar zxvf mysql-5.6.36.tar.gz -C /opt ## unpack MySQL
# Build MySQL: install prefix, character set, default collation, all extra
# character sets, and the config file directory (note: comments must not follow
# the trailing backslashes, or the line continuation breaks)
[root@localhost mnt]# cd /opt/mysql-5.6.36/
[root@localhost mysql-5.6.36]# cmake -DCMAKE_INSTALL_PREFIX=/usr/local/mysql \
-DDEFAULT_CHARSET=utf8 \
-DDEFAULT_COLLATION=utf8_general_ci \
-DWITH_EXTRA_CHARSETS=all \
-DSYSCONFDIR=/etc
# Install
[root@localhost mysql-5.6.36]# make && make install  ## build and install
# Set up the environment
[root@localhost mysql-5.6.36]# cp support-files/my-default.cnf /etc/my.cnf  ## copy the config file
[root@localhost mysql-5.6.36]# cp support-files/mysql.server /etc/rc.d/init.d/mysqld 
## copy the init script
[root@localhost mysql-5.6.36]# chmod +x /etc/rc.d/init.d/mysqld  ## make it executable
[root@localhost mysql-5.6.36]# chkconfig --add mysqld  ## register it with service management
[root@localhost mysql-5.6.36]# echo "PATH=$PATH:/usr/local/mysql/bin" >> /etc/profile
## extend the PATH
[root@localhost mysql-5.6.36]# source /etc/profile   ## reload the environment variables
# Create the mysql user and group, and grant ownership
[root@localhost mysql-5.6.36]# groupadd mysql   ## create the group
[root@localhost mysql-5.6.36]# useradd -M -s /sbin/nologin mysql -g mysql  
## create the system user
[root@localhost mysql-5.6.36]# chown -R mysql.mysql /usr/local/mysql  ## change owner and group
[root@localhost mysql-5.6.36]# mkdir -p /data/mysql  ## create the data directory
# Initialize the database: base directory, data directory, run-as user
[root@localhost mysql-5.6.36]# /usr/local/mysql/scripts/mysql_install_db \
--basedir=/usr/local/mysql \
--datadir=/usr/local/mysql/data \
--user=mysql

2. Edit the MySQL configuration file /etc/my.cnf on each server; note that the server-id must differ on all three servers

--- Configure the master (note that these settings belong under the [mysqld] section):
[root@master mysql-5.6.36]# vim /etc/my.cnf
[mysqld]
server-id = 1
# Enable the binary log
log_bin = master-bin
# Allow the slaves to log replicated updates
log-slave-updates = true

--- Configure slave 1:

[root@slave1 mysql-5.6.36]# vim /etc/my.cnf
[mysqld]
server-id = 2
# Enable the binary log
log_bin = master-bin
# Use relay logs for synchronization
relay-log = relay-log-bin
relay-log-index = slave-relay-bin.index

--- Configure slave 2:
[root@slave2 mysql-5.6.36]# vim /etc/my.cnf
[mysqld]
server-id = 3
log_bin = master-bin
relay-log = relay-log-bin
relay-log-index = slave-relay-bin.index
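Since two nodes sharing a server-id will silently break replication, a quick check that the ids really differ can save debugging time. A sketch, using inline sample fragments in place of the three real /etc/my.cnf files:

```shell
#!/bin/sh
# Sanity check: every node's my.cnf must carry a distinct server-id.
# Sample config fragments stand in for the three real /etc/my.cnf files.
printf '[mysqld]\nserver-id = 1\n' > node1.cnf
printf '[mysqld]\nserver-id = 2\n' > node2.cnf
printf '[mysqld]\nserver-id = 3\n' > node3.cnf

# Collect the ids, deduplicate, and count the distinct values.
count=$(grep -h '^server-id' node1.cnf node2.cnf node3.cnf |
        awk -F'= *' '{print $2}' | sort -u | wc -l)
if [ "$count" -ne 3 ]; then
    echo "duplicate server-id detected" >&2
    exit 1
fi
echo "all server-id values are unique"
```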

3. Start the MySQL service on all three servers

# Create these two symlinks on each of the three servers
[root@master mysql-5.6.36]# ln -s /usr/local/mysql/bin/mysql /usr/sbin/
[root@master mysql-5.6.36]# ln -s /usr/local/mysql/bin/mysqlbinlog /usr/sbin/

# Start mysql
[root@master mysql-5.6.36]# /usr/local/mysql/bin/mysqld_safe --user=mysql &
# Turn off the firewall and SELinux enforcement
[root@master mysql-5.6.36]# systemctl stop firewalld.service
[root@master mysql-5.6.36]# setenforce 0

4. Configure MySQL master-slave replication (one master, two slaves): create the two authorized users on all database nodes

[root@master mysql-5.6.36]# mysql -u root -p             // enter the database
mysql> grant replication slave on *.* to 'myslave'@'192.168.13.%' identified by '123';
## user myslave, used by the slaves for replication
mysql> grant all privileges on *.* to 'mha'@'192.168.13.%' identified by 'manager';
## monitoring user, used by the manager
mysql> flush privileges;  // reload the privilege tables
# Also add the following grants by hostname (in theory unnecessary, but MHA performs its checks using hostnames)
mysql> grant all privileges on *.* to 'mha'@'master' identified by 'manager';
mysql> grant all privileges on *.* to 'mha'@'slave1' identified by 'manager';
mysql> grant all privileges on *.* to 'mha'@'slave2' identified by 'manager';

5. View the current binary log file and synchronization position on the master

mysql> show master status;
+-------------------+----------+--------------+------------------+-------------------+
| File              | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+-------------------+----------+--------------+------------------+-------------------+
| master-bin.000001 |     1213 |              |                  |                   |
+-------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

6. Configure synchronization on both slaves

# Run the following commands on both slaves to synchronize from the master's log
mysql>  change master to master_host='192.168.13.129',master_user='myslave',master_password='123',master_log_file='master-bin.000001',master_log_pos=1213;
mysql>  start slave;    // start the slave threads
mysql> show slave status\G  // check the slave status
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
mysql> set global read_only=1;
mysql> flush privileges;   // reload the privilege tables
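The master_log_file and master_log_pos values change every time the master is reinitialized, so it can help to build the change master statement from shell variables instead of retyping the coordinates on each slave. A sketch, using the coordinates from step 5; the variable names are illustrative:

```shell
#!/bin/sh
# Build the CHANGE MASTER TO statement from variables so the binlog
# coordinates reported by `show master status` live in one place.
MASTER_HOST=192.168.13.129
REPL_USER=myslave
REPL_PASS=123
LOG_FILE=master-bin.000001
LOG_POS=1213

SQL="change master to master_host='$MASTER_HOST',master_user='$REPL_USER',master_password='$REPL_PASS',master_log_file='$LOG_FILE',master_log_pos=$LOG_POS;"
echo "$SQL"
# On each slave you would then run:  mysql -u root -p -e "$SQL"
```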

7. Install the MHA dependency environment on all servers; first install the epel repository

[root@master mysql-5.6.36]# yum install epel-release --nogpgcheck -y  ## install the epel repository
# Perl dependencies: the MySQL driver, config parsing, logging, multi-process
# management, build tooling, and the CPAN library (note: comments must not
# follow the trailing backslashes, or the line continuation breaks)
[root@master mysql-5.6.36]# yum install -y perl-DBD-MySQL \
perl-Config-Tiny \
perl-Log-Dispatch \
perl-Parallel-ForkManager \
perl-ExtUtils-CBuilder \
perl-ExtUtils-MakeMaker \
perl-CPAN

8. Install the node component on all servers

# Unpack and install the node component
[root@manager ~]# cd ~
[root@manager ~]# tar zxvf /mnt/mha4mysql-node-0.57.tar.gz
[root@manager ~]# cd mha4mysql-node-0.57/
[root@manager mha4mysql-node-0.57]# perl Makefile.PL  ## generate the Makefile with perl
[root@manager mha4mysql-node-0.57]# make && make install  ## build and install

9. Install the manager component on the manager server (note: the node component must be installed before the manager component)

# Turn off the firewall
[root@manager ~]# systemctl stop firewalld.service 
[root@manager ~]# setenforce 0
# Unpack and install the manager component
[root@manager ~]# cd ~
[root@manager ~]# tar zxvf /mnt/mha4mysql-manager-0.57.tar.gz
[root@manager ~]# cd mha4mysql-manager-0.57/
[root@manager mha4mysql-manager-0.57]# perl Makefile.PL   ## generate the Makefile with perl
[root@manager mha4mysql-manager-0.57]# make && make install   ## build and install

After the manager is installed, several tools are generated under /usr/local/bin:

 - masterha_check_repl       check the MySQL replication status
 - masterha_master_monitor   check whether the master is down
 - masterha_check_ssh        check the MHA SSH configuration
 - masterha_master_switch    control failover
 - masterha_check_status     check the current MHA running state
 - masterha_conf_host        add or remove configured server entries
 - masterha_stop             stop the manager
 - masterha_manager          the script that starts the manager

After the node component is installed, the following scripts are generated under /usr/local/bin (they are normally invoked by the MHA Manager scripts and require no manual operation):

 - apply_diff_relay_logs: identify differential relay log events and apply them to the other slaves;
 - save_binary_logs: save and copy the master's binary log;
 - filter_mysqlbinlog: strip unnecessary ROLLBACK events (MHA no longer uses this tool);
 - purge_relay_logs: purge relay logs (without blocking the SQL thread);
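On busy slaves, purge_relay_logs is typically scheduled rather than run by hand. A dry-run sketch of a possible cron entry; the credentials, workdir, and log path are placeholders, and the echo keeps it from touching a real server:

```shell
#!/bin/sh
# Sketch of a cron-driven relay log cleanup for a slave.
# Credentials and paths are placeholders; echo keeps this a dry run.
CMD="purge_relay_logs --user=mha --password=manager --disable_relay_log_purge --workdir=/tmp"
echo "0 4 * * * $CMD >> /var/log/purge_relay.log 2>&1"
```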

10. Configure passwordless SSH authentication

## On the manager, configure passwordless authentication to all database nodes
[root@manager ~]# ssh-keygen -t rsa  ## generate the key pair
Enter file in which to save the key (/root/.ssh/id_rsa):   ## press Enter
Enter passphrase (empty for no passphrase):   ## press Enter
Enter same passphrase again:    ## press Enter
[root@manager ~]# ssh-copy-id 192.168.13.129  ## copy the key to the other servers
Are you sure you want to continue connecting (yes/no)? yes
root@192.168.13.129's password:    ## enter the password of the 129 server
[root@manager ~]# ssh-copy-id 192.168.13.130
[root@manager ~]# ssh-copy-id 192.168.13.131
## On master, configure passwordless authentication to database nodes slave1 and slave2
[root@master ~]# ssh-keygen -t rsa
[root@master ~]# ssh-copy-id 192.168.13.130
[root@master ~]# ssh-copy-id 192.168.13.131
## On slave1, configure passwordless authentication to database nodes master and slave2
[root@slave1 ~]# ssh-keygen -t rsa
[root@slave1 ~]# ssh-copy-id 192.168.13.129
[root@slave1 ~]# ssh-copy-id 192.168.13.131
## On slave2, configure passwordless authentication to database nodes master and slave1
[root@slave2 ~]# ssh-keygen -t rsa
[root@slave2 ~]# ssh-copy-id 192.168.13.129
[root@slave2 ~]# ssh-copy-id 192.168.13.130
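The key distribution above is the same pattern on every node: copy the key to every peer except yourself. A dry-run sketch that only prints the commands a node would run; SELF and the peer list come from the lab environment:

```shell
#!/bin/sh
# Dry run: print the key-distribution commands one node would run
# against its peers. SELF would be the node's own address; the echo
# avoids touching real hosts.
SELF=192.168.13.129
for peer in 192.168.13.129 192.168.13.130 192.168.13.131; do
    [ "$peer" = "$SELF" ] && continue   # skip ourselves
    echo "ssh-copy-id $peer"
done
```

Changing SELF to each node's own IP reproduces the per-host command lists shown above.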

11. Configure MHA: on the manager node, copy the relevant scripts to the /usr/local/bin directory and configure them

[root@manager ~]# cp -ra /root/mha4mysql-manager-0.57/samples/scripts/ /usr/local/bin/
## copy the scripts directory to /usr/local/bin
[root@manager ~]# ls mha4mysql-manager-0.57/samples/scripts/
 ## four executable scripts are provided
 master_ip_failover: manages the VIP during automatic failover;
 master_ip_online_change: manages the VIP during an online switchover;
 power_manager: shuts down the failed host after a failure;
 send_report: sends an alert after a failover;
## Copy the VIP-management script for automatic failover into /usr/local/bin/:
[root@manager ~]# cp /usr/local/bin/scripts/master_ip_failover /usr/local/bin/
[root@manager ~]# vim /usr/local/bin/master_ip_failover   
## delete the file's contents and rewrite the master_ip_failover script as follows
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';

use Getopt::Long;

my (
    $command, $ssh_user, $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip, $new_master_port
);
my $vip = '192.168.13.100';
my $brdc = '192.168.13.255';
my $ifdev = 'ens33';
my $key = '1';
my $ssh_start_vip = "/sbin/ifconfig ens33:$key $vip";
my $ssh_stop_vip = "/sbin/ifconfig ens33:$key down";
my $exit_code = 0;
#my $ssh_start_vip = "/usr/sbin/ip addr add $vip/24 brd $brdc dev $ifdev label $ifdev:$key;/usr/sbin/arping -q -A -c 1 -I $ifdev $vip;iptables -F;";
#my $ssh_stop_vip = "/usr/sbin/ip addr del $vip/24 dev $ifdev label $ifdev:$key";
GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);

exit &main();

sub main {

    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";

    if ( $command eq "stop" || $command eq "stopssh" ) {

        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {

        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}

# A simple system call that enables the VIP on the new master
sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}

# A simple system call that disables the VIP on the old master
sub stop_vip() {
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
    print
    "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}

12. Create the MHA working directory on the manager node and copy the configuration file

[root@manager ~]# mkdir /etc/masterha
[root@manager ~]# cp /root/mha4mysql-manager-0.57/samples/conf/app1.cnf /etc/masterha/
# Edit the configuration file
[root@manager ~]# vim /etc/masterha/app1.cnf
[server default]
# manager log file
manager_log=/var/log/masterha/app1/manager.log     
# manager working directory
manager_workdir=/var/log/masterha/app1
# location where the master stores its binlog; must match the binlog path configured on the master
master_binlog_dir=/usr/local/mysql/data
# switchover script used during automatic failover, i.e. the script written above
master_ip_failover_script=/usr/local/bin/master_ip_failover
# switchover script used during a manual (online) switch
master_ip_online_change_script=/usr/local/bin/master_ip_online_change
# password of the monitoring user created earlier
password=manager
remote_workdir=/tmp
# password of the replication user
repl_password=123
# name of the replication user
repl_user=myslave
# secondary check script: double-checks the master's status from other hosts
secondary_check_script=/usr/local/bin/masterha_secondary_check -s 192.168.13.130 -s 192.168.13.131
# script to shut down the failed host after a failure occurs
shutdown_script=""
# SSH login user
ssh_user=root
# monitoring user
user=mha

[server1]
hostname=192.168.13.129
port=3306

[server2]
candidate_master=1
# mark this node as the candidate master: after a master-slave switch occurs, this slave is promoted to the new master
hostname=192.168.13.130
check_repl_delay=0
port=3306

[server3]
hostname=192.168.13.131
port=3306
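Before running the health checks in the next step, it can help to confirm that the essential [server default] keys actually made it into the file. A sketch that operates on an inline sample fragment; in practice you would point it at /etc/masterha/app1.cnf:

```shell
#!/bin/sh
# Verify the essential [server default] keys exist in the MHA config.
# An inline sample fragment stands in for /etc/masterha/app1.cnf.
cat > app1.cnf <<'EOF'
[server default]
user=mha
password=manager
repl_user=myslave
repl_password=123
ssh_user=root
EOF

for key in user password repl_user repl_password ssh_user; do
    grep -q "^$key=" app1.cnf || { echo "missing key: $key" >&2; exit 1; }
done
echo "app1.cnf contains all required keys"
```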

13. Test the SSH passwordless authentication and the replication health; if everything is normal, both checks will finally report success

[root@manager ~]# masterha_check_ssh -conf=/etc/masterha/app1.cnf
....
[root@manager ~]# masterha_check_repl -conf=/etc/masterha/app1.cnf

14. Configure the virtual IP on the master

[root@master mha4mysql-node-0.57]# /sbin/ifconfig ens33:1 192.168.13.100/24

15. Start MHA on the manager server

[root@manager scripts]# nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &
## check the MHA status; the current master is the 192.168.13.129 node
[root@manager scripts]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:43036) is running(0:PING_OK), master:192.168.13.129
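The master address in the masterha_check_status output is easy to extract for scripting (e.g. an external health monitor). A sketch that parses the sample line shown above with plain parameter expansion:

```shell
#!/bin/sh
# Extract the current master's address from masterha_check_status output.
# The sample line mirrors the output shown above.
status='app1 (pid:43036) is running(0:PING_OK), master:192.168.13.129'
master=${status##*master:}   # strip everything up to and including "master:"
echo "current master: $master"
```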

16. Simulate a failure

[root@manager scripts]# tailf /var/log/masterha/app1/manager.log   
## start watching the monitoring log
## kill the mysql service on the master
[root@master mha4mysql-node-0.57]# pkill -9 mysql

From a slave you can see that the VIP has switched over to one of the slaves:

[root@slave1 mha4mysql-node-0.57]# ifconfig 
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
                inet 192.168.13.130  netmask 255.255.255.0  broadcast 192.168.13.255

ens33:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
                inet 192.168.13.100  netmask 255.255.255.0  broadcast 192.168.13.255
                ether 00:0c:29:af:94:06  txqueuelen 1000  (Ethernet)

At this point, with a mysql client installed on the manager, a client can also connect to the database through the virtual IP:

## Grant privileges on the database server that now holds the VIP
mysql> grant all on *.* to 'root'@'%' identified by 'abc123';
Query OK, 0 rows affected (0.00 sec)
## Log in from the client via the virtual IP
[root@manager ~]# mysql -uroot -h 192.168.13.100 -p  ## specify the virtual IP
Enter password:   ## enter the password

MySQL [(none)]> 

thanks for reading!


Origin blog.51cto.com/14080162/2459786