Outline

- Foreword
- Architecture of MHA
- Environment deployment
- Experimental procedure
- Summary
Foreword

In the last article we implemented MySQL master-slave replication, but as mentioned before, master-slave replication by itself has many problems. In this article we will introduce how to use MHA to achieve high availability for a MySQL replication cluster.
Architecture of MHA
MHA (Master High Availability) provides high availability for the master node of a MySQL master-slave replication setup. It mainly implements:

- Automated master monitoring and failover
- Interactive (manual) master failover
- Non-interactive master failover
- Online switching of the master to a different host
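The "online switching" feature above is exposed through the `masterha_master_switch` tool. A sketch of an invocation, assuming the configuration file `/etc/mha.cnf` created later in this article and node2 (172.16.1.3) as the target master; it requires a live cluster, so it is shown for illustration only:

```shell
# Online switch of the master role to 172.16.1.3 while the current master is
# still alive; --orig_master_is_new_slave demotes the old master to a slave.
# Config path and IP follow the lab setup used later in this article.
masterha_master_switch --conf=/etc/mha.cnf --master_state=alive \
    --new_master_host=172.16.1.3 --orig_master_is_new_slave
```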
The MHA service consists of two roles:

- MHA Manager (management node): usually deployed on a dedicated host to manage multiple master/slave clusters; each cluster is usually called an application
- MHA Node (data node): runs on each MySQL server; provides tools for monitoring and for parsing and purging logs to speed up failover
When the MySQL master node fails, MHA performs failover roughly as follows: it saves the binary log events from the crashed master if it is still reachable over SSH, identifies the slave with the most recent relay log, applies the differential relay log events to the other slaves, applies the saved binary log events, promotes the chosen slave to the new master, and finally points the remaining slaves at it.
Components of MHA

Components of the Manager node

- masterha_check_ssh: SSH connectivity checker that MHA depends on
- masterha_check_repl: MySQL replication environment checker
- masterha_manager: the main MHA service program
- masterha_check_status: tool to check the running status of the MHA manager and the master it monitors
- masterha_conf_host: add or remove configured nodes
- masterha_stop: tool to stop the MHA service
Components of the Node role

- save_binary_logs: save and copy the master's binary logs
- apply_diff_relay_logs: identify differential relay log events and apply them to the other slaves
- filter_mysqlbinlog: remove unnecessary ROLLBACK events (MHA has since removed this tool)
- purge_relay_logs: purge relay logs (without blocking the SQL thread)
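Because MHA relies on relay logs during recovery, automatic relay log purging is usually disabled on the slaves and `purge_relay_logs` is run periodically from cron instead. A hypothetical crontab entry for each slave node; the user, password, schedule, and paths are assumed values, not from this article:

```shell
# Hypothetical crontab entry on each slave: purge relay logs every 4 hours.
# --disable_relay_log_purge turns off relay_log_purge if it is still enabled.
0 */4 * * * /usr/bin/purge_relay_logs --user=mhauser --password=passwd --disable_relay_log_purge --workdir=/tmp >> /var/log/purge_relay_logs.log 2>&1
```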
Custom extensions

- secondary_check_script: check master availability through multiple network routes
- master_ip_failover_script: update the master IP used by the application
- shutdown_script: forcibly shut down the master node
- report_script: send reports
- init_conf_load_script: load initial configuration parameters
- master_ip_online_change_script: update the master's IP address during an online switch
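As a sketch of what a `master_ip_failover_script` might look like, here is a minimal dry-run version: it parses the MHA-style `--command`/`--orig_master_host`/`--new_master_host` arguments and echoes the VIP action it would take instead of executing it. The VIP 172.16.1.100, the interface alias eth0:1, and the function name are assumptions; a real script would run the echoed commands.

```shell
# Dry-run sketch of a VIP-switching failover script (assumed VIP/interface).
mha_vip_switch() {
  local command="" orig_master="" new_master=""
  local VIP=172.16.1.100 IFACE=eth0:1   # assumed values, not from the article
  for arg in "$@"; do
    case "$arg" in
      --command=*)          command="${arg#*=}" ;;
      --orig_master_host=*) orig_master="${arg#*=}" ;;
      --new_master_host=*)  new_master="${arg#*=}" ;;
    esac
  done
  case "$command" in
    # on "stop"/"stopssh" MHA wants the VIP removed from the failed master
    stop|stopssh) echo "ssh root@$orig_master /sbin/ifconfig $IFACE down" ;;
    # on "start" the VIP moves to the newly promoted master
    start)        echo "ssh root@$new_master /sbin/ifconfig $IFACE $VIP up" ;;
    *)            echo "unsupported command: $command" ;;
  esac
}

mha_vip_switch --command=stop --orig_master_host=172.16.1.2
mha_vip_switch --command=start --new_master_host=172.16.1.3
```

In production this script would be referenced from the `[server default]` section via `master_ip_failover_script`.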
Environment deployment
Lab environment

Node | IP | Role |
---|---|---|
node1 | 172.16.1.2 | Master node |
node2 | 172.16.1.3 | Slave node / candidate master |
node3 | 172.16.1.4 | Slave node |
node4 | 172.16.1.5 | Manager node |
Experimental topology
When the master node goes down, node2 automatically takes over as the new master.
Software versions

Software | Version |
---|---|
MySQL | 5.1 |
mha4mysql-manager | 0.56 |
mha4mysql-node | 0.54 |
Experimental procedure
Install and configure MySQL
[root@node1 ~]# yum install mysql-server -y
[root@node2 ~]# yum install mysql-server -y
[root@node3 ~]# yum install mysql-server -y
[root@node4 ~]# yum install mysql-server -y
MySQL master node configuration file

I will not explain the configuration in detail here; if you are interested, you can read my last article.
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
innodb_file_per_table = 1
log-bin=master-log
log-bin-index=master-log.index
server_id=1
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
MySQL slave node configuration file

We only show the configuration file of one slave node here.
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
innodb_file_per_table = 1
log-bin = master-log
log-bin-index = master-log.index
relay-log = relay-log
read_only = 1
server_id=2 # each slave server must use a different value
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
Configure master-slave
MySQL Master node configuration
We need to create a user with administrative privileges so that the MHA Manager can control each node.
mysql> SHOW MASTER STATUS; # be sure to check and record the Position value before creating the users
+-------------------+----------+--------------+------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+-------------------+----------+--------------+------------------+
| master-log.000003 | 106 | | |
+-------------------+----------+--------------+------------------+
1 row in set (0.00 sec)
mysql> GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'rpuser'@'%' IDENTIFIED BY 'passwd';
Query OK, 0 rows affected (0.00 sec)
mysql> GRANT ALL ON *.* TO 'mhauser'@'%' IDENTIFIED BY 'passwd';
Query OK, 0 rows affected (0.00 sec)
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
MySQL Slave Node Configuration
mysql> CHANGE MASTER TO
-> MASTER_HOST='172.16.1.2',
-> MASTER_USER='rpuser',
-> MASTER_PASSWORD='passwd',
-> MASTER_LOG_FILE='master-log.000003',
-> MASTER_LOG_POS=106;
Query OK, 0 rows affected (0.03 sec)
mysql> START SLAVE; # start the slave
Query OK, 0 rows affected (0.00 sec)
mysql> SHOW SLAVE STATUS\G # check that Slave_IO_Running and Slave_SQL_Running are both Yes
Master_Host: 172.16.1.2
Master_User: rpuser
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: master-log.000003
Read_Master_Log_Pos: 476
Relay_Log_File: relay-log.000002
Relay_Log_Pos: 622
Relay_Master_Log_File: master-log.000003
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Configure MHA
Configuration prerequisites
Each host needs to be able to SSH to the others without a password.

Mutual SSH trust configuration

Here we use a very simple approach: generate a key pair, then copy it to every node host.
[root@node4 ~]# ssh-keygen -P '' -t rsa -f /root/.ssh/id_rsa # generate the key pair on node4
[root@node4 ~]# cat /root/.ssh/id_rsa.pub > /root/.ssh/authorized_keys
[root@node4 ~]# scp /root/.ssh/{id_rsa,authorized_keys} node1.anyisalin.com:/root/.ssh/
[root@node4 ~]# scp /root/.ssh/{id_rsa,authorized_keys} node2.anyisalin.com:/root/.ssh/
[root@node4 ~]# scp /root/.ssh/{id_rsa,authorized_keys} node3.anyisalin.com:/root/.ssh/
Install MHA
[root@node4 ~]# yum localinstall mha4mysql-manager-0.56-0.el6.noarch.rpm mha4mysql-node-0.54-0.el6.noarch.rpm # install both packages on the manager node
# install mha4mysql-node on each data node
[root@node2 ~]# yum localinstall mha4mysql-node-0.54-0.el6.noarch.rpm
[root@node3 ~]# yum localinstall mha4mysql-node-0.54-0.el6.noarch.rpm
[root@node1 ~]# yum localinstall mha4mysql-node-0.54-0.el6.noarch.rpm
Create configuration file
[root@node4 ~]# vim /etc/mha.cnf
[server default]
user=mhauser
password=passwd
manager_workdir=/data/masterha/app1
manager_log=/data/masterha/app1/manager.log
remote_workdir=/data/masterha/app1
ssh_user=root
repl_user=rpuser
repl_password=passwd
ping_interval=1
[server1]
hostname=172.16.1.2
candidate_master=1
[server2]
hostname=172.16.1.3
candidate_master=1
[server3]
hostname=172.16.1.4
Check the environment
Use the built-in check tools to verify the environment before starting MHA.
[root@node4 ~]# masterha_check_ssh --conf=/etc/mha.cnf # check SSH; --conf specifies the configuration file
# the following line at the end indicates success
Thu Apr 28 19:02:05 2016 - [info] All SSH connection tests passed successfully.
[root@node4 ~]# masterha_check_repl --conf=/etc/mha.cnf # check master-slave replication
# the following line at the end indicates success
MySQL Replication Health is OK.
Start MHA

nohup masterha_manager --conf=/etc/mha.cnf &> /data/masterha/app1/manager.log &
# specify the configuration file, run the process in the background, and detach it from the terminal
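To stop the manager gracefully later, MHA ships `masterha_stop` (listed among the manager components above). The invocation below reuses this article's config path; it requires a running MHA manager, so it is shown for illustration only:

```shell
# Stop the running MHA manager for the cluster described by /etc/mha.cnf
masterha_stop --conf=/etc/mha.cnf
```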
Test failover
[root@node4 ~]# masterha_check_status --conf=/etc/mha.cnf # the current master is node1
mha (pid:2573) is running(0:PING_OK), master:172.16.1.2
[root@node1 ~]# service mysqld stop # manually stop the master node
[root@node3 ~]# mysql
mysql> SHOW SLAVE STATUS\G
Slave_IO_State: Waiting for master to send event
Master_Host: 172.16.1.3 # the master has switched to node2
Master_User: rpuser
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: master-log.000006
Read_Master_Log_Pos: 106
Relay_Log_File: relay-log.000004
Relay_Log_Pos: 252
Relay_Master_Log_File: master-log.000006
[root@node2 ~]# mysql
mysql> SHOW GLOBAL VARIABLES LIKE '%read_only%'; # read_only has been turned OFF by MHA
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| read_only | OFF |
+---------------+-------+
1 row in set (0.00 sec)
Summary

In fact, this architecture is still not complete. To use it in production, we would also need a script that automatically switches a VIP, plus a dedicated MySQL read/write-splitting proxy in front for scheduling. Due to time constraints we won't go into more detail here; compared to those topics, the basic usage shown above is trivial.

The author's level is limited, so if there are any mistakes, please point them out promptly. If you think this article is well written, please give it a like~(≧▽≦)/~
Author: AnyISaIln QQ: 1449472454
Thanks: MageEdu