Database high availability MHA architecture (Perl language + virtual IP (VIP))

1. Concept

*MHA's main job is master-slave switching (failover)*

MHA takes over after the master database server goes down; the failover is one-time.

After the failover, the downed master's entry (server1) is deleted from the MHA configuration file.

The software consists of two parts: MHA Manager (management node) and MHA Node (data node).

(At present, MHA mainly supports a one-master, multiple-slave architecture: at least three servers, i.e. one master, one standby master (relay), and one slave.)

Notice:

All three database servers must have MHA Node installed (so that binary log files can be parsed on each node after a failure).

Manager can be deployed on an independent machine to manage multiple master-slave clusters, or it can be deployed on a slave node.

MHA Node runs on every MySQL server, while MHA Manager periodically probes the master node in the cluster. When the master fails, MHA automatically promotes the slave with the most recent data to be the new master, and then repoints all other slaves at it. The entire failover process is completely transparent to the application.

2. Operation process:

During automatic failover, MHA tries to save the binary log from the downed master server so that data loss is minimized, but this is not always feasible.

For example, if the master's hardware fails or it cannot be reached over SSH, MHA cannot save the binary log and can only fail over, losing the most recent data.

With *semi-synchronous replication* starting from MySQL 5.5 , the risk of data loss can be greatly reduced.

Semi-synchronous replication:

If only one slave has received the latest binary log, MHA can apply the latest binary log to all other slave servers, thus ensuring the data consistency of all nodes

At present, MHA mainly supports a one-master, multiple-slave architecture. Building MHA requires at least three database servers in a replication cluster: one acts as the master, one as the standby master, and one as the slave.

MHA works with any storage engine that supports master-slave replication; it is not limited to the transactional InnoDB engine.

3. How MHA works

Compared with other HA software, MHA's purpose is to keep the master highly available in MySQL Replication. Its *biggest feature* is that it can repair the relay-log differences between multiple slaves, ultimately making all slaves consistent, and then select one of them to act as the new master and point the other slaves at it. (It re-establishes the master-slave configuration by itself after the master server goes down; this failover is one-time.)

(1) Save binary log events (binlog events) from the crashed master;

(2) *Identify the slave with the latest update* ;

(3) Apply the differential relay log (relay log) to other slaves;

(4) Apply binary log events (binlog events) saved from the master;

(5) *Promote a slave to be the new master* ;

(6) *Make other slaves connect to the new master for replication* ;

The biggest benefit: MHA selects a slave to become the new master and restores the master-slave relationships around it.

MHA software composition

Manager toolkit and Node toolkit

4. Deploy MHA

Build environment

| Role | Hostname | IP | Type | Software |
|---|---|---|---|---|
| master | Weiyi5 | 192.168.106.101 | master mysql (write) | mha4mysql-node |
| slave1 | Weiyi7 | 192.168.106.103 | relay mysql (read) | mha4mysql-node |
| slave2 | Weiyi8 | 192.168.106.104 | slave mysql (read) | mha4mysql-node |
| mha | Weiyi6 | 192.168.106.106 | management node | mha4mysql-node; mha4mysql-manager |

The master provides external write services, slave1 provides read services, and slave2 also provides read services. Once the master goes down, one of the slaves will be promoted to the new master, and the other slaves will point to the new master.

Synchronize time (all servers must have synchronized time, via network time or manual setting)

[root@Weiyi5~]# yum install -y ntp

[root@Weiyi5~]# ntpdate ntp1.aliyun.com

This step can be sent to all sessions at once with Xshell's send-to-all tool.

1) Disable the firewall on all servers

systemctl stop firewalld (can be sent to all sessions via the tool)

systemctl disable firewalld

2) Disable selinux on all servers

setenforce 0 (can be sent to all sessions via the tool)

3) Configure mutual SSH login of all hosts without password authentication

(Both the databases and MHA communicate over secure SSH links, and after a host goes down the binary logs are also pulled over SSH, so passwordless SSH login is required between all hosts.)

Note: passwordless SSH login is required among all four machines!! Including to themselves!

This operation can be sent to all Xshell sessions at once (otherwise, perform it one by one on the four machines).

[root@Weiyi5~]# ssh-keygen -t rsa

[root@Weiyi5~]# ssh-copy-id 192.168.106.101

[root@Weiyi5~]# ssh-copy-id 192.168.106.103

[root@Weiyi5~]# ssh-copy-id 192.168.106.104

[root@Weiyi5~]# ssh-copy-id 192.168.106.106
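The four copies above can also be done in a loop; a minimal sketch (host list taken from the build-environment table; the commands are echoed here rather than executed):

```shell
# Hosts from the build-environment table (adjust to your own IPs).
hosts="192.168.106.101 192.168.106.103 192.168.106.104 192.168.106.106"
for h in $hosts; do
  # In a real run, drop the echo to actually copy the key.
  echo "ssh-copy-id root@$h"
done
```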

Perform the following configuration on the MHA manager server (Weiyi6).

4) Upload the required software packages

mha4mysql-node-0.57-0.el7.noarch.rpm

mha4mysql-manager-0.57-0.el7.noarch.rpm

mhapath.tar.gz

[root@Weiyi6~]# tar -zxvf mhapath.tar.gz

5) Configure local yum source

[root@Weiyi6~]# *vim /etc/yum.repos.d/mhapath.repo*

[mha]

name=mhapath

baseurl=file:///root/mhapath

enabled=1

gpgcheck=0

[root@Weiyi6~]# *vim /etc/yum.repos.d/centos.repo*

[centos7]

name=centos_7

baseurl=file:///mnt/cdrom

enabled=1

gpgcheck=0

[root@Weiyi6~]# mount /dev/sr0 /mnt/cdrom/

(All four machines need the CD-ROM mounted; on some it is already mounted via /etc/fstab.)

6) Copy software packages and yum configuration files to other nodes ( *scp local files to ip: a certain directory* )

[root@Weiyi6~]# for ip in 101 104 103 ; do scp -r /etc/yum.repos.d/* 192.168.106.$ip:/etc/yum.repos.d/ ; done
[root@Weiyi6~]# for ip in 101 104 103 ; do scp -r /root/mhapath 192.168.106.$ip:/root ; done
[root@Weiyi6~]# for ip in 101 104 103 ; do scp /root/mha4mysql-node-0.57-0.el7.noarch.rpm 192.168.106.$ip:/root ; done

7) Install software dependency packages on the manager host and each node node

Only the MHA Node toolkit needs to be installed on the database nodes; the management node needs both the Manager toolkit and the Node toolkit.

[root@Weiyi6~]# for ip in 101 103 104 106; do ssh 192.168.106.$ip "yum -y install perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager --skip-broken --nogpgcheck;rpm -ivh mha4mysql-node-0.57-0.el7.noarch.rpm"; done

After the installation is complete, the following script files are generated in the /usr/bin/ directory

[root@Weiyi6~]# ll /usr/bin/{app*,filter*,purge*,save*}

-rwxr-xr-x 1 root root 16381 May 31 2015 /usr/bin/apply_diff_relay_logs
-rwxr-xr-x 1 root root 4807 May 31 2015 /usr/bin/filter_mysqlbinlog
-rwxr-xr-x 1 root root 8261 May 31 2015 /usr/bin/purge_relay_logs
-rwxr-xr-x 1 root root 7525 May 31 2015 /usr/bin/save_binary_logs

8) Install MHA Manager on the server equipped with MHA

MHA Manager mainly consists of several administrator command-line tools, such as masterha_manager, masterha_master_switch, etc. MHA Manager also depends on several Perl modules.

9) Install the perl module that MHA Manger depends on

[root@Weiyi6~]# yum install -y perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager perl-Time-HiRes perl-ExtUtils-CBuilder perl-ExtUtils-MakeMaker perl-CPAN

10) Install the MHA Manager software package

[root@Weiyi6~]# rpm -ivh mha4mysql-manager-0.57-0.el7.noarch.rpm

After the installation is complete, the following script files will be generated under the /usr/bin directory

[root@Weiyi6~]# ll /usr/bin/masterha*

-rwxr-xr-x 1 root root 1995 May 31 2015 /usr/bin/masterha_check_repl
-rwxr-xr-x 1 root root 1779 May 31 2015 /usr/bin/masterha_check_ssh
-rwxr-xr-x 1 root root 1865 May 31 2015 /usr/bin/masterha_check_status
-rwxr-xr-x 1 root root 3201 May 31 2015 /usr/bin/masterha_conf_host
-rwxr-xr-x 1 root root 2517 May 31 2015 /usr/bin/masterha_manager
-rwxr-xr-x 1 root root 2165 May 31 2015 /usr/bin/masterha_master_monitor
-rwxr-xr-x 1 root root 2373 May 31 2015 /usr/bin/masterha_master_switch
-rwxr-xr-x 1 root root 5171 May 31 2015 /usr/bin/masterha_secondary_check
-rwxr-xr-x 1 root root 1739 May 31 2015 /usr/bin/masterha_stop

5. Build a master-slave replication environment

To minimize data loss when the master's hardware fails or it goes down, it is recommended to configure MySQL semi-synchronous replication alongside MHA. The MySQL semi-sync plugin originated from a Google patch; it lives in /usr/local/mysql/lib/plugin/: semisync_master.so for the master, and semisync_slave.so for the slaves.

Install the semi-synchronous plugin on all MySQL database servers (this can also be sent to all sessions in the tool).

[root@Weiyi5~]# mysql -uroot -p123456 -e "install plugin rpl_semi_sync_master soname 'semisync_master.so';install plugin rpl_semi_sync_slave soname 'semisync_slave.so';"

[root@Weiyi7~]# mysql -uroot -p123456 -e "install plugin rpl_semi_sync_master soname 'semisync_master.so';install plugin rpl_semi_sync_slave soname 'semisync_slave.so';"

[root@Weiyi8~]# mysql -uroot -p123456 -e "install plugin rpl_semi_sync_master soname 'semisync_master.so';install plugin rpl_semi_sync_slave soname 'semisync_slave.so';"

Check whether the plugin has been installed correctly

[root@Weiyi5~]# mysql -uroot -p123456 -e "show plugins;" | grep rpl_semi_sync*

 View information about semi-sync

[root@Weiyi5~]# mysql -uroot -p123456 -e "show variables like '%rpl_semi_sync%';"

The output above shows that the semi-synchronous replication plugin is installed but not yet enabled, so its status is OFF.

Configure the master database server (Weiyi5)

Configure my.cnf (because the master *may become a slave* after a later failover, *both binlog and relay log must be configured*)

[root@Weiyi5~]# vim /etc/my.cnf

server-id=1

log-bin=/data/mysql/log/mysql-bin

log-bin-index=/data/mysql/log/mysql-bin.index

binlog_format=mixed

rpl_semi_sync_master_enabled=1

rpl_semi_sync_master_timeout=10000

rpl_semi_sync_slave_enabled=1

relay_log_purge=0

relay-log = /data/mysql/log/relay-bin

relay-log-index = /data/mysql/log/slave-relay-bin.index

binlog-do-db=HA #databases that slaves may replicate; the database to record in the binary log

log_slave_updates=1 #only when log_slave_updates is enabled will a slave's binlog record the operations replicated from the master

Note: Explanation of relevant parameters

rpl_semi_sync_master_enabled=1 : 1 means enabled, 0 means disabled

rpl_semi_sync_master_timeout=10000 : in milliseconds; the master waits this long for a slave's acknowledgement, and after 10 seconds it stops waiting and falls back to asynchronous replication.

restart service

[root@Weiyi5~]# systemctl restart mysqld

Create the database to be synchronized (HA) with a test table

[root@Weiyi5~]# mysql -uroot -p123456
mysql> create database HA;

mysql> use HA;

mysql> create table test(id int,name varchar(20));

mysql> insert into test values(1,'tom1');

Create authorized account

mysql> grant replication slave on *.* to slave@'192.168.106.%' identified by '123456';
mysql> grant all privileges on *.* to *root*@'192.168.106.%' identified by '123456'; (the account used by MHA; it can be another user, but it needs root-level privileges)
mysql> flush privileges;

View the status of the master

In the mysql shell on Weiyi5: mysql> show master status;

(Note which mysql-bin.00000x file the master is logging to, and the position; both are needed when configuring the slave databases.)
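The file name and position can be pulled out of the `show master status` output with awk; a sketch using sample values (the real values come from your own master):

```shell
# Sample row from 'show master status' (File, Position, Binlog_Do_DB);
# the values below are illustrative only.
status_row="mysql-bin.000001 1500 HA"
file=$(echo "$status_row" | awk '{print $1}')
pos=$(echo "$status_row" | awk '{print $2}')
# This is the statement the slaves will need:
echo "change master to master_log_file='$file', master_log_pos=$pos;"
```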

Export the HA database to the slave servers

[root@Weiyi5~]# mysqldump -uroot -p123456 -B HA>HA.sql

[root@Weiyi5~]# scp HA.sql [email protected]:~

[root@Weiyi5~]# scp HA.sql [email protected]:~

Configure the slave service on the relay database server

import database

[root@Weiyi7~]# mysql -uroot -p123456 <HA.sql

Configure my.cnf:

[root@Weiyi7~]# vim /etc/my.cnf
server-id=2

log-bin=/data/mysql/log/mysql-bin

log-bin-index=/data/mysql/log/mysql-bin.index

binlog_format=mixed

rpl_semi_sync_master_enabled=1

rpl_semi_sync_master_timeout=10000

rpl_semi_sync_slave_enabled=1

relay_log_purge=0

relay-log=/data/mysql/log/relay-bin

relay-log-index=/data/mysql/log/slave-relay-bin.index

binlog-do-db=HA

log_slave_updates=1 #only when log_slave_updates is enabled will a slave's binlog record the operations replicated from the master

restart server

[root@Weiyi7~]# systemctl restart mysqld

Create authorized account

(Because any slave server may be promoted to master, the slave servers also need the synchronization account.)

[root@Weiyi7~]# mysql -uroot -p123456
mysql> grant replication slave on *.* to slave@'192.168.106.%' identified by '123456';

mysql> grant all privileges on *.* to root@'192.168.106.%' identified by '123456';

mysql> flush privileges;

Establish a master-slave relationship

mysql> stop slave;
mysql> change master to master_host='192.168.106.101',master_user='slave',master_password='123456',master_log_file='mysql-bin.000001',master_log_pos=1500;

(If you did not import HA.sql above, check show master status on the master for the current file and position!!!! The creation statements are in the binlog; importing is easier.)

mysql> start slave;

mysql> show slave status\G

Check that both Slave_IO_Running and Slave_SQL_Running are Yes.

 Configure the slave service on the last slave database server

import database

[root@Weiyi8~]# mysql -uroot -p123456 <HA.sql

Configure my.cnf:

server-id=3

log-bin=/data/mysql/log/mysql-bin

log-bin-index=/data/mysql/log/mysql-bin.index

binlog_format=mixed

rpl_semi_sync_master_enabled=1

rpl_semi_sync_master_timeout=10000

rpl_semi_sync_slave_enabled=1

relay_log_purge=0

relay-log = /data/mysql/log/relay-bin

relay-log-index = /data/mysql/log/slave-relay-bin.index

binlog-do-db=HA #databases that slaves may replicate; the database to record in the binary log

log_slave_updates=1 #only when log_slave_updates is enabled will a slave's binlog record the operations replicated from the master

restart service

[root@Weiyi8~]# systemctl restart mysqld

Create authorized account

(Each slave may become master later, so its configuration file should be similar and the user-authorization step is also required.)

[root@Weiyi8~]# mysql -uroot -p123456

mysql> grant replication slave on *.* to slave@'192.168.106.%' identified by '123456';

mysql> grant all privileges on *.* to root@'192.168.106.%' identified by '123456';

mysql> flush privileges;

 Establish a master-slave relationship

mysql> stop slave;

mysql> change master to master_host='192.168.106.101',master_user='slave',master_password='123456',master_log_file='mysql-bin.000001',master_log_pos=1500;

mysql> start slave;

mysql> show slave status\G

Set read_only on the two slave servers

The slave libraries serve reads, but read_only is set dynamically rather than in the configuration file, because any slave may be promoted to master at any time.

[root@Weiyi7~]# mysql -uroot -p123456 -e 'set global read_only=1'

[root@Weiyi8~]# mysql -uroot -p123456 -e 'set global read_only=1'
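Since the same statement runs on both slaves, it can be generated in a loop; a sketch that only echoes the commands (slave IPs from the environment table):

```shell
# Slaves that should be read-only (commands echoed, not executed).
for h in 192.168.106.103 192.168.106.104; do
  echo "mysql -h $h -uroot -p123456 -e 'set global read_only=1'"
done > /tmp/readonly_cmds.txt
cat /tmp/readonly_cmds.txt
```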

View semi-synchronization information on the master

[root@Weiyi5~]# mysql -uroot -p123456 -e "show variables like '%rpl_semi_sync%';"


View the semi-sync status

[root@Weiyi5~]# mysql -uroot -p123456 -e "show status like '%rpl_semi_sync%';"


6. Configure MHA

Create the working directory of MHA and create related configuration files

[root@Weiyi6~]# mkdir -p /etc/masterha

[root@Weiyi6~]# mkdir -p /var/log/masterha/app1

[root@Weiyi6~]# vim /etc/masterha/app1.cnf

[server default]

manager_workdir=/var/log/masterha/app1

manager_log=/var/log/masterha/app1/manager.log

master_binlog_dir=/data/mysql/log

#master_ip_failover_script=/usr/bin/master_ip_failover

#master_ip_online_change_script=/usr/bin/master_ip_online_change

user=root

password=123456

ping_interval=1

remote_workdir=/tmp

repl_user=slave

repl_password=123456

report_script=/usr/local/send_report

shutdown_script=""

ssh_user=root

[server1]

hostname=192.168.106.101

port=3306

[server2]

hostname=192.168.106.103

port=3306

[server3]

hostname=192.168.106.104
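To sanity-check which hosts app1.cnf registers, the hostname lines can be grepped out; a self-contained sketch against a sample copy of the file:

```shell
# Write a sample app1.cnf (mirrors the config above) and list its hosts.
cat > /tmp/app1.cnf.sample <<'EOF'
[server default]
user=root
[server1]
hostname=192.168.106.101
port=3306
[server2]
hostname=192.168.106.103
port=3306
[server3]
hostname=192.168.106.104
EOF
grep '^hostname=' /tmp/app1.cnf.sample | cut -d= -f2
```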

Check SSH configuration

Check the SSH connection status of MHA Manger to all MHA Nodes:

[root@Weiyi6~]# masterha_check_ssh --conf=/etc/masterha/app1.cnf

*If an error is reported: check whether all of your nodes are configured for passwordless SSH login*

Check the entire master-slave replication situation

[root@Weiyi6~]# masterha_check_repl --conf=/etc/masterha/app1.cnf

Note: if an error is reported here (the second check will report an error), it is because a previous health check failed and its status file must be deleted:

[root@Weiyi6~]# rm -rf /var/log/masterha/app1/app1.master_status.health

Check the status of MHA Manager

[root@Weiyi6~]# masterha_check_status --conf=/etc/masterha/app1.cnf

app1 is stopped(2:NOT_RUNNING).

Note: normally this displays "PING_OK"; otherwise it displays "NOT_RUNNING", meaning MHA monitoring is not running.

Enable MHA Manager monitoring

[root@Weiyi6~]# nohup masterha_manager --conf=/etc/masterha/app1.cnf \

--remove_dead_master_conf --ignore_last_failover < /dev/null > \

/var/log/masterha/app1/manager.log 2>&1 &

[1] 43664

Introduction to startup parameters:

--remove_dead_master_conf : when a master-slave switchover happens, the old master's entry is removed from the configuration file.

--manager_log : log file location.

--ignore_last_failover : by default, if MHA detects consecutive failures less than 8 hours apart, it will not perform a failover. This parameter tells MHA to ignore the file generated by the last failover. By default, after a switchover MHA creates an app1.failover.complete file in the manager working directory (here /var/log/masterha/app1). On the next switchover, if that file is found in the directory, MHA refuses to switch unless the file is deleted first; for convenience, --ignore_last_failover is set here.
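The marker-file behaviour described above can be checked by hand; a sketch (paths follow the manager_workdir set in app1.cnf):

```shell
# If this file exists, MHA refuses the next automatic failover
# unless --ignore_last_failover is passed.
marker=/var/log/masterha/app1/app1.failover.complete
if [ -e "$marker" ]; then
  msg="marker present: delete $marker or use --ignore_last_failover"
else
  msg="no marker: the next failover is allowed"
fi
echo "$msg"
```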

View MHA Manager monitoring again

[root@Weiyi6~]# masterha_check_status --conf=/etc/masterha/app1.cnf

app1 (pid:43664) is running(0:PING_OK), master:192.168.106.101

It can be seen that it is already being monitored, and the host of the master is 192.168.106.101

View startup log

[root@Weiyi6~]# tail -20 /var/log/masterha/app1/manager.log

Close MHA Manager monitoring

[root@Weiyi6~]# masterha_stop --conf=/etc/masterha/app1.cnf

7. Simulate faults

open monitoring

[root@Weiyi6~]# nohup masterha_manager --conf=/etc/masterha/app1.cnf \

--remove_dead_master_conf --ignore_last_failover < /dev/null > \

/var/log/masterha/app1/manager.log 2>&1 &

[1] 44107

1) Open a new window to observe the log

[root@Weiyi6~]# tail -0f /var/log/masterha/app1/manager.log

2) Simulate the master library going down

[root@Weiyi5~]# systemctl stop mysqld

3) Check the log to see if the master switch is successful

You can see that the master has switched to 192.168.106.103

Log in to the remaining slave (Weiyi8) and run show slave status\G to confirm it now replicates from the new master (Weiyi7)

[root@Weiyi8~]# mysql -uroot -p123456 -e "show slave status\G"

*The [server1] entry originally configured in /etc/masterha/app1.cnf has been deleted automatically by the system!! Check with vim /etc/masterha/app1.cnf: it is gone.*

4) View Weiyi7 master-slave status

[root@Weiyi7~]# mysql -uroot -p123456 -e "show processlist \G"

Only Weiyi8's slave connection remains; MHA kicked the original master (Weiyi5) out of the topology.

[root@Weiyi7~]# mysql -uroot -p123456 -e "show master status;"

8. Configure VIP to work with MHA

Using a VIP (virtual IP) makes the MySQL master server highly available to clients.

There are two ways to manage the VIP: have keepalived float the virtual IP, or bring the virtual IP up via a script (i.e. no software such as keepalived or heartbeat is needed).

To avoid split brain, it is recommended to manage the virtual IP with a script rather than keepalived.

Configure the VIP on the current mysql master (Weiyi7)

[root@Weiyi7~]# ifconfig ens33:1 192.168.106.88 netmask 255.255.255.0 up

[root@Weiyi7~]# ifconfig ens33:1
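Whether the VIP is currently bound can also be checked generically with `ip addr`; a sketch (the VIP 192.168.106.88 is the one used in this setup):

```shell
# Look for the VIP on any interface; prints one of two fixed messages.
vip=192.168.106.88
if ip addr show 2>/dev/null | grep -q "$vip"; then
  state="VIP $vip is bound on this host"
else
  state="VIP $vip is not bound on this host"
fi
echo "$state"
```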

1) Enable the script in the manager configuration file

Add the automatic switchover script to the main configuration file, restore Weiyi5, and add it back as a slave ([server1]):

[root@Weiyi6~]# vim /etc/masterha/app1.cnf

master_ip_failover_script=/usr/bin/master_ip_failover

[server1]

hostname=192.168.106.101

port=3306

2) Restore Weiyi5 master-slave replication

[root@Weiyi5~]# systemctl start mysqld

[root@Weiyi5~]# mysql -uroot -p123456

mysql> stop slave;

mysql> change master to master_host='192.168.106.103',master_user='slave',master_password='123456',master_log_file='mysql-bin.000001',master_log_pos=890;

mysql> start slave;

mysql> set global read_only=1;

mysql> show slave status\G

3) Write a script /usr/bin/master_ip_failover (perl scripting language)

The vip used is 192.168.106.88/24

[root@Weiyi6~]# vim /usr/bin/master_ip_failover

#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;

my (
    $command, $ssh_user, $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip,
    $new_master_port
);

my $vip = '192.168.106.88/24';
my $key = '1';
my $ssh_start_vip = "/sbin/ifconfig ens33:$key $vip";
my $ssh_stop_vip  = "/sbin/ifconfig ens33:$key down";

GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);

exit &main();

sub main {
    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
    if ( $command eq "stop" || $command eq "stopssh" ) {
        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {
        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        #`ssh $ssh_user\@cluster1 \" $ssh_start_vip \"`;
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}

# A simple system call that enables the VIP on the new master
sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}

# A simple system call that disables the VIP on the old master
sub stop_vip() {
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
    print
    "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
[root@Weiyi6~]# chmod +x /usr/bin/master_ip_failover

4) Check SSH configuration

[root@Weiyi6~]# masterha_check_ssh --conf=/etc/masterha/app1.cnf

5) To check the status of the entire cluster replication environment, you need to delete the files of the previous health check

[root@Weiyi6~]# rm -rf /var/log/masterha/app1/app1.master_status.health

[root@Weiyi6~]# masterha_check_repl --conf=/etc/masterha/app1.cnf

6) Turn on MHA Manager monitoring

[root@Weiyi6~] nohup masterha_manager --conf=/etc/masterha/app1.cnf \

--remove_dead_master_conf --ignore_last_failover < /dev/null > \

/var/log/masterha/app1/manager.log 2>&1 &

[1] 44422

7) Check whether the MHA Manager monitoring is normal

[root@Weiyi6~] masterha_check_status --conf=/etc/masterha/app1.cnf

app1 (pid:44422) is running(0:PING_OK), master:192.168.106.103

8) Open a new window to observe the log

[root@Weiyi6~] tail -0f /var/log/masterha/app1/manager.log

9) Simulate the master library going down

10) Shut down the mysql master server

[root@Weiyi7~] systemctl stop mysqld

11) Check the log to see if the master switch is successful

[root@Weiyi5~] mysql -uroot -p123456 -e "show master status\G"

12) On Weiyi8, check whether the main switch is successful

[root@Weiyi8~] mysql -uroot -p123456 -e "show slave status\G"

13) View Weiyi5 master-slave status

[root@Weiyi5~] mysql -uroot -p123456 -e "show processlist \G"

14) Check whether the VIP has drifted over ( *it drifts from the old master to the new master and is no longer on the old one* )

[root@Weiyi5~] ifconfig ens33:1


Origin blog.csdn.net/m0_72264240/article/details/130439365