Building an MMM High-Availability Architecture for the MySQL Database

1. Overview of MMM
1. What is MMM
  MMM (Master-Master replication Manager for MySQL) is a set of script programs that support dual-master failover and day-to-day dual-master management. MMM is written in Perl and mainly monitors and manages MySQL master-master (dual-master) replication. Although it is called dual-master replication, in practice only one master accepts writes at any given time; the other (standby) master serves part of the read traffic, which keeps it warm and speeds up the master-master switchover. In short, the MMM scripts implement failover on the one hand, while on the other hand their additional tools can also load-balance reads across multiple slaves.


2. Application scenarios
 MMM provides automatic and manual ways to remove the virtual IP (VIP) of a server with high replication delay from a group of servers. It can also back up data and keep the two nodes synchronized. Since MMM cannot fully guarantee data consistency, it suits scenarios that do not demand strict data consistency but want to maximize business availability. For businesses with high data-consistency requirements, a high-availability architecture such as MMM is not recommended.
 

3. Features of MMM
MMM is a flexible set of scripts implemented in Perl, used to monitor and fail over MySQL replication and to manage the configuration of MySQL master-master replication.
4. Description of MMM high-availability architecture


 

mmm_mond: the monitoring daemon, responsible for all monitoring work; it determines and handles all node role changes. This script runs on the monitoring machine.
mmm_agentd: the agent daemon running on each MySQL server; it completes the monitoring probe work and performs simple remote service settings. This script runs on the supervised machines.
mmm_control: A simple script that provides commands to manage the mmm_mond process.
The monitoring side of mysql-mmm provides multiple virtual IPs (VIPs), including one writable VIP and multiple readable VIPs. Under the monitor's management, these VIPs are bound to available MySQL servers; when a server goes down, the monitor migrates its VIP to another MySQL server.
 

5. User and Authorization

  During the whole monitoring process, the relevant authorized users need to be added to MySQL so that MySQL supports maintenance by the monitoring machine. The authorized users include an mmm_monitor user and an mmm_agent user. If you want to use the MMM backup tools, an mmm_tools user is also needed.
 

2. Case environment
1. Server configuration


 

2. Server environment
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
 

3. Modify the host name
hostnamectl set-hostname master1
su
 

Do the same on master2, slave1, slave2, and monitor, setting the corresponding host name on each.

3. Case implementation
1. Build MySQL multi-master and multi-slave architecture
(1) Install mysql on master1, master2, slave1, and slave2 nodes
(2) Modify the configuration file of master1
[client]
port = 3306
default-character-set=utf8
socket=/usr/local/mysql/mysql.sock
 
[mysql]
port = 3306
default-character-set=utf8
socket=/usr/local/mysql/mysql.sock
auto-rehash
 
[mysqld]
user = mysql
basedir=/usr/local/mysql
datadir=/usr/local/mysql/data
port=3306
character-set-server=utf8
pid-file=/usr/local/mysql/mysqld.pid
socket=/usr/local/mysql/mysql.sock
server-id = 1
log-error=/usr/local/mysql/data/mysql_error.log
general_log=ON
general_log_file=/usr/local/mysql/data/mysql_general.log
slow_query_log=ON
slow_query_log_file=mysql_slow_query.log
long_query_time=5
binlog-ignore-db=mysql,information_schema
log_bin=mysql_bin
log_slave_updates=true
sync_binlog=1
innodb_flush_log_at_trx_commit=1
auto_increment_increment=2
auto_increment_offset=1
 
 
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_AUTO_VALUE_ON_ZERO,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,PIPES_AS_CONCAT,ANSI_QUOTES

restart mysqld

systemctl restart mysqld
 

Parameter Description

......
server-id = 1
#The server-id of each MySQL host must be unique
log-error=/usr/local/mysql/data/mysql_error.log
#Error log
general_log=ON
#General query log
general_log_file=/usr/local/mysql/data/mysql_general.log
slow_query_log=ON
#Slow query log
slow_query_log_file=mysql_slow_query.log
long_query_time=5
binlog-ignore-db=mysql,information_schema
#Databases that do not need to be replicated
log_bin=mysql_bin
#Enable the binary log for master-slave replication
log_slave_updates=true
#Allow the slave to write to its own binary log when replicating data from the master
sync_binlog=1
#"Double 1" setting: flush the binary log to disk on every write
innodb_flush_log_at_trx_commit=1
#"Double 1" setting: write the log buffer to the log file and flush it to disk on every transaction commit
auto_increment_increment=2
#Step size of the auto-increment field
auto_increment_offset=1
#Starting value of the auto-increment field
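A quick shell sketch shows why these two parameters prevent primary-key collisions: with a step of 2 and offsets 1 and 2, the two masters hand out disjoint id sequences.

```shell
# Ids master1 (auto_increment_offset=1) and master2 (offset=2) would
# assign with auto_increment_increment=2 -- the sequences never overlap.
m1=$(seq 1 2 9 | tr '\n' ' ')
m2=$(seq 2 2 10 | tr '\n' ' ')
echo "master1 assigns: $m1"
echo "master2 assigns: $m2"
```

Every id is claimed by exactly one master, so both masters can accept inserts during a switchover without conflicting.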

(3) Modify the other three mysql and restart the service
Copy the configuration file to the other three database servers and restart the mysql server

Note: The server-id in the configuration file cannot be the same and needs to be modified
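The per-host change can be scripted with sed. A minimal sketch, using `/tmp/my.cnf.demo` as a stand-in for the real configuration file path:

```shell
# Stand-in for the my.cnf copied over from master1:
cnf=/tmp/my.cnf.demo
printf 'server-id = 1\n' > "$cnf"

# On master2 (and with 3/4 on slave1/slave2), set a unique server-id:
sed -i 's/^server-id *= *.*/server-id = 2/' "$cnf"
grep '^server-id' "$cnf"
```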

(4) Configure master-master replication, and the two master servers replicate each other
① Execute the permissions granted to the slave on both master servers, and do not need to execute on the slave server

master1 server (192.168.10.20)

mysql> grant replication slave on *.* to 'replication'@'192.168.10.%' identified by '123456';
mysql> flush privileges;
 
mysql> show master status;
+------------------+----------+--------------+--------------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB         | Executed_Gtid_Set |
+------------------+----------+--------------+--------------------------+-------------------+
| mysql_bin.000001 |     1023 |              | mysql,information_schema |                   |
+------------------+----------+--------------+--------------------------+-------------------+
1 row in set (0.00 sec)
master2 server (192.168.10.30)

mysql> grant replication slave on *.* to 'replication'@'192.168.10.%' identified by '123456';
mysql> flush privileges;
 
mysql> show master status;
+------------------+----------+--------------+--------------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB         | Executed_Gtid_Set |
+------------------+----------+--------------+--------------------------+-------------------+
| mysql_bin.000001 |     1023 |              | mysql,information_schema |                   |
+------------------+----------+--------------+--------------------------+-------------------+
1 row in set (0.00 sec)
② Configure synchronization on master1

192.168.10.20

change master to master_host='192.168.10.30', master_user='replication', master_password='123456', master_log_file='mysql_bin.000001', master_log_pos=1023;
start slave;
show slave status\G
#Check that the IO and SQL threads are both Yes and that the position offset is correct
③ Configure synchronization on master2

192.168.10.30

change master to master_host='192.168.10.20', master_user='replication', master_password='123456', master_log_file='mysql_bin.000001', master_log_pos=1023;
start slave;
show slave status\G
#Check that the IO and SQL threads are both Yes and that the position offset is correct
(5) Configure master-slave replication on the two slave servers
① slave1 server

192.168.10.40

change master to master_host='192.168.10.20',master_user='replication',master_password='123456',master_log_file='mysql_bin.000001',master_log_pos=1023;

start slave;
show slave status\G
 

② slave2 server

192.168.10.50

change master to master_host='192.168.10.20',master_user='replication',master_password='123456',master_log_file='mysql_bin.000001',master_log_pos=1023;

start slave;
show slave status\G
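The thread check can be scripted instead of eyeballed. A sketch, with a captured sample standing in for the live `mysql` call (shown in a comment):

```shell
# Live form would be: status=$(mysql -uroot -p -e 'show slave status\G')
status='Slave_IO_Running: Yes
Slave_SQL_Running: Yes'

# Replication is healthy only when both threads report Yes:
if echo "$status" | grep -q 'Slave_IO_Running: Yes' &&
   echo "$status" | grep -q 'Slave_SQL_Running: Yes'; then
  echo "replication threads OK"
else
  echo "replication threads BROKEN"
fi
```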
 

(6) Test master-master and master-slave synchronization
2. Install and configure MySQL-MMM
(1) Install MySQL-MMM on all servers
Note: If the above packages are not in the local repository, you need to configure an online repository on each server first.

yum -y install epel-release && yum -y install mysql-mmm*
 

(2) Configure MySQL-MMM on master1
192.168.10.20

[root@master1 ~]# cd /etc/mysql-mmm/
[root@master1 mysql-mmm]# cp mmm_common.conf mmm_common.conf.bak
#Back up the configuration file before modifying it
[root@master1 mysql-mmm]# vim mmm_common.conf
 
active_master_role writer
 
<host default>
    cluster_interface ens33
    pid_path /run/mysql-mmm-agent.pid
    bin_path /usr/libexec/mysql-mmm/
    replication_user replication
##Replication user for master-master and master-slave replication; must match the user created earlier
    replication_password 123456
    agent_user mmm_agent
##Specify the username of the monitor agent process
    agent_password 123456
</host>
 
<host db1>
    ip 192.168.10.20
    mode master
    peer db2
##peer specifies the peer database host
</host>
 
<host db2>
    ip 192.168.10.30
    mode master
    peer db1
</host>
 
<host db3>
    ip 192.168.10.40
    mode slave
</host>
 
<host db4>
    ip 192.168.10.50
    mode slave
</host>
 
<role writer>
    hosts db1, db2
    ips 192.168.10.200
##Set the write VIP
    mode exclusive
##exclusive mode: only one host holds the write role at a time
</role>
 
<role reader>
    hosts db3, db4
    ips 192.168.10.201, 192.168.10.202
##Set the read VIPs
    mode balanced
##balanced mode: reads are balanced across the slave hosts
</role>
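The VIP assignments can be pulled back out of mmm_common.conf with awk. A sketch, with a captured fragment standing in for the real file:

```shell
# Fragment of mmm_common.conf (stand-in for the real file on disk):
conf='<role writer>
    hosts db1, db2
    ips 192.168.10.200
</role>'

# Print the VIP(s) configured for the role:
vip=$(echo "$conf" | awk '$1 == "ips" {print $2}')
echo "writer VIP: $vip"
```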

(3) Copy the configuration file to the other 4 hosts.
The content of the configuration file is the same for all hosts

scp mmm_common.conf root@192.168.10.30:/etc/mysql-mmm/
scp mmm_common.conf root@192.168.10.40:/etc/mysql-mmm/
scp mmm_common.conf root@192.168.10.50:/etc/mysql-mmm/
scp mmm_common.conf root@192.168.10.90:/etc/mysql-mmm/
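The four copies can be collapsed into one loop (shown as a dry run with `echo`; remove the `echo` to actually copy; hosts taken from the case environment):

```shell
# Push the shared config to the other four hosts in one loop:
for h in 192.168.10.30 192.168.10.40 192.168.10.50 192.168.10.90; do
  echo scp mmm_common.conf root@"$h":/etc/mysql-mmm/
done
```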
(4) Modify the agent configuration file mmm_agent.conf on all database servers
master1

[root@master1 ~]# vim /etc/mysql-mmm/mmm_agent.conf
 
include mmm_common.conf
this db1
##Change to db1/db2/db3/db4 according to different hosts, the default configuration is db1, so master1 does not need to be modified
 

master2

[root@master2 ~]# vim /etc/mysql-mmm/mmm_agent.conf
 
include mmm_common.conf
this db2
slave1

[root@slave1 ~]# vim /etc/mysql-mmm/mmm_agent.conf
 
include mmm_common.conf
this db3
slave2

[root@slave2 ~]# vim /etc/mysql-mmm/mmm_agent.conf
 
include mmm_common.conf
this db4
 

(5) Modify the monitoring configuration file mmm_mon.conf on the monitor monitoring server
monitor server (192.168.10.90)

[root@monitor ~]# vim /etc/mysql-mmm/mmm_mon.conf 
 
include mmm_common.conf
 
<monitor>
    ip 127.0.0.1
    pid_path /run/mysql-mmm-monitor.pid
    bin_path /usr/libexec/mysql-mmm
    status_path /var/lib/mysql-mmm/mmm_mond.status
    ping_ips 192.168.10.20, 192.168.10.30, 192.168.10.40, 192.168.10.50
##Specify the IPs of all database servers
    auto_set_online 10
##Specify the automatic online time (seconds)
    # The kill_host_bin does not exist by default, though the monitor will
    # throw a warning about it missing. See the section 5.10 "Kill Host
    # Functionality" in the PDF documentation.
    #
    # kill_host_bin /usr/libexec/mysql-mmm/monitor/kill_host
    #
</monitor>
 
<host default>
    monitor_user mmm_monitor
##Specify the user name of mmm_monitor
    monitor_password 123456
##Specify the password of mmm_monitor
</host>
 
debug 0
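A quick sanity check that the credential lines made it into the file can be scripted; `/tmp/mmm_mon.conf.demo` below is a stand-in for `/etc/mysql-mmm/mmm_mon.conf`:

```shell
# Stand-in for the monitor configuration written above:
conf=/tmp/mmm_mon.conf.demo
printf 'monitor_user mmm_monitor\nmonitor_password 123456\n' > "$conf"

# Report whether each credential line is present:
for key in monitor_user monitor_password; do
  grep -q "^$key" "$conf" && echo "$key: present"
done
```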

(6) Authorize mmm_agent (agent process) on all databases
All databases execute the following statements

grant super,replication client,process on *.* to 'mmm_agent'@'192.168.10.%' identified by '123456';
flush privileges;
 

(7) Authorize mmm_monitor (the monitoring process) on all database servers

grant replication client on *.* to 'mmm_monitor'@'192.168.10.%' identified by '123456';
flush privileges;
 


(8) Start mysql-mmm-agent on all database servers

systemctl start mysql-mmm-agent.service && systemctl enable mysql-mmm-agent.service
 


(9) Start mysql-mmm-monitor on the monitor server

systemctl start mysql-mmm-monitor.service && systemctl enable mysql-mmm-monitor.service
 

(10) Test the cluster on the monitor server
① View the status of each node

[root@monitor ~]#mmm_control show
  db1(192.168.10.20) master/ONLINE. Roles: writer(192.168.10.200)
  db2(192.168.10.30) master/ONLINE. Roles: 
  db3(192.168.10.40) slave/ONLINE. Roles: reader(192.168.10.202)
  db4(192.168.10.50) slave/ONLINE. Roles: reader(192.168.10.201)
 

② Check whether the monitoring function is perfect

[root@monitor ~]#mmm_control checks all
db4  ping         [last change: 2021/11/04 16:13:20]  OK
db4  mysql        [last change: 2021/11/04 16:13:20]  OK
db4  rep_threads  [last change: 2021/11/04 16:13:20]  OK
db4  rep_backlog  [last change: 2021/11/04 16:13:20]  OK: Backlog is null
db2  ping         [last change: 2021/11/04 16:13:20]  OK
db2  mysql        [last change: 2021/11/04 16:13:20]  OK
db2  rep_threads  [last change: 2021/11/04 16:13:20]  OK
db2  rep_backlog  [last change: 2021/11/04 16:13:20]  OK: Backlog is null
db3  ping         [last change: 2021/11/04 16:13:20]  OK
db3  mysql        [last change: 2021/11/04 16:13:20]  OK
db3  rep_threads  [last change: 2021/11/04 16:13:20]  OK
db3  rep_backlog  [last change: 2021/11/04 16:13:20]  OK: Backlog is null
db1  ping         [last change: 2021/11/04 16:13:20]  OK
db1  mysql        [last change: 2021/11/04 16:13:20]  OK
db1  rep_threads  [last change: 2021/11/04 16:13:20]  OK
db1  rep_backlog  [last change: 2021/11/04 16:13:20]  OK: Backlog is null
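For automated monitoring, this output greps cleanly: any line whose status field is not OK signals a problem. A sketch, with a captured sample standing in for the live `mmm_control checks all` call:

```shell
# Live form would be: checks=$(mmm_control checks all)
checks='db1  ping         [last change: 2021/11/04 16:13:20]  OK
db1  mysql        [last change: 2021/11/04 16:13:20]  OK
db2  rep_backlog  [last change: 2021/11/04 16:13:20]  OK: Backlog is null'

# Count lines that do not report OK after the timestamp bracket:
bad=$(echo "$checks" | grep -v ']  OK' | wc -l)
echo "failing checks: $bad"
```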
 

③ Designate the host bound to the VIP

[root@monitor ~]#mmm_control move_role writer db2
OK: Role 'writer' has been moved from 'db1' to 'db2'. Now you can wait some time and check new roles info!
[root@monitor ~]#mmm_control show
  db1(192.168.10.20) master/ONLINE. Roles: 
  db2(192.168.10.30) master/ONLINE. Roles: writer(192.168.10.200)
  db3(192.168.10.40) slave/ONLINE. Roles: reader(192.168.10.202)
  db4(192.168.10.50) slave/ONLINE. Roles: reader(192.168.10.201)

[root@monitor ~]#mmm_control move_role writer db1
OK: Role 'writer' has been moved from 'db2' to 'db1'. Now you can wait some time and check new roles info!
[root@monitor ~]#mmm_control show
  db1(192.168.10.20) master/ONLINE. Roles: writer(192.168.10.200)
  db2(192.168.10.30) master/ONLINE. Roles: 
  db3(192.168.10.40) slave/ONLINE. Roles: reader(192.168.10.202)
  db4(192.168.10.50) slave/ONLINE. Roles: reader(192.168.10.201)
 

3. Fault test
(1) Simulate master downtime and recovery
① Stop the mysql service of master1

② Check VIP drift status

monitor

[root@monitor ~]#mmm_control show
  db1(192.168.10.20) master/HARD_OFFLINE. Roles: 
  db2(192.168.10.30) master/ONLINE. Roles: writer(192.168.10.200)
  db3(192.168.10.40) slave/ONLINE. Roles: reader(192.168.10.202)
  db4(192.168.10.50) slave/ONLINE. Roles: reader(192.168.10.201)
 

VIP successfully drifted to master2, and master1 shows HARD_OFFLINE

③ Restart the mysql service of master1

master1

systemctl start mysqld
 

After master1 recovers, the VIP stays on master2 and is not transferred back to master1

(2) Simulate slave server downtime and recovery
① Stop the mysql service of slave1

slave1

systemctl stop mysqld
 

② Check VIP drift status

monitor

[root@monitor ~]#mmm_control show
  db1(192.168.10.20) master/ONLINE. Roles: 
  db2(192.168.10.30) master/ONLINE. Roles: writer(192.168.10.200)
  db3(192.168.10.40) slave/HARD_OFFLINE. Roles: 
  db4(192.168.10.50) slave/ONLINE. Roles: reader(192.168.10.201), reader(192.168.10.202)
 

The VIP corresponding to slave1 is taken over by slave2

③ Restart the mysql service of slave1

slave1

systemctl start mysqld
 

④ Check whether slave1 is restored

monitor

[root@monitor ~]#mmm_control show
  db1(192.168.10.20) master/ONLINE. Roles: 
  db2(192.168.10.30) master/ONLINE. Roles: writer(192.168.10.200)
  db3(192.168.10.40) slave/AWAITING_RECOVERY. Roles: 
  db4(192.168.10.50) slave/ONLINE. Roles: reader(192.168.10.201), reader(192.168.10.202)

[root@monitor ~]#mmm_control show
  db1(192.168.10.20) master/ONLINE. Roles: 
  db2(192.168.10.30) master/ONLINE. Roles: writer(192.168.10.200)
  db3(192.168.10.40) slave/ONLINE. Roles: 
  db4(192.168.10.50) slave/ONLINE. Roles: reader(192.168.10.201), reader(192.168.10.202)

[root@monitor ~]#mmm_control show
  db1(192.168.10.20) master/ONLINE. Roles: 
  db2(192.168.10.30) master/ONLINE. Roles: writer(192.168.10.200)
  db3(192.168.10.40) slave/ONLINE. Roles: reader(192.168.10.202)
  db4(192.168.10.50) slave/ONLINE. Roles: reader(192.168.10.201)
 

After a short recovery period, slave1 is assigned a read VIP again and resumes service

(3) Client test
① On the master1 server, authorize logins from the monitor server's address

grant all on *.* to 'test'@'192.168.10.90' identified by '123456';
flush privileges;
 

② Log in with VIP on the monitor server

yum -y install mariadb-server mariadb
systemctl start mariadb.service && systemctl enable mariadb.service
mysql -utest -p123456 -h 192.168.10.200    #If the login succeeds, the VIP is working
 

③ The client creates data and tests the synchronization

monitor

Origin blog.csdn.net/zl965230/article/details/130803444