MySQL high-availability architecture deployment -- MMM

Common MySQL high-availability architectures include MHA, MMM, PXC, and MGR. Of these, MHA and MMM offer weaker data consistency than PXC and MGR, because PXC and MGR are built on strongly consistent replication. Their implementation principles are not covered here; this post focuses on deploying the MMM high-availability architecture.

MMM (Master-Master replication Manager for MySQL) is a scalable suite of Perl scripts for monitoring, failover, and management of MySQL master-master replication configurations (only one node is writable at any time). MMM can also load-balance reads across the slave servers, so it can be used to start virtual IPs on a group of replicating servers; in addition, it ships with scripts for data backup and for resynchronizing data between nodes. MMM achieves MySQL high availability through server failover: besides providing the floating-IP functionality, if the current primary server goes down it automatically repoints the backend slaves to replicate from the new primary, with no need to manually change the replication configuration.

The MMM solution is not suitable for environments that demand strict data safety or that are busy with both reads and writes; it is better suited to scenarios with heavy database access where read/write splitting can be used.

MMM's main functionality is provided by the following three programs:

  • mmm_mond: the monitoring daemon; it performs all of the monitoring work and decides, for example, when to remove a node (mmm_mond performs periodic heartbeat checks; when a master fails, the writer floating IP is moved to the other master).
  • mmm_agentd: the agent daemon that runs on each MySQL server and provides the monitoring node with a simple set of remote services.
  • mmm_control: a simple command-line tool for managing the mmm_mond process. MMM uses an mmm_monitor user and an mmm_agent user; if you want to use MMM's backup tools, you also need to add an mmm_tools user.

First, prepare the environment

System       Hostname (role)   IP              VIP (virtual IP)
CentOS 7.5   monitor           192.168.20.2    -
CentOS 7.5   master1           192.168.20.3    192.168.20.30 (write IP)
CentOS 7.5   master2           192.168.20.4    192.168.20.40 (read IP)
CentOS 7.5   slave1            192.168.20.5    192.168.20.50 (read IP)
CentOS 7.5   slave2            192.168.20.6    192.168.20.60 (read IP)

Second, pre-deployment preparation (do the following on all nodes)

1. Install dependencies

[root@master1 ~]# yum -y install perl-* libart_lgpl.x86_64 rrdtool.x86_64 rrdtool-perl.x86_64

2. Prepare the hosts file

[root@master1 ~]# cat >> /etc/hosts << EOF
> 192.168.20.2   monitor
> 192.168.20.3   master1 
> 192.168.20.4   master2
> 192.168.20.5   slave1
> 192.168.20.6   slave2
> EOF
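
As an optional sanity check (a small sketch using only the hostnames defined above), you can confirm that every entry resolves and is reachable from each node:

[root@master1 ~]# for h in monitor master1 master2 slave1 slave2 ; do
>     getent hosts $h > /dev/null && echo "$h resolves" || echo "$h does NOT resolve"
>     ping -c 1 -W 1 $h > /dev/null && echo "$h is reachable" || echo "$h is NOT reachable"
> done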

3. Install the required CPAN packages

[root@master1 ~]# cpan    # enter the cpan interactive shell

# After running the cpan command, press Enter a few times until the following appears:
cpan shell -- CPAN exploration and modules installation (v1.9800)
Enter 'h' for help.

cpan[1]> 
# In the cpan interactive shell a typo cannot be deleted directly; press CTRL+Backspace to delete it.
cpan[2]> o conf urllist   # list the cpan mirrors; the defaults are foreign mirrors
    urllist           
    0 [http://mirror-hk.koddos.net/CPAN/]
    1 [http://mirror.0x.sg/CPAN/]
    2 [http://mirror.downloadvn.com/cpan/]
Type 'o conf' to view all configuration items

# Remove the default cpan mirrors one by one (downloads from the foreign mirrors are slow)
cpan[3]> o conf urllist pop http://mirror-hk.koddos.net/CPAN/
Please use 'o conf commit' to make the config permanent!

cpan[4]> o conf urllist pop http://mirror.0x.sg/CPAN/
Please use 'o conf commit' to make the config permanent!

cpan[5]> o conf urllist pop http://mirror.downloadvn.com/cpan/
Please use 'o conf commit' to make the config permanent!

cpan[6]> o conf commit       # commit the changes
commit: wrote '/root/.local/share/.cpan/CPAN/MyConfig.pm'

cpan[7]> o conf urllist            # confirm the mirrors were removed
    urllist           
Type 'o conf' to view all configuration items

# Add domestic (China) cpan mirrors
cpan[8]> o conf urllist push http://mirrors.aliyun.com/CPAN/
Please use 'o conf commit' to make the config permanent!

cpan[9]> o conf urllist push ftp://mirrors.sohu.com/CPAN/
Please use 'o conf commit' to make the config permanent!

cpan[10]> o conf urllist push http://mirrors.163.com/cpan/
commit: wrote '/root/.local/share/.cpan/CPAN/MyConfig.pm'

cpan[11]> o conf commit        # commit the changes
commit: wrote '/root/.local/share/.cpan/CPAN/MyConfig.pm'

cpan[12]> o conf urllist     # confirm the mirror change succeeded
    urllist           
    0 [http://mirrors.aliyun.com/CPAN/]
    1 [ftp://mirrors.sohu.com/CPAN/]
    2 [http://mirrors.163.com/cpan/]
Type 'o conf' to view all configuration items

cpan[1]> exit   # exit the cpan interactive shell

# Install the required cpan modules
[root@master1 ~]# cpan -i Algorithm::Diff Class::Singleton DBI DBD::mysql Log::Dispatch Log::Log4perl Mail::Send Net::Ping Proc::Daemon Time::HiRes Params::Validate Net::ARP
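
Optionally, you can confirm the modules really installed by asking perl to load each one (a quick sketch; the list mirrors the cpan -i command above):

[root@master1 ~]# for m in Algorithm::Diff Class::Singleton DBI DBD::mysql Log::Dispatch Log::Log4perl Proc::Daemon Net::ARP ; do
>     perl -M$m -e 1 2>/dev/null && echo "$m OK" || echo "$m MISSING"
> done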

4. Configure time synchronization

This step keeps the clocks of all hosts in the cluster consistent. In any type of cluster, keeping time consistent avoids many unexpected problems.

When configuring time synchronization, one of the nodes needs the following changes (here master1 is changed):

[root@master1 ~]# vim /etc/chrony.conf      # modify as follows, pointing at Aliyun's time servers
server ntp1.aliyun.com iburst
server ntp2.aliyun.com iburst
server ntp3.aliyun.com iburst

allow 192.168.20.0/24           # allow this subnet to synchronize from this server
[root@master1 ~]# systemctl restart chronyd.service    # restart the service so the change takes effect

The other nodes can then point at master1 as their time server, as follows:

[root@monitor ~]# egrep -v '^#|^$' /etc/chrony.conf 
server master1 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony

After changing the configuration file, restart the service for the change to take effect:

[root@monitor ~]# systemctl restart chronyd.service
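
To verify that synchronization is actually working, chrony's own client can be used on any node (both commands ship with the chrony package):

[root@monitor ~]# chronyc sources -v     # the source currently selected for sync is marked with '^*'
[root@monitor ~]# chronyc tracking       # shows the current offset from the reference clock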

At this point, the preparatory work is basically complete.

The next step is to set up master1 and master2 as dual masters (each replicating from the other), and then make slave1 and slave2 slaves of master1. With this layout, once master1 goes down, slave1 and slave2 automatically switch over to master2 as their master.

Third, deploy master-slave replication

Note: in this step master2, slave1, and slave2 are configured as slaves of master1, and master1 is configured as a slave of master2.

1. Modify the configuration file on each host

1) The MySQL configuration file on master1 is as follows:

[root@master1 ~]# cat /etc/my.cnf 
[mysqld]
basedir=/usr/local/mysql
datadir=/usr/local/mysql/data
port=3306
socket=/usr/local/mysql/mysql.sock
log-error=/usr/local/mysql/data/mysqld.err
log-bin = mysql-bin        # binary log file name
binlog_format = mixed       # binary log format; mixed is the mixed mode
server-id = 1         # server id; every node must have a unique server id
# the next two lines configure the relay log
relay-log = relay-bin
relay-log-index = slave-relay-bin.index
log-slave-updates = 1
# the next two settings prevent auto-increment key conflicts
auto-increment-increment = 2
auto-increment-offset = 1

2) The MySQL configuration file on master2 is as follows:

[root@master2 ~]# cat /etc/my.cnf 
[mysqld]
basedir=/usr/local/mysql
datadir=/usr/local/mysql/data
port=3306
socket=/usr/local/mysql/mysql.sock
log-error=/usr/local/mysql/data/mysqld.err
log-bin = mysql-bin
binlog_format = mixed
server-id = 2
relay-log = relay-bin
relay-log-index = slave-relay-bin.index
log-slave-updates = 1
auto-increment-increment = 2
auto-increment-offset = 2

3) The configuration on slave1 is as follows:

[root@slave1 ~]# cat /etc/my.cnf 
[mysqld]
basedir=/usr/local/mysql
datadir=/usr/local/mysql/data
port=3306
server_id=3
socket=/usr/local/mysql/mysql.sock
log-error=/usr/local/mysql/data/mysqld.err
relay-log = relay-bin
relay-log-index = slave-relay-bin.index
read_only = 1

4) The configuration on slave2 is as follows:

[root@slave2 ~]# cat /etc/my.cnf 
[mysqld]
basedir=/usr/local/mysql
datadir=/usr/local/mysql/data
port=3306
server_id=4
socket=/usr/local/mysql/mysql.sock
log-error=/usr/local/mysql/data/mysqld.err
relay-log = relay-bin
relay-log-index = slave-relay-bin.index
read_only = 1

Run the following commands on all of the hosts above (excluding the monitor host):

[root@master1 ~]# systemctl restart mysqld       # if the mysql configuration file was changed, the mysql service must be restarted
# open mysql's port 3306 in the firewall
[root@master1 ~]# firewall-cmd --add-port=3306/tcp --permanent 
success
[root@master1 ~]# firewall-cmd --reload
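
As an optional check after the restart, you can confirm that each node picked up its own settings (server_id must be unique on every node; log_bin should be ON only on master1 and master2, and read_only ON only on the slaves). Adjust the login options to your own root credentials:

[root@master1 ~]# mysql -uroot -p -e "show variables where Variable_name in ('server_id','log_bin','read_only');"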

2. On the master servers, create the user the slaves need for replication

# Create the replication user on master1 as follows:
mysql> grant replication slave on *.* to rep@'192.168.20.%' identified by '123.com';
Query OK, 0 rows affected, 1 warning (0.01 sec)
mysql> flush privileges;
# Since master2 is also a master, create the same username and password on it; otherwise the slaves will not be able to switch over to master2 properly.
# Run the following on master2:

mysql> grant replication slave on *.* to rep@'192.168.20.%' identified by '123.com';
Query OK, 0 rows affected, 1 warning (0.01 sec)

mysql> flush privileges;
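
Before pointing the slaves at master1, you can optionally confirm from one of the slaves that the rep account created above can actually log in to master1:

[root@slave1 ~]# mysql -urep -p123.com -h192.168.20.3 -P3306 -e "select 1;"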

3. Check the master status on master1

mysql> show master status\G
*************************** 1. row ***************************
             File: mysql-bin.000002       # this value is needed below
         Position: 609                    # this value is needed as well
     Binlog_Do_DB: 
 Binlog_Ignore_DB: 
Executed_Gtid_Set: 
1 row in set (0.00 sec)

4. On master2, slave1, and slave2, run the following to point each host at master1 as its master

# Specify master1's information; the master_log_file and master_log_pos values are the ones read from master1 above
mysql> change master to
    -> master_host='192.168.20.3',
    -> master_port=3306,
    -> master_user='rep',
    -> master_password='123.com',
    -> master_log_file = 'mysql-bin.000002',
    -> master_log_pos=609;
# start the slave threads
mysql> start slave;
# make sure the slave status looks good
mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.20.3
                  Master_User: rep
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000002
          Read_Master_Log_Pos: 609
               Relay_Log_File: relay-bin.000002
                Relay_Log_Pos: 320
        Relay_Master_Log_File: mysql-bin.000002
             Slave_IO_Running: Yes    # this must be "Yes"
            Slave_SQL_Running: Yes     # this must also be "Yes"
               .................# some of the output is omitted

If Slave_IO_Running and Slave_SQL_Running are both Yes, the master-slave configuration is working. But we are not done yet: since master1 and master2 are mutual master and slave, master1 still needs to be pointed at master2 as its master.
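
For a quicker spot check on any slave, the two thread states (plus the replication delay) can be filtered out of the same output; this is just a convenience wrapper around show slave status:

[root@slave1 ~]# mysql -uroot -p -e "show slave status\G" | egrep 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'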

5. Check the master status on master2

# check the master status on master2
mysql> show master status\G      
*************************** 1. row ***************************
# the File and Position values are what we need
             File: mysql-bin.000002
         Position: 609
     Binlog_Do_DB: 
 Binlog_Ignore_DB: 
Executed_Gtid_Set: 
1 row in set (0.00 sec)

6. On master1, point the host at master2 as its master

# run the following statements on master1
mysql> change master to
    -> master_host='192.168.20.4',
    -> master_port=3306,
    -> master_user='rep',
    -> master_password='123.com',
    -> master_log_file = 'mysql-bin.000002',
    -> master_log_pos=609;
mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

mysql> show slave status\G     # check the slave status

*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.20.4
                  Master_User: rep
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000002
          Read_Master_Log_Pos: 609
               Relay_Log_File: relay-bin.000002
                Relay_Log_Pos: 320
        Relay_Master_Log_File: mysql-bin.000002
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
# confirm that both Slave_IO_Running and Slave_SQL_Running are "Yes".

At this point, master-slave replication is working without problems. Now we can begin configuring MMM.
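
Before moving on to MMM, here is an optional end-to-end test of the whole topology: create a throw-away database on master1 and confirm it shows up everywhere (mmm_test is just an illustrative name):

[root@master1 ~]# mysql -uroot -p -e "create database mmm_test;"
# within a second or two the database should appear on master2, slave1 and slave2:
[root@master2 ~]# mysql -uroot -p -e "show databases like 'mmm_test';"
# clean up on master1 once confirmed; the drop replicates as well
[root@master1 ~]# mysql -uroot -p -e "drop database mmm_test;"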

Fourth, deploy and configure MMM

1. Create the agent account and the monitoring account (this can be done on master1)

# create the agent account
mysql> grant super,replication client,process on *.* to 'mmm_agent'@'192.168.20.%' identified by '123.com';

# create the monitoring account
mysql> grant replication client on *.* to 'mmm_monitor'@'192.168.20.%' identified by '123.com';

Because master-slave replication is already in place, these accounts only need to be created on master1; the other MySQL hosts automatically get them through replication.

Run the following query on every node to confirm the accounts exist:

mysql>  select user,host from mysql.user where user in('mmm_monitor','mmm_agent');
+-------------+--------------+
| user        | host         |
+-------------+--------------+
| mmm_agent   | 192.168.20.% |
| mmm_monitor | 192.168.20.% |
+-------------+--------------+
2 rows in set (0.00 sec)

Note:

  • mmm_monitor user: used by the mmm monitor for health checks against the MySQL server processes.
  • mmm_agent user: used by the mmm agent to change read-only mode, change the replication master, and so on.

2. Install the mysql-mmm program on all hosts

Download the mysql-mmm source package I provide and upload it to all hosts.

Run the following commands on all hosts (in my case two masters, two slaves, and one monitor host, five hosts in total) to install mmm:

[root@monitor src]# tar zxf mysql-mmm-2.2.1.tar.gz 
[root@monitor src]# cd mysql-mmm-2.2.1/
[root@monitor mysql-mmm-2.2.1]# make install
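
Optionally verify that the install dropped the files where the rest of this post expects them (the daemons and mmm_control in /usr/sbin, helper scripts in /usr/lib/mysql-mmm/, and the configuration in /etc/mysql-mmm/):

[root@monitor mysql-mmm-2.2.1]# ls /usr/sbin/ | grep mmm        # expect mmm_agentd, mmm_mond, mmm_control here
[root@monitor mysql-mmm-2.2.1]# ls /usr/lib/mysql-mmm/          # agent/monitor helper scripts
[root@monitor mysql-mmm-2.2.1]# ls /etc/mysql-mmm/              # configuration files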

3. Write the mmm configuration files; the mmm_common.conf file must be identical on all five hosts

# The configuration file is modified as follows:
[root@monitor ~]# grep -v ^$ /etc/mysql-mmm/mmm_common.conf 
active_master_role  writer   # marker for the active master role; all mysql servers enable the read_only parameter, and the monitoring agent automatically turns read_only off on the writer server
<host default>
    cluster_interface       ens33     # network interface that carries the virtual IPs
    pid_path                /var/run/mmm_agentd.pid
    bin_path                /usr/lib/mysql-mmm/
    replication_user        rep            # replication username
    replication_password    123.com     # password for the replication user
    agent_user              mmm_agent        # agent username
    agent_password          123.com      # password for the agent user
</host>
<host master1>      # host name of master1
    ip                  192.168.20.3     # IP of the master1 host
    mode                    master          # role attribute; master means a primary
    peer                    master2      # host name of master1's peer server, i.e. master2
</host>
<host master2>    # same idea as master1 above
    ip                  192.168.20.4
    mode                    master
    peer                    master1
</host>
<host slave1>    # host name of a slave; if there are several slaves, repeat the same block
    ip                  192.168.20.5
    mode                    slave
</host>
<host slave2>   # the second slave host
    ip                  192.168.20.6
    mode                    slave
</host>
<role writer>    # writer role configuration
    hosts                   master1,master2    # hosts allowed to take the writer role. If you do not want writes to fail over, you can list only one master here, which avoids writer switches caused by network delays; but if that single master fails, the MMM cluster has no writer left and only serves reads.
    ips                 192.168.20.30          # virtual IP used for writes.
    mode                    exclusive
</role>
<role reader>
    hosts                   master2,slave1,slave2     # hosts that serve reads; the masters can be added here as well.
    ips                 192.168.20.40,192.168.20.50,192.168.20.60
# virtual IPs used for reads. These three IPs are not mapped one-to-one to the hosts, and the numbers of ips and hosts may differ; having fewer ips than read hosts is not recommended, and if there are more, one host is assigned multiple virtual IPs.
    mode                    balanced     # balanced means load balancing.
</role>

The full configuration file without comments:

[root@monitor ~]# grep -v ^$ /etc/mysql-mmm/mmm_common.conf 
active_master_role  writer
<host default>
    cluster_interface       ens33
    pid_path                /var/run/mmm_agentd.pid
    bin_path                /usr/lib/mysql-mmm/
    replication_user        rep
    replication_password    123.com
    agent_user              mmm_agent
    agent_password          123.com
</host>
<host master1>
    ip                  192.168.20.3
    mode                    master
    peer                    master2
</host>
<host master2>
    ip                  192.168.20.4
    mode                    master
    peer                    master1
</host>
<host slave1>
    ip                  192.168.20.5
    mode                    slave
</host>
<host slave2>
    ip                  192.168.20.6
    mode                    slave
</host>
<role writer>
    hosts                   master1,master2
    ips                 192.168.20.30
    mode                    exclusive
</role>
<role reader>
    hosts                   master2,slave1,slave2
    ips                 192.168.20.40,192.168.20.50,192.168.20.60
    mode                    balanced
</role>

4. Send the finished configuration file to the other host nodes

[root@monitor ~]# for host in master1 master2 slave1 slave2 monitor ; do scp /etc/mysql-mmm/mmm_common.conf $host:/etc/mysql-mmm/ ; done
# verify on each host that the file was copied over
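
To confirm the copy really is identical on every node (assuming the same ssh access the scp loop above relies on), compare checksums:

[root@monitor ~]# for host in master1 master2 slave1 slave2 monitor ; do ssh $host md5sum /etc/mysql-mmm/mmm_common.conf ; done
# all five checksums should be identical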

5. Modify mmm_agent.conf on the master1, master2, slave1, and slave2 hosts as follows:

[root@master1 ~]# vim /etc/mysql-mmm/mmm_agent.conf 

include mmm_common.conf
this master1   # set the value after "this" to the local host's hostname
# and so on: on master2 this line should read "this master2"
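
If you would rather not edit the file by hand on every node, here is a sketch of doing the same thing remotely from the monitor host (assumes ssh access; sed rewrites the existing "this" line in place):

[root@monitor ~]# for host in master1 master2 slave1 slave2 ; do ssh $host "sed -i 's/^this .*/this $host/' /etc/mysql-mmm/mmm_agent.conf" ; done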

6. Modify the mysql-mmm-agent startup script

[root@master1 ~]# vim /etc/init.d/mysql-mmm-agent 

#!/bin/sh         # add the following line below this first line
source ~/.bash_profile
# distribute the modified startup script to the other hosts
[root@master1 ~]# for host in master1 master2 slave1 slave2 monitor ; do scp /etc/init.d/mysql-mmm-agent $host:/etc/init.d/ ; done

Note: the reason for adding source ~/.bash_profile is so that the mysql-mmm-agent service can start automatically at boot. The only difference between automatic start and manual start is that a console is activated; when the service starts at boot it may lack some environment variables, which can cause it to fail to start.

7. Start the mmm-agent service on every host except the monitor host

[root@master1 ~]# chkconfig --add mysql-mmm-agent    # register as a system service
[root@master1 ~]# chkconfig mysql-mmm-agent on    # enable start at boot
[root@master1 ~]# systemctl start mysql-mmm-agent     # start the service
[root@master1 ~]# ss -lnpt | grep mmm_agent      # confirm the service started successfully
LISTEN     0      10     192.168.20.3:9989                     *:*                   users:(("mmm_agentd",pid=79799,fd=3))
# allow the relevant port through the firewall
[root@master1 ~]# firewall-cmd --add-port=9989/tcp --permanent 
success
[root@master1 ~]# firewall-cmd --reload

If the startup fails with the following error:

[root@master1 ~]# /etc/init.d/mysql-mmm-agent start
Daemon bin: '/usr/sbin/mmm_agentd'
Daemon pid: '/var/run/mmm_agentd.pid'
Starting MMM Agent daemon... Can't locate Proc/Daemon.pm in @INC (@INC contains: /root/perl5/lib/perl5/x86_64-linux-thread-multi /root/perl5/lib/perl5 /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at /usr/sbin/mmm_agentd line 7.
BEGIN failed--compilation aborted at /usr/sbin/mmm_agentd line 7.
failed

Run the following command to install the missing Perl modules:

[root@master1 ~]# cpan Proc::Daemon Log::Log4perl
# starting the service again now succeeds
[root@master1 ~]# /etc/init.d/mysql-mmm-agent start
Daemon bin: '/usr/sbin/mmm_agentd'
Daemon pid: '/var/run/mmm_agentd.pid'
Starting MMM Agent daemon... Ok

8. Configure the configuration file on the monitor host

[root@monitor ~]# cat /etc/mysql-mmm/mmm_mon.conf   # the modified configuration file is as follows
include mmm_common.conf

<monitor>
    ip                      127.0.0.1    # for security, listen only on the local host; the default listening port is 9988
    pid_path                /var/run/mmm_mond.pid
    bin_path                /usr/lib/mysql-mmm/
    status_path             /var/lib/misc/mmm_mond.status
    ping_ips                192.168.20.3,192.168.20.4,192.168.20.5,192.168.20.6
# the IPs above are a list of addresses used to test network availability; if any one of them answers ping, the network is considered fine. Do not list the local address here
    auto_set_online 0     # delay before a host is automatically set online; by default a host is set online after 60s, 0 means set it online immediately.
</monitor>

<check default>
    check_period 5    # check interval, 5s by default
    trap_period 10    # if checks of a node keep failing for trap_period seconds, the node is considered failed
    timeout 2       # check timeout
    max_backlog  86400      # maximum allowed backlog for the rep_backlog check
</check>
<host default>
    monitor_user            mmm_monitor     # user that monitors the database service
    monitor_password        123.com      # password for that user
</host>

debug 0    # 0 is normal mode, 1 is debug mode

The full configuration file without comments:

[root@monitor ~]# cat /etc/mysql-mmm/mmm_mon.conf 
include mmm_common.conf

<monitor>
    ip                      127.0.0.1
    pid_path                /var/run/mmm_mond.pid
    bin_path                /usr/lib/mysql-mmm/
    status_path             /var/lib/misc/mmm_mond.status
    ping_ips                192.168.20.3,192.168.20.4,192.168.20.5,192.168.20.6
    auto_set_online 0
</monitor>

<check default>
    check_period 5
    trap_period 10
    timeout 2
    max_backlog  86400
</check>
<host default>
    monitor_user            mmm_monitor
    monitor_password        123.com
</host>

debug 0

9. Start the monitor service

[root@monitor ~]# head -2 /etc/init.d/mysql-mmm-monitor 
#!/bin/sh         # write the following as the second line of the startup script
source ~/.bash_profile
[root@monitor ~]# chkconfig --add mysql-mmm-monitor   # register as a system service
[root@monitor ~]# chkconfig mysql-mmm-monitor on      # enable start at boot
[root@monitor ~]# systemctl start mysql-mmm-monitor    # start the monitor service
[root@monitor ~]# ss -lnpt | grep 9988    # wait a moment after starting, then check whether the port is listening
LISTEN     0      10     127.0.0.1:9988                     *:*                   users:(("mmm_mond",pid=2381,fd=3))
Likewise, if the startup fails with the following error, install the missing perl modules with "cpan Proc::Daemon Log::Log4perl":
Starting MMM Monitor daemon: Can not locate Proc/Daemon.pm in @INC (@INC contains:
/usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl
/usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at
/usr/sbin/mmm_mond line 11.
BEGIN failed--compilation aborted at /usr/sbin/mmm_mond line 11.
failed

10. Set the agents online

Note: whether on the database side or on the monitor side, if a configuration file is modified, the agent process and the monitor process need to be restarted. The correct MMM startup order is: start the monitor first, then start the agents, then check the cluster status.

[root@monitor ~]# mmm_control show   # check the cluster status
  master1(192.168.20.3) master/AWAITING_RECOVERY. Roles: 
  master2(192.168.20.4) master/AWAITING_RECOVERY. Roles: 
  slave1(192.168.20.5) slave/AWAITING_RECOVERY. Roles: 
  slave2(192.168.20.6) slave/AWAITING_RECOVERY. Roles: 

# If a server's state is not ONLINE, bring it online with the following commands (set each of the four hosts online):
[root@monitor ~]# mmm_control set_online master1
[root@monitor ~]# mmm_control set_online master2
[root@monitor ~]# mmm_control set_online slave1
[root@monitor ~]# mmm_control set_online slave2

[root@monitor ~]# mmm_control show      # confirm the host states now look like the following (the defined virtual IPs are shown):
  master1(192.168.20.3) master/ONLINE. Roles: writer(192.168.20.30)
  master2(192.168.20.4) master/ONLINE. Roles: reader(192.168.20.50)
  slave1(192.168.20.5) slave/ONLINE. Roles: reader(192.168.20.60)
  slave2(192.168.20.6) slave/ONLINE. Roles: reader(192.168.20.40)
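
As a final check, you can confirm on the database nodes that the virtual IPs listed above are really bound to the ens33 interface configured in mmm_common.conf, for example:

[root@master1 ~]# ip addr show ens33 | grep 192.168.20.30     # the writer VIP should be on the current writer
[root@slave2 ~]# ip addr show ens33 | grep 192.168.20.40      # each reader VIP on the host mmm_control assigned it to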
