MySQL High Availability Cluster with MySQL-MMM
That is, two master servers configured in master-master replication: when one master fails, the other takes over as the active server. A separate monitor machine controls the virtual IPs (VIPs).
Environment (from the topology diagram in the original):
Monitor: 192.168.0.110
Master 1: 192.168.0.101
Master 2: 192.168.0.102
Slave 1: 192.168.0.103
Slave 2: 192.168.0.104
MMM consists of three tools:
mmm_monitor: the monitoring daemon, responsible for all monitoring work
mmm_agent: the agent daemon that runs on each MySQL server
mmm_control: a command-line script for managing the mmm_mond (monitor) process
Building and configuring MySQL-MMM
All configuration files (with a yum install) live under /etc/mysql-mmm/
1. Install MySQL on all four servers (see the previous notes)
2. Configure /etc/my.cnf on all four servers (identical except for server-id, which must differ on each host)
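A minimal sketch of the relevant my.cnf settings (only server-id differs between hosts; the exact values shown are assumptions, not from the original):

```
[mysqld]
server-id = 1          # unique per server, e.g. 1-4 for .101-.104
log-bin   = mysql-bin  # binary logging must be enabled on the masters
```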
3. Configure 192.168.0.101 and 192.168.0.102 as master-master (dual-master) replication
Check the master_log_file and master_log_pos values on both 192.168.0.101 and 192.168.0.102 (show master status on each).
Grant 101 and 102 replication access to each other and point each at the other:
192.168.0.101:
mysql> grant replication slave on *.* to 'test'@'192.168.0.102' identified by '123456';
mysql> flush privileges;
mysql> change master to master_host='192.168.0.102', master_user='test', master_password='123456', master_log_file='mysql-bin.000009', master_log_pos=107;
mysql> start slave;
192.168.0.102:
mysql> grant replication slave on *.* to 'test'@'192.168.0.101' identified by '123456';
mysql> flush privileges;
mysql> change master to master_host='192.168.0.101', master_user='test', master_password='123456', master_log_file='mysql-bin.000014', master_log_pos=107;
mysql> start slave;
Check the slave status on both 101 and 102 (show slave status\G on each).
Testing master-master synchronization
Create a database on one server and check that it appears on the other, then repeat in the opposite direction. If it replicates both ways, master-master synchronization is working.
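For example, using an arbitrary database name (sync_test is not from the original):

```sql
-- on 192.168.0.101
create database sync_test;

-- on 192.168.0.102 the new database should appear
show databases;

-- then repeat in the other direction: create on 102, check on 101
```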
4. Configure 103 and 104 as the two slave servers
Check the master status on 192.168.0.101 (show master status) and note the file and position values.
On the master 101, grant replication access to 103 and 104:
mysql>grant replication slave on *.* to 'test'@'192.168.0.103' identified by '123456';
mysql>grant replication slave on *.* to 'test'@'192.168.0.104' identified by '123456';
On both 103 and 104, point replication at 101 and start it:
mysql> change master to master_host='192.168.0.101', master_user='test', master_password='123456', master_log_file='mysql-bin.000014', master_log_pos=291;
mysql> start slave;
Check the slave status on 192.168.0.103 and 192.168.0.104.
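The original shows the status output as screenshots; the fields worth verifying on each slave are:

```sql
show slave status\G
-- replication is healthy when both of these report Yes:
--   Slave_IO_Running: Yes
--   Slave_SQL_Running: Yes
```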
5. Install and configure MySQL-MMM
Install the EPEL repository and MMM on all five servers (the four MySQL servers and the monitor):
wget http://mirrors.yun-idc.com/epel/6/x86_64/epel-release-6-8.noarch.rpm
rpm -ivh epel-release-6-8.noarch.rpm
yum -y install mysql-mmm*
Supplementary: if yum fails with a repository error here, edit the EPEL repo file, clean the cache, and re-run the install:
vim /etc/yum.repos.d/epel.repo
yum clean all
yum -y install mysql-mmm*
Log in to MySQL on all four database servers and grant access for the monitor and the agent:
mysql>grant replication client on *.* to 'mmm_monitor'@'192.168.0.%' identified by 'monitor';
mysql>grant super,replication client,process on *.* to 'mmm_agent'@'192.168.0.%' identified by 'agent';
Configure the mmm_common.conf file on the monitor and on all four MySQL database servers (the same file on every host):
vim /etc/mysql-mmm/mmm_common.conf
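The original shows this file only as a screenshot. A sketch of a typical mmm_common.conf for this topology, assuming host names db1-db4, interface eth0, the usual yum-install paths, and VIPs in 192.168.0.250-253 (these specifics are assumptions; the replication and agent credentials match the grants above):

```
active_master_role      writer

<host default>
    cluster_interface       eth0
    pid_path                /var/run/mysql-mmm/mmm_agentd.pid
    bin_path                /usr/libexec/mysql-mmm/
    replication_user        test
    replication_password    123456
    agent_user              mmm_agent
    agent_password          agent
</host>

<host db1>
    ip      192.168.0.101
    mode    master
    peer    db2
</host>

<host db2>
    ip      192.168.0.102
    mode    master
    peer    db1
</host>

<host db3>
    ip      192.168.0.103
    mode    slave
</host>

<host db4>
    ip      192.168.0.104
    mode    slave
</host>

<role writer>
    hosts   db1, db2
    ips     192.168.0.250
    mode    exclusive
</role>

<role reader>
    hosts   db2, db3, db4
    ips     192.168.0.251, 192.168.0.252, 192.168.0.253
    mode    balanced
</role>
```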
Configure the mmm_agent.conf file on each of the four MySQL database servers:
vim /etc/mysql-mmm/mmm_agent.conf
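The file contents are shown as a screenshot in the original; the agent configuration is only two lines, with the "this" value set per host (host names assume the db1-db4 naming used in mmm_common.conf):

```
include mmm_common.conf
# must match the local machine's host section:
# db1 on 192.168.0.101, db2 on .102, db3 on .103, db4 on .104
this db1
```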
Configure the mmm_mon.conf file on the monitor server 192.168.0.110:
vim /etc/mysql-mmm/mmm_mon.conf
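Again the original shows only a screenshot. A sketch of a typical mmm_mon.conf, assuming the default yum-install paths (the monitor credentials match the grant issued above):

```
include mmm_common.conf

<monitor>
    ip                  127.0.0.1
    pid_path            /var/run/mysql-mmm/mmm_mond.pid
    bin_path            /usr/libexec/mysql-mmm
    status_path         /var/lib/mysql-mmm/mmm_mond.status
    ping_ips            192.168.0.101, 192.168.0.102, 192.168.0.103, 192.168.0.104
    auto_set_online     60
</monitor>

<host default>
    monitor_user        mmm_monitor
    monitor_password    monitor
</host>
```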
Start the agents and the monitor
Monitor server: service mysql-mmm-monitor start
Each of the four MySQL servers: service mysql-mmm-agent start
Check the cluster status: mmm_control show
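With everything online, mmm_control show prints one line per host, roughly like this (the host names and VIPs depend on mmm_common.conf and are assumptions here):

```
# mmm_control show
  db1(192.168.0.101) master/ONLINE. Roles: writer(192.168.0.250)
  db2(192.168.0.102) master/ONLINE. Roles: reader(192.168.0.251)
  db3(192.168.0.103) slave/ONLINE. Roles: reader(192.168.0.252)
  db4(192.168.0.104) slave/ONLINE. Roles: reader(192.168.0.253)
```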
Now stop MySQL on 192.168.0.101 and check the status again: db1 is reported as HARD_OFFLINE (offline).
Restart MySQL on 192.168.0.101 and check the status again: db1 comes back.
The configuration is complete.