MySQL High Availability Cluster with MHA
Advantage over MMM: MHA saves resources. MMM still requires a dedicated standby master server, whereas MHA can promote a slave directly to master.
Installing and configuring the MHA high-availability environment (topology shown below)
Only one manager server is needed. When the primary server fails, the virtual IP (VIP) automatically drifts to a slave, and that slave takes over as the new master.
How MHA works:
1. Save the binary log events (binlog events) from the crashed master
2. Identify the slave with the most recent updates
3. Apply the differential relay log to the other slaves
4. Apply the binlog events saved from the master
5. Promote one slave to be the new master
6. Point the other slaves at the new master and resume replication
Tools included with the installation packages
The Manager kit includes the following tools:
masterha_check_ssh : check MHA's SSH configuration
masterha_check_repl : check the MySQL replication status
masterha_manager : start MHA
masterha_check_status : check the current MHA running status
masterha_master_monitor : detect whether the master is down
masterha_master_switch : control failover (automatic or manual)
masterha_conf_host : add or remove server configuration information
The Node kit (these tools are normally triggered by MHA Manager scripts and need no manual operation) includes the following tools:
save_binary_logs : save and copy the master's binary log
apply_diff_relay_logs : identify differential relay log events and apply them to the other slaves
filter_mysqlbinlog : remove unnecessary ROLLBACK events (MHA no longer uses this tool)
purge_relay_logs : purge relay logs (without blocking the SQL thread)
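Because MHA-managed slaves normally keep relay logs around (relay_log_purge=0), purge_relay_logs is typically run on a schedule. A possible crontab entry, reusing this guide's example credentials and assuming the node tools were installed under /usr/local/bin:

```shell
# Purge relay logs nightly on each slave without blocking the SQL thread;
# --disable_relay_log_purge keeps relay_log_purge=0 afterwards.
0 4 * * * /usr/local/bin/purge_relay_logs --user=test --password=123456 --disable_relay_log_purge >> /var/log/purge_relay_logs.log 2>&1
```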
MHA setup and configuration
1. Download and install the MySQL database on all three servers
yum -y install mysql mysql-server mysql-devel
2. modify the configuration file /etc/my.cnf
vim /etc/my.cnf
Note: the server-id parameter must be different on each of the 3 servers; remember to restart the service after changing the configuration.
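For reference, a minimal /etc/my.cnf sketch (the values below are example assumptions, not from the original text; each server needs its own server-id):

```ini
[mysqld]
server-id       = 1          # must be unique: e.g. 1 on the master, 2 and 3 on the slaves
log-bin         = mysql-bin  # enable binary logging (required on the master)
relay-log       = relay-bin  # relay log name used on the slaves
relay_log_purge = 0          # let MHA manage relay-log purging
```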
3. Create a database login user on each server and grant it privileges on the master, so the slaves can synchronize data and master-slave replication works
Set the root password, then log in:
mysqladmin -u root password '123456'
mysql -uroot -p123456
Master:
Check the current binary log file and position: mysql> show master status;
Authorize on the master: mysql> grant all on *.* to 'test'@'192.168.0.%' identified by '123456';
On both slaves: mysql> change master to master_host='192.168.0.101', master_user='test', master_password='123456', master_log_file='mysql-bin.000018', master_log_pos=107;
mysql> start slave;
To allow the MHA master-slave checks configured later, also create the test user on both slaves with the same grant:
mysql> grant all on *.* to 'test'@'192.168.0.%' identified by '123456';
Check the slave status, then test whether master-slave replication works.
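A small sketch for that verification step: the function below is a hypothetical helper (not part of MHA) that reads the output of `SHOW SLAVE STATUS\G` on stdin and reports whether both replication threads are running.

```shell
# Reads `SHOW SLAVE STATUS\G` output on stdin; prints OK if both the
# IO and SQL replication threads are running, FAIL otherwise.
check_slave_status() {
  out=$(cat)
  if echo "$out" | grep -q 'Slave_IO_Running: Yes' &&
     echo "$out" | grep -q 'Slave_SQL_Running: Yes'; then
    echo OK
  else
    echo FAIL
  fi
}

# Usage on a slave (password is this guide's example):
#   mysql -uroot -p123456 -e 'SHOW SLAVE STATUS\G' | check_slave_status
```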
4. Set up passwordless SSH key communication
Each of the four servers must set up passwordless key communication with the other three, so that every host can connect to the others over SSH without a password.
Below is an example from the MHA-manager host, 192.168.0.110
ssh-keygen -t rsa (keep pressing Enter)
ssh-copy-id -i /root/.ssh/id_rsa.pub [email protected]
ssh-copy-id -i /root/.ssh/id_rsa.pub [email protected]
ssh-copy-id -i /root/.ssh/id_rsa.pub [email protected]
Enter the root password of the target machine each time; repeat similarly on the other three servers.
Supplement: how to resolve the following error
ssh -o StrictHostKeyChecking=no 192.168.0.101
If the problem still occurs, or to prevent it entirely, edit the configuration file /etc/ssh/ssh_config and add at the end:
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
5. Install the node component
MHA's environment dependencies, which are Perl modules, must be installed on all servers (the epel repository must be installed first).
Download and install the epel repository:
wget http://mirrors.yun-idc.com/epel/6/x86_64/epel-release-6-8.noarch.rpm
rpm -ivh epel-release-6-8.noarch.rpm
Install the Perl module dependencies:
yum install -y perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager perl-ExtUtils-CBuilder perl-ExtUtils-MakeMaker perl-CPAN
Install the node component on all (four) servers
Note: the manager component depends on the node component; on CentOS 7.3 you must choose version 0.57
Extract, compile, and install the node component:
tar zxvf mha4mysql-node-0.56.tar.gz && cd mha4mysql-node-0.56 && perl Makefile.PL && make && make install
6. Install the manager component on the MHA-manager host
Compile and install the manager component on the manager host:
tar zxvf mha4mysql-manager-0.56.tar.gz && cd mha4mysql-manager-0.56 && perl Makefile.PL && make && make install
7. Configure MHA on the MHA-manager host
Copy the relevant scripts from the extracted package to the /usr/local/bin directory:
cp -ra samples/scripts /usr/local/bin/
The command above copies the automatic VIP failover script into the /usr/local/bin directory. Here the VIP is managed by script, which is the recommended approach; keepalived is not recommended for this in production. Then edit the script:
cp /usr/local/bin/scripts/master_ip_failover /usr/local/bin
vim /usr/local/bin/master_ip_failover
Add the following section:
my $vip = '192.168.100.200/24';
my $key = "1";
my $ssh_start_vip = "/sbin/ifconfig eth1:$key $vip";
my $ssh_stop_vip = "/sbin/ifconfig eth1:$key down";
my $exit_code = 0;
Create the directory, copy the configuration file template from the extracted package, and modify it:
mkdir /etc/masterha
cp samples/conf/app1.cnf /etc/masterha/
vim /etc/masterha/app1.cnf
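For reference, a minimal app1.cnf sketch. The master address 192.168.0.101, the test user, the failover script path, and the log path come from this guide; the slave addresses 192.168.0.102/103 are assumed for illustration:

```ini
[server default]
manager_workdir=/var/log/masterha/app1
manager_log=/var/log/masterha/app1/manager.log
master_ip_failover_script=/usr/local/bin/master_ip_failover
user=test
password=123456
ssh_user=root
repl_user=test
repl_password=123456
ping_interval=1

[server1]
hostname=192.168.0.101
candidate_master=1

[server2]
hostname=192.168.0.102
candidate_master=1

[server3]
hostname=192.168.0.103
```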
8. Use the status-check tools to test connectivity
Test passwordless SSH; if it is working, the output looks like the figure below:
masterha_check_ssh --conf=/etc/masterha/app1.cnf
Test the MySQL master-slave connection: masterha_check_repl --conf=/etc/masterha/app1.cnf
The figure below shows success
Supplement: solutions for errors that may occur
★ The passwordless SSH test reports an error
Check that each machine has sent its key to the other three hosts.
★ The master-slave MySQL connection test reports an error
If you did not grant all privileges, make sure the user on all three MySQL servers has sufficient privileges on the databases it needs to operate on; insufficient grants will cause failures.
Also check whether the user and password settings in your /etc/masterha/app1.cnf configuration file match the grants you created.
9. Start the service, check its status, and test that failover succeeds
Start the MHA service: nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &
--remove_dead_master_conf: when a master-slave switchover occurs, remove the old master's ip from the configuration file.
--manager_log: set the log storage location.
--ignore_last_failover: by default, if MHA detects two consecutive outages less than 8 hours apart, it will not fail over again; this limit exists to avoid a ping-pong effect. After each switchover, MHA writes a marker file, app1.failover.complete, into the log directory configured above. On the next switchover, if that file is found, failover is refused unless the file is deleted first. For convenience, --ignore_last_failover tells MHA to disregard the marker left by the previous failover.
Check whether it started successfully: masterha_check_status --conf=/etc/masterha/app1.cnf
Shut down the master and test whether the VIP moves to the standby master server (check the status and the log):
cat /var/log/masterha/app1/manager.log
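To confirm the switchover from the log, a small helper (hypothetical, not part of MHA) can scan manager.log for the "completed successfully" message that MHA prints at the end of a successful failover:

```shell
# Reads manager.log on stdin and reports whether a failover finished.
failover_succeeded() {
  if grep -q 'completed successfully'; then
    echo OK
  else
    echo FAIL
  fi
}

# Usage:
#   failover_succeeded < /var/log/masterha/app1/manager.log
```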
This shows the experiment was successful.