Configuring MGR on MySQL 8.0

Environment

CentOS 7

MySQL 8.0

Database nodes:

192.168.6.151 node1, server-id 1
192.168.6.152 node2, server-id 2
192.168.6.153 node3, server-id 3

Install MySQL 8.0

yum localinstall -y https://repo.mysql.com//mysql80-community-release-el7-3.noarch.rpm
yum install -y mysql-community-server
systemctl start mysqld
systemctl status mysqld
systemctl enable mysqld
grep 'temporary password' /var/log/mysqld.log
mysql -u root -p

Change the root password

set global validate_password.policy=0;
set global validate_password.length=1;
ALTER USER 'root'@'localhost' IDENTIFIED BY '123456';
flush privileges;
exit;

Host Configuration

vim /etc/hosts
192.168.6.151 node1
192.168.6.152 node2
192.168.6.153 node3

Modify the my.cnf configuration file

On node1, edit /etc/my.cnf:

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock

symbolic-links=0

log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

# Enable GTID; required for MGR
gtid_mode=ON
# Enforce GTID consistency
enforce_gtid_consistency=ON

# binlog format; MGR requires ROW (ROW is also the best choice outside MGR)
binlog_format=ROW
# server-id must be unique within the cluster
server-id=1
# MGR uses optimistic locking; the official documentation recommends the RC
# isolation level to reduce lock granularity
transaction_isolation=READ-COMMITTED
# Because cluster members recover from each other on failure, each node must log
# the binlog events it applies from other servers, so that GTIDs can show which
# transactions have already been executed within the cluster
log-slave-updates=1
# binlog checksum; the default is CRC32 since 5.6 (NONE in older versions),
# but MGR requires NONE
binlog_checksum=NONE
# For safety, MGR requires slave replication metadata to be stored in tables;
# otherwise it raises an error
master_info_repository=TABLE
# Companion setting to the above
relay_log_info_repository=TABLE

# Group replication settings
# Algorithm used to hash transaction write sets; the official documentation
# recommends XXHASH64
transaction_write_set_extraction=XXHASH64
# Group name, a UUID value; it must not collide with any GTID UUID inside the
# cluster. A new one can be generated with uuidgen. It identifies this group on
# the network and is also the UUID part of the group's GTIDs.
loose-group_replication_group_name='5dbabbe6-8050-49a0-9131-1de449167446'
# IP whitelist for incoming group connections; restricted here to the loopback
# address plus the cluster subnet
loose-group_replication_ip_whitelist='127.0.0.1/8,192.168.6.0/24'
# Whether to activate group replication automatically at server startup; starting
# it manually is recommended, since automatic startup during special recovery
# situations could compromise data correctness
loose-group_replication_start_on_boot=OFF
# This node's MGR address, host:port; the port is the MGR port, not the database port
loose-group_replication_local_address='192.168.6.151:33081'
# Seed members contacted when joining the group; again these are MGR ports, not
# database ports
loose-group_replication_group_seeds='192.168.6.151:33081,192.168.6.152:33081,192.168.6.153:33081'
# Whether this member bootstraps the group; used only when building or rebuilding
# the MGR cluster, and only one member in the cluster may have it on at a time
loose-group_replication_bootstrap_group=OFF
# Whether to run in single-primary mode; if ON, this group has a single read-write
# primary and the other instances are read-only; set OFF for multi-primary mode
loose-group_replication_single_primary_mode=ON
# In multi-primary mode, force every instance to check whether it allows the
# operation; this must be OFF while single-primary mode is ON (the two options
# are mutually exclusive), and is switched ON later when changing to multi-primary
loose-group_replication_enforce_update_everywhere_checks=OFF

Send the file from node1 to node2 and node3

rsync -e "ssh -p22" -avpgolr /etc/my.cnf root@192.168.6.152:/etc/
rsync -e "ssh -p22" -avpgolr /etc/my.cnf root@192.168.6.153:/etc/

On node2 and node3, modify server-id and loose-group_replication_local_address to match each node.

Restart mysql

systemctl restart mysqld
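
After the restart, it is worth confirming that the core settings took effect; a minimal sanity check from the mysql client:

SELECT @@gtid_mode, @@enforce_gtid_consistency, @@server_id, @@binlog_format, @@transaction_isolation;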

Install the group replication plugin

mysql -uroot -p123456
INSTALL PLUGIN group_replication SONAME 'group_replication.so';
show plugins;
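
To confirm the plugin is ACTIVE without scanning the full list, it can be filtered directly (a small convenience query):

SELECT PLUGIN_NAME, PLUGIN_STATUS FROM information_schema.PLUGINS WHERE PLUGIN_NAME = 'group_replication';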

Configure the replication account. Because SQL_LOG_BIN=0 keeps these statements out of the binlog, run them on every node:

SET SQL_LOG_BIN=0;
SET GLOBAL validate_password.policy=0;
SET GLOBAL validate_password.length=1;
CREATE USER repl@'%' IDENTIFIED BY 'repl';
GRANT REPLICATION SLAVE ON *.* TO repl@'%';
FLUSH PRIVILEGES;
SET SQL_LOG_BIN=1;
CHANGE MASTER TO MASTER_USER='repl', MASTER_PASSWORD='repl' FOR CHANNEL 'group_replication_recovery';
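
On MySQL 8.0.23 and later, CHANGE MASTER TO is deprecated; the equivalent statement on those releases is:

CHANGE REPLICATION SOURCE TO SOURCE_USER='repl', SOURCE_PASSWORD='repl' FOR CHANNEL 'group_replication_recovery';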

Start MGR in single-primary mode

On node1, log in to the mysql client and bootstrap the group:

SET GLOBAL group_replication_bootstrap_group=ON;
START GROUP_REPLICATION;
SET GLOBAL group_replication_bootstrap_group=OFF;
SELECT * FROM performance_schema.replication_group_members;
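
At this point only node1 should be listed, with MEMBER_STATE ONLINE. A narrower query can also show its role, assuming the 8.0 column set (which adds MEMBER_ROLE):

SELECT MEMBER_HOST, MEMBER_STATE, MEMBER_ROLE FROM performance_schema.replication_group_members;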

On node2 and node3, log in to the mysql client:

START GROUP_REPLICATION;
SELECT * FROM performance_schema.replication_group_members;
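
Once node2 and node3 finish distributed recovery, all three members should report ONLINE; a quick count, as a minimal check:

SELECT COUNT(*) AS online_members FROM performance_schema.replication_group_members WHERE MEMBER_STATE = 'ONLINE';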

Switch MGR to multi-primary mode

On all database nodes, execute:

stop group_replication;
set global group_replication_single_primary_mode=OFF;
set global group_replication_enforce_update_everywhere_checks=ON;

On node1, execute:

SET GLOBAL group_replication_bootstrap_group=ON;
START GROUP_REPLICATION;
SET GLOBAL group_replication_bootstrap_group=OFF;

On node2 and node3, execute:

START GROUP_REPLICATION;

View MGR information

SELECT * FROM performance_schema.replication_group_members;
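
In multi-primary mode every member should now report the PRIMARY role (on 8.0 this is exposed through the MEMBER_ROLE column):

SELECT MEMBER_HOST, MEMBER_ROLE FROM performance_schema.replication_group_members;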

Failover

Multi-primary mode

Simulate a failure on node3:

systemctl stop mysqld

On another node, such as node1, query the MGR status:
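
SELECT * FROM performance_schema.replication_group_members;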

 

The remaining nodes can be seen replicating normally.

Recover node3 from the failure:

systemctl start mysqld

Group replication on the recovered node must be started manually (note that my.cnf sets loose-group_replication_start_on_boot=OFF):

START GROUP_REPLICATION;

After rejoining, node3 shows as ONLINE again in performance_schema.replication_group_members.

Single-primary mode

Switch back to single-primary mode

Stop MGR on all database nodes:

stop group_replication;
set global group_replication_enforce_update_everywhere_checks=OFF;
set global group_replication_single_primary_mode=ON;

Select node1 as the primary and bootstrap the group:

SET GLOBAL group_replication_bootstrap_group=ON;
START GROUP_REPLICATION;
SET GLOBAL group_replication_bootstrap_group=OFF;

On the other nodes (node2, node3), execute:

START GROUP_REPLICATION;

Query the MGR status

SELECT * FROM performance_schema.replication_group_members;

All three members are ONLINE again, with node1 as the primary.

Simulate a failure of the primary node node1, then query the MGR status on node2.

 

You can see that the primary has gone down, and the election process picks one of the secondary nodes as the new primary.
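
To see which member was elected, filter on MEMBER_ROLE (again assuming the 8.0 column set):

SELECT MEMBER_HOST, MEMBER_PORT FROM performance_schema.replication_group_members WHERE MEMBER_ROLE = 'PRIMARY';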

After node1 recovers from the failure, group replication on that node must be activated manually:

START GROUP_REPLICATION;

Source: www.cnblogs.com/Canyon/p/12030781.html