PXC High Availability Cluster (MySQL)

1. PXC cluster overview

1.1. Introduction to PXC

  • Percona XtraDB Cluster (PXC) is a Galera-based MySQL high-availability cluster solution
  • Galera Cluster is a free and open-source high-availability solution developed by Codership
  • PXC is mainly composed of two parts: Percona Server with XtraDB (the storage layer) and the Write Set Replication patches (synchronous, multi-master replication)
  • Official website: http://galeracluster.com

1.2. Features of PXC

  • Strong data consistency with no replication lag (a write committed on any node is immediately available on every other node)
  • No master-slave failover and no virtual IP required (there is no master/slave topology, so no VIP is needed)
  • Supports the InnoDB storage engine
  • Multi-threaded replication (synchronization runs in multiple threads); easy to deploy and use
  • Nodes join the cluster automatically with no manual data copy (a node that was down resynchronizes its data automatically when it rejoins)

1.3. Corresponding ports

port  description
3306  database service port
4444  SST port
4567  cluster communication port
4568  IST port
SST = State Snapshot Transfer (full synchronization)
IST = Incremental State Transfer (incremental synchronization)
  • The cluster communication port carries traffic between the servers in the cluster; the database service port 3306 and the cluster communication port 4567 are open at all times.

  • The SST port 4444 and the IST port 4568 are only open while data is being synchronized.
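If a host firewall is running, these four ports must be reachable between the nodes. A minimal sketch, assuming firewalld (adapt to your firewall; the commands are skipped when firewall-cmd is absent):

```shell
# PXC ports: 3306 (MySQL), 4444 (SST), 4567 (group communication), 4568 (IST)
PXC_PORTS="3306 4444 4567 4568"
if command -v firewall-cmd >/dev/null 2>&1; then
    for p in $PXC_PORTS; do
        firewall-cmd --permanent --add-port="${p}/tcp" || true
    done
    firewall-cmd --reload || true
fi
```

Run the same commands on all three servers, since every node both initiates and accepts connections on these ports.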

1.4. Host roles

  • 3 servers
Host name  IP address    Role
pxcnode10  192.168.2.10  database server
pxcnode20  192.168.2.20  database server
pxcnode30  192.168.2.30  database server
## Set the hostname on each server
[root@localhost ~]# hostname pxcnode10 ;su   -- run on 192.168.2.10
[root@localhost ~]# hostname pxcnode20 ;su   -- run on 192.168.2.20
[root@localhost ~]# hostname pxcnode30 ;su   -- run on 192.168.2.30
## Edit the hosts file on every server
vim /etc/hosts
# Append the following:
192.168.2.10   pxcnode10
192.168.2.20   pxcnode20
192.168.2.30   pxcnode30
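To confirm the name resolution set up above, a quick reachability sketch (run from any node; ping is assumed to be installed, and each name should answer from every other node):

```shell
# The three cluster node names from /etc/hosts
NODES="pxcnode10 pxcnode20 pxcnode30"
for h in $NODES; do
    if command -v ping >/dev/null 2>&1; then
        ping -c1 -W1 "$h" >/dev/null 2>&1 && echo "$h reachable" || echo "$h unreachable"
    fi
done
```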

2. Deploy PXC

2.1. Install the package

  • Software Introduction
software  purpose
percona-xtrabackup-24-2.4.13-1.el7.x86_64.rpm  online hot-backup tool
qpress-1.1-14.11.x86_64.rpm  compression/decompression utility (used by xtrabackup)
Percona-XtraDB-Cluster-5.7.25-31.35-r463-el7-x86_64-bundle.tar  cluster service packages
  • software download
  • percona-xtrabackup-24-2.4.13-1.el7.x86_64.rpm download link:
    https://www.percona.com/downloads/Percona-XtraBackup-2.4/LATEST/

  • qpress-1.1-14.11.x86_64.rpm download address:
    http://rpm.pbone.net/results_limit_2_srodzaj_2_dl_40_search_qpress.html

  • Percona-XtraDB-Cluster-5.7.25-31.35-r463-el7-x86_64-bundle.tar download link:

    https://www.percona.com/downloads/Percona-XtraDB-Cluster-57/LATEST/

  • The following operations need to be performed on all three servers

## Download the packages:
wget https://downloads.percona.com/downloads/Percona-XtraBackup-2.4/Percona-XtraBackup-2.4.13/binary/redhat/7/x86_64/percona-xtrabackup-24-2.4.13-1.el7.x86_64.rpm

wget ftp://ftp.pbone.net/mirror/ftp5.gwdg.de/pub/opensuse/repositories/home%3A/AndreasStieger%3A/branches%3A/Archiving/RedHat_RHEL-6/x86_64/qpress-1.1-14.11.x86_64.rpm

wget https://downloads.percona.com/downloads/Percona-XtraDB-Cluster-57/Percona-XtraDB-Cluster-5.7.25-31.35/binary/redhat/7/x86_64/Percona-XtraDB-Cluster-5.7.25-31.35-r463-el7-x86_64-bundle.tar

## Install the packages
[root@pxcnode10 ~]# tar -xf Percona-XtraDB-Cluster-5.7.25-31.35-r463-el7-x86_64-bundle.tar
[root@pxcnode10 ~]# yum -y install *.rpm

2.2. Configuration service

  • Related configuration files
/etc/percona-xtradb-cluster.conf.d/   --- all configuration files live here
  • Configuration file description
  • mysqld.cnf ------ database service runtime parameter configuration file
  • mysqld_safe.cnf ------ mysqld_safe process configuration file
  • wsrep.cnf ------ PXC cluster configuration file
  • Modify the configuration file (mysqld.cnf)

[mysqld]

server-id=1 # server ID; must be unique across nodes

datadir=/var/lib/mysql # path of the database directory

socket=/var/lib/mysql/mysql.sock # path of the socket file

log-error=/var/log/mysqld.log # path of the error log file

pid-file=/var/run/mysqld/mysqld.pid # path of the pid file

log-bin # binlog is enabled by default

log_slave_updates # enable chained replication

expire_logs_days=7 # number of days to keep binlog files (default 7)
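Of the parameters above, server-id is the one that must differ on every node. A small sketch (using the configuration path from this tutorial) to read the configured value back for checking:

```shell
# Print the server-id configured in a mysqld.cnf-style file
get_server_id() {
    awk -F= '$1 == "server-id" { print $2; exit }' "$1"
}
cnf=/etc/percona-xtradb-cluster.conf.d/mysqld.cnf
[ -f "$cnf" ] && get_server_id "$cnf" || true
```

Running this on each node should print three different values (10, 20, 30 in this tutorial).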

## On pxcnode10
[root@pxcnode10 ~]# cd /etc/percona-xtradb-cluster.conf.d/
[root@pxcnode10 percona-xtradb-cluster.conf.d]# vim mysqld.cnf
[mysqld]
## Change the following:
server-id=10


## On pxcnode20
[root@pxcnode20 ~]# cd /etc/percona-xtradb-cluster.conf.d/
[root@pxcnode20 percona-xtradb-cluster.conf.d]# vim mysqld.cnf
[mysqld]
## Change the following:
server-id=20


## On pxcnode30
[root@pxcnode30 ~]# cd /etc/percona-xtradb-cluster.conf.d/
[root@pxcnode30 percona-xtradb-cluster.conf.d]# vim mysqld.cnf
[mysqld]
## Change the following:
server-id=30
  • Modify the cluster configuration file (wsrep.cnf)

wsrep_cluster_address=gcomm:// # cluster member list; must be identical on all 3 nodes

wsrep_node_address=192.168.70.63 # IP address of the local node

wsrep_cluster_name=pxc-cluster # cluster name; can be customized, but must be identical on all 3 nodes

wsrep_node_name=pxc-cluster-node # local host name

wsrep_sst_auth="sstuser:s3cretPass" # credentials of the SST synchronization user; must be identical on all 3 nodes

## On pxcnode10:
[root@pxcnode10 percona-xtradb-cluster.conf.d]# vim wsrep.cnf
Change the following:
wsrep_cluster_address=gcomm://192.168.2.20,192.168.2.30,192.168.2.10   --- cluster member list
wsrep_node_address=192.168.2.10   --- local IP address
wsrep_node_name=pxcnode10        --- local host name
wsrep_sst_auth="sstuser:1234"     --- SST synchronization user and password


## On pxcnode20:
[root@pxcnode20 percona-xtradb-cluster.conf.d]# vim wsrep.cnf
Change the following:
wsrep_cluster_address=gcomm://192.168.2.10,192.168.2.30,192.168.2.20   --- cluster member list
wsrep_node_address=192.168.2.20   --- local IP address
wsrep_node_name=pxcnode20        --- local host name
wsrep_sst_auth="sstuser:1234"     --- SST synchronization user and password


## On pxcnode30:
[root@pxcnode30 percona-xtradb-cluster.conf.d]# vim wsrep.cnf
Change the following:
wsrep_cluster_address=gcomm://192.168.2.10,192.168.2.20,192.168.2.30   --- cluster member list
wsrep_node_address=192.168.2.30   --- local IP address
wsrep_node_name=pxcnode30        --- local host name
wsrep_sst_auth="sstuser:1234"     --- SST synchronization user and password

2.3. Start the service

  • Perform the following on the first node only (it bootstraps the cluster)
  • Start the cluster bootstrap service
  • Add the authorized SST user
[root@pxcnode10 ~]# systemctl start mysql@bootstrap.service     --- bootstrap the cluster service
## Look up the initial database password
[root@pxcnode10 ~]# grep pass /var/log/mysqld.log
2023-02-27T10:23:13.040978Z 1 [Note] A temporary password is generated for root@localhost: qgCeYyfl3a*j

## Log in to mysql with the initial password
[root@pxcnode10 ~]# mysql -uroot -p'qgCeYyfl3a*j'
## Change the root password
mysql> alter user 'root'@'localhost' identified by '1234';
Query OK, 0 rows affected (0.02 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.01 sec)
## Add the authorized SST user
mysql> grant reload, lock tables, replication client, process on *.* to sstuser@'localhost' identified by '1234';
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
-- Once the authorized user is added, the data is automatically synchronized to hosts 20 and 30.
reload: permission to reload/flush data; lock tables: permission to lock tables;
replication client: permission to view the server status; process: permission to view process information;
the user and password must match the ones specified in the cluster configuration file (wsrep_sst_auth="sstuser:1234").
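To double-check that the SST user carries all four required privileges, its GRANT line can be inspected; a sketch with a small helper (the mysql call assumes the root password 1234 set above):

```shell
# Return success only if a GRANT line lists all four privileges the SST user needs
has_sst_privs() {
    awk 'BEGIN { ok = 0 }
         /RELOAD/ && /LOCK TABLES/ && /REPLICATION CLIENT/ && /PROCESS/ { ok = 1 }
         END { exit !ok }'
}
if command -v mysql >/dev/null 2>&1; then
    mysql -uroot -p1234 -N -e "show grants for sstuser@localhost;" | has_sst_privs \
        && echo "sstuser privileges OK" || echo "sstuser privileges MISSING"
fi
```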

2.4. Start the other two database services

## Run on both pxcnode20 and pxcnode30
[root@pxcnode20 ~]# systemctl start mysql
--- The first start is slow, because each node performs a full synchronization (SST) from pxcnode10.
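A joining node reports Synced in wsrep_local_state_comment once the SST/IST has finished; a guarded check (assumes the root password 1234 from section 2.3):

```shell
# Extract the node state from "show status like 'wsrep_local_state_comment'" output
node_state() {
    awk '$1 == "wsrep_local_state_comment" { print $2 }'
}
if command -v mysql >/dev/null 2>&1; then
    mysql -uroot -p1234 -N -e "show status like 'wsrep_local_state_comment';" | node_state
fi
```

While an SST is still running, the state shows Joiner/Joined instead of Synced.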

3. Test configuration

  • View the cluster information
  • This can be done on any node

mysql> show status like "%wsrep%";

wsrep_incoming_addresses   192.168.233.72:3306,192.168.233.73:3306,192.168.233.71:3306   // member list
wsrep_cluster_size         3         // number of servers in the cluster
wsrep_cluster_status       Primary   // cluster status
wsrep_connected            ON        // connection status

3.1. Test cluster synchronization function

  • Create an authorized user on any one of the servers
  • A client can connect to any database server with that user, write data, and see the same data everywhere
  • Every table must have a primary key column
## Create an authorized user on any one of the servers
mysql> grant all on *.* to 'test'@'%' identified by '1234';
## Test whether the test user can log in to mysql on the other servers.
[root@pxcnode10 ~]# mysql -h 192.168.2.30 -utest -p1234
[root@pxcnode10 ~]# mysql -h 192.168.2.20 -utest -p1234
[root@pxcnode10 ~]# mysql -h 192.168.2.10 -utest -p1234
-- All three logins succeed.


## Create a database and a table
mysql> create database sxy default charset=utf8;
mysql> use sxy;
mysql> create table t1(id int primary key auto_increment,name char(10) not null,sex enum('boy','girl'),age int unsigned);
Query OK, 0 rows affected (0.01 sec)
mysql> insert into t1 values(1,'bob','boy',29);
## Check the t1 table from any server.
[root@pxcnode20 ~]# mysql -h192.168.2.10 -utest -p1234 -e " select * from sxy.t1;"
mysql: [Warning] Using a password on the command line interface can be insecure.
+----+------+------+------+
| id | name | sex  | age  |
+----+------+------+------+
|  1 | bob  | boy  |   29 |
+----+------+------+------+
----- The other nodes return the same result, so they are not shown here.

3.2. Test the high availability function and the automatic recovery of the database server.

  • Test automatic recovery from a failure
  • A single database server going down does not affect user access to the data
  • After the server comes back up, the data written during the downtime is synchronized automatically

3.2.1. Simulate pxcnode20 downtime

## Simulate stopping the service on pxcnode20
[root@pxcnode20 ~]# systemctl stop mysql
[root@pxcnode20 ~]# netstat -nltp |grep 3306

3.2.2. The client connects to pxcnode10 to view the cluster status

[root@pxcnode10 ~]# mysql -h192.168.2.10 -utest -p1234
mysql> show status like "%wsrep%";

  • The status output shows that the number of servers in the cluster has dropped to 2; only 192.168.2.10 and 192.168.2.30 are running.

3.2.3. Insert data into the sxy.t1 table of pxcnode10

mysql> use sxy;

mysql> insert into t1(name,sex,age) values('andy','boy',24),('lucy','girl',29);
Query OK, 2 rows affected (0.02 sec)
Records: 2  Duplicates: 0  Warnings: 0

mysql> select * from t1;

  • The rows just inserted were written while pxcnode20 was down. With pxcnode20 gone only two servers remain, and because the auto-increment step is tied to the number of nodes in the cluster, the new rows were inserted with a step of 2 (ids 3 and 5).
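The interleaving comes from Galera's wsrep_auto_increment_control, which sets auto_increment_increment to the cluster size and gives each node its own auto_increment_offset so concurrent inserts never collide. A pure-arithmetic sketch of the id sequence a node generates in a 2-node cluster, plus a guarded query to see the live values:

```shell
# With 2 nodes: increment=2; this node's offset is 1, so it produces ids 1, 3, 5, ...
increment=2
offset=1
id=$offset
ids=""
for _ in 1 2 3; do
    ids="$ids$id "
    id=$((id + increment))
done
echo "$ids"

# Show the values the server is actually using (assumes root password 1234 as above)
if command -v mysql >/dev/null 2>&1; then
    mysql -uroot -p1234 -e "show variables like 'auto_increment%';" || true
fi
```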

3.2.4. View the content of sxy.t1 table on pxcnode30

[root@pxcnode10 ~]# mysql -h192.168.2.30 -utest -p1234 -e " select * from sxy.t1;"

+----+------+------+------+
| id | name | sex  | age  |
+----+------+------+------+
|  1 | bob  | boy  |   29 |
|  3 | andy | boy  |   24 |
|  5 | lucy | girl |   29 |
+----+------+------+------+
## Data synchronization is working correctly.

3.2.5. Restore pxcnode20 to view cluster status

[root@pxcnode20 ~]# systemctl start mysql
[root@pxcnode20 ~]# netstat -ntlp | grep 3306
tcp6       0      0 :::3306                 :::*                    LISTEN      11409/mysqld
[root@pxcnode20 ~]# mysql -uroot -p1234 -e 'show status like "%wsrep%";'

  • The status output shows that pxcnode20 has rejoined the cluster and the number of cluster servers is back to 3.

3.2.6. The client accesses pxcnode20 to view the contents of the sxy.t1 table

[root@pxcnode20 ~]# mysql -uroot -p1234 -e "select * from sxy.t1";
mysql: [Warning] Using a password on the command line interface can be insecure.
+----+------+------+------+
| id | name | sex  | age  |
+----+------+------+------+
|  1 | bob  | boy  |   29 |
|  3 | andy | boy  |   24 |
|  5 | lucy | girl |   29 |
+----+------+------+------+
--- The rows inserted during the downtime have been synchronized correctly.

Origin blog.csdn.net/weixin_45625174/article/details/129244600