PXC: a highly available MySQL database cluster

Preface

After the previous article introduced several popular NewSQL database products, many readers expressed interest in databases with built-in clustering, especially CockroachDB. But their existing business runs on MySQL, and switching database products would mean reworking the business logic, which is too risky and can only be tried in new projects. So today we introduce a MySQL solution with built-in clustering: Percona XtraDB Cluster, or PXC for short.

One, PXC introduction

PXC (Percona XtraDB Cluster) is an open-source MySQL high-availability solution. It integrates Percona Server and XtraBackup with the Galera library to achieve synchronous multi-master replication. Galera-based high-availability solutions mainly include MariaDB Galera Cluster and Percona XtraDB Cluster; at present the PXC architecture is widely used in production and quite mature. Compared with traditional architectures based on master-slave replication, such as MHA and dual-master setups, the most prominent feature of Galera Cluster is that it solves the long-criticized replication-delay problem and achieves near-real-time synchronization, and all nodes are peers of one another: Galera Cluster itself is a multi-master architecture. PXC implements synchronous replication at the storage-engine layer, not asynchronous replication, so its data consistency is quite high.

PXC advantages and disadvantages
Advantages:
  • Implements a highly available MySQL cluster with strong data consistency;
  • Provides a true multi-node read-write cluster solution;
  • Eliminates the delay of master-slave replication, achieving near-real-time synchronization;
  • Newly added nodes are provisioned automatically, with no manual backup required, which simplifies maintenance;
  • Because every node accepts writes, database failover is easy.
Disadvantages:
  • Adding a new node is expensive: it must copy the complete data set from one of the existing nodes. If that is 100GB, 100GB is copied.
  • Every update transaction must pass global certification before it can be applied on the other nodes, so cluster performance is limited by the worst-performing node (the bucket effect: the shortest stave sets the capacity).
  • Because data consistency must be guaranteed, PXC implements synchronous replication at the storage-engine layer, so lock conflicts become more severe when multiple nodes write concurrently.
  • There is write amplification: every write is applied on every node, so PXC is not recommended for write-heavy workloads.
  • Only the InnoDB storage engine is supported.

Two, PXC installation

This article uses Docker for the installation, and the hosts communicate over a Docker Swarm overlay network. If you are not familiar with Docker, please find an introductory tutorial first; this article does not go into Docker in depth, just follow the commands.

Docker is installed on all three hosts:

Host IP
node1 192.168.0.101
node2 192.168.0.102
node3 192.168.0.103

1. Configure swarm on 3 hosts

  • First execute on node1:

    docker swarm init
    

    It returns something similar to the following:

    docker swarm join --token SWMTKN-1-2c2xopn2rld8oltcof24sue370681ijhbo3bwcqarjlhq9lkea-2g53o5qn2anre4j9puv4hecrn 192.168.0.101:2377
    
  • Execute the returned join command on node2 and node3:

    docker swarm join --token SWMTKN-1-2c2xopn2rld8oltcof24sue370681ijhbo3bwcqarjlhq9lkea-2g53o5qn2anre4j9puv4hecrn 192.168.0.101:2377
    
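To confirm that all three hosts have joined the swarm, list the nodes on node1 (the swarm manager):

docker node ls

All three nodes should appear in the output, with node1 marked as Leader.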

2. Create swarm network

Execute the following command on node1. Note the --attachable flag: an overlay network created in swarm mode can only be joined by plain docker run containers (as we do below) if it is created as attachable:

docker network create -d overlay --attachable --subnet=172.18.138.0/24 dtzs_swarm
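Verify that the network was created (it will appear on node2 and node3 automatically once a container there attaches to it):

docker network ls | grep dtzs_swarm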

3. Download PXC image

Here we install PXC version 5.7. Pull the Docker image on all three hosts:

docker pull percona/percona-xtradb-cluster:5.7

4. Create database storage volumes

Create one data volume per server (vol-pxc-n1 on node1, vol-pxc-n2 on node2, vol-pxc-n3 on node3):

docker volume create vol-pxc-n1    # on node1
docker volume create vol-pxc-n2    # on node2
docker volume create vol-pxc-n3    # on node3

5. Install the first node

First, install and start the first node on node1. Be careful to install and start the other nodes only after the first node has started successfully; otherwise the bootstrap will fail.

docker run -d -v vol-pxc-n1:/var/lib/mysql --name node1 -e CLUSTER_NAME=dtzs_pxc -e MYSQL_ROOT_PASSWORD=123456 -e MYSQL_DATABASE=dtzs -e MYSQL_USER=dtzs -e MYSQL_PASSWORD=dtzs123 --net=dtzs_swarm percona/percona-xtradb-cluster:5.7

Be sure to change the passwords yourself; do not use ones this simple. Parameter description:

CLUSTER_NAME: cluster name

MYSQL_ROOT_PASSWORD: root password

MYSQL_DATABASE: database created during initialization

MYSQL_USER: account created during initialization

MYSQL_PASSWORD: password for that account
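Before joining the other nodes, confirm that the first node has finished initializing. A quick check, using the container name node1 and the root password 123456 from above:

docker logs node1 2>&1 | tail -n 5    # wait until mysqld reports "ready for connections"
docker exec -it node1 mysql -uroot -p123456 -e "SHOW STATUS LIKE 'wsrep_cluster_size';"

wsrep_cluster_size should report 1 at this point.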

6. Join the other nodes

Join node2 (run on node2):
docker run -d -v vol-pxc-n2:/var/lib/mysql --name node2 -e CLUSTER_NAME=dtzs_pxc -e CLUSTER_JOIN=node1 -e MYSQL_ROOT_PASSWORD=123456 -e MYSQL_DATABASE=dtzs -e MYSQL_USER=dtzs -e MYSQL_PASSWORD=dtzs123 --net=dtzs_swarm percona/percona-xtradb-cluster:5.7
Join node3 (run on node3):
docker run -d -v vol-pxc-n3:/var/lib/mysql --name node3 -e CLUSTER_NAME=dtzs_pxc -e CLUSTER_JOIN=node1 -e MYSQL_ROOT_PASSWORD=123456 -e MYSQL_DATABASE=dtzs -e MYSQL_USER=dtzs -e MYSQL_PASSWORD=dtzs123 --net=dtzs_swarm percona/percona-xtradb-cluster:5.7
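Once all three containers are up, verify that they have formed a single cluster and that writes are synchronized. A quick sketch, reusing the credentials above and the dtzs database created during initialization:

docker exec -it node1 mysql -uroot -p123456 -e "SHOW STATUS LIKE 'wsrep_cluster_size';"    # expect 3
# write a row on node1...
docker exec -it node1 mysql -uroot -p123456 -e "CREATE TABLE dtzs.t1 (id INT PRIMARY KEY); INSERT INTO dtzs.t1 VALUES (1);"
# ...and read it back immediately on node3
docker exec -it node3 mysql -uroot -p123456 -e "SELECT * FROM dtzs.t1;"
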
Node states and their meanings:

While a node is in the cluster, its state changes as new nodes join, nodes fail, synchronization fails, and so on. The meanings of these states are listed below:
open: the node has started successfully and is trying to connect to the cluster.
primary: the node is already in the cluster; this state appears while a donor is being selected when a new node joins the cluster.
joiner: the node is waiting to receive the synchronization data files.
joined: the node has completed data synchronization and is catching up to stay consistent with the rest of the cluster.
synced: the node is serving normally; synchronization is complete and it is caught up with the cluster.
donor: the node is providing the full data set to a newly joined node.
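You can see a node's current state through the wsrep_local_state_comment status variable, for example:

docker exec -it node1 mysql -uroot -p123456 -e "SHOW STATUS LIKE 'wsrep_local_state_comment';"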

Three, important configuration parameters in PXC

In the process of building PXC, the following parameters need to be set in my.cnf (a combined example follows the list):

  • wsrep_cluster_name: the logical name of the cluster; it must be identical on all nodes in the cluster.
  • wsrep_cluster_address: the addresses of the nodes in the cluster, in gcomm:// format.
  • wsrep_node_name: the logical name of the current node in the cluster.
  • wsrep_node_address: the IP address of the current node.
  • wsrep_provider: the path of the Galera library.
  • wsrep_sst_method: the SST (State Snapshot Transfer) method. PXC uses XtraBackup for SST; it is strongly recommended to set this parameter to xtrabackup-v2.
  • wsrep_sst_auth: the SST authentication credential, in the form <sst_user>:<sst_pwd>. This user must be created after bootstrapping the first node and granted the necessary privileges.
  • pxc_strict_mode: strict mode; the officially recommended value is ENFORCING.
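Put together, a minimal sketch of the relevant my.cnf section might look like the following. The IPs match the hosts above; the wsrep_provider path is typical of a Linux PXC 5.7 install, and the SST user name and password are placeholders:

[mysqld]
# path to the Galera provider library (typical location on PXC 5.7)
wsrep_provider=/usr/lib/galera3/libgalera_smm.so
# all cluster members, gcomm:// format
wsrep_cluster_address=gcomm://192.168.0.101,192.168.0.102,192.168.0.103
wsrep_cluster_name=dtzs_pxc
# per-node settings (shown here for node1)
wsrep_node_name=node1
wsrep_node_address=192.168.0.101
# SST via XtraBackup, with a dedicated user created on the first node
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=sstuser:sstpass123
pxc_strict_mode=ENFORCING
# required by PXC
binlog_format=ROW
default_storage_engine=InnoDB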

Another particularly important module in PXC is Gcache. Its core function is that each node caches the most recent write sets. When a new node joins the cluster, the missing incremental data can be sent to it from this cache (an incremental state transfer, IST) instead of using the full SST method, which lets nodes join the cluster much more quickly.

The GCache module involves the following parameters:

  • gcache.size: the size of the cache for write-set incremental information. The default is 128MB, set through the wsrep_provider_options parameter. It is recommended to increase it to the 2GB to 4GB range; ample space makes it possible to cache more incremental information.
  • gcache.mem_size: the size of the in-memory cache in Gcache; a moderate increase can improve the performance of the whole cluster.
  • gcache.page_size: can be understood as follows: when memory (Gcache) is not enough, write sets are written directly to disk page files of this size.
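These settings go into wsrep_provider_options in my.cnf, for example (the sizes are illustrative):

wsrep_provider_options="gcache.size=2G;gcache.page_size=1G"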

PXC cluster status monitoring

After the cluster is set up, you can view the status of each node through the status variables matching '%wsrep%' (SHOW STATUS LIKE '%wsrep%'). Several important variables are listed below to make problems easier to spot; a combined query follows the list.

  • wsrep_local_state_uuid: the state UUID; it should have the same value on all nodes in the cluster. A node with a different value has not joined the cluster.
  • wsrep_last_committed: the sequence number of the last committed transaction.
  • wsrep_cluster_size: the number of nodes in the current cluster.
  • wsrep_cluster_status: the status of the cluster component. If it is not "Primary", a split-brain has occurred.
  • wsrep_local_state: the state of the current node; a value of 4 means normal. The state has four values:
    • joining: the node is joining the cluster.
    • donor: the node is providing full data to a newly joined node.
    • joined: the node has successfully joined the cluster.
    • synced: the node is in sync with the other nodes in the cluster.
  • wsrep_ready: ON means the current node can serve requests normally; if it is OFF, the node may have a split-brain or network problem.
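For example, to check the key variables on node1 in one query:

docker exec -it node1 mysql -uroot -p123456 -e "SHOW STATUS WHERE Variable_name IN ('wsrep_local_state_uuid','wsrep_last_committed','wsrep_cluster_size','wsrep_cluster_status','wsrep_local_state','wsrep_ready');"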

Four, migrating MySQL to PXC

Data is priceless. Be sure to back up before you operate! Be sure to back up before you operate!! Be sure to back up before you operate!!!

There are three ways to migrate MySQL to a cluster:

  1. Export a SQL file with mysqldump, then import it directly into the installed and configured PXC cluster (a sketch follows this list). This method does not require the database versions before and after the import to match, but it is relatively slow.

  2. Use Percona XtraBackup for backup and recovery. This is efficient but requires the database versions to match.

  3. If you are migrating stock MySQL or Percona Server to PXC, you can stop the original MySQL and install PXC directly on top of the original data directory; the migration completes automatically on startup. One more reminder: you must back up first!
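A sketch of method 1, assuming the old server is reachable as old-mysql-host (a placeholder) and importing the dtzs schema through the HAProxy entry point set up in the next section:

# dump from the old server; --single-transaction gives a consistent InnoDB snapshot
mysqldump -h old-mysql-host -uroot -p --single-transaction --routines --triggers dtzs > dtzs.sql
# import into the PXC cluster through any entry point, e.g. HAProxy
mysql -h 192.168.0.101 -P 3306 -uroot -p dtzs < dtzs.sql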

Five, HAProxy front end

Following the installation steps above, we now have a complete 3-node PXC cluster. Because all 3 nodes can read and write, an application can connect to any one of them. But that gives us no load balancing, and if the database server the application is connected to goes down, there is no automatic failover.

So we put HAProxy in front as a proxy: the application connects to HAProxy, which distributes requests across the 3 PXC databases according to its balancing strategy.

1. Write Dockerfile
cd /workspace/haproxy
vi Dockerfile

The content of the Dockerfile file is as follows:

FROM haproxy:alpine
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
EXPOSE 3306 1080
2. Edit haproxy configuration file
vi haproxy.cfg

The content of haproxy.cfg is as follows:

global
    maxconn     4000
 
defaults
        log     global
        log 127.0.0.1 local3
        mode    http
        option  tcplog
        option  dontlognull
        retries 10
        option redispatch
        maxconn         2000
        timeout connect         10s
        timeout client          1m
        timeout server          1m
        timeout http-keep-alive 10s
        timeout check           10s
 
listen  mysql
        bind 0.0.0.0:3306
        mode tcp
        balance roundrobin # round-robin across the nodes
        option mysql-check
        server s1 node1:3306 check
        server s2 node2:3306 check
        server s3 node3:3306 check
 
listen stats
        bind 0.0.0.0:1080
        mode http
        option httplog
        maxconn 10
        stats refresh 30s
        stats uri /dbs
        stats realm XingCloud\ Haproxy
        stats auth dtzs:dtzs123 # log in to the stats page with this account; set your own
        stats auth Frank:Frank
        stats hide-version
        stats admin if TRUE
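One caveat: with no user argument, option mysql-check only verifies that the server answers with its handshake. If you want HAProxy to complete a login, HAProxy supports option mysql-check user <username>, and that user (a passwordless check account; haproxy_check below is a placeholder name) must exist on the cluster:

docker exec -it node1 mysql -uroot -p123456 -e "CREATE USER 'haproxy_check'@'%';"

Because PXC replicates DDL, creating the user on one node is enough.
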
3. Build the haproxy image

Build the image on each of the 3 servers (the image is local to each host):

docker build -t pxc-haproxy .

4. Start the haproxy service

Run one of the following commands on each of the 3 servers. Note the stats port mapping: the stats page listens on 1080 inside the container (port 4567, by contrast, is the Galera replication port and has nothing to do with HAProxy):

docker run -it -d -p 3306:3306 -p 1080:1080 --name haproxy01 --net=dtzs_swarm --privileged pxc-haproxy    # on node1
docker run -it -d -p 3306:3306 -p 1080:1080 --name haproxy02 --net=dtzs_swarm --privileged pxc-haproxy    # on node2
docker run -it -d -p 3306:3306 -p 1080:1080 --name haproxy03 --net=dtzs_swarm --privileged pxc-haproxy    # on node3
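The application can now connect to any of the three HAProxy endpoints. A quick test from any machine with a mysql client, using the dtzs account created during initialization:

mysql -h 192.168.0.101 -P 3306 -udtzs -pdtzs123 -e "SELECT @@hostname;"

Each run opens a new connection, so with round-robin balancing, repeated runs should land on different backend containers.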

Now visit http://192.168.0.101:1080/dbs and you will see the HAProxy stats web interface (log in with the stats auth account from haproxy.cfg). Careful readers may also have noticed that when we ran the PXC containers we did not map port 3306; that is intentional, since we expose the port through HAProxy instead.

Six, afterword

I hope this article has made clear the basic principles of PXC, how to set up a basic PXC cluster, and how to put HAProxy in front of it as middleware.

If you have any questions, please leave a comment!


Origin blog.csdn.net/weixin_48450321/article/details/112344141