Installing Percona XtraDB Cluster (PXC) 5.7 on RHEL 6

Install Percona-XtraDB-Cluster-57 from the official yum repository.

1. Current OS environment:

[root@qht131 local]# more /etc/redhat-release
Red Hat Enterprise Linux Server release 6.3 (Santiago)

Configure the hosts file:

[root@qht131 local]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.17.61.131   qht131
172.17.61.132   qht132
172.17.61.133   qht133

Edit the SELinux configuration file:

[root@qht131 local]# cat /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

Disable the firewall:

[root@qht131 local]# chkconfig iptables off

Alternatively, it is enough to open these ports:

3306    the port MySQL serves clients on
4444    SST (State Snapshot Transfer) port, used for full data sync when a new node joins the cluster
4567    the port cluster nodes use to communicate with each other
4568    IST (Incremental State Transfer) port, used for incremental sync after a node goes offline or restarts
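If you prefer to keep iptables running instead of disabling it, rules along these lines would open the four ports. This is a sketch printed as a dry run rather than applied, so verify it against your own firewall policy first:

```shell
# Build and print (dry run) the iptables rules for the four PXC ports.
# Run the printed lines as root to apply them for real, then persist
# with: service iptables save
RULES=""
for port in 3306 4444 4567 4568; do
  RULES="$RULES
iptables -A INPUT -p tcp --dport $port -j ACCEPT"
done
echo "$RULES"
```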

2. Prepare for installation

Configure the Percona yum repository:

[root@qht131 ~]# cd /etc/yum.repos.d/
[root@qht131 yum.repos.d]# sudo yum install http://www.percona.com/downloads/percona-release/redhat/0.1-4/percona-release-0.1-4.noarch.rpm
Installed:
  percona-release.noarch 0:0.1-4

Complete!

Test the repository:

[root@qht131 yum.repos.d]#  sudo yum list | grep percona
Unable to read consumer identity
percona-release.noarch              0.1-4                 @/percona-release-0.1-4.noarch
Percona-SQL-50-debuginfo.x86_64     5.0.92-b23.89.rhel6   percona-release-x86_64
Percona-SQL-client-50.x86_64        5.0.92-b23.89.rhel6   percona-release-x86_64
Percona-SQL-devel-50.x86_64         5.0.92-b23.89.rhel6   percona-release-x86_64
Percona-SQL-server-50.x86_64        5.0.92-b23.89.rhel6   percona-release-x86_64
Percona-SQL-shared-50.x86_64        5.0.92-b23.89.rhel6   percona-release-x86_64
Percona-SQL-shared-compat.x86_64    5.0.92-b23.89.rhel6   percona-release-x86_64
Percona-SQL-test-50.x86_64          5.0.92-b23.89.rhel6   percona-release-x86_64
...

3. Install Percona XtraDB Cluster 5.7

[root@qht131 ~]# sudo yum install Percona-XtraDB-Cluster-57

If dependency errors appear during installation, install the missing packages first and retry.

A successful installation should end like this:
Installed:
  Percona-XtraDB-Cluster-57.x86_64 0:5.7.21-29.26.1.el6

Dependency Installed:
  Percona-XtraDB-Cluster-client-57.x86_64 0:5.7.21-29.26.1.el6    Percona-XtraDB-Cluster-server-57.x86_64 0:5.7.21-29.26.1.el6
  Percona-XtraDB-Cluster-shared-57.x86_64 0:5.7.21-29.26.1.el6    percona-xtrabackup-24.x86_64 0:2.4.11-1.el6
  perl-DBD-MySQL.x86_64 0:4.013-3.el6                             qpress.x86_64 0:11-1.el6

Dependency Updated:
  zlib.x86_64 0:1.2.3-29.el6

Complete!

Change the root password:

If mysql is not running, start it manually. In my case it would only start normally after rebooting the system; I don't know why yet.

[root@qht131 ~]# service mysql status
MySQL (Percona XtraDB Cluster) running (1974)              [  OK  ]
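Note that MySQL 5.7 generates a temporary root password on first start, which you need for the initial login before ALTER USER will work. A sketch for extracting it (the log path mentioned in the comment and the sample line below are assumptions; check log-error in your my.cnf for the real location):

```shell
# In practice you would run: grep 'temporary password' /var/log/mysqld.log
# Here a sample line stands in for the real error log entry.
sample='2018-06-13T02:00:00.000000Z 1 [Note] A temporary password is generated for root@localhost: Abc123!xyz'
temp_pw=${sample##*root@localhost: }
echo "$temp_pw"
```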
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.04 sec)

mysql> exit
Bye
[root@qht131 ~]# service mysql stop
Shutting down MySQL (Percona XtraDB Cluster)...............[  OK  ]

Since I'm testing on virtual machines, at this point I cloned qht131 into qht132 and qht133, changed their IPs, and started them up. Note in particular that MySQL's server-uuid must be changed on each clone.

It is located in:

[root@qht131 mysql]# cat /var/lib/mysql/auto.cnf
[auto]
server-uuid=b6315eab-6f16-11e8-84e3-000c29b55066

4. Configure Write-Set Replication

First make sure MySQL is stopped on all three nodes:

[root@qht131 ~]# service mysql stop
Shutting down MySQL (Percona XtraDB Cluster)...............[  OK  ]

Then edit /etc/my.cnf:

[root@qht131 ~]# grep "^[^#]" /etc/my.cnf
[mysqld]  # must be added manually; not present by default
!includedir /etc/my.cnf.d/
!includedir /etc/percona-xtradb-cluster.conf.d/
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_name=pxc-cluster
wsrep_cluster_address=gcomm://172.17.61.131,172.17.61.132,172.17.61.133
wsrep_node_name=pxc1
wsrep_node_address=172.17.61.131
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=sstuser:sstuser
pxc_strict_mode=ENFORCING
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2

On qht132 and qht133 only wsrep_node_address and wsrep_node_name need to change; the other parameters are the same as on qht131.
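For example, on qht132 the only lines that would differ are the following (the node name pxc2 is an assumption, continuing the pxc1 naming used above):

```
wsrep_node_name=pxc2
wsrep_node_address=172.17.61.132
```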

Now bootstrap the first node (on CentOS 7 the command is: [root@pxc1 ~]# systemctl start mysql@bootstrap.service):

[root@qht131 ~]# /etc/init.d/mysql bootstrap-pxc

Check the Write-Set Replication status variables:

mysql> show status like 'wsrep%';
+----------------------------------+-----------------------------------------------+
| Variable_name                    | Value                                         |
+----------------------------------+-----------------------------------------------+
| wsrep_local_state_uuid           | a084e15e-6f18-11e8-9dc4-efc0d77a254c          |
| ...
| wsrep_local_state                | 4                                             |
| wsrep_local_state_comment        | Synced                                        |
...
| wsrep_incoming_addresses         | 172.17.61.131:3306                            |
...
| wsrep_cluster_size               | 1                                             |
| wsrep_cluster_state_uuid         | a084e15e-6f18-11e8-9dc4-efc0d77a254c          |
| wsrep_cluster_status             | Primary                                       |
| wsrep_connected                  | ON                                            |
...
| wsrep_ready                      | ON                                            |
+----------------------------------+-----------------------------------------------+
68 rows in set (0.00 sec)

This shows a cluster_size of 1, that the node belongs to the Primary component, that its state is Synced, and that wsrep_ready is ON.
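Those three indicators (Synced, Primary, wsrep_ready=ON) are what a quick health check would look at. A minimal sketch, parsing a sample that mimics the output above; in practice you would pipe in the real output of mysql -uroot -p -NB -e "SHOW STATUS LIKE 'wsrep%'":

```shell
# Sample status lines standing in for the real SHOW STATUS output.
sample="wsrep_local_state_comment Synced
wsrep_cluster_status Primary
wsrep_ready ON"

healthy=yes
echo "$sample" | grep -q 'Synced'  || healthy=no
echo "$sample" | grep -q 'Primary' || healthy=no
echo "$sample" | grep -q '^wsrep_ready.*ON$' || healthy=no
echo "node healthy: $healthy"
```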

The listening ports 3306 and 4567 are both open:

[root@qht131 ~]# ss -ntl
Recv-Q Send-Q             Local Address:Port               Peer Address:Port
0      128                           :::43753                        :::*
0      80                            :::3306                         :::*
0      128                           :::111                          :::*
0      128                            *:111                           *:*
0      64                            :::9200                         :::*
0      128                           :::22                           :::*
0      128                            *:22                            *:*
0      128                            *:4567                          *:*
0      128                    127.0.0.1:631                           *:*
0      128                          ::1:631                          :::*
0      100                          ::1:25                           :::*
0      100                    127.0.0.1:25                            *:*
0      128                            *:52412                         *:*

After bootstrapping MySQL on node 1, create the account the nodes use to transfer state to each other.

Before new nodes join the cluster, create the SST user on the first node; the username and password must match what was set in wsrep_sst_auth.

By default Percona XtraDB Cluster uses XtraBackup for SST (State Snapshot Transfer), so this user needs the corresponding privileges.

mysql> CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 'sstuser';
Query OK, 0 rows affected (0.03 sec)

mysql> GRANT RELOAD, LOCK TABLES, PROCESS, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';
Query OK, 0 rows affected (0.00 sec)

mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

The next step is to start the other two nodes and join them to the cluster.

[root@qht132 ~]# /etc/init.d/mysql start
Starting MySQL (Percona XtraDB Cluster)....State transfer in progress, setting sleep higher
.                                                          [  OK  ]

[root@qht133 ~]# /etc/init.d/mysql start
Starting MySQL (Percona XtraDB Cluster)...State transfer in progress, setting sleep higher
.                                                          [  OK  ]

Once all three nodes are up, check the wsrep status variables:

mysql>  show status like '%wsrep_cluster%';
+--------------------------+--------------------------------------+
| Variable_name            | Value                                |
+--------------------------+--------------------------------------+
| wsrep_cluster_conf_id    | 3                                    |
| wsrep_cluster_size       | 3                                    |
| wsrep_cluster_state_uuid | a084e15e-6f18-11e8-9dc4-efc0d77a254c |
| wsrep_cluster_status     | Primary                              |
+--------------------------+--------------------------------------+
4 rows in set (0.00 sec)
mysql>  show status like 'wsrep_connected';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| wsrep_connected | ON    |
+-----------------+-------+
1 row in set (0.00 sec)

5. Verify the cluster:

Create some test data on qht132:


mysql> CREATE DATABASE percona;
Query OK, 1 row affected (0.03 sec)

mysql>  USE percona;
Database changed
mysql>  CREATE TABLE example (node_id INT PRIMARY KEY, node_name VARCHAR(30));
Query OK, 0 rows affected (0.04 sec)

mysql> INSERT INTO percona.example VALUES (1, 'percona1');
Query OK, 1 row affected (0.07 sec)

mysql>  SELECT * FROM percona.example;
+---------+-----------+
| node_id | node_name |
+---------+-----------+
|       1 | percona1  |
+---------+-----------+
1 row in set (0.00 sec)

Check the new data on qht133:

mysql>  SELECT * FROM percona.example;
+---------+-----------+
| node_id | node_name |
+---------+-----------+
|       1 | percona1  |
+---------+-----------+
1 row in set (0.00 sec)
mysql>  INSERT INTO percona.example VALUES (2, 'percona1');
Query OK, 1 row affected (0.04 sec)

Check the data on qht131:

mysql>  SELECT * FROM percona.example;
+---------+-----------+
| node_id | node_name |
+---------+-----------+
|       1 | percona1  |
|       2 | percona1  |
+---------+-----------+
2 rows in set (0.00 sec)

All nodes in a PXC cluster accept writes, which are synchronously replicated to the other nodes.

Next, test whether a node that goes down and then rejoins the cluster resynchronizes its data.

Stop MySQL on qht132.

qht132:

[root@qht132 ~]# /etc/init.d/mysql stop
Shutting down MySQL (Percona XtraDB Cluster).............. [  OK  ]

qht131:

mysql>  show status like '%wsrep_clust%';
+--------------------------+--------------------------------------+
| Variable_name            | Value                                |
+--------------------------+--------------------------------------+
| wsrep_cluster_conf_id    | 4                                    |
| wsrep_cluster_size       | 2                                    |
| wsrep_cluster_state_uuid | a084e15e-6f18-11e8-9dc4-efc0d77a254c |
| wsrep_cluster_status     | Primary                              |
+--------------------------+--------------------------------------+
4 rows in set (0.00 sec)

The cluster_size is now 2, showing that only two nodes are alive.

mysql>  INSERT INTO percona.example VALUES (3, 'percona1');
Query OK, 1 row affected (0.00 sec)

qht132:

[root@qht132 ~]#  /etc/init.d/mysql start
Starting MySQL (Percona XtraDB Cluster)....                [  OK  ]
mysql>  SELECT * FROM percona.example;
+---------+-----------+
| node_id | node_name |
+---------+-----------+
|       1 | percona1  |
|       2 | percona1  |
|       3 | percona1  |
+---------+-----------+
3 rows in set (0.00 sec)

No problem: after qht132 rejoined the cluster, the data was synchronized over.

Further testing shows that any of the three nodes, including qht131, can go down, rejoin the cluster, and synchronize data correctly.

From this we can see that when a node goes offline, the other nodes keep working normally, and data written in the meantime is synchronized once the node comes back online; this is how PXC achieves high availability.
For a node coming back online, PXC has two ways of transferring data to bring it back in sync: State Snapshot Transfer (SST) and Incremental State Transfer (IST).
SST is typically used when a new node joins the cluster and copies the full data set from an existing node. PXC supports three methods for performing SST:

– mysqldump  
– rsync  
– xtrabackup
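The method is selected with wsrep_sst_method in my.cnf, which the configuration earlier in this post already sets:

```
wsrep_sst_method=xtrabackup-v2   # or: mysqldump, rsync
```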

The drawback of mysqldump and rsync is that during the transfer the donor becomes read-only: SST takes a global read lock (FLUSH TABLES WITH READ LOCK). xtrabackup does not need a read lock for the whole transfer; it only locks briefly to copy the .frm files, much like a regular backup.


IST copies only the incremental changes from one node to another.
Even though SST with xtrabackup needs no read lock, SST can still disrupt normal service, whereas IST does not. If a node is offline only briefly, then when it comes back it fetches just the changes made while it was away. IST is implemented with a cache on each node: every node keeps a ring buffer (of configurable size) holding the last N changes, and a node can transfer part of that cache. Naturally, IST is only possible when the number of changes to transfer is smaller than N; beyond that, the joining node must do a full SST.
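The ring buffer described above is Galera's gcache; its size (the N above) can be raised through wsrep_provider_options in my.cnf. The 1G value here is just an illustrative choice, not a recommendation:

```
wsrep_provider_options="gcache.size=1G"
```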


References:

https://www.percona.com/doc/percona-xtradb-cluster/5.7/install/index.html

https://blog.csdn.net/geoffreychan/article/details/80499956


Reposted from blog.csdn.net/jolly10/article/details/80675383