1. Log
1.1. Error log
The error log is one of the most important logs in MySQL. It records information about when mysqld starts and stops, and when any serious errors occur while the server is running. It is recommended to check this log first when there is any fault in the database that prevents it from being used normally.
The log is enabled by default, stored in the directory /var/log/ by default, and the default log file name is mysqld.log. Check the log location:
show variables like '%log_error%';
1.2. Binary log
1.2.1. Introduction
The binary log (BINLOG) records all DDL (data definition language) statements and DML (data manipulation language) statements, but does not include data query (SELECT, SHOW) statements.
Functions: ① disaster recovery of data; ② MySQL master-slave replication. In MySQL 8, the binary log is enabled by default; the parameters involved are as follows:
show variables like '%log_bin%';
Parameter Description:
- log_bin_basename: the base name (prefix) of the binlog files on the current database server. Each actual binlog file name appends a sequence number to this base name (numbering starts at 000001).
- log_bin_index: the binlog index file, which records the binlog files associated with the current server.
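As noted above, each binlog file name is the basename plus a six-digit sequence number. A minimal sketch of the naming scheme (the helper function is hypothetical; MySQL manages the numbering itself):

```python
# Sketch: how binlog file names are derived from log_bin_basename.
# The helper is illustrative; MySQL performs this numbering internally.
def binlog_file_name(basename: str, n: int) -> str:
    """Return the n-th binlog file name, numbered from 000001."""
    return f"{basename}.{n:06d}"

print(binlog_file_name("/var/lib/mysql/binlog", 1))
# /var/lib/mysql/binlog.000001
```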
1.2.2. Format
The MySQL server provides multiple formats to record binary logs. The specific formats and features are as follows:
log format | meaning |
---|---|
STATEMENT | Statement-based logging: the SQL statements that modify data are recorded in the log file. |
ROW | Row-based logging: the data changes of each row are recorded. (default) |
MIXED | A mix of the STATEMENT and ROW formats: STATEMENT is used by default, automatically switching to ROW in certain special cases. |
show variables like '%binlog_format%';
If we need to configure the format of the binary log, we only need to configure the binlog_format parameter in /etc/my.cnf.
1.2.3. View
Since the log is stored in binary mode, it cannot be read directly. It needs to be viewed through the binary log query tool mysqlbinlog. The specific syntax is:
mysqlbinlog [ options ] logfilename
Options:
-d   Specify a database name; only list operations related to that database.
-o   Skip the first n entries in the log.
-v   Reconstruct row events (data changes) as SQL statements.
-vv  Reconstruct row events (data changes) as SQL statements, with comment information.
1.2.4. Delete
For a relatively busy business system, the binlog data generated every day is huge. If it is not cleared for a long time, it will take up a lot of disk space. Logs can be cleaned up in several ways:
instruction | meaning |
---|---|
reset master | Delete all binlog logs; after deletion, numbering restarts from binlog.000001 |
purge master logs to 'binlog.******' | Delete all logs numbered before ****** |
purge master logs before 'yyyy-mm-dd hh24:mi:ss' | Delete all logs generated before "yyyy-mm-dd hh24:mi:ss" |
You can also configure the expiration time of the binary log in the mysql configuration file. After setting, the binary log will be automatically deleted when it expires.
show variables like '%binlog_expire_logs_seconds%';
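For reference, binlog_expire_logs_seconds is expressed in seconds, and the MySQL 8 default of 2592000 seconds corresponds to 30 days. A small illustrative helper for computing values to put in the configuration file:

```python
# binlog_expire_logs_seconds takes a value in seconds; the MySQL 8 default
# of 2592000 corresponds to 30 days.
def days_to_expire_seconds(days: int) -> int:
    return days * 24 * 60 * 60

print(days_to_expire_seconds(30))  # 2592000, the MySQL 8 default
print(days_to_expire_seconds(7))   # 604800, keep one week of binlogs
```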
1.3. Query log
The query log records every operation statement issued by clients, including the query (SELECT) statements that the binary log does not record. By default, the query log is disabled.
If you need to enable the query log, you can modify the MySQL configuration file /etc/my.cnf and add the following content:
# Enable the general query log; valid values: 0 (off) or 1 (on)
general_log=1
# Set the log file name; if not specified, the default is host_name.log
general_log_file=mysql_query.log
After the query log is enabled, the mysql_query.log file will appear in the MySQL data storage directory, that is, the /var/lib/mysql/ directory. Afterwards, all client additions, deletions, modifications, and queries will be recorded in the log file. After running for a long time, the log file will be very large.
1.4. Slow query log
The slow query log records every SQL statement whose execution time exceeds the value of the long_query_time parameter and whose number of examined rows is at least min_examined_row_limit. It is not enabled by default. long_query_time defaults to 10 seconds, its minimum is 0, and its precision can reach microseconds.
If you need to enable the slow query log, you need to configure the following parameters in the MySQL configuration file /etc/my.cnf:
# Enable the slow query log
slow_query_log=1
# Execution-time threshold, in seconds
long_query_time=2
By default, administrative statements are not logged, nor are queries that do not use indexes for lookups. This behavior can be changed using log_slow_admin_statements and log_queries_not_using_indexes as described below.
# Log administrative statements that execute slowly
log_slow_admin_statements=1
# Log slow statements that do not use an index
log_queries_not_using_indexes=1
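The two logging conditions described above (execution time and rows examined) can be sketched as a predicate. The parameter names mirror the MySQL system variables; the function itself is illustrative:

```python
# A statement enters the slow query log when its execution time exceeds
# long_query_time AND it examined at least min_examined_row_limit rows.
def is_logged_as_slow(query_seconds: float, rows_examined: int,
                      long_query_time: float = 2.0,
                      min_examined_row_limit: int = 0) -> bool:
    return (query_seconds > long_query_time
            and rows_examined >= min_examined_row_limit)

print(is_logged_as_slow(3.5, 100))  # True: slower than the 2 s threshold
print(is_logged_as_slow(0.4, 100))  # False: fast enough to skip
```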
After all the above parameters are configured, you need to restart the MySQL server to take effect.
2. Master-slave replication
2.1. Overview
Master-slave replication transfers the DDL and DML operations of the master database to the slave servers through the binary log; each slave then re-executes (redoes) these logged operations so that its data stays synchronized with the master.
MySQL supports one master database to replicate to multiple slave databases at the same time, and the slave database can also serve as the master database of other slave servers to realize chain replication.
The advantages of MySQL replication mainly include the following three aspects:
- If the master fails, services can quickly switch over to a slave.
- Reads and writes can be separated, reducing the access pressure on the master.
- Backups can be performed on a slave without affecting the master's service.
2.2. Principle
The core of MySQL master-slave replication is the binary log. The specific process is as follows:
As shown in the figure above, replication is divided into three steps:
- When the master commits a transaction, it records the data change in the binary log (binlog).
- The slave reads the master's binary log and writes its contents to the slave's relay log.
- The slave redoes the events in the relay log, applying the changes to its own data.
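The three steps above can be modeled as a toy pipeline; every name here is illustrative rather than a real MySQL internal:

```python
# Toy model of replication: the master appends committed changes to its
# binlog, the slave copies them into a relay log, and then redoes them.
master_data, slave_data = {}, {}
binlog, relay_log = [], []

def master_commit(key, value):
    """Step 1: the master applies the change and appends it to the binlog."""
    master_data[key] = value
    binlog.append((key, value))

def io_thread_fetch():
    """Step 2: the slave copies new binlog events into its relay log."""
    relay_log.extend(binlog[len(relay_log):])

def sql_thread_replay():
    """Step 3: the slave redoes relay log events against its own data."""
    for key, value in relay_log:
        slave_data[key] = value

master_commit("id:1", "Tom")
master_commit("id:2", "Trigger")
io_thread_fetch()
sql_thread_replay()
print(slave_data == master_data)  # True: the slave has caught up
```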
2.3. Construction
2.3.1. Preparation
After preparing two servers, install MySQL on each and complete the basic initialization (installation, password configuration, etc.). The roles are:
- 192.168.200.200 as the master server
- 192.168.200.201 as the slave server
2.3.2. Main library configuration
2.3.2.1. Modify the configuration file /etc/my.cnf
# MySQL server ID; must be unique across the entire cluster. Valid range: 1 to 2^32-1. Default: 1.
server-id=1
# Read-only flag: 1 = read-only, 0 = read-write
read-only=0
# Databases to ignore, i.e. databases that do not need to be synchronized
#binlog-ignore-db=mysql
# Databases to synchronize
#binlog-do-db=db01
2.3.2.2. Restart the MySQL server
systemctl restart mysqld
2.3.2.3. Log in to mysql, create an account for remote connection, and grant master-slave replication permission
# Create the user 'angyan' with a password; this user may connect to MySQL from any host
CREATE USER 'angyan'@'%' IDENTIFIED WITH mysql_native_password BY 'Root@123456';
# Grant the replication privilege to 'angyan'@'%'
GRANT REPLICATION SLAVE ON *.* TO 'angyan'@'%';
2.3.2.4. View the binary log coordinates through commands
show master status ;
Explanation of field meaning:
- file: the log file from which to start pushing events
- position: the position within that file from which to start pushing
- binlog_ignore_db: databases that are not synchronized
2.3.3. Slave library configuration
2.3.3.1. Modify the configuration file /etc/my.cnf
# MySQL server ID; unique across the cluster (1 to 2^32-1); it only needs to differ from the master's
server-id=2
# Read-only flag: 1 = read-only, 0 = read-write
read-only=1
2.3.3.2. Restart the MySQL service
systemctl restart mysqld
2.3.3.3. Log in to MySQL and configure the connection to the master
CHANGE REPLICATION SOURCE TO SOURCE_HOST='192.168.200.200', SOURCE_USER='angyan',SOURCE_PASSWORD='Root@123456',SOURCE_LOG_FILE='binlog.000004',SOURCE_LOG_POS=663;
The syntax above applies to MySQL 8.0.23 and later. For versions earlier than 8.0.23, execute the following SQL instead:
CHANGE MASTER TO MASTER_HOST='192.168.200.200', MASTER_USER='angyan',MASTER_PASSWORD='Root@123456',MASTER_LOG_FILE='binlog.000004',MASTER_LOG_POS=663;
parameter name | meaning | Before 8.0.23 |
---|---|---|
SOURCE_HOST | Main library IP address | MASTER_HOST |
SOURCE_USER | The username to connect to the main library | MASTER_USER |
SOURCE_PASSWORD | Password to connect to the main library | MASTER_PASSWORD |
SOURCE_LOG_FILE | binlog log file name | MASTER_LOG_FILE |
SOURCE_LOG_POS | Binlog log file location | MASTER_LOG_POS |
2.3.3.4. Enable synchronous operation
start replica; # 8.0.22 and later
start slave; # before 8.0.22
2.3.3.5. Check the master-slave synchronization status
show replica status; # 8.0.22 and later
show slave status; # before 8.0.22
2.3.4. Test
- Create a database, table, and insert data on the main library 192.168.200.200
create database db01;
use db01;
create table tb_user(
id int(11) primary key not null auto_increment,
name varchar(50) not null,
sex varchar(1)
)engine=innodb default charset=utf8mb4;
insert into tb_user(id,name,sex) values(null,'Tom', '1'),(null,'Trigger','0'),(null,'Dawn','1');
- Query data in the slave library 192.168.200.201 to verify whether the master and slave are synchronized
3. Sub-database and sub-table
3.1. Introduction
3.1.1. Problem analysis
With the development of the Internet and mobile Internet, the amount of data in the application system is also increasing exponentially. If a single database is used for data storage, there will be the following performance bottlenecks:
- IO bottleneck: too much hot data and insufficient database cache cause heavy disk IO and low efficiency; too much requested data and insufficient bandwidth cause a network IO bottleneck.
- CPU bottleneck: sorting, grouping, join queries, aggregate statistics, and similar SQL consume large amounts of CPU resources; too many requests lead to a CPU bottleneck.
To solve these problems, we need to shard the database, splitting it into multiple databases and tables. The central idea of sharding is to store data in a distributed manner so that the data volume of any single database or table shrinks, alleviating the performance problems of a single database and thereby improving overall database performance.
3.1.2. Split strategy
There are two main forms of sharding: vertical splitting and horizontal splitting. The granularity of splitting is further divided into database-level and table-level, so the final splitting strategies are as follows:
3.1.3. Vertical split
3.1.3.1. Vertical database split
Vertical database split: based on tables, different tables are placed into different databases according to business.
Features:
- The table structure of each database is different.
- The data in each database is also different.
- The union of all databases is the full data set.
3.1.3.2. Vertical table split
Vertical table split: based on fields, different fields are placed into different tables according to field attributes.
Features:
- The structure of each table is different.
- The data in each table is also different; the tables are generally associated through one column (primary key/foreign key).
- The union of all tables is the full data set.
3.1.4. Horizontal split
3.1.4.1. Horizontal database split
Horizontal database split: based on fields and according to a certain strategy, the data of one database is split across multiple databases.
Features:
- The table structure of each database is the same.
- The data in each database is different.
- The union of all databases is the full data set.
3.1.4.2. Horizontal table split
Horizontal table split: based on fields and according to a certain strategy, the data of one table is split across multiple tables.
Features:
- The table structure of each table is the same.
- The data in each table is different.
- The union of all tables is the full data set.
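The idea of horizontal splitting can be sketched as a routing function. Here a modulo strategy over a hypothetical tb_user key is used; all table names are illustrative:

```python
# Horizontal splitting routes each row to one of several identically
# structured tables; a common strategy is modulo on the key.
def route(table_base: str, key: int, shards: int = 3) -> str:
    return f"{table_base}_{key % shards}"

# Distribute ids 1..9 across three tables of identical structure.
tables = {f"tb_user_{i}": [] for i in range(3)}
for user_id in range(1, 10):
    tables[route("tb_user", user_id)].append(user_id)

print(tables["tb_user_1"])               # [1, 4, 7]
print(sorted(sum(tables.values(), [])))  # the union is the full data set
```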
In a business system, whether to split vertically or horizontally, and whether to split at the database level or the table level, must be decided by analyzing the specific business requirements in order to alleviate disk IO and CPU bottlenecks.
3.1.5. Implementation technology
- ShardingJDBC: based on AOP principles, it intercepts, parses, rewrites, and routes SQL executed locally in the application. It requires manual coding and configuration, supports only the Java language, and offers high performance.
- MyCat: database sharding middleware that implements sharding without code changes; it supports multiple languages, but its performance is lower than ShardingJDBC's.
3.2. Overview of MyCat
3.2.1. Introduction
Mycat is an open-source, active, Java-based MySQL database middleware. Mycat can be used just like MySQL, so developers barely notice its existence. Developers only need to connect to MyCat; they do not need to care how many databases sit underneath it or what data each database server stores. The sharding strategy only needs to be configured in MyCat.
Advantages:
- Reliable and stable performance
- Strong technical team
- Mature ecosystem
- Active community
3.2.2. Download
Download address: http://dl.mycat.org.cn/
3.2.3. Installation
Mycat is an open-source database middleware developed in Java that runs on both Windows and Linux. The following describes setting up MyCat on Linux. We need to install the following software on the prepared servers:
- MySQL
- JDK
- Mycat
server | install software | illustrate |
---|---|---|
192.168.200.210 | JDK、Mycat | MyCat middleware server |
192.168.200.210 | MySQL | Shard server |
192.168.200.213 | MySQL | Shard server |
192.168.200.214 | MySQL | Shard server |
3.2.3.1. MySQL installation
3.2.3.1.1. Prepare a Linux server
Either a cloud server or a virtual machine will do. The Linux version used here is CentOS 7.
3.2.3.1.2. Download the MySQL installation package for Linux
https://downloads.mysql.com/archives/community/
3.2.3.1.3. Upload MySQL installation package
3.2.3.1.4. Create a directory and decompress
mkdir mysql
tar -xvf mysql-8.0.26-1.el7.x86_64.rpm-bundle.tar -C mysql
3.2.3.1.5. Install the MySQL packages
cd mysql
rpm -ivh mysql-community-common-8.0.26-1.el7.x86_64.rpm
rpm -ivh mysql-community-client-plugins-8.0.26-1.el7.x86_64.rpm
rpm -ivh mysql-community-libs-8.0.26-1.el7.x86_64.rpm
rpm -ivh mysql-community-libs-compat-8.0.26-1.el7.x86_64.rpm
yum install openssl-devel
rpm -ivh mysql-community-devel-8.0.26-1.el7.x86_64.rpm
rpm -ivh mysql-community-client-8.0.26-1.el7.x86_64.rpm
rpm -ivh mysql-community-server-8.0.26-1.el7.x86_64.rpm
3.2.3.1.6. Start the MySQL service
systemctl start mysqld
systemctl restart mysqld
systemctl stop mysqld
3.2.3.1.7. Query the automatically generated root user password
grep 'temporary password' /var/log/mysqld.log
Execute the following command on the command line:
mysql -u root -p
Then enter the automatically generated password from the above query to complete the login.
3.2.3.1.8. Modify the root user password
After logging in to MySQL, change the automatically generated password, which is inconvenient to remember, to one that is familiar and easy to remember.
ALTER USER 'root'@'localhost' IDENTIFIED BY '1234';
Executing the SQL above reports an error, because the password is too simple and does not meet the password complexity requirements. We can set the password policy to the simple type and the minimum password length to 4.
set global validate_password.policy = 0;
set global validate_password.length = 4;
After lowering the password validation rules, execute the command above again to change the password.
3.2.3.1.9. Create a user
By default the root user can only connect from localhost on the current node; it cannot connect remotely. We therefore also create a root account for remote access.
create user 'root'@'%' IDENTIFIED WITH mysql_native_password BY '1234';
3.2.3.1.10. Assign permissions to the root user
grant all on *.* to 'root'@'%';
3.2.3.1.11. Reconnect to MySQL
mysql -u root -p
Then enter the password.
3.2.3.2. JDK installation
3.2.3.2.1. Upload the installation package
3.2.3.2.2. Unzip the installation package
Execute the following command to decompress the uploaded archive; the -C parameter specifies /usr/local as the destination directory.
tar -zxvf jdk-8u171-linux-x64.tar.gz -C /usr/local
3.2.3.2.3. Configure environment variables
Use the vim command to modify the /etc/profile file and append the following configuration at the end of the file:
JAVA_HOME=/usr/local/jdk1.8.0_171
PATH=$JAVA_HOME/bin:$PATH
The specific steps are as follows:
1). Edit the /etc/profile file (vim opens in command mode):
vim /etc/profile
2). In command mode, press G to jump to the end of the file:
G
3). Press i/a/o to enter insert mode, then move to the last line of the file:
i
4). Copy the configuration above into the file:
export JAVA_HOME=/usr/local/jdk1.8.0_171
export PATH=$JAVA_HOME/bin:$PATH
5). Press ESC to switch from insert mode back to command mode:
ESC
6). Type : to enter last-line mode, then type wq and press Enter to save:
:wq
3.2.3.2.4. Reload the profile file
For the changed configuration to take effect immediately, reload the profile file by executing:
source /etc/profile
3.2.3.2.5. Check whether the installation is successful
java -version
3.2.3.3. MyCat installation
3.2.3.3.1. Upload the Mycat archive to the server
Mycat-server-1.6.7.3-release-20210913163959-linux.tar.gz
3.2.3.3.2. Decompress the MyCat archive
tar -zxvf Mycat-server-1.6.7.3-release-20210913163959-linux.tar.gz -C /usr/local/
3.2.4. Directory introduction
- bin: executable files used to start and stop mycat
- conf: mycat's configuration files
- lib: the jar dependencies of the mycat project
- logs: mycat's log files
3.2.5. Concept introduction
In the overall structure of MyCat, it is divided into two parts: the logical structure above and the physical structure below.
The logical structure of MyCat is mainly responsible for the processing of logical structures such as logical libraries, logical tables, fragmentation rules, and fragmentation nodes, while the specific data storage is still stored in the physical structure, that is, the database server.
3.3. Getting started with MyCat
3.3.1. Requirements
Because the tb_order table holds a large amount of data, disk IO and capacity have reached their bottleneck. The data in tb_order therefore needs to be sharded across three data nodes, each hosted on a different server. The structure is shown in the figure below:
3.3.2. Environment preparation
Prepare 3 servers:
- 192.168.200.210: MyCat middleware server, which is also the first shard server.
- 192.168.200.213: the second shard server.
- 192.168.200.214: the third shard server.
Then create the database db01 on all three database servers.
3.3.3. Configuration
3.3.3.1. schema.xml
Configure logic library, logic table, data node, node host and other related information in schema.xml. The specific configuration is as follows:
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
<schema name="DB01" checkSQLschema="true" sqlMaxLimit="100">
<table name="TB_ORDER" dataNode="dn1,dn2,dn3" rule="auto-sharding-long"/>
</schema>
<dataNode name="dn1" dataHost="dhost1" database="db01" />
<dataNode name="dn2" dataHost="dhost2" database="db01" />
<dataNode name="dn3" dataHost="dhost3" database="db01" />
<dataHost name="dhost1" maxCon="1000" minCon="10" balance="0" writeType="0" dbType="mysql" dbDriver="jdbc" switchType="1" slaveThreshold="100">
<heartbeat>select user()</heartbeat>
<writeHost host="master" url="jdbc:mysql://192.168.200.210:3306?useSSL=false&amp;serverTimezone=Asia/Shanghai&amp;characterEncoding=utf8" user="root" password="1234" />
</dataHost>
<dataHost name="dhost2" maxCon="1000" minCon="10" balance="0" writeType="0" dbType="mysql" dbDriver="jdbc" switchType="1" slaveThreshold="100">
<heartbeat>select user()</heartbeat>
<writeHost host="master" url="jdbc:mysql://192.168.200.213:3306?useSSL=false&amp;serverTimezone=Asia/Shanghai&amp;characterEncoding=utf8" user="root" password="1234" />
</dataHost>
<dataHost name="dhost3" maxCon="1000" minCon="10" balance="0" writeType="0" dbType="mysql" dbDriver="jdbc" switchType="1" slaveThreshold="100">
<heartbeat>select user()</heartbeat>
<writeHost host="master" url="jdbc:mysql://192.168.200.214:3306?useSSL=false&amp;serverTimezone=Asia/Shanghai&amp;characterEncoding=utf8" user="root" password="1234" />
</dataHost>
</mycat:schema>
3.3.3.2. server.xml
You need to configure the user name, password, and user access rights information in server.xml. The specific configuration is as follows:
<user name="root" defaultAccount="true">
<property name="password">123456</property>
<property name="schemas">DB01</property>
<!-- Table-level DML privilege settings -->
<!--
<privileges check="true">
<schema name="DB01" dml="0110" >
<table name="TB_ORDER" dml="1110"></table>
</schema>
</privileges>
-->
</user>
<user name="user">
<property name="password">123456</property>
<property name="schemas">DB01</property>
<property name="readOnly">true</property>
</user>
The configuration above defines two users, root and user. Both can access the DB01 logic library with the password 123456, but root has read-write access to DB01 while user's access is read-only.
3.3.4. Test
3.3.4.1. Start
After the configuration is complete, first start the three shard servers involved, and then start the MyCat server. Switch to the installation directory of Mycat and execute the following command to start Mycat:
# Start
bin/mycat start
# Stop
bin/mycat stop
After Mycat starts, it occupies port number 8066.
After startup completes, you can check the startup log in the logs directory to verify that Mycat started successfully.
3.3.4.2. Test
3.3.4.2.1. Connect to MyCat
Through the following instructions, you can connect and log in to MyCat.
mysql -h 192.168.200.210 -P 8066 -uroot -p123456
Note that we connect to MyCat with the ordinary mysql client; this works because MyCat emulates the MySQL protocol underneath.
3.3.4.2.2. Data test
Then you can create a table through MyCat, insert data into it, and observe how the data is distributed across the MySQL shards.
CREATE TABLE TB_ORDER (
id BIGINT(20) NOT NULL,
title VARCHAR(100) NOT NULL ,
PRIMARY KEY (id)
) ENGINE=INNODB DEFAULT CHARSET=utf8 ;
INSERT INTO TB_ORDER(id,title) VALUES(1,'goods1');
INSERT INTO TB_ORDER(id,title) VALUES(2,'goods2');
INSERT INTO TB_ORDER(id,title) VALUES(3,'goods3');
INSERT INTO TB_ORDER(id,title) VALUES(5000000,'goods5000000');
INSERT INTO TB_ORDER(id,title) VALUES(10000000,'goods10000000');
INSERT INTO TB_ORDER(id,title) VALUES(10000001,'goods10000001');
INSERT INTO TB_ORDER(id,title) VALUES(15000000,'goods15000000');
INSERT INTO TB_ORDER(id,title) VALUES(15000001,'goods15000001');
After testing, we find that when inserting data into the TB_ORDER table:
- If the id value is between 1 and 5,000,000, the data is stored in the first shard database.
- If the id value is between 5,000,000 and 10,000,000, the data is stored in the second shard database.
- If the id value is between 10,000,000 and 15,000,000, the data is stored in the third shard database.
- If the id value exceeds 15,000,000, inserting the data reports an error.
Why does this happen, and how is it decided which shard server the data lands on? This is determined by the rule parameter configured on the logical table, which specifies the sharding rule.
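The behavior observed in the test is consistent with a range-based rule: each id interval maps to one data node, and ids beyond the last interval are rejected. A sketch under that assumption (boundaries taken as inclusive; node names follow the schema.xml shown earlier):

```python
# Illustrative range routing matching the observed shard behaviour.
RANGES = [(0, 5_000_000, "dn1"),
          (5_000_001, 10_000_000, "dn2"),
          (10_000_001, 15_000_000, "dn3")]

def route_by_range(order_id: int) -> str:
    for low, high, node in RANGES:
        if low <= order_id <= high:
            return node
    # Mirrors the insert error seen for ids beyond the configured ranges.
    raise ValueError(f"id {order_id} is outside all configured ranges")

print(route_by_range(1))           # dn1
print(route_by_range(10_000_001))  # dn3
```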
3.4. MyCat configuration
3.4.1. schema.xml
As one of the most important configuration files in MyCat, schema.xml covers the configuration of MyCat's logic library, logic table, sharding rules, sharding nodes and data sources.
It mainly contains the following three groups of tags:
- schema tags
- dataNode tags
- dataHost tags
3.4.1.1. schema tags
3.4.1.1.1. schema: defining a logic library
<schema name="DB01" checkSQLschema="true" sqlMaxLimit="100">
<table name="TB_ORDER" dataNode="dn1,dn2,dn3" rule="auto-sharding-long"/>
</schema>
The schema tag is used to define the logic library in the MyCat instance. In a MyCat instance, there can be multiple logic libraries, and the schema tags can be used to divide different logic libraries. The concept of the logic library in MyCat is equivalent to the database concept in MySQL. When you need to operate the tables under a certain logic library, you also need to switch the logic library (use xxx).
Core attributes:
- name: the custom name of the logic library
- checkSQLschema: when a SQL statement specifies a database name, whether to strip it automatically during execution; true: strip it, false: do not strip it
- sqlMaxLimit: if a query specifies no LIMIT, the maximum number of records returned in list-query mode
3.4.1.1.2. table: defining a logical table within a schema
The table tag defines the logical table under the logic library schema in MyCat, and all tables that need to be split need to be defined in the table tag.
Core attributes:
- name: the logical table name, unique within the logic library
- dataNode: the data node(s) the logical table belongs to; this must match name attributes in dataNode tags, with multiple dataNodes separated by commas
- rule: the name of the sharding rule, defined in rule.xml
- primaryKey: the primary key of the real table corresponding to this logical table
- type: the type of the logical table; currently only global tables and ordinary tables exist. If not configured, the table is ordinary; for a global table, set it to global
3.4.1.2. dataNode tags
<dataNode name="dn1" dataHost="dhost1" database="db01" />
<dataNode name="dn2" dataHost="dhost2" database="db01" />
<dataNode name="dn3" dataHost="dhost3" database="db01" />
Core attributes:
- name: the data node name
- dataHost: the host name of the database instance, referencing the name attribute of a dataHost tag
- database: the database this shard belongs to
3.4.1.3. dataHost tags
<dataHost name="dhost1" maxCon="1000" minCon="10" balance="0" writeType="0" dbType="mysql" dbDriver="jdbc" switchType="1" slaveThreshold="100">
<heartbeat>select user()</heartbeat>
<writeHost host="master" url="jdbc:mysql://192.168.200.210:3306?useSSL=false&amp;serverTimezone=Asia/Shanghai&amp;characterEncoding=utf8" user="root" password="1234" />
</dataHost>
This tag exists as a bottom-level tag in the MyCat logic library, directly defining specific database instances, read-write separation, and heartbeat statements.
Core attributes:
- name: unique identifier, referenced by higher-level tags
- maxCon/minCon: maximum/minimum number of connections
- balance: load-balancing strategy; values are 0, 1, 2, 3
- writeType: write distribution method (0: writes go to the first writeHost, switching to the second if the first fails; 1: writes are distributed randomly across the configured writeHosts)
- dbDriver: database driver; native and jdbc are supported
3.4.2. rule.xml
rule.xml defines all of the table-splitting rules. In use, sharding algorithms can be applied flexibly, and the same algorithm can be used with different parameters, making the sharding process configurable. It mainly contains two kinds of tags: tableRule and function.
3.4.3. server.xml
The server.xml configuration file contains the system configuration information of MyCat, and mainly has two important tags: system and user.
3.4.3.1. system tag
<system>
<property name="nonePasswordlogin">0</property>
<property name="useHandshakeV10">1</property>
<property name="useSqlstat">1</property>
</system>
It mainly configures MyCat's system settings. The corresponding configuration items and their meanings are as follows:
Attribute | Value | Meaning |
---|---|---|
charset | utf8 | Sets Mycat's character set, which must be consistent with MySQL's character set |
nonePasswordLogin | 0,1 | 0 means a password is required to log in, 1 means no password is required; the default is 0, and if set to 1 a default account must be specified |
useHandshakeV10 | 0,1 | Mainly for compatibility with newer JDBC drivers: whether to use HandshakeV10Packet to communicate with the client; 1: yes, 0: no |
useSqlStat | 0,1 | Enables real-time SQL statistics; 1 is on, 0 is off. When enabled, MyCat automatically records SQL execution statistics; connect with mysql -h 127.0.0.1 -P 9066 -u root -p to view executed SQL, slow SQL, overall SQL execution, read/write ratio, etc., via show @@sql ; show @@sql.slow ; show @@sql.sum ; |
useGlobleTableCheck | 0,1 | Whether to enable consistency checking of global tables; 1 is on, 0 is off |
sqlExecuteTimeout | 1000 | Timeout for SQL statement execution, in seconds |
sequnceHandlerType | 0,1,2 | Specifies the Mycat global sequence type: 0 is local file, 1 is database, 2 is timestamp column. The default is the local file method, which is mainly used for testing |
sequenceHandlerPattern | regular expression | Sequences must carry MYCATSEQ or mycatseq to enter the sequence-matching process; note the case where MYCATSEQ_ contains spaces |
subqueryRelationshipCheck | true,false | When a subquery contains an associated query, checks whether the association fields include the shard field; the default is false |
useCompression | 0,1 | Enables the MySQL compression protocol; 0: off, 1: on |
fakeMySQLVersion | 5.5,5.6 | Sets the simulated MySQL version number |
defaultSqlParser | | Because the earliest versions of MyCat used FoundationDB's SQL parser and the Druid parser was added after MyCat 1.3, the defaultSqlParser property specifies the default parser. There are two parsers, druidparser and fdbparser; since MyCat 1.4 the default is druidparser, and fdbparser has been removed |
processors | 1,2… | Specifies the number of threads available to the system; the default is the number of CPU cores times the threads per core. processors affects the processorBufferPool, processorBufferLocalPercent, and processorExecutor properties, so the processors value can be adjusted appropriately during performance tuning |
processorBufferChunk | | Specifies the size of each allocated Socket Direct Buffer; the default is 4096 bytes. It also affects the BufferPool length; if a single read fetches too many bytes and the buffer runs out, a warning appears, and this value can be increased |
processorExecutor | | Specifies the size of the fixed businessExecutor thread pool shared by NIOProcessor; MyCat hands asynchronous tasks to the businessExecutor pool. In newer versions of MyCat this pool is used infrequently, so the value can be reduced appropriately |
packetHeaderSize | | Specifies the packet header length in the MySQL protocol; the default is 4 bytes |
maxPacketSize | | Specifies the maximum data size the MySQL protocol can carry; the default is 16M |
idleTimeout | 30 | Specifies the idle-connection timeout; on timeout the resource is closed and reclaimed. The default is 30 minutes |
txIsolation | 1,2,3,4 | The transaction isolation level used to initialize front-end connections; the default is REPEATED_READ, corresponding to 3. READ_UNCOMMITED=1; READ_COMMITTED=2; REPEATED_READ=3; SERIALIZABLE=4 |
sqlExecuteTimeout | 300 | Timeout for executing SQL; if a SQL statement times out, the connection is closed. The default is 300 seconds |
serverPort | 8066 | Defines MyCat's service port; the default is 8066 |
managerPort | 9066 | Defines MyCat's management port; the default is 9066 |
3.4.3.2. user tag
Configures MyCat's users and access passwords, together with each user's privileges on logic libraries and logical tables. The privilege description format and configuration are as follows:
When testing privilege operations, we only need to uncomment the privileges tag. The dml attribute of the schema tag under privileges configures privileges on the logic library, and the dml attribute of the table tag under that schema configures privileges on the logical table.
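To make the dml masks above easier to read: the four digits correspond, in order, to the insert, update, select, and delete permissions (1 = allowed, 0 = denied). A small illustrative decoder:

```python
# Decode a MyCat dml mask such as "0110" into named permissions.
# Digit order assumed: insert, update, select, delete.
def decode_dml(mask: str) -> dict:
    ops = ("insert", "update", "select", "delete")
    return {op: bit == "1" for op, bit in zip(ops, mask)}

print(decode_dml("0110"))  # schema DB01: update and select only
print(decode_dml("1110"))  # table TB_ORDER: everything except delete
```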
3.5. MyCat sharding
3.5.1. Vertical split
3.5.1.1. Scenario
The business system involves the following table structures. Because users and orders generate large amounts of data every day, and a single server's storage and processing capacity is limited, the database tables can be split. The original tables are as follows.
Now consider a vertical database split: the product-related tables go to one database server, the order tables to a second, and the user and province/city/region tables to a third. The final structure is as follows:
3.5.1.2. Preparation
Prepare three servers with the IP addresses shown in the figure:
Then create the database shopping on 192.168.200.210, 192.168.200.213, and 192.168.200.214.
3.5.1.3. Configuration
3.5.1.3.1. schema.xml
<schema name="SHOPPING" checkSQLschema="true" sqlMaxLimit="100">
<table name="tb_goods_base" dataNode="dn1" primaryKey="id" />
<table name="tb_goods_brand" dataNode="dn1" primaryKey="id" />
<table name="tb_goods_cat" dataNode="dn1" primaryKey="id" />
<table name="tb_goods_desc" dataNode="dn1" primaryKey="goods_id" />
<table name="tb_goods_item" dataNode="dn1" primaryKey="id" />
<table name="tb_order_item" dataNode="dn2" primaryKey="id" />
<table name="tb_order_master" dataNode="dn2" primaryKey="order_id" />
<table name="tb_order_pay_log" dataNode="dn2" primaryKey="out_trade_no" />
<table name="tb_user" dataNode="dn3" primaryKey="id" />
<table name="tb_user_address" dataNode="dn3" primaryKey="id" />
<table name="tb_areas_provinces" dataNode="dn3" primaryKey="id"/>
<table name="tb_areas_city" dataNode="dn3" primaryKey="id"/>
<table name="tb_areas_region" dataNode="dn3" primaryKey="id"/>
</schema>
<dataNode name="dn1" dataHost="dhost1" database="shopping" />
<dataNode name="dn2" dataHost="dhost2" database="shopping" />
<dataNode name="dn3" dataHost="dhost3" database="shopping" />
<dataHost name="dhost1" maxCon="1000" minCon="10" balance="0" writeType="0" dbType="mysql" dbDriver="jdbc" switchType="1" slaveThreshold="100">
<heartbeat>select user()</heartbeat>
<writeHost host="master" url="jdbc:mysql://192.168.200.210:3306?useSSL=false&amp;serverTimezone=Asia/Shanghai&amp;characterEncoding=utf8" user="root" password="1234" />
</dataHost>
<dataHost name="dhost2" maxCon="1000" minCon="10" balance="0" writeType="0" dbType="mysql" dbDriver="jdbc" switchType="1" slaveThreshold="100">
<heartbeat>select user()</heartbeat>
<writeHost host="master" url="jdbc:mysql://192.168.200.213:3306?useSSL=false&amp;serverTimezone=Asia/Shanghai&amp;characterEncoding=utf8" user="root" password="1234" />
</dataHost>
<dataHost name="dhost3" maxCon="1000" minCon="10" balance="0" writeType="0" dbType="mysql" dbDriver="jdbc" switchType="1" slaveThreshold="100">
<heartbeat>select user()</heartbeat>
<writeHost host="master" url="jdbc:mysql://192.168.200.214:3306?useSSL=false&amp;serverTimezone=Asia/Shanghai&amp;characterEncoding=utf8" user="root" password="1234" />
</dataHost>
3.5.1.3.2. server.xml
<user name="root" defaultAccount="true">
<property name="password">123456</property>
<property name="schemas">SHOPPING</property>
<!-- table-level DML privilege settings -->
<!--
<privileges check="true">
<schema name="DB01" dml="0110" >
<table name="TB_ORDER" dml="1110"></table>
</schema>
</privileges>
-->
</user>
<user name="user">
<property name="password">123456</property>
<property name="schemas">SHOPPING</property>
<property name="readOnly">true</property>
</user>
3.5.1.4. Testing
1). Upload the test SQL scripts to the /root directory of the server
2). Run the commands to import the test data
After restarting MyCat, use the source command in the mycat command line to import the table structures and the corresponding data, then check the data distribution.
source /root/shopping-table.sql
source /root/shopping-insert.sql
After importing the table structures and test data, check the table distribution across the database servers and verify that it matches the layout planned in the preparation step.
3). Query users' recipients and recipient addresses (including province, city, and district).
In the MyCat command line, the following multi-table join query executes and returns data normally.
select
ua.user_id,
ua.contact,
p.province,
c.city,
r.area ,
ua.address
from
tb_user_address ua ,
tb_areas_city c ,
tb_areas_provinces p ,
tb_areas_region r
where ua.province_id = p.provinceid
and ua.city_id = c.cityid
and ua.town_id = r.areaid ;
4). Query each order and its delivery address (including province, city, and district).
The SQL statement for this requirement is as follows:
SELECT
order_id ,
payment ,
receiver,
province ,
city ,
area
FROM
tb_order_master o,
tb_areas_provinces p ,
tb_areas_city c ,
tb_areas_region r
WHERE
o.receiver_province = p.provinceid
AND o.receiver_city = c.cityid
AND o.receiver_region = r.areaid ;
But now there is a problem: the order tables live on database server 192.168.200.213, while the province/city/district tables live on 192.168.200.214. Can this statement succeed when executed through MyCat?
Testing shows that the statement fails. When MyCat executes it, the SQL must be routed to a concrete database server, but no single server contains both the order tables and the region tables, so the statement fails with an error.
How do we solve this? The global tables introduced below handle it easily.
3.5.1.5. Global Tables
The province, city, and district tables tb_areas_provinces, tb_areas_city, and tb_areas_region are data-dictionary tables that many business modules may need, so they can be configured as global tables to simplify business operations.
Modify the logical-table configuration in schema.xml: for the three logical tables tb_areas_provinces, tb_areas_city, and tb_areas_region, add a type attribute set to global, marking each as a global table. The table is then created on every dataNode it references; with the configuration below, every node holds the table.
<table name="tb_areas_provinces" dataNode="dn1,dn2,dn3" primaryKey="id" type="global"/>
<table name="tb_areas_city" dataNode="dn1,dn2,dn3" primaryKey="id" type="global"/>
<table name="tb_areas_region" dataNode="dn1,dn2,dn3" primaryKey="id" type="global"/>
After configuring, restart MyCat.
1). Drop all the tables from each of the database servers
2). Import the tables and data with the source command
source /root/shopping-table.sql
source /root/shopping-insert.sql
3). Check the tables and data on each database server; all three nodes now contain the three global tables
4). Then run the earlier multi-table join again
SELECT
order_id ,
payment ,
receiver,
province ,
city ,
area
FROM
tb_order_master o,
tb_areas_provinces p ,
tb_areas_city c ,
tb_areas_region r
WHERE
o.receiver_province = p.provinceid
AND o.receiver_city = c.cityid
AND o.receiver_region = r.areaid ;
It now executes successfully.
5). When a global table is updated through MyCat, the data on every shard node changes accordingly; the global table stays consistent across all nodes at all times.
3.5.2. Horizontal Splitting
3.5.2.1. Scenario
The business system has a log table, and the business generates a large volume of log data every day. Since a single server's storage and processing capacity are limited, the table can be split.
3.5.2.2. Preparation
Prepare three servers, with the following structure:
Then create a database named angyan on each of the three database servers.
3.5.2.3. Configuration
3.5.2.3.1. schema.xml
<schema name="ANGYAN" checkSQLschema="true" sqlMaxLimit="100">
<table name="tb_log" dataNode="dn4,dn5,dn6" primaryKey="id" rule="mod-long" />
</schema>
<dataNode name="dn4" dataHost="dhost1" database="angyan" />
<dataNode name="dn5" dataHost="dhost2" database="angyan" />
<dataNode name="dn6" dataHost="dhost3" database="angyan" />
The tb_log table ultimately spans three nodes, dn4, dn5, and dn6, with the data stored in the angyan database on dhost1, dhost2, and dhost3 respectively.
3.5.2.3.2. server.xml
Configure the root user to access both the SHOPPING and the ANGYAN logical schema.
<user name="root" defaultAccount="true">
<property name="password">123456</property>
<property name="schemas">SHOPPING,ANGYAN</property>
<!-- table-level DML privilege settings -->
<!--
<privileges check="true">
<schema name="DB01" dml="0110" >
<table name="TB_ORDER" dml="1110"></table>
</schema>
</privileges>
-->
</user>
3.5.2.4. Testing
After configuring, restart MyCat; then, in the mycat command line, run the following SQL to create the table, insert data, and check the data distribution.
CREATE TABLE tb_log (
id bigint(20) NOT NULL COMMENT 'ID',
model_name varchar(200) DEFAULT NULL COMMENT '模块名',
model_value varchar(200) DEFAULT NULL COMMENT '模块值',
return_value varchar(200) DEFAULT NULL COMMENT '返回值',
return_class varchar(200) DEFAULT NULL COMMENT '返回值类型',
operate_user varchar(20) DEFAULT NULL COMMENT '操作用户',
operate_time varchar(20) DEFAULT NULL COMMENT '操作时间',
param_and_value varchar(500) DEFAULT NULL COMMENT '请求参数名及参数值',
operate_class varchar(200) DEFAULT NULL COMMENT '操作类',
operate_method varchar(200) DEFAULT NULL COMMENT '操作方法',
cost_time bigint(20) DEFAULT NULL COMMENT '执行方法耗时, 单位 ms',
source int(1) DEFAULT NULL COMMENT '来源 : 1 PC , 2 Android , 3 IOS',
PRIMARY KEY (id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
INSERT INTO tb_log (id, model_name, model_value, return_value, return_class,operate_user, operate_time, param_and_value, operate_class, operate_method,cost_time,source)VALUES('1','user','insert','success','java.lang.String','10001','2022-01-06 18:12:28','{\"age\":\"20\",\"name\":\"Tom\",\"gender\":\"1\"}','cn.angya.controller.UserController','insert','10',1);
INSERT INTO tb_log (id, model_name, model_value, return_value, return_class,operate_user, operate_time, param_and_value, operate_class, operate_method,cost_time,source)VALUES('2','user','insert','success','java.lang.String','10001','2022-01-06 18:12:27','{\"age\":\"20\",\"name\":\"Tom\",\"gender\":\"1\"}','cn.angya.controller.UserController','insert','23',1);
INSERT INTO tb_log (id, model_name, model_value, return_value, return_class,operate_user, operate_time, param_and_value, operate_class, operate_method,cost_time,source)VALUES('3','user','update','success','java.lang.String','10001','2022-01-06 18:16:45','{\"age\":\"20\",\"name\":\"Tom\",\"gender\":\"1\"}','cn.angya.controller.UserController','update','34',1);
INSERT INTO tb_log (id, model_name, model_value, return_value, return_class,operate_user, operate_time, param_and_value, operate_class, operate_method,cost_time,source)VALUES('4','user','update','success','java.lang.String','10001','2022-01-06 18:16:45','{\"age\":\"20\",\"name\":\"Tom\",\"gender\":\"1\"}','cn.angya.controller.UserController','update','13',2);
INSERT INTO tb_log (id, model_name, model_value, return_value, return_class,operate_user, operate_time, param_and_value, operate_class, operate_method,cost_time,source)VALUES('5','user','insert','success','java.lang.String','10001','2022-01-06 18:30:31','{\"age\":\"200\",\"name\":\"TomCat\",\"gender\":\"0\"}','cn.angya.controller.UserController','insert','29',3);
INSERT INTO tb_log (id, model_name, model_value, return_value, return_class,operate_user, operate_time, param_and_value, operate_class, operate_method,cost_time,source)VALUES('6','user','find','success','java.lang.String','10001','2022-01-06 18:30:31','{\"age\":\"200\",\"name\":\"TomCat\",\"gender\":\"0\"}','cn.angya.controller.UserController','find','29',2);
3.5.3. Sharding Rules
3.5.3.1. Range Sharding
3.5.3.1.1. Introduction
Decides which shard a row belongs to based on the configured ranges of the specified column and their mapping to the data nodes.
3.5.3.1.2. Configuration
Logical table in schema.xml:
<table name="TB_ORDER" dataNode="dn1,dn2,dn3" rule="auto-sharding-long" />
Data nodes in schema.xml:
<dataNode name="dn1" dataHost="dhost1" database="db01" />
<dataNode name="dn2" dataHost="dhost2" database="db01" />
<dataNode name="dn3" dataHost="dhost3" database="db01" />
Sharding rule in rule.xml:
<tableRule name="auto-sharding-long">
<rule>
<columns>id</columns>
<algorithm>rang-long</algorithm>
</rule>
</tableRule>
<function name="rang-long" class="io.mycat.route.function.AutoPartitionByLong">
<property name="mapFile">autopartition-long.txt</property>
<property name="defaultNode">0</property>
</function>
Sharding rule property descriptions:
Property | Description |
---|---|
columns | the table column to shard on |
algorithm | the name linking the rule to its sharding function |
class | the class implementing the sharding algorithm |
mapFile | the external mapping configuration file |
type | default 0; 0 means Integer, 1 means String |
defaultNode | the default node; values that match no configured range are routed to it; if no default is set, unmatched values raise an error |
The sharding rule in rule.xml references a mapping file, autopartition-long.txt, with the following contents:
# range start-end ,data node index
# K=1000,M=10000.
0-500M=0
500M-1000M=1
1000M-1500M=2
Meaning:
- values from 0 to 5,000,000 are stored on data node 0 (node indexes start at 0);
- values from 5,000,000 to 10,000,000 are stored on data node 1;
- values from 10,000,000 to 15,000,000 are stored on data node 2.
This rule applies mainly to numeric columns; it is the rule used in the MyCat getting-started program.
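The range lookup can be sketched in a few lines of Python. This is a simplified illustration of what io.mycat.route.function.AutoPartitionByLong does, not MyCat's actual code; the ranges and default node mirror the autopartition-long.txt and rule.xml configuration above, with the boundary handling assumed inclusive.

```python
# Simplified sketch of range sharding; each (start, end, node) triple mirrors
# one line of autopartition-long.txt (in that file's notation, 500M = 5,000,000).
RANGES = [
    (0, 5_000_000, 0),
    (5_000_001, 10_000_000, 1),
    (10_000_001, 15_000_000, 2),
]
DEFAULT_NODE = 0  # mirrors the defaultNode property in rule.xml

def route_by_range(value: int) -> int:
    """Return the data-node index for a numeric sharding value."""
    for start, end, node in RANGES:
        if start <= value <= end:
            return node
    return DEFAULT_NODE  # out-of-range values go to the default node
```

A value of 1 routes to node 0, 5,000,001 to node 1, and anything above 15,000,000 falls through to the default node.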
3.5.3.2. Modulo Sharding
3.5.3.2.1. Introduction
Takes the value of the specified column modulo the number of data nodes; the result decides which shard the row belongs to.
3.5.3.2.2. Configuration
Logical table in schema.xml:
<table name="tb_log" dataNode="dn4,dn5,dn6" primaryKey="id" rule="mod-long" />
Data nodes in schema.xml:
<dataNode name="dn4" dataHost="dhost1" database="angyan" />
<dataNode name="dn5" dataHost="dhost2" database="angyan" />
<dataNode name="dn6" dataHost="dhost3" database="angyan" />
Sharding rule in rule.xml:
<tableRule name="mod-long">
<rule>
<columns>id</columns>
<algorithm>mod-long</algorithm>
</rule>
</tableRule>
<function name="mod-long" class="io.mycat.route.function.PartitionByMod">
<property name="count">3</property>
</function>
Sharding rule property descriptions:
Property | Description |
---|---|
columns | the table column to shard on |
algorithm | the name linking the rule to its sharding function |
class | the class implementing the sharding algorithm |
count | the number of data nodes |
This rule applies mainly to numeric columns; the horizontal-split demonstration above used modulo sharding.
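The modulo routing amounts to a one-line computation. As a sketch of what io.mycat.route.function.PartitionByMod computes with count=3 (assumed behavior, not MyCat's actual code):

```python
def route_by_mod(value: int, count: int = 3) -> int:
    """Return the data-node index: the sharding value modulo the node count.

    With count=3, ids 1..6 land on nodes 1, 2, 0, 1, 2, 0, which matches
    the round-robin distribution seen in the tb_log test data above."""
    return value % count
```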
3.5.3.2.3. Testing
After configuring, restart MyCat; then, in the mycat command line, run the SQL shown in the horizontal-splitting section to create the table, insert data, and check the data distribution.
3.5.3.3. Consistent-Hash Sharding
3.5.3.3.1. Introduction
With consistent hashing, the same hash input is always mapped to the same partition, and adding partition nodes leaves the partition of most existing data unchanged, which effectively addresses the scale-out problem of distributed data.
3.5.3.3.2. Configuration
Logical table in schema.xml:
<!-- consistent hash -->
<table name="tb_order" dataNode="dn4,dn5,dn6" rule="sharding-by-murmur" />
Data nodes in schema.xml:
<dataNode name="dn4" dataHost="dhost1" database="angyan" />
<dataNode name="dn5" dataHost="dhost2" database="angyan" />
<dataNode name="dn6" dataHost="dhost3" database="angyan" />
Sharding rule in rule.xml:
<tableRule name="sharding-by-murmur">
<rule>
<columns>id</columns>
<algorithm>murmur</algorithm>
</rule>
</tableRule>
<function name="murmur" class="io.mycat.route.function.PartitionByMurmurHash">
<property name="seed">0</property><!-- default 0 -->
<property name="count">3</property>
<property name="virtualBucketTimes">160</property>
</function>
Sharding rule property descriptions:
Property | Description |
---|---|
columns | the table column to shard on |
algorithm | the name linking the rule to its sharding function |
class | the class implementing the sharding algorithm |
seed | seed for creating the murmur_hash object; default 0 |
count | the number of database nodes to shard across; required, otherwise sharding is impossible |
virtualBucketTimes | the number of virtual nodes each physical node is mapped to; default 160, i.e. there are 160 times as many virtual nodes as physical nodes; virtualBucketTimes*count is the total virtual-node count |
weightMapFile | node weights; nodes without an explicit weight default to 1. Written in properties-file format, with the node index (an integer from 0 to count-1) as the key and the node weight as the value; all weights must be positive integers, otherwise 1 is used |
bucketMapPath | used during testing to inspect how physical nodes map to virtual nodes; if set, each virtual node's murmur hash value and its physical-node mapping are written line by line to this file; there is no default, and if unset nothing is written |
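The virtual-node mechanism above can be sketched with a minimal hash ring. This is an illustration of the general consistent-hashing technique, not MyCat's PartitionByMurmurHash implementation: md5 stands in for the murmur hash to keep the sketch stdlib-only, and the node-naming scheme is an assumption.

```python
import bisect
import hashlib

def _hash(key: str) -> int:
    # stand-in hash function; MyCat uses murmur hash, md5 keeps this stdlib-only
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    """Minimal hash ring: `count` physical nodes, each mapped to
    `virtual_buckets` virtual nodes (the virtualBucketTimes idea above)."""

    def __init__(self, count: int = 3, virtual_buckets: int = 160):
        # place count * virtual_buckets points on the ring, each tagged
        # with the physical node it belongs to
        self._ring = sorted(
            (_hash(f"node{n}#{v}"), n)
            for n in range(count)
            for v in range(virtual_buckets)
        )
        self._keys = [h for h, _ in self._ring]

    def route(self, sharding_value: str) -> int:
        # walk clockwise to the first virtual node at or after the value's hash
        i = bisect.bisect(self._keys, _hash(sharding_value)) % len(self._ring)
        return self._ring[i][1]
```

The same id always routes to the same node, and because each physical node owns many scattered ring segments, adding a node only relocates the keys falling into the segments it takes over.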
3.5.3.3.3. Testing
After configuring, restart MyCat; then, in the mycat command line, run the following SQL to create the table, insert data, and check the data distribution.
create table tb_order(
id varchar(100) not null primary key,
money int null,
content varchar(200) null
);
INSERT INTO tb_order (id, money, content) VALUES ('b92fdaaf-6fc4-11ec-b831-482ae33c4a2d', 10, 'b92fdaf8-6fc4-11ec-b831-482ae33c4a2d');
INSERT INTO tb_order (id, money, content) VALUES ('b93482b6-6fc4-11ec-b831-482ae33c4a2d', 20, 'b93482d5-6fc4-11ec-b831-482ae33c4a2d');
INSERT INTO tb_order (id, money, content) VALUES ('b937e246-6fc4-11ec-b831-482ae33c4a2d', 50, 'b937e25d-6fc4-11ec-b831-482ae33c4a2d');
INSERT INTO tb_order (id, money, content) VALUES ('b93be2dd-6fc4-11ec-b831-482ae33c4a2d', 100, 'b93be2f9-6fc4-11ec-b831-482ae33c4a2d');
INSERT INTO tb_order (id, money, content) VALUES ('b93f2d68-6fc4-11ec-b831-482ae33c4a2d', 130, 'b93f2d7d-6fc4-11ec-b831-482ae33c4a2d');
INSERT INTO tb_order (id, money, content) VALUES ('b9451b98-6fc4-11ec-b831-482ae33c4a2d', 30, 'b9451bcc-6fc4-11ec-b831-482ae33c4a2d');
INSERT INTO tb_order (id, money, content) VALUES ('b9488ec1-6fc4-11ec-b831-482ae33c4a2d', 560, 'b9488edb-6fc4-11ec-b831-482ae33c4a2d');
INSERT INTO tb_order (id, money, content) VALUES ('b94be6e6-6fc4-11ec-b831-482ae33c4a2d', 10, 'b94be6ff-6fc4-11ec-b831-482ae33c4a2d');
INSERT INTO tb_order (id, money, content) VALUES ('b94ee10d-6fc4-11ec-b831-482ae33c4a2d', 123, 'b94ee12c-6fc4-11ec-b831-482ae33c4a2d');
INSERT INTO tb_order (id, money, content) VALUES ('b952492a-6fc4-11ec-b831-482ae33c4a2d', 145, 'b9524945-6fc4-11ec-b831-482ae33c4a2d');
INSERT INTO tb_order (id, money, content) VALUES ('b95553ac-6fc4-11ec-b831-482ae33c4a2d', 543, 'b95553c8-6fc4-11ec-b831-482ae33c4a2d');
INSERT INTO tb_order (id, money, content) VALUES ('b9581cdd-6fc4-11ec-b831-482ae33c4a2d', 17, 'b9581cfa-6fc4-11ec-b831-482ae33c4a2d');
INSERT INTO tb_order (id, money, content) VALUES ('b95afc0f-6fc4-11ec-b831-482ae33c4a2d', 18, 'b95afc2a-6fc4-11ec-b831-482ae33c4a2d');
INSERT INTO tb_order (id, money, content) VALUES ('b95daa99-6fc4-11ec-b831-482ae33c4a2d', 134, 'b95daab2-6fc4-11ec-b831-482ae33c4a2d');
INSERT INTO tb_order (id, money, content) VALUES ('b9667e3c-6fc4-11ec-b831-482ae33c4a2d', 156, 'b9667e60-6fc4-11ec-b831-482ae33c4a2d');
INSERT INTO tb_order (id, money, content) VALUES ('b96ab489-6fc4-11ec-b831-482ae33c4a2d', 175, 'b96ab4a5-6fc4-11ec-b831-482ae33c4a2d');
INSERT INTO tb_order (id, money, content) VALUES ('b96e2942-6fc4-11ec-b831-482ae33c4a2d', 180, 'b96e295b-6fc4-11ec-b831-482ae33c4a2d');
INSERT INTO tb_order (id, money, content) VALUES ('b97092ec-6fc4-11ec-b831-482ae33c4a2d', 123, 'b9709306-6fc4-11ec-b831-482ae33c4a2d');
INSERT INTO tb_order (id, money, content) VALUES ('b973727a-6fc4-11ec-b831-482ae33c4a2d', 230, 'b9737293-6fc4-11ec-b831-482ae33c4a2d');
INSERT INTO tb_order (id, money, content) VALUES ('b978840f-6fc4-11ec-b831-482ae33c4a2d', 560, 'b978843c-6fc4-11ec-b831-482ae33c4a2d');
3.5.3.4. Enumeration Sharding
3.5.3.4.1. Introduction
The possible enumeration values are listed in a configuration file that maps data to different data nodes. This rule suits data split by province, gender, status, and similar fields.
3.5.3.4.2. Configuration
Logical table in schema.xml:
<!-- enumeration -->
<table name="tb_user" dataNode="dn4,dn5,dn6" rule="sharding-by-intfile-enumstatus"/>
Data nodes in schema.xml:
<dataNode name="dn4" dataHost="dhost1" database="angyan" />
<dataNode name="dn5" dataHost="dhost2" database="angyan" />
<dataNode name="dn6" dataHost="dhost3" database="angyan" />
Sharding rule in rule.xml:
<tableRule name="sharding-by-intfile">
<rule>
<columns>sharding_id</columns>
<algorithm>hash-int</algorithm>
</rule>
</tableRule>
<!-- tableRule added by us -->
<tableRule name="sharding-by-intfile-enumstatus">
<rule>
<columns>status</columns>
<algorithm>hash-int</algorithm>
</rule>
</tableRule>
<function name="hash-int" class="io.mycat.route.function.PartitionByFileMap">
<property name="defaultNode">2</property>
<property name="mapFile">partition-hash-int.txt</property>
</function>
The contents of partition-hash-int.txt:
1=0
2=1
3=2
Sharding rule property descriptions:
Property | Description |
---|---|
columns | the table column to shard on |
algorithm | the name linking the rule to its sharding function |
class | the class implementing the sharding algorithm |
mapFile | the external mapping configuration file |
type | default 0; 0 means Integer, 1 means String |
defaultNode | the default node; a value below 0 means no default node, 0 or above sets one. Purpose of the default node: during enumeration sharding, unrecognized enumeration values are routed to it; if no default is set, unrecognized values raise an error |
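Enumeration sharding is essentially a dictionary lookup with a fallback. A sketch of what io.mycat.route.function.PartitionByFileMap does with the partition-hash-int.txt mapping and defaultNode=2 configured above (assumed behavior, not MyCat's actual code):

```python
# Mirrors partition-hash-int.txt: status value -> data-node index
ENUM_MAP = {1: 0, 2: 1, 3: 2}
DEFAULT_NODE = 2  # the defaultNode property in rule.xml above

def route_by_enum(status: int) -> int:
    """Look the enumeration value up in the map; unrecognized values
    fall back to the default node."""
    return ENUM_MAP.get(status, DEFAULT_NODE)
```

With the tb_user test data below, rows with status 1 land on node 0, status 2 on node 1, and status 3 on node 2.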
3.5.3.4.3. Testing
After configuring, restart MyCat; then, in the mycat command line, run the following SQL to create the table, insert data, and check the data distribution.
CREATE TABLE tb_user (
id bigint(20) NOT NULL COMMENT 'ID',
username varchar(200) DEFAULT NULL COMMENT '姓名',
status int(2) DEFAULT '1' COMMENT '1: 未启用, 2: 已启用, 3: 已关闭',
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
insert into tb_user (id,username ,status) values(1,'Tom',1);
insert into tb_user (id,username ,status) values(2,'Cat',2);
insert into tb_user (id,username ,status) values(3,'Rose',3);
insert into tb_user (id,username ,status) values(4,'Coco',2);
insert into tb_user (id,username ,status) values(5,'Lily',1);
insert into tb_user (id,username ,status) values(6,'Tom',1);
insert into tb_user (id,username ,status) values(7,'Cat',2);
insert into tb_user (id,username ,status) values(8,'Rose',3);
insert into tb_user (id,username ,status) values(9,'Coco',2);
insert into tb_user (id,username ,status) values(10,'Lily',1);
3.5.3.5. Application-Specified Algorithm
3.5.3.5.1. Introduction
At run time, the application itself decides which shard to route to: the shard number is computed directly from a substring of the value (which must be numeric).
3.5.3.5.2. Configuration
Logical table in schema.xml:
<!-- application-specified algorithm -->
<table name="tb_app" dataNode="dn4,dn5,dn6" rule="sharding-by-substring" />
Data nodes in schema.xml:
<dataNode name="dn4" dataHost="dhost1" database="angyan" />
<dataNode name="dn5" dataHost="dhost2" database="angyan" />
<dataNode name="dn6" dataHost="dhost3" database="angyan" />
Sharding rule in rule.xml:
<tableRule name="sharding-by-substring">
<rule>
<columns>id</columns>
<algorithm>sharding-by-substring</algorithm>
</rule>
</tableRule>
<function name="sharding-by-substring" class="io.mycat.route.function.PartitionDirectBySubString">
<property name="startIndex">0</property> <!-- zero-based -->
<property name="size">2</property>
<property name="partitionCount">3</property>
<property name="defaultPartition">0</property>
</function>
Sharding rule property descriptions:
Property | Description |
---|---|
columns | the table column to shard on |
algorithm | the name linking the rule to its sharding function |
class | the class implementing the sharding algorithm |
startIndex | start index of the substring |
size | substring length |
partitionCount | number of partitions (shards) |
defaultPartition | the default partition, used when the shard number taken from the substring is not within partitionCount |
Example:
For id=05-100000002 with this configuration, a substring of size=2 digits is taken starting at startIndex=0, giving 05; 05 is the target partition number. If no matching shard exists, the row is assigned to defaultPartition.
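A sketch of this rule in Python (illustrating io.mycat.route.function.PartitionDirectBySubString under the startIndex=0, size=2, partitionCount=3, defaultPartition=0 configuration above; the exact fallback behavior is an assumption):

```python
def route_by_substring(value: str, start: int = 0, size: int = 2,
                       partition_count: int = 3,
                       default_partition: int = 0) -> int:
    """Read the shard number directly from a numeric substring of the value;
    out-of-range or non-numeric substrings fall back to the default partition."""
    part = value[start:start + size]
    if part.isdigit() and int(part) < partition_count:
        return int(part)
    return default_partition
```

So the test ids below beginning with 00, 01, and 02 route to partitions 0, 1, and 2, while an id beginning with 05 falls outside partitionCount=3 and goes to the default partition.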
3.5.3.5.3. Testing
After configuring, restart MyCat; then, in the mycat command line, run the following SQL to create the table, insert data, and check the data distribution.
CREATE TABLE tb_app (
id varchar(10) NOT NULL COMMENT 'ID',
name varchar(200) DEFAULT NULL COMMENT '名称',
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
insert into tb_app (id,name) values('0000001','Testx00001');
insert into tb_app (id,name) values('0100001','Test100001');
insert into tb_app (id,name) values('0100002','Test200001');
insert into tb_app (id,name) values('0200001','Test300001');
insert into tb_app (id,name) values('0200002','TesT400001');
3.5.3.6. Fixed-Shard Hash Algorithm
3.5.3.6.1. Introduction
This algorithm resembles decimal modulo arithmetic but operates on binary: for example, it takes the low 10 bits of id by ANDing them with 1111111111. The bitwise AND ranges from 0000000000 to 1111111111, i.e. 0 to 1023 in decimal.
Characteristics:
- With plain modulo, consecutive values are spread across different shards; this algorithm can place consecutive values on the same shard, reducing the difficulty of transaction processing.
- The distribution can be uniform or non-uniform.
- The sharding column must be numeric.
3.5.3.6.2. Configuration
Logical table in schema.xml:
<!-- fixed-shard hash algorithm -->
<table name="tb_longhash" dataNode="dn4,dn5,dn6" rule="sharding-by-long-hash" />
Data nodes in schema.xml:
<dataNode name="dn4" dataHost="dhost1" database="angyan" />
<dataNode name="dn5" dataHost="dhost2" database="angyan" />
<dataNode name="dn6" dataHost="dhost3" database="angyan" />
Sharding rule in rule.xml:
<tableRule name="sharding-by-long-hash">
<rule>
<columns>id</columns>
<algorithm>sharding-by-long-hash</algorithm>
</rule>
</tableRule>
<!-- the total shard length is 1024; the count and length arrays must be the same size -->
<function name="sharding-by-long-hash" class="io.mycat.route.function.PartitionByLong">
<property name="partitionCount">2,1</property>
<property name="partitionLength">256,512</property>
</function>
Sharding rule property descriptions:
Property | Description |
---|---|
columns | the table column to shard on |
algorithm | the name linking the rule to its sharding function |
class | the class implementing the sharding algorithm |
partitionCount | list of shard counts |
partitionLength | list of shard-range lengths |
Constraints:
- 1). shard length: at most 2^10 = 1024 by default;
- 2). the count and length arrays must be the same size;
- the configuration above yields three partitions: 0-255, 256-511, 512-1023.
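The slot layout can be sketched in Python (a simplified stand-in for io.mycat.route.function.PartitionByLong, using the partitionCount=2,1 and partitionLength=256,512 configuration above):

```python
def build_slot_table(counts=(2, 1), lengths=(256, 512)):
    """Expand partitionCount/partitionLength into a 1024-slot lookup table:
    (2, 1) x (256, 512) gives two 256-slot shards, then one 512-slot shard."""
    table, shard = [], 0
    for count, length in zip(counts, lengths):
        for _ in range(count):
            table.extend([shard] * length)
            shard += 1
    assert len(table) == 1024, "partition lengths must sum to 1024"
    return table

SLOTS = build_slot_table()

def route_by_long_hash(value: int) -> int:
    """AND the id with 1023 (keep the low 10 bits) and look up the slot:
    slots 0-255 -> shard 0, 256-511 -> shard 1, 512-1023 -> shard 2."""
    return SLOTS[value & 1023]
```

Note how ids 256 through 511 all land on the same shard, which is the "consecutive values may share a shard" property described above.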
3.5.3.6.3. Testing
After configuring, restart MyCat; then, in the mycat command line, run the following SQL to create the table, insert data, and check the data distribution.
CREATE TABLE tb_longhash (
id int(11) NOT NULL COMMENT 'ID',
name varchar(200) DEFAULT NULL COMMENT '名称',
firstChar char(1) COMMENT '首字母',
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
insert into tb_longhash (id,name,firstChar) values(1,'七匹狼','Q');
insert into tb_longhash (id,name,firstChar) values(2,'八匹狼','B');
insert into tb_longhash (id,name,firstChar) values(3,'九匹狼','J');
insert into tb_longhash (id,name,firstChar) values(4,'十匹狼','S');
insert into tb_longhash (id,name,firstChar) values(5,'六匹狼','L');
insert into tb_longhash (id,name,firstChar) values(6,'五匹狼','W');
insert into tb_longhash (id,name,firstChar) values(7,'四匹狼','S');
insert into tb_longhash (id,name,firstChar) values(8,'三匹狼','S');
insert into tb_longhash (id,name,firstChar) values(9,'两匹狼','L');
3.5.3.7. String Hash Sharding
3.5.3.7.1. Introduction
Takes the substring at a specified position of the string, hashes it, and computes the shard.
3.5.3.7.2. Configuration
Logical table in schema.xml:
<!-- string hash sharding -->
<table name="tb_strhash" dataNode="dn4,dn5" rule="sharding-by-stringhash" />
Data nodes in schema.xml:
<dataNode name="dn4" dataHost="dhost1" database="angyan" />
<dataNode name="dn5" dataHost="dhost2" database="angyan" />
Sharding rule in rule.xml:
<tableRule name="sharding-by-stringhash">
<rule>
<columns>name</columns>
<algorithm>sharding-by-stringhash</algorithm>
</rule>
</tableRule>
<function name="sharding-by-stringhash" class="io.mycat.route.function.PartitionByString">
<property name="partitionLength">512</property>
<property name="partitionCount">2</property>
<property name="hashSlice">0:2</property>
</function>
Sharding rule property descriptions:
Property | Description |
---|---|
columns | the table column to shard on |
algorithm | the name linking the rule to its sharding function |
class | the class implementing the sharding algorithm |
partitionLength | the hash modulo base; length*count must equal 1024 (for performance reasons) |
partitionCount | the number of partitions |
hashSlice | the slice to hash, i.e. the hash is computed over a substring; 0 means str.length(), -1 means str.length()-1, and a value greater than 0 means that index itself; think of it as substring(start,end), where start=0 simply means index 0 |
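The general shape of the rule (slice, hash, map into 1024 slots, divide into partitions) can be sketched in Python. This is an illustration only: the Java-style 31-multiplier hash below is an approximation and does not reproduce the exact values of io.mycat.route.function.PartitionByString, so actual row placement may differ.

```python
def string_hash(s: str) -> int:
    """Java-style polynomial string hash (a stand-in for MyCat's own hash)."""
    h = 0
    for ch in s:
        h = (h * 31 + ord(ch)) & 0xFFFFFFFFFFFFFFFF
    return h

def route_by_string_hash(value: str, slice_end: int = 2,
                         partition_length: int = 512,
                         partition_count: int = 2) -> int:
    """hashSlice 0:2 -> hash the first two characters, map the hash into
    1024 slots, then divide by partitionLength to get the partition index."""
    assert partition_length * partition_count == 1024
    slot = string_hash(value[0:slice_end]) % 1024
    return slot // partition_length
```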
3.5.3.7.3. Testing
After configuring, restart MyCat; then, in the mycat command line, run the following SQL to create the table, insert data, and check the data distribution.
create table tb_strhash(
name varchar(20) primary key,
content varchar(100)
)engine=InnoDB DEFAULT CHARSET=utf8mb4;
INSERT INTO tb_strhash (name,content) VALUES('T1001', UUID());
INSERT INTO tb_strhash (name,content) VALUES('ROSE', UUID());
INSERT INTO tb_strhash (name,content) VALUES('JERRY', UUID());
INSERT INTO tb_strhash (name,content) VALUES('CRISTINA', UUID());
INSERT INTO tb_strhash (name,content) VALUES('TOMCAT', UUID());
3.5.3.8. Sharding by Day
3.5.3.8.1. Introduction
Shards rows by date according to the configured time span.
3.5.3.8.2. Configuration
Logical table in schema.xml:
<!-- sharding by day -->
<table name="tb_datepart" dataNode="dn4,dn5,dn6" rule="sharding-by-date" />
Data nodes in schema.xml:
<dataNode name="dn4" dataHost="dhost1" database="angyan" />
<dataNode name="dn5" dataHost="dhost2" database="angyan" />
<dataNode name="dn6" dataHost="dhost3" database="angyan" />
Sharding rule in rule.xml:
<tableRule name="sharding-by-date">
<rule>
<columns>create_time</columns>
<algorithm>sharding-by-date</algorithm>
</rule>
</tableRule>
<function name="sharding-by-date" class="io.mycat.route.function.PartitionByDate">
<property name="dateFormat">yyyy-MM-dd</property>
<property name="sBeginDate">2022-01-01</property>
<property name="sEndDate">2022-01-30</property>
<property name="sPartionDay">10</property>
</function>
<!--
Starting from the begin date, every 10 days form one shard; once the end date is reached, insertion wraps back to the first shard.
The number of dataNodes configured for the table must match the number of shards; for example, 2022-01-01 to 2022-12-31 with one shard per 10 days requires 37 shards.
-->
Sharding rule property descriptions:
Property | Description |
---|---|
columns | the table column to shard on |
algorithm | the name linking the rule to its sharding function |
class | the class implementing the sharding algorithm |
dateFormat | the date format |
sBeginDate | the start date |
sEndDate | the end date; if configured, once data reaches the shard for this date, insertion wraps back to the first shard |
sPartionDay | the partition span in days; default 10, i.e. every 10 days from the start date form one partition |
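The shard arithmetic can be sketched in Python (a simplified stand-in for io.mycat.route.function.PartitionByDate under the sBeginDate=2022-01-01, sEndDate=2022-01-30, sPartionDay=10 configuration above; the wrap-around formula is an assumption based on the behavior described in the comment):

```python
from datetime import date

BEGIN = date(2022, 1, 1)   # sBeginDate
END = date(2022, 1, 30)    # sEndDate
SPAN = 10                  # sPartionDay

def route_by_date(create_time: date) -> int:
    """Every SPAN days from BEGIN is one shard; past END the shard
    index wraps back to 0."""
    total_shards = ((END - BEGIN).days // SPAN) + 1  # 3 shards here
    return ((create_time - BEGIN).days // SPAN) % total_shards
```

This matches the test data below: rows dated 2022-01-01 to 2022-01-10 land on shard 0, 2022-01-11 to 2022-01-20 on shard 1, 2022-01-21 to 2022-01-30 on shard 2, and 2022-01-31 wraps to shard 0.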
3.5.3.8.3. Testing
After configuring, restart MyCat; then, in the mycat command line, run the following SQL to create the table, insert data, and check the data distribution.
create table tb_datepart(
id bigint not null comment 'ID' primary key,
name varchar(100) null comment '姓名',
create_time date null
);
insert into tb_datepart(id,name ,create_time) values(1,'Tom','2022-01-01');
insert into tb_datepart(id,name ,create_time) values(2,'Cat','2022-01-10');
insert into tb_datepart(id,name ,create_time) values(3,'Rose','2022-01-11');
insert into tb_datepart(id,name ,create_time) values(4,'Coco','2022-01-20');
insert into tb_datepart(id,name ,create_time) values(5,'Rose2','2022-01-21');
insert into tb_datepart(id,name ,create_time) values(6,'Coco2','2022-01-30');
insert into tb_datepart(id,name ,create_time) values(7,'Coco3','2022-01-31');
3.5.3.9. Sharding by Natural Month
3.5.3.9.1. Introduction
Used to shard by month, with each calendar month as one shard.
3.5.3.9.2. Configuration
Logical table in schema.xml:
<!-- sharding by natural month -->
<table name="tb_monthpart" dataNode="dn4,dn5,dn6" rule="sharding-by-month" />
Data nodes in schema.xml:
<dataNode name="dn4" dataHost="dhost1" database="angyan" />
<dataNode name="dn5" dataHost="dhost2" database="angyan" />
<dataNode name="dn6" dataHost="dhost3" database="angyan" />
Sharding rule in rule.xml:
<tableRule name="sharding-by-month">
<rule>
<columns>create_time</columns>
<algorithm>partbymonth</algorithm>
</rule>
</tableRule>
<function name="partbymonth" class="io.mycat.route.function.PartitionByMonth">
<property name="dateFormat">yyyy-MM-dd</property>
<property name="sBeginDate">2022-01-01</property>
<property name="sEndDate">2022-03-31</property>
</function>
<!--
Starting from the begin date, each month forms one shard; once the end date is reached, insertion wraps back to the first shard.
The number of dataNodes configured for the table must match the number of shards; for example, 2022-01-01 to 2022-12-31 requires 12 shards.
-->
Sharding rule property descriptions:
Property | Description |
---|---|
columns | the table column to shard on |
algorithm | the name linking the rule to its sharding function |
class | the class implementing the sharding algorithm |
dateFormat | the date format |
sBeginDate | the start date |
sEndDate | the end date; if configured, once data reaches the shard for this date, insertion wraps back to the first shard |
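A sketch of the month arithmetic in Python (a simplified stand-in for io.mycat.route.function.PartitionByMonth under the sBeginDate=2022-01-01, sEndDate=2022-03-31 configuration above; the wrap-around formula is an assumption based on the behavior described in the comment):

```python
from datetime import date

BEGIN = date(2022, 1, 1)   # sBeginDate
END = date(2022, 3, 31)    # sEndDate

def _months_since(d: date, origin: date) -> int:
    return (d.year - origin.year) * 12 + (d.month - origin.month)

def route_by_month(create_time: date) -> int:
    """One shard per calendar month; past sEndDate the shard index
    wraps back to the first shard."""
    total_shards = _months_since(END, BEGIN) + 1  # 3 shards here
    return _months_since(create_time, BEGIN) % total_shards
```

This matches the test data below: January rows land on shard 0, February on shard 1, March on shard 2, and April wraps back to shard 0.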
3.5.3.9.3. Testing
After configuring, restart MyCat; then, in the mycat command line, run the following SQL to create the table, insert data, and check the data distribution.
create table tb_monthpart(
id bigint not null comment 'ID' primary key,
name varchar(100) null comment '姓名',
create_time date null
);
insert into tb_monthpart(id,name ,create_time) values(1,'Tom','2022-01-01');
insert into tb_monthpart(id,name ,create_time) values(2,'Cat','2022-01-10');
insert into tb_monthpart(id,name ,create_time) values(3,'Rose','2022-01-31');
insert into tb_monthpart(id,name ,create_time) values(4,'Coco','2022-02-20');
insert into tb_monthpart(id,name ,create_time) values(5,'Rose2','2022-02-25');
insert into tb_monthpart(id,name ,create_time) values(6,'Coco2','2022-03-10');
insert into tb_monthpart(id,name ,create_time) values(7,'Coco3','2022-03-31');
insert into tb_monthpart(id,name ,create_time) values(8,'Coco4','2022-04-10');
insert into tb_monthpart(id,name ,create_time) values(9,'Coco5','2022-04-30');
3.6. MyCat Administration and Monitoring
3.6.1. How MyCat Works
When MyCat executes a SQL statement, it performs SQL parsing, shard analysis, routing analysis, read/write-split analysis, and so on, ultimately deciding which node or nodes the statement is routed to. After the databases execute the statement, any result is returned to MyCat, which then performs result merging, aggregation, sorting, pagination, and other processing before returning the final result to the client.
Alongside MyCat itself, the MyCat project also provides an official management and monitoring platform, MyCat-Web (MyCat-eye). Mycat-web is a visual operations, management, and monitoring platform for Mycat that fills the gap in Mycat's monitoring, offloading statistics collection and configuration management from Mycat. It uses ZooKeeper as a configuration center and can manage multiple nodes. Mycat-web mainly manages and monitors Mycat's traffic, connections, active threads, memory, and so on; it includes IP whitelisting, e-mail alerting, and other modules; and it can collect SQL statistics and analyze slow and high-frequency SQL, providing a basis for SQL optimization.
3.6.2. MyCat Administration
Mycat opens two ports by default; both can be changed in server.xml.
- 8066: the data access port, used for DML and DDL operations.
- 9066: the database management port, used for mycat service management and control, i.e. managing the state of the whole mycat cluster.
Connect to the MyCat management console:
mysql -h 192.168.200.210 -P 9066 -uroot -p123456
Command | Meaning |
---|---|
show @@help | list the Mycat management commands |
show @@version | show the Mycat version |
reload @@config | reload the Mycat configuration files |
show @@datasource | show Mycat's data sources |
show @@datanode | show the existing MyCat shard nodes |
show @@threadpool | show Mycat's thread-pool information |
show @@sql | show the executed SQL |
show @@sql.sum | show statistics on the executed SQL |
3.6.3. MyCat-eye
3.6.3.1. Introduction
Mycat-web (Mycat-eye) provides monitoring for mycat-server, though its use is not limited to mycat-server. It monitors Mycat and MySQL over JDBC, and monitors the CPU, memory, network, and disk of remote servers (currently Linux only).
Mycat-eye depends on ZooKeeper at run time, so ZooKeeper must be installed first.
3.6.3.2. Installation
1). ZooKeeper installation
- A. Upload the package zookeeper-3.4.6.tar.gz
- B. Extract
tar -zxvf zookeeper-3.4.6.tar.gz -C /usr/local/
- C. Create the data directory
cd /usr/local/zookeeper-3.4.6/
mkdir data
- D. Rename and edit the configuration file
cd conf
mv zoo_sample.cfg zoo.cfg
- E. Configure the data directory
dataDir=/usr/local/zookeeper-3.4.6/data
- F. Start ZooKeeper
bin/zkServer.sh start
bin/zkServer.sh status
2). Mycat-web installation
- A. Upload the package Mycat-web.tar.gz
- B. Extract
tar -zxvf Mycat-web.tar.gz -C /usr/local/
- C. Directory layout
etc ----> jetty configuration files
lib ----> dependency jars
mycat-web ----> the mycat-web project
readme.txt
start.jar ----> startup jar
start.sh ----> linux startup script
- D. Start
sh start.sh
3.6.3.3. Access
http://192.168.200.210:8082/mycat
3.6.3.4. Configuration
1). Enable MyCat's real-time statistics (server.xml)
<property name="useSqlStat">1</property> <!-- 1 enables real-time statistics, 0 disables them -->
2). Configure the service address in the Mycat monitoring UI
3.6.3.5. Testing
Once configured, run a series of insert, delete, update, and select tests through MyCat; after a while, open the mycat-eye console and inspect the data mycat-eye has collected.
A. Performance monitoring
B. Physical nodes
C. SQL statistics
D. SQL table analysis
E. SQL monitoring
F. High-frequency SQL
4. Read/Write Splitting
4.1. Introduction
Read/write splitting simply separates database reads and writes onto different database servers: the primary handles write operations and the replica handles reads, which effectively relieves the load on a single database server.
MyCat makes this easy to implement, and supports not only MySQL but also Oracle and SQL Server.
4.2. One Primary, One Replica
4.2.1. Principle
MySQL master-slave replication is based on the binary log (binlog).
4.2.2. Preparation
Host | Role | Username | Password |
---|---|---|---|
192.168.200.211 | master | root | 1234 |
192.168.200.212 | slave | root | 1234 |
4.3. One-Primary-One-Replica Read/Write Splitting
MyCat's read/write splitting and load balancing over the backing databases are controlled by the balance attribute of the dataHost tag in schema.xml.
4.3.1. schema.xml configuration
<!-- the logical schema -->
<schema name="ANGYAN_RW" checkSQLschema="true" sqlMaxLimit="100" dataNode="dn7"/>
<dataNode name="dn7" dataHost="dhost7" database="angyan" />
<dataHost name="dhost7" maxCon="1000" minCon="10" balance="1" writeType="0" dbType="mysql" dbDriver="jdbc" switchType="1" slaveThreshold="100">
<heartbeat>select user()</heartbeat>
<writeHost host="master1" url="jdbc:mysql://192.168.200.211:3306?useSSL=false&amp;serverTimezone=Asia/Shanghai&amp;characterEncoding=utf8" user="root" password="1234" >
<readHost host="slave1" url="jdbc:mysql://192.168.200.212:3306?useSSL=false&amp;serverTimezone=Asia/Shanghai&amp;characterEncoding=utf8" user="root" password="1234" />
</writeHost>
</dataHost>
The concrete mapping in the configuration above is as follows:
writeHost denotes the database used for write operations, and readHost the database used for reads. To achieve read/write splitting, writeHost must therefore reference the primary and readHost the replica.
Configuring writeHost and readHost alone does not complete read/write splitting; the crucial load-balancing parameter balance must also be configured. It takes four values with the following meanings:
Value | Meaning |
---|---|
0 | read/write splitting disabled; all read operations are sent to the currently available writeHost |
1 | all readHosts and the standby writeHost share the load of select statements (mainly for dual-primary dual-replica setups) |
2 | all read and write operations are distributed randomly across writeHost and readHost |
3 | all read requests are distributed randomly across the readHosts attached to the writeHost; the writeHost carries no read load |
So in a one-primary-one-replica setup, setting balance to either 1 or 3 achieves read/write splitting.
4.3.2. server.xml configuration
Configure the root user to access the SHOPPING, ANGYAN, and ANGYAN_RW logical schemas.
<user name="root" defaultAccount="true">
<property name="password">123456</property>
<property name="schemas">SHOPPING,ANGYAN,ANGYAN_RW</property>
<!-- table-level DML privilege settings -->
<!--
<privileges check="true">
<schema name="DB01" dml="0110" >
<table name="TB_ORDER" dml="1110"></table>
</schema>
</privileges>
-->
</user>
4.3.3. Testing
After finishing the MyCat configuration, restart MyCat.
bin/mycat stop
bin/mycat start
Then, when running insert, update, and delete operations, observe the data changes on the primary and the replica; when running queries, check which of the two servers actually serves them.
4.4. Dual Primary, Dual Replica
4.4.1. Introduction
One primary, Master1, handles all write requests; its replica Slave1, together with the second primary Master2 and its replica Slave2, handles all read requests. When Master1 goes down, Master2 takes over the write requests; Master1 and Master2 act as standbys for each other. The architecture is as follows:
4.4.2. Preparation
We need five servers, with the following software installed:
No. | IP | Preinstalled software | Role |
---|---|---|---|
1 | 192.168.200.210 | MyCat, MySQL | MyCat middleware server |
2 | 192.168.200.211 | MySQL | M1 |
3 | 192.168.200.212 | MySQL | S1 |
4 | 192.168.200.213 | MySQL | M2 |
5 | 192.168.200.214 | MySQL | S2 |
Disable the firewall on all of the servers above:
- systemctl stop firewalld
- systemctl disable firewalld
4.4.3. Setup
4.4.3.1. Primary configuration
1). Master1 (192.168.200.211)
A. Edit the configuration file /etc/my.cnf
#mysql server ID; must be unique across the whole cluster; range 1 – 2^32-1, default 1
server-id=1
#databases to replicate
binlog-do-db=db01
binlog-do-db=db02
binlog-do-db=db03
# when acting as a replica, also write replicated changes to the binary log
log-slave-updates
B. Restart the MySQL server
systemctl restart mysqld
C. Create the account and grant privileges
#create the angyan user with a password; the user may connect to this MySQL service from any host
CREATE USER 'angyan'@'%' IDENTIFIED WITH mysql_native_password BY 'Root@123456';
#grant replication privileges to 'angyan'@'%'
GRANT REPLICATION SLAVE ON *.* TO 'angyan'@'%';
Check the primary's binary log coordinates with:
show master status;
2). Master2 (192.168.200.213)
A. Edit the configuration file /etc/my.cnf
#mysql server ID; must be unique across the whole cluster; range 1 – 2^32-1, default 1
server-id=3
#databases to replicate
binlog-do-db=db01
binlog-do-db=db02
binlog-do-db=db03
# when acting as a replica, also write replicated changes to the binary log
log-slave-updates
B. Restart the MySQL server
systemctl restart mysqld
C. Create the account and grant privileges
#create the angyan user with a password; the user may connect to this MySQL service from any host
CREATE USER 'angyan'@'%' IDENTIFIED WITH mysql_native_password BY 'Root@123456';
#grant replication privileges to 'angyan'@'%'
GRANT REPLICATION SLAVE ON *.* TO 'angyan'@'%';
Check the primary's binary log coordinates with:
show master status;
4.4.3.2. Replica configuration
1). Slave1 (192.168.200.212)
A. Edit the configuration file /etc/my.cnf
#mysql server ID; must be unique across the whole cluster; range 1 – 2^32-1, default 1
server-id=2
B. Restart the MySQL server
systemctl restart mysqld
2). Slave2 (192.168.200.214)
A. Edit the configuration file /etc/my.cnf
#mysql server ID; must be unique across the whole cluster; range 1 – 2^32-1, default 1
server-id=4
B. Restart the MySQL server
systemctl restart mysqld
4.4.3.3. Associating the slaves with their masters
1). Configure the master for each of the two slaves
Note that Slave1 replicates from Master1, and Slave2 replicates from Master2.
A. Execute on Slave1 (192.168.200.212)
CHANGE MASTER TO MASTER_HOST='192.168.200.211', MASTER_USER='angyan', MASTER_PASSWORD='Root@123456', MASTER_LOG_FILE='binlog.000002', MASTER_LOG_POS=663;
B. Execute on Slave2 (192.168.200.214)
CHANGE MASTER TO MASTER_HOST='192.168.200.213', MASTER_USER='angyan', MASTER_PASSWORD='Root@123456', MASTER_LOG_FILE='binlog.000002', MASTER_LOG_POS=663;
C. Start replication on both slaves and check the slave status
start slave;
show slave status\G
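In the output of the status command, replication is healthy only when both replication threads are running. What to look for, and how to recover after fixing a problem:

```sql
-- In the output of SHOW SLAVE STATUS\G, both of these fields must read Yes:
--   Slave_IO_Running:  Yes   (the thread pulling events from the master)
--   Slave_SQL_Running: Yes   (the thread applying events locally)
-- If either is No, inspect Last_IO_Error / Last_SQL_Error in the same
-- output, fix the cause, then restart replication:
STOP SLAVE;
START SLAVE;
```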
2). Configure the two masters to replicate each other
Master2 replicates from Master1, and Master1 replicates from Master2.
A. Execute on Master1 (192.168.200.211)
CHANGE MASTER TO MASTER_HOST='192.168.200.213', MASTER_USER='angyan', MASTER_PASSWORD='Root@123456', MASTER_LOG_FILE='binlog.000002', MASTER_LOG_POS=663;
B. Execute on Master2 (192.168.200.213)
CHANGE MASTER TO MASTER_HOST='192.168.200.211', MASTER_USER='angyan', MASTER_PASSWORD='Root@123456', MASTER_LOG_FILE='binlog.000002', MASTER_LOG_POS=663;
C. Start replication on both masters (each acting as a slave of the other) and check the replication status
start slave;
show slave status\G
After these three configuration steps, the dual-master dual-slave replication topology is complete. Next, we can test and verify it.
4.4.4. Test
Execute DDL and DML statements on each of the two masters, Master1 and Master2, and check whether the data is synchronized across the database servers involved.
create database db01;
use db01;
create table tb_user(
id int(11) not null primary key ,
name varchar(50) not null,
sex varchar(1)
)engine=innodb default charset=utf8mb4;
insert into tb_user(id,name,sex) values(1,'Tom','1');
insert into tb_user(id,name,sex) values(2,'Trigger','0');
insert into tb_user(id,name,sex) values(3,'Dawn','1');
insert into tb_user(id,name,sex) values(4,'Jack Ma','1');
insert into tb_user(id,name,sex) values(5,'Coco','0');
insert into tb_user(id,name,sex) values(6,'Jerry','1');
Execute DML and DDL operations on Master1 and check whether the data is synchronized to the other three database servers.
Execute DML and DDL operations on Master2 and check whether the data is synchronized to the other three database servers.
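A minimal sketch of the Master2 half of this check, using the tb_user table created by the script above (the row values are arbitrary):

```sql
-- On Master2: write a new row.
insert into tb_user(id, name, sex) values(7, 'Rose', '0');

-- On Master1, Slave1 and Slave2: the row should appear shortly after,
-- since Master1 replicates from Master2 and log-slave-updates propagates
-- the change on to the slaves.
select * from tb_user where id = 7;
```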
4.5. Dual-Master Dual-Slave Read-Write Separation
4.5.1. Configuration
MyCat's read-write separation and load balancing over the backend databases are controlled by the balance attribute of the dataHost tag in schema.xml; automatic failover is controlled by the writeType and switchType attributes.
1). schema.xml
Configure the logical schema:
<schema name="ANGYAN_RW2" checkSQLschema="true" sqlMaxLimit="100" dataNode="dn7"/>
Configure the data node:
<dataNode name="dn7" dataHost="dhost7" database="db01" />
Configure the node host:
<dataHost name="dhost7" maxCon="1000" minCon="10" balance="1" writeType="0" dbType="mysql" dbDriver="jdbc" switchType="1" slaveThreshold="100">
<heartbeat>select user()</heartbeat>
<writeHost host="master1" url="jdbc:mysql://192.168.200.211:3306?useSSL=false&amp;serverTimezone=Asia/Shanghai&amp;characterEncoding=utf8" user="root" password="1234" >
<readHost host="slave1" url="jdbc:mysql://192.168.200.212:3306?useSSL=false&amp;serverTimezone=Asia/Shanghai&amp;characterEncoding=utf8" user="root" password="1234" />
</writeHost>
<writeHost host="master2" url="jdbc:mysql://192.168.200.213:3306?useSSL=false&amp;serverTimezone=Asia/Shanghai&amp;characterEncoding=utf8" user="root" password="1234" >
<readHost host="slave2" url="jdbc:mysql://192.168.200.214:3306?useSSL=false&amp;serverTimezone=Asia/Shanghai&amp;characterEncoding=utf8" user="root" password="1234" />
</writeHost>
</dataHost>
The specific mapping between the hosts is as follows:
Attribute description:
- balance="1"
All readHosts and standby writeHosts participate in load balancing of SELECT statements. Simply put, in dual-master dual-slave mode (M1->S1, M2->S2, with M1 and M2 as mutual standbys), under normal circumstances M2, S1 and S2 all participate in load balancing of SELECT statements.
- writeType
0: all write operations are forwarded to the first writeHost; if writeHost1 goes down, writes switch to writeHost2.
1: all write operations are sent randomly to the configured writeHosts.
- switchType
-1: no automatic switching
1: automatic switching
2). user.xml
Configure the root user so that it can also access the logical schema ANGYAN_RW2.
<user name="root" defaultAccount="true">
<property name="password">123456</property>
<property name="schemas">SHOPPING,ANGYAN,ANGYAN_RW2</property>
<!-- Table-level DML privilege settings -->
<!--
<privileges check="true">
<schema name="DB01" dml="0110" >
<table name="TB_ORDER" dml="1110"></table>
</schema>
</privileges>
-->
</user>
4.5.2. Test
Log in to MyCat and test query and update operations to determine whether read-write separation works and whether the read-write separation strategy is correct.
When one of the masters goes down, verify whether MyCat automatically switches writes to the other master.
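A minimal sketch of such a test, assuming the tb_user table from section 4.4.4 exists in db01 and the statements are issued through MyCat (port 8066 by default):

```sql
-- Run several times: with balance="1", reads should be spread across
-- M2, S1 and S2. @@hostname reveals which backend answered, assuming
-- MyCat forwards this query to a backend rather than answering it itself.
select @@hostname;

-- Writes should always land on the current writeHost (master1 at first):
insert into tb_user(id, name, sex) values(8, 'Test', '1');

-- After stopping MySQL on Master1 (systemctl stop mysqld), writes should
-- switch to master2 (switchType="1"), and reads should keep working:
insert into tb_user(id, name, sex) values(9, 'Failover', '0');
```

After the failover test, check on Master2 that the last row arrived there, confirming it took over as the write host.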