Building a high-performance database cluster, Part 2: MySQL read-write separation (based on Mycat 1.6.7.1)

1. Overview of MyCat

Mycat is database middleware.

Typical use cases:

  • High availability and MySQL read-write separation
  • Tiered storage of business data
  • Horizontal splitting of large tables
  • Cluster parallel computing
  • Database connection pooling
  • Integrating multiple data sources

Install

Download address: http://dl.mycat.org.cn

Before installing Mycat, install MySQL and the JDK first. The stable release of Mycat is 1.6.7.1.

Download the installation package (Mycat-xxx-linux.tar.gz), upload it to the Linux server (Mycat is usually installed under /usr/local), and decompress it:

tar -zxvf Mycat-xxx-linux.tar.gz

After installation, enter the mycat directory, where you will see:

  • bin: command file
  • catlet: empty, reserved for extensions
  • conf: Configuration file (server.xml, schema.xml, rule.xml, etc.)
  • lib: dependent jar package

Core idea

Sharding
: distributing the data stored in one database across multiple databases, so that no single machine carries the whole load. As a rule of thumb, once a single table exceeds roughly 8 million rows, sharding becomes necessary.

Ways of splitting a database:

  • Vertical split: if the data is large because there are many tables, split the tables into different databases by business;
  • Horizontal split: if a single table holds too much data, split that table's rows. The usual order is vertical first, then horizontal.

Vertical splitting:

  • Vertical table split: split one wide table into smaller tables. Typically the table has many columns; move the rarely used, large, or long columns into an "extension table."
  • Vertical database split: separate the system's business domains, e.g. one database for users, one for products, one for orders.
    After splitting, place them on multiple servers. Under high concurrency, vertical sharding relieves, to some extent, the bottlenecks of IO, connection count, and single-machine hardware.

Horizontal splitting:

  • Horizontal table split: for a single table with a large amount of data, split it into multiple tables by some rule (RANGE, HASH modulus, etc.).
    These tables still live in the same database, however, so the IO bottleneck at the database level remains. Not recommended on its own.
  • Horizontal sharding across databases: spread one table's rows across multiple servers; each server hosts a database and a table of the same structure, but the data sets differ. Typical rules: 1.
    range (ids 0 to 10,000 in one table, 10,001 to 20,000 in the next); 2. HASH modulus; 3. geographic region.
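To make the two rule families concrete, here is a toy shell sketch (not Mycat code; the shard count of 3 and the range boundaries are invented for illustration) of how a row id would be routed under a RANGE rule versus a HASH-modulus rule:

```shell
#!/bin/sh
# RANGE rule: ids 0-10000 -> shard 0, 10001-20000 -> shard 1, the rest -> shard 2.
route_by_range() {
  if [ "$1" -le 10000 ]; then echo 0
  elif [ "$1" -le 20000 ]; then echo 1
  else echo 2
  fi
}

# HASH-modulus rule over 3 shards: shard = id % 3.
route_by_mod() {
  echo $(( $1 % 3 ))
}

echo "range(9999)  -> shard $(route_by_range 9999)"   # shard 0
echo "mod(9999)    -> shard $(route_by_mod 9999)"     # 9999 % 3 = 0
echo "mod(10001)   -> shard $(route_by_mod 10001)"    # 10001 % 3 = 2
```

Range rules keep neighbouring ids together (good for range scans, prone to hot spots); modulus rules spread ids evenly (good for write balance, bad for range queries).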

Logical library
Since Mycat is middleware, what the client sees is a logical library backed by one or more database clusters. Likewise, the table the client reads and writes is a logical table.

ER table
Child records and all their associated parent-table records are stored in the same shard, so that join queries on related data never have to cross shards.

Global table
Tables similar to data dictionaries, whose data is replicated redundantly to every shard, are defined as global tables.

Shard node
When data is split, a large table is divided across different shard databases. The database holding each table shard is a shard node (dataNode).

Shard host
A single machine may host several shard databases; the machine on which a shard node lives is the node host (dataHost).

Common commands

# Start, stop, check status
bin/mycat start
bin/mycat stop
bin/mycat status

# Client connection (port 8066)
mysql -h <IP> -P 8066 -u root -p
# Management connection (port 9066)
mysql -h <IP> -P 9066 -u root -p

Related configuration items and their meanings

Common system settings in server.xml:

  • charset — character set, e.g. utf8
  • useSqlStat — 1 enables real-time statistics, 0 disables them
  • sqlExecuteTimeout — SQL statement timeout, e.g. 1000
  • processors — number of threads: 1, 2, ...
  • txIsolation — transaction isolation level: 1, 2, 3, or 4
  • serverPort — service port, default 8066
  • managerPort — management port, default 9066
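As a sketch, these settings would sit inside the system tag of server.xml; the property names follow the list above, while the values here are illustrative defaults:

```xml
<system>
    <property name="charset">utf8</property>
    <!-- 1 enables real-time SQL statistics, 0 disables them -->
    <property name="useSqlStat">0</property>
    <!-- SQL statement timeout -->
    <property name="sqlExecuteTimeout">1000</property>
    <property name="serverPort">8066</property>
    <property name="managerPort">9066</property>
</system>
```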

The user tag defines a Mycat login user and its permissions:

<user name="mycat-user-2">
    <property name="password">password</property>
    <!-- schemas: the logical libraries this user may access -->
    <property name="schemas">logical-db</property>
    <!-- readOnly: whether the user is read-only -->
    <property name="readOnly">true</property>
    <!-- benchmark: connection limit, 0 = unlimited -->
    <property name="benchmark">1000</property>
    <!-- usingDecrypt: 1 = the password above is encrypted -->
    <property name="usingDecrypt">1</property>
    <!-- permission settings -->
    <privileges check="true">
        <schema name="logical-db" dml="0000">
            <!-- dml: insert/update/select/delete flags; 1 = granted, 0 = denied -->
            <table name="logical-table" dml="0000"></table>
            <table name="another-logical-table" dml="0000"></table>
        </schema>
    </privileges>
</user>
# Note
# If password encryption is enabled, run the following in the lib directory:
java -cp Mycat-server-xxx-release.jar io.mycat.util.DecryptUtil 0:root:password
# then replace the password in the configuration file with the encrypted output

The firewall tag defines the firewall:

<firewall>
    <!-- whitelist -->
    <whitehost>
        <host user="root" host="IP-address"></host>
    </whitehost>
    <!-- blacklist: which SQL operations this user may run -->
    <blacklist check="true">
        <property name="selectAllow">true</property>
        <property name="deleteAllow">false</property>
    </blacklist>
</firewall>

Configure schema.xml
schema.xml configures logical libraries, logical tables, shards, and shard nodes.

  • schema — a logical library; there can be several
  • table — a logical table; its attributes include rule (sharding rule name) and type (global table or ordinary table)
  • dataNode — defines a data node; names must not repeat
  • dataHost — a concrete database instance, i.e. the node host
  • database — the physical database the shard belongs to

dataHost tag (node host):

  • name — node name
  • maxCon — maximum number of connections
  • minCon — minimum number of connections
  • balance — load-balancing type: 0, 1, 2, or 3
  • writeType — how writes are distributed: 0 = all reads and writes go to the first writeHost; 1 = writes are sent to a random writeHost
  • switchType — database failover mode: -1, 1, 2, or 3

writeHost and readHost tags (write and read hosts):

  • host — instance host ID
  • url — database connection address
  • weight — weight
  • usingDecrypt — password encryption: 0 = no, 1 = yes
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
    <schema name="logical-db" checkSQLschema="false" sqlMaxLimit="100">
        <!-- name: the table to shard across the data nodes below -->
        <table name="logical-table" dataNode="d1,d2,d3" rule="sharding-rule"></table>
    </schema>

    <!-- dataNode: data node  dataHost: host name  database: name of the physical database holding the shard -->
    <dataNode name="d1" dataHost="host1" database="db1" />
    <dataNode name="d2" dataHost="host2" database="db1" />
    <dataNode name="d3" dataHost="host3" database="db1" />

    <dataHost name="host1" maxCon="1000" minCon="10" balance="0" writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
        <heartbeat>select user()</heartbeat>
        <writeHost host="hostM1" url="addr1:3306" user="root" password="123123"></writeHost>
    </dataHost>
    ...
</mycat:schema>

Configure rule.xml
rule.xml defines the table-splitting rules.

tableRule tag:

  • name — sharding rule name
  • rule — the concrete sharding algorithm
  • columns — sharding column name
  • algorithm — algorithm name

function tag:

  • name — algorithm name
  • class — implementing class
<tableRule name="mod_rule">
  <rule>
     <columns>customer_id</columns>
     <algorithm>mod-long</algorithm>
  </rule>
   ...
</tableRule>
<!-- function: this algorithm splits rows by taking the chosen column modulo count -->
<function name="mod-long" class="io.mycat.route.function.PartitionByMod" >
  <property name="count">2</property>
</function>
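With count=2 as configured above, the mod-long rule sends each row to dataNode customer_id % 2. A quick shell check (not Mycat code) of where some sample ids would land:

```shell
#!/bin/sh
# mod-long with count=2: the target dataNode index is customer_id % 2.
for customer_id in 100 101 102 103; do
  echo "customer_id=$customer_id -> dataNode $(( customer_id % 2 ))"
done
```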


sequence.xml: once data is distributed, the databases' own auto-increment primary keys no longer satisfy the cluster-wide uniqueness constraint, so Mycat provides global sequences. The available modes:

  • local file mode
  • database mode
  • local timestamp mode
  • other custom methods
  • auto-increment primary key

Sharding
**Architecture evolution:** start with a single-machine database. As requests grow, separate reads from writes: the master handles writes, the slaves handle reads, and slave libraries can be scaled out horizontally, so read traffic stops being a problem. When the data volume and the number of write requests keep growing, the writes themselves must be divided by sharding databases and tables.

**Single database too large:** one database's processing capacity is limited; solution: split it into several smaller databases.

**Single table too large:** CRUD becomes a problem, indexes bloat, queries time out; solution: split it into several tables, each holding a smaller data set.

How to shard:

There are vertical splits and horizontal splits. If the data is large because there are many tables, split vertically by business into different databases; if a single table is too large, split it horizontally. The usual order is vertical first, then horizontal.

Vertical splitting
Demonstration: when one database holds too much data, split it into several databases by business — a user database, an order database, an information database, and so on.

Step 1:
Prepare 3 database instances and create the corresponding databases on each (user database, order database, ...),
then configure server.xml (user, character set, logical library name):

<user name="mycat-user">
	<property name="password">password</property>
	<property name="schemas">logical-db</property>
</user>

Step 2:

Configure schema.xml:

<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
	<schema name="logical-db" checkSQLschema="false" sqlMaxLimit="100">
		<table name="user_table1" dataNode="d1" primaryKey="id"></table>
		<table name="user_table2" dataNode="d1" primaryKey="id"></table>
		<table name="user_table3" dataNode="d1" primaryKey="id"></table>
		<table name="dict_table1" dataNode="d1,d2" type="global"></table>

		<table name="order_table1" dataNode="d2" primaryKey="id"></table>
		<table name="order_table2" dataNode="d2" primaryKey="id"></table>
		<table name="order_table3" dataNode="d2" primaryKey="id"></table>
	</schema>

	<dataNode name="d1" dataHost="host1" database="user_db" />
	<dataNode name="d2" dataHost="host2" database="order_db" />

	<dataHost name="host1" maxCon="1000" minCon="10" balance="0" writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
		<heartbeat>select user()</heartbeat>
		<writeHost host="hostM1" url="addr1:3306" user="root" password="123123"></writeHost>
	</dataHost>
	<dataHost name="host2" maxCon="1000" minCon="10" balance="0" writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
		<heartbeat>select user()</heartbeat>
		<writeHost host="hostM1" url="addr2:3306" user="root" password="123123"></writeHost>
	</dataHost>
</mycat:schema>

Step 3:

Create the data in the respective databases. If there are dictionary tables, create them in every database to avoid cross-database queries.

Backup command: mysqldump -uroot -pitcast <database> <table> > <file>

Restore the backed-up dictionary table data into the other databases, then configure schema.xml and declare the table global (type="global").

Dictionary table and associated child-table configuration:

<schema name="TESTDB" checkSQLschema="false" sqlMaxLimit="100" dataNode="dn1">
	<table name="customer" dataNode="dn2"></table>
	<table name="orders" dataNode="dn1,dn2" rule="mod_rule">
		<!-- childTable: associated child table  primaryKey: child-table primary key  joinKey: join column  parentKey: parent table's join column -->
		<childTable name="orders_detail" primaryKey="id" joinKey="order_id" parentKey="id" />
	</table>

	<!-- e.g. a dictionary table: create it on both host1 and host2; type="global" marks a global table -->
	<table name="dict_order_type" dataNode="dn1,dn2" type="global"></table>
</schema>
<dataNode name="dn1" dataHost="host1" database="db1" />
<dataNode name="dn2" dataHost="host2" database="db2" />

Start Mycat; the configuration is complete.

Horizontal split
A horizontal split distributes the rows of one table across multiple database hosts according to some rule.

Demonstration: when one table holds too much data, shard it across three database hosts.

Step 1:

Prepare 3 database instances and create the same database (the user database) on each.

Configure server.xml (user, character set, logical library name).

Step 2:

Configure schema.xml:

<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
	<schema name="logical-db" checkSQLschema="false" sqlMaxLimit="100">
		<!-- rule: mod_rule, the modulo sharding rule -->
		<table name="user_table1" dataNode="d1,d2,d3" primaryKey="id" rule="mod_rule">
		</table>
	</schema>

	<dataNode name="d1" dataHost="host1" database="user_db" />
	<dataNode name="d2" dataHost="host2" database="user_db" />
	<dataNode name="d3" dataHost="host3" database="user_db" />

	<dataHost name="host1" maxCon="1000" minCon="10" balance="0" writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
		<heartbeat>select user()</heartbeat>
		<writeHost host="hostM1" url="addr1:3306" user="root" password="123123"></writeHost>
	</dataHost>
	<dataHost name="host2" maxCon="1000" minCon="10" balance="0" writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
		<heartbeat>select user()</heartbeat>
		<writeHost host="hostM1" url="addr2:3306" user="root" password="123123"></writeHost>
	</dataHost>
	<dataHost name="host3" maxCon="1000" minCon="10" balance="0" writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
		<heartbeat>select user()</heartbeat>
		<writeHost host="hostM1" url="addr3:3306" user="root" password="123123"></writeHost>
	</dataHost>
</mycat:schema>

Step 3:
Configure rule.xml

<tableRule name="mod_rule">
   <rule>
   	    <columns>customer_id</columns>
   	    <algorithm>mod-long</algorithm>
   </rule>
   ...
</tableRule>

<!-- function: this algorithm splits rows by taking the chosen column modulo count -->
<function name="mod-long" class="io.mycat.route.function.PartitionByMod">
    <property name="count">3</property>
</function>

Step 4:

Start Mycat and create the table structure through Mycat; the physical databases will then each receive the table structure.

To test, insert some rows through Mycat; the sharding rule places each row in the corresponding database.

Sharding rules:

  • mod-long — modulo sharding
  • auto-sharding-long — range sharding, configured through the autopartition-long.txt map file:

# autopartition-long.txt (M = 10,000, K = 1,000)
0-500M=0
500M-1000M=1
1000M-1500M=2

  • sharding-by-intfile — enumeration sharding; this rule suits splitting data by province or state:

<tableRule name="sharding-by-intfile">
 <rule>
  <columns>status</columns>
  <algorithm>hash-int</algorithm>
 </rule>
</tableRule>
<function name="hash-int" class="io.mycat.route.function.PartitionByFileMap">
 <property name="mapFile">partition-hash-int.txt</property>
 <property name="type">0</property>
 <!-- defaultNode: the default node -->
 <property name="defaultNode">0</property>
</function>

# partition-hash-int.txt — split on the status column:
# value 1 -> first database, value 2 -> second, value 3 -> third
1=0
2=1
3=2
  • auto-sharding-rang-mod — range-modulus sharding (first range-shard to find the group, then take the modulus within the group):

<tableRule name="auto-sharding-rang-mod">
 <rule>
  <columns>id</columns>
  <algorithm>rang-mod</algorithm>
 </rule>
</tableRule>
<function name="rang-mod" class="io.mycat.route.function.PartitionByRangeMod">
 <property name="mapFile">autopartition-range-mod.txt</property>
 <!-- defaultNode: the default node -->
 <property name="defaultNode">0</property>
</function>

# autopartition-range-mod.txt (M = 10,000; the value is the number of nodes in the range, e.g. 2 means 2 nodes)
0-500M=1
500M1-200M=2
  • sharding-by-long-hash — fixed-range hash algorithm
  • sharding-by-prefixpattern — string hash modulus range algorithm
  • sharding-by-murmur — consistent hashing algorithm:

# Effectively handles re-sharding when the cluster grows; spreads data evenly across the nodes
<tableRule name="sharding-by-murmur">
 <rule>
  <columns>id</columns>
  <algorithm>murmur</algorithm>
 </rule>
</tableRule>
<function name="murmur" class="io.mycat.route.function.PartitionByMurmurHash">
 <property name="seed">0</property>
 <!-- count: the number of database nodes to shard across -->
 <property name="count">3</property>
 <property name="virtualBucketTimes">160</property>
</function>
  • sharding-by-date date sharding algorithm
  • sharding-by-month natural month sharding algorithm

Global sequence:
Once the data is split across MySQL instances on different machines, newly inserted rows may receive equal primary keys. To avoid this, set up a global sequence.

  • Local file mode: not recommended; if mycat crashes, the local file can no longer be accessed.
  • Database mode: a table in the database keeps the counter. Mycat preloads a segment of numbers into its memory, so most sequence reads and writes happen in memory; when the in-memory segment is exhausted, mycat asks the database for the next one.
  • Timestamp mode: the default, but the timestamps are long and performance suffers.
  • Self-generated: built from business logic; requires modifying the Java code.
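The database mode can be pictured with a small shell sketch (a toy model, not Mycat's implementation; the 400000/100 values echo the MYCAT_SEQUENCE example in the setup steps): each "round trip" advances current_value by increment and yields a whole segment of ids that can then be served from memory:

```shell
#!/bin/sh
# Toy model of Mycat's database-mode global sequence.
current_value=400000   # what the MYCAT_SEQUENCE row stores
increment=100          # how many ids one fetch hands to Mycat

# Simulate one database round trip: advance the counter and
# compute the id segment Mycat may now serve from memory.
next_segment() {
  current_value=$(( current_value + increment ))
  seg_start=$(( current_value - increment + 1 ))
  seg_end=$current_value
}

next_segment
echo "segment 1: ids $seg_start-$seg_end"   # 400001-400100
next_segment
echo "segment 2: ids $seg_start-$seg_end"   # 400101-400200
```

In the real setup, the mycat_seq_nextval stored function plays the role of next_segment.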

Setting up database mode:

# 1. Create the sequence table
CREATE TABLE MYCAT_SEQUENCE (
	NAME VARCHAR(50) NOT NULL,
	current_value INT NOT NULL,
    increment INT NOT NULL DEFAULT 100, 
    PRIMARY KEY(NAME)
) ENGINE=INNODB;

# 2. Create the 3 functions
DELIMITER $$
CREATE FUNCTION mycat_seq_currval(seq_name VARCHAR(50)) RETURNS VARCHAR(64)
	DETERMINISTIC  
	BEGIN
	DECLARE retval VARCHAR(64);
	SET retval="-999999999,null";
	SELECT CONCAT(CAST(current_value AS CHAR),",",CAST(increment AS CHAR)) INTO retval FROM
	MYCAT_SEQUENCE WHERE NAME = seq_name;
	RETURN retval;
END $$
DELIMITER ;

DELIMITER $$
CREATE FUNCTION mycat_seq_setval(seq_name VARCHAR(50),VALUE INTEGER) RETURNS VARCHAR(64)
	DETERMINISTIC
	BEGIN
	UPDATE MYCAT_SEQUENCE
	SET current_value = VALUE
	WHERE NAME = seq_name;
	RETURN mycat_seq_currval(seq_name);
END $$
DELIMITER ;

DELIMITER $$
CREATE FUNCTION mycat_seq_nextval(seq_name VARCHAR(50)) RETURNS VARCHAR(64) 
	DETERMINISTIC
	BEGIN
	UPDATE MYCAT_SEQUENCE
	SET current_value = current_value + increment WHERE NAME = seq_name;
	RETURN mycat_seq_currval(seq_name);
END $$
DELIMITER ;

# 3. Initialize the MYCAT_SEQUENCE data
# column 1: global sequence name; column 2: starting value; column 3: segment size handed out per allocation
INSERT INTO MYCAT_SEQUENCE(NAME,current_value,increment) VALUES ('ORDERS', 400000, 100);
SELECT * FROM MYCAT_SEQUENCE;

# 4. Update the mycat configuration
# In sequence_db_conf.properties, map each sequence to its node,
# e.g. point the ORDERS sequence at node dn1

# 5. Update server.xml:
# set the handler type below to 1 (type 1 is database mode), then restart
...
<property name="sequnceHandlerType">1</property>

# 6. Insert a row that consumes the sequence
insert into `orders`(id,amount,customer_id,order_type) values(
    next value for MYCATSEQ_ORDERS
    ,1000,101,102);

Performance monitoring: Mycat-web

Mycat-web helps with statistics and configuration-management tasks. It can collect SQL statistics and analyze slow and high-frequency SQL, providing a basis for SQL optimization.

Install Mycat-web
Step 1:

Before installing Mycat-web, install the JDK and Zookeeper. Official Zookeeper site: http://zookeeper.apache.org

Install Zookeeper: download the package (zookeeper-xxx.tar.gz), upload it to the Linux server (usually under /usr/local), and decompress it:

tar -zxvf zookeeper-xxx.tar.gz

After decompression, create a data directory inside the install directory (/usr/local/zookeeper), switch to the conf directory, copy zoo_sample.cfg to zoo.cfg, and edit it:

dataDir=/usr/local/zookeeper/data

Start zookeeper

bin/zkServer.sh start

Step 2:

Install Mycat-web; download address: http://dl.mycat.org.cn

Choose the file under the mycat-web directory, upload the downloaded package to the Linux server (usually /usr/local), and decompress it:

tar -zxvf Mycat-web-xxx-linux.tar.gz

After decompression, start the program from the install directory (/usr/local/mycat-web):

sh start.sh

Once started, visit http://<ip>:8082/mycat; Mycat is monitored through this URL.

Step 3:

Configure Mycat in the web UI: menu bar → mycat service management → add (register the Mycat instance to be monitored). Note: management port 9066, service port 8066.

2. Configure read-write separation

1. Read-write separation with one master and one slave

First configure MySQL master-slave replication. For details, see: https://blog.csdn.net/hualinger/article/details/131292136

Step 1 Edit the server.xml file:

Set Mycat's user name and password, and set Mycat's logical library name in the schemas property.

...
<user name="mycat-user">
	<property name="password">password</property>
	<property name="schemas">TESTDB</property>
</user>

Step 2 Edit the schema.xml file:

<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">

	<schema name="logical-db" checkSQLschema="false" sqlMaxLimit="100">
		<table name="table1" dataNode="dn1" primaryKey="id"></table>
	</schema>

	<dataNode name="dn1" dataHost="host1" database="db-name" />

	<!-- dataHost: host name  balance: load-balancing type: 0 = off, 1 = dual-master/dual-slave, 2 = random distribution, 3 = master writes / slaves read -->
	<dataHost name="host1" maxCon="1000" minCon="10" balance="3" writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
		<heartbeat>select user()</heartbeat>

		<!-- write host: host = write host name, url = host address, user = host user name, password = host password -->
	   	<writeHost host="hostm1" url="192.168.67.1:3306" user="root" password="123123">
	   		<!-- read host: the slave (read) library -->
	    	<readHost host="hosts1" url="192.168.67.131:3306" user="root" password="123123">
	    	</readHost>
	    </writeHost>
	</dataHost>
</mycat:schema>

Step 3 Configure the Mycat log:

In /usr/local/mycat/conf, edit log4j2.xml:

<asyncRoot level="debug" includeLocation="true">...</asyncRoot>
# switch the log level to debug mode

Step 4 Start Mycat

# 方式1 - 控制台启动 :去mycat/bin 目录下 执行 mycat console
[root@... bin]# ./mycat console

# 方式2 - 后台启动 :去mycat/bin 目录下 mycat start
[root@... bin]# ./mycat start

If startup fails with a domain-name resolution error:

Solution: edit /etc/hosts and append your host name after the 127.0.0.1 entry; after the change, restart networking with service network restart.

After a successful start, log in to MyCat:

# management window:
mysql -uroot -p<password> -h 192.168.67.131 -P 9066
# data window:
mysql -uroot -p<password> -h <address> -P 8066
# after logging in, list the databases
show databases;

After the configuration is complete, run a demonstration and
check the log in /usr/local/mycat/logs; it shows clearly which database served each statement.

tail -f mycat.log  # follow the log

2. Read-write separation with two masters and two slaves

First configure MySQL master-slave replication: 4 databases in total, 2 masters and 2 slaves. For details, see: https://blog.csdn.net/hualinger/article/details/131292136

Step 1 Configure read-write separation

Configure read-write separation through Mycat by editing the server.xml file:

Set Mycat's user name and password, and set Mycat's logical library name in the schemas property.

Step 2 Edit the schema.xml file:

<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">

	<schema name="logical-db" checkSQLschema="false" sqlMaxLimit="100">
		<table name="table1" dataNode="dn1" primaryKey="id"></table>
	</schema>

	<dataNode name="dn1" dataHost="host1" database="db-name" />

	<!-- dataHost: host name  balance: load-balancing type: 0 = off, 1 = dual-master/dual-slave, 2 = random distribution, 3 = master writes / slaves read -->
	<!-- writeType: 0 = writes go to the first host, failing over to the second if it dies; 1 = writes go to a random host -->
	<!-- switchType: -1 = no automatic switching; 1 = automatic switching; 2 = switch based on heartbeat status -->
	<dataHost name="host1" maxCon="1000" minCon="10" balance="1" writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
		<heartbeat>select user()</heartbeat>

		<!-- write host: host = write host name, url = host address, user = host user name, password = host password -->
	   	<writeHost host="hostm1" url="master1:3306" user="root" password="123123">
	    	<readHost host="hosts1" url="slave1:3306" user="root" password="123123">
	    	</readHost>
	    </writeHost>

        <writeHost host="hostm2" url="master2:3306" user="root" password="123123">
	    	<readHost host="hosts2" url="slave2:3306" user="root" password="123123">
	    	</readHost>
	    </writeHost>
	</dataHost>
</mycat:schema>

Step 3 Configure the Mycat log:
In /usr/local/mycat/conf, edit log4j2.xml:

...
<asyncRoot level="debug" includeLocation="true">...</asyncRoot>
# switch the log level to debug mode

Step 4 Start Mycat

# Option 1 - console start: in mycat/bin, run "mycat console"
[root@... bin]# ./mycat console

# Option 2 - background start: in mycat/bin, run "mycat start"
[root@... bin]# ./mycat start

Once startup succeeds, log in to MyCat; the configuration is complete, and you can run a demonstration.
Verify that replication through Mycat works: create data on master 1 and confirm that slave 1, master 2, and slave 2 all receive it.


Origin blog.csdn.net/hualinger/article/details/131518730