ps:
Concept: horizontal table sharding keeps everything in the same database; the data of one table is split across multiple tables according to a given rule.
1. Prepare the environment
Sharding a table within a single database. Database script:
# Create the order database order_db
CREATE DATABASE `order_db` CHARACTER SET 'utf8' COLLATE 'utf8_general_ci';
# Create tables t_order_1 and t_order_2 in order_db
DROP TABLE IF EXISTS `t_order_1`;
CREATE TABLE `t_order_1` (
`order_id` bigint(20) NOT NULL COMMENT 'order id',
`price` decimal(10, 2) NOT NULL COMMENT 'order price',
`user_id` bigint(20) NOT NULL COMMENT 'ordering user id',
`status` varchar(50) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL COMMENT 'order status',
PRIMARY KEY (`order_id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8 COLLATE = utf8_general_ci ROW_FORMAT = Dynamic;
DROP TABLE IF EXISTS `t_order_2`;
CREATE TABLE `t_order_2` (
`order_id` bigint(20) NOT NULL COMMENT 'order id',
`price` decimal(10, 2) NOT NULL COMMENT 'order price',
`user_id` bigint(20) NOT NULL COMMENT 'ordering user id',
`status` varchar(50) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL COMMENT 'order status',
PRIMARY KEY (`order_id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8 COLLATE = utf8_general_ci ROW_FORMAT = Dynamic;
2. pom.xml (import the required dependencies)
ps:
The Spring Boot version used here is 2.2.2.RELEASE.
Integrating sharding-jdbc-spring-boot-starter 4.0.0-RC2 made the console report the error below, so 4.0.0-RC1 is used in the dependencies instead:
Failed to configure a DataSource: 'url' attribute is not specified and no embedded datasource could be configured.
Reason: Failed to determine a suitable driver class
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
</dependency>
<dependency>
<groupId>org.mybatis.spring.boot</groupId>
<artifactId>mybatis-spring-boot-starter</artifactId>
<version>2.1.1</version>
</dependency>
<dependency>
<groupId>com.alibaba</groupId>
<artifactId>druid-spring-boot-starter</artifactId>
<version>1.1.20</version>
</dependency>
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<version>5.1.47</version>
</dependency>
<dependency>
<groupId>org.apache.shardingsphere</groupId>
<artifactId>sharding-jdbc-spring-boot-starter</artifactId>
<version>4.0.0-RC1</version>
</dependency>
<dependency>
<groupId>com.baomidou</groupId>
<artifactId>mybatis-plus-boot-starter</artifactId>
<version>3.2.0</version>
</dependency>
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
</dependencies>
3. application.yml: (important)
# Server port
server:
  port: 56081
# Service name
spring:
  application:
    name: sharding-jdbc-examples
  http:
    encoding:
      enabled: true
      charset: utf-8
      force: true
  main:
    allow-bean-definition-overriding: true
  # ShardingSphere configuration
  shardingsphere:
    datasource:
      names: m1 # data source name, chosen freely
      m1: # connection details for the m1 data source
        type: com.alibaba.druid.pool.DruidDataSource
        driverClassName: com.mysql.jdbc.Driver
        url: jdbc:mysql://192.168.87.133:3306/order_db?useUnicode=true
        username: root
        password: 123456
    sharding:
      tables:
        t_order: # data nodes for the t_order logical table
          actualDataNodes: m1.t_order_$->{1..2}
          tableStrategy:
            inline: # sharding strategy for t_order: sharding column + sharding algorithm
              shardingColumn: order_id
              algorithmExpression: t_order_$->{order_id % 2 + 1}
          keyGenerator: # primary-key generation strategy for t_order
            type: SNOWFLAKE # generate keys with the snowflake algorithm
            column: order_id # the generated primary-key column
    props:
      sql:
        show: true # log the actual SQL
# Logging
logging:
  level:
    root: info
    org.springframework.web: info
    com.lucifer.sharding.dao: debug
    druid.sql: debug
t_order: the logical table name, not a real table name. The real tables are t_order_1 and t_order_2.
actualDataNodes: m1.t_order_$->{1..2}, where m1 is the data source name configured above. The expression expands to m1.t_order_1 and m1.t_order_2, i.e. the two real tables.
shardingColumn: the sharding key.
algorithmExpression: t_order_$->{order_id % 2 + 1} is the sharding rule. It has two parts:
(1) t_order_: the logical table name followed by a suffix value;
(2) order_id % 2 + 1: the suffix is order_id modulo 2, plus 1.
That is, rows with an even order_id land in t_order_1, and rows with an odd order_id land in t_order_2.
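The routing rule can be sanity-checked with plain Java (a standalone sketch mirroring the inline expression, not ShardingSphere code; the sample IDs are the ones used later in findOrder):

```java
public class ShardRouteDemo {
    // Mirrors the inline expression t_order_$->{order_id % 2 + 1}
    static String route(long orderId) {
        return "t_order_" + (orderId % 2 + 1);
    }

    public static void main(String[] args) {
        System.out.println(route(418415166183440384L)); // even id -> t_order_1
        System.out.println(route(418417197166100481L)); // odd id  -> t_order_2
    }
}
```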
spring.main.allow-bean-definition-overriding: true must be set, otherwise startup fails with:
The bean 'dataSource', defined in class path resource [org/apache/shardingsphere/shardingjdbc/spring/boot/SpringBootConfiguration.class], could not be registered. A bean with that name has already been defined in class path resource [com/alibaba/druid/spring/boot/autoconfigure/DruidDataSourceAutoConfigure.class] and overriding is disabled.
Action:
Consider renaming one of the beans or enabling overriding by setting spring.main.allow-bean-definition-overriding=true
The meaning is clear: two beans share the same name. Checking the Spring Boot and Druid source confirms that both ShardingSphere and Druid register a bean named dataSource.
4. Code
ps: The code is very simple; it mainly exists to exercise the configuration, so it is not covered in detail here.
controller:
package com.lucifer.sharding.controller;
import com.lucifer.sharding.service.OrderService;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import javax.annotation.Resource;
/**
* @author Lucifer
*/
@RestController
public class OrderController {
@Resource
private OrderService orderService;
@GetMapping(value = "/add")
public void addOrder() {
orderService.addOrder();
}
@GetMapping(value = "/find")
public void findOrder() {
orderService.findOrder();
}
}
service interface:
package com.lucifer.sharding.service;
/**
* @author Lucifer
*/
public interface OrderService {
/**
* Add orders
*/
void addOrder();
/**
* Query orders
*/
void findOrder();
}
Service implementation class:
package com.lucifer.sharding.service.impl;
import com.baomidou.mybatisplus.core.conditions.query.QueryWrapper;
import com.lucifer.sharding.dao.OrderDao;
import com.lucifer.sharding.pojo.Order;
import com.lucifer.sharding.service.OrderService;
import org.springframework.stereotype.Service;
import javax.annotation.Resource;
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
/**
* @author Lucifer
*/
@Service
public class OrderServiceImpl implements OrderService {
@Resource
OrderDao orderDao;
@Override
public void addOrder() {
for (int i = 0; i < 10; i++) {
Order order = new Order();
// BigDecimal.valueOf avoids the new BigDecimal(double) precision pitfall
order.setPrice(BigDecimal.valueOf(Math.random()));
order.setUserId(new Random().nextLong());
order.setStatus("0");
orderDao.insert(order);
}
}
// After inserting, take one record from each of the two tables to test the query
@Override
public void findOrder() {
List<Long> list=new ArrayList<>();
list.add(418415166183440384L);
list.add(418417197166100481L);
QueryWrapper<Order> queryWrapper=new QueryWrapper<>();
queryWrapper.in("order_id", list);
orderDao.selectList(queryWrapper);
}
}
Dao layer interface:
package com.lucifer.sharding.dao;
import com.baomidou.mybatisplus.core.mapper.BaseMapper;
import com.lucifer.sharding.pojo.Order;
/**
* @author Lucifer
*/
public interface OrderDao extends BaseMapper<Order> {
}
Entity class:
package com.lucifer.sharding.pojo;
import java.io.Serializable;
import java.math.BigDecimal;
import com.baomidou.mybatisplus.annotation.TableName;
import lombok.Data;
/**
* @author Lucifer
*/
@TableName(value = "t_order")
@Data
public class Order implements Serializable {
/**
* Order ID
*/
private Long orderId;
/**
* Order price
*/
private BigDecimal price;
/**
* Ordering user ID
*/
private Long userId;
/**
* Order status
*/
private String status;
private static final long serialVersionUID = 1L;
}
ps:
Note that the mybatis-plus annotation @TableName(value = "t_order") specifies the table name, and it is the logical table name that must be used here.
Also add the @MapperScan annotation to the Spring Boot startup class.
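For reference, a minimal startup class might look like this (a sketch; the class name is an assumption, and the package matches the packages shown above):

```java
package com.lucifer.sharding;

import org.mybatis.spring.annotation.MapperScan;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

/**
 * Hypothetical startup class; @MapperScan points at the dao package
 * so that OrderDao is registered as a MyBatis mapper.
 */
@SpringBootApplication
@MapperScan("com.lucifer.sharding.dao")
public class ShardingApplication {
    public static void main(String[] args) {
        SpringApplication.run(ShardingApplication.class, args);
    }
}
```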
5. Test
Console output and database contents: (test screenshots omitted)
The approximate logic is that ShardingSphere generates the order_id primary key for you with the snowflake algorithm, uses the rules in the configuration file to decide which table each row is inserted into, and for queries routes by order_id to determine which table to read.
That is, if the query column is not the sharding key, both tables are queried, as shown in the figure (omitted).
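Why both tables get queried can be sketched in plain Java (illustrative only, not ShardingSphere internals): when the filtered column is not the sharding column, the router cannot narrow the candidates, so it falls back to all actual tables.

```java
import java.util.List;

public class RoutingSketch {
    static final List<String> ACTUAL_TABLES = List.of("t_order_1", "t_order_2");

    // Returns the actual tables a query must touch, given the filtered column.
    static List<String> targetTables(String column, long value) {
        if ("order_id".equals(column)) {
            // Shard key present: the inline expression picks exactly one table.
            return List.of("t_order_" + (value % 2 + 1));
        }
        // No shard key in the predicate: scan every actual table.
        return ACTUAL_TABLES;
    }

    public static void main(String[] args) {
        System.out.println(targetTables("order_id", 4L)); // one table
        System.out.println(targetTables("user_id", 4L));  // both tables
    }
}
```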