1. Background
Before version 5.3.0, ShardingSphere-JDBC supported the Java API, YAML, Spring Boot Starter, and Spring Namespace configuration methods at the same time. Among them, the configuration methods kept for compatibility with Spring brought the following problems to the community:
- When adding or updating an API, multiple configuration files need to be adjusted, which requires a lot of work
- The community needs to maintain multiple configuration docs and examples
- Spring Bean lifecycle management is vulnerable to interference from other dependencies of the project, e.g. a PostProcessor not working properly
- The configuration style of Spring Boot Starter and Spring Namespace is quite different from ShardingSphere's standard YAML
- Spring Boot Starter and Spring Namespace are affected by the Spring version in use, which brings additional configuration compatibility issues
Based on the above considerations, the community decided to remove all Spring dependencies and configuration support in the ShardingSphere 5.3.0 release.
So how should users who need Spring Boot or Spring Namespace access ShardingSphere-JDBC, and how should existing users upgrade? This article answers these questions.
2. Scope of impact
2.1 Maven dependencies
After upgrading to ShardingSphere 5.3.0 or a later version, taking a Spring Boot project as an example, the original Starter dependency becomes invalid:
<dependency>
    <groupId>org.apache.shardingsphere</groupId>
    <artifactId>shardingsphere-jdbc-core-spring-boot-starter</artifactId>
    <version>${shardingsphere.version}</version>
</dependency>
Adjust it to:
<dependency>
    <groupId>org.apache.shardingsphere</groupId>
    <artifactId>shardingsphere-jdbc-core</artifactId>
    <version>${shardingsphere.version}</version>
</dependency>
2.2 Custom Algorithm
Removing the Spring modules also removes the AlgorithmProvided related classes. If you relied on Spring Bean injection for custom algorithms, that wiring becomes invalid after the upgrade. For scenarios where algorithms still need to be used as Spring Beans, developers have to manage those beans themselves.
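Since Bean wiring is gone, a custom algorithm after 5.3.0 is a plain Java class that ShardingSphere instantiates itself, referenced by its `type` name in YAML. Below is a minimal, stdlib-only sketch of the routing logic such a class carries; the class name and method shapes are illustrative assumptions, not the verbatim SPI — in real code the class would implement ShardingSphere's `StandardShardingAlgorithm` interface:

```java
import java.util.Collection;
import java.util.List;
import java.util.Properties;

// Hypothetical stand-in for a ShardingSphere sharding algorithm: after 5.3.0
// such a class is created by ShardingSphere from its `type` name in YAML,
// not injected as a Spring Bean, so Spring lifecycle issues no longer apply.
public final class OrderIdModAlgorithm {

    private int shardingCount;

    // ShardingSphere passes the `props` block from YAML to the algorithm on init.
    public void init(final Properties props) {
        shardingCount = Integer.parseInt(props.getProperty("sharding-count", "2"));
    }

    // Core routing rule: pick the target whose suffix matches order_id % sharding-count.
    public String doSharding(final Collection<String> availableTargetNames, final long orderId) {
        String suffix = String.valueOf(orderId % shardingCount);
        for (String target : availableTargetNames) {
            if (target.endsWith(suffix)) {
                return target;
            }
        }
        throw new IllegalStateException("no target for suffix " + suffix);
    }

    public static void main(String[] args) {
        OrderIdModAlgorithm algorithm = new OrderIdModAlgorithm();
        Properties props = new Properties();
        props.setProperty("sharding-count", "2");
        algorithm.init(props);
        System.out.println(algorithm.doSharding(List.of("t_order_0", "t_order_1"), 7L));
    }
}
```

In the real SPI the class also exposes a type name, which is what the `type` field under `shardingAlgorithms` in the YAML refers to; instance creation and lifecycle are handled entirely by ShardingSphere.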
2.3 Transaction
The @ShardingSphereTransactionType annotation, which supported method-level transaction declarations, is also removed. If you need to change the transaction type at the method level, please use the Java API instead.
A detailed configuration upgrade document for distributed transactions will be published later.
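The Java API replacement follows a per-thread set-before / restore-after pattern (in ShardingSphere this is `TransactionTypeHolder` from the transaction modules; the exact package varies by version, so verify against your release). A stdlib-only sketch of that calling discipline, with all names here being illustrative stand-ins rather than the real API:

```java
import java.util.concurrent.Callable;

// Hedged, stdlib-only sketch of what replaces @ShardingSphereTransactionType:
// the desired transaction type is held per thread, set before the transactional
// work runs, and restored afterwards.
public final class TransactionTypeSwitch {

    enum TransactionType { LOCAL, XA, BASE }

    // Per-thread holder mirroring the pattern of ShardingSphere's TransactionTypeHolder.
    private static final ThreadLocal<TransactionType> HOLDER =
            ThreadLocal.withInitial(() -> TransactionType.LOCAL);

    static TransactionType current() {
        return HOLDER.get();
    }

    // Run a task under the given transaction type, restoring the default afterwards --
    // the same set-before / clear-after discipline the real API requires of callers.
    static <T> T runWith(final TransactionType type, final Callable<T> task) throws Exception {
        HOLDER.set(type);
        try {
            return task.call();
        } finally {
            HOLDER.remove();
        }
    }

    public static void main(String[] args) throws Exception {
        TransactionType seen = runWith(TransactionType.XA, TransactionTypeSwitch::current);
        System.out.println(seen);      // type seen inside the task
        System.out.println(current()); // default restored afterwards
    }
}
```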
2.4 Configuration file
After upgrading to 5.3.0, the original Spring Boot Starter or Spring Namespace data source configuration becomes invalid.
3. Upgrade Guide
3.1 Introduction to ShardingSphere Driver
Starting from version 5.1.2, ShardingSphere-JDBC provides a native JDBC driver, ShardingSphereDriver, which can be used purely through configuration, without modifying code. With this access method, ShardingSphere-JDBC and ShardingSphere-Proxy share a more unified and consistent configuration file format, so a configuration file can be reused between them with only minor modifications.
After upgrading to a 5.3.x version, users who previously used the Spring Boot Starter or Spring Namespace are recommended to access ShardingSphere-JDBC via ShardingSphereDriver.
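Access through ShardingSphereDriver is plain JDBC: the driver URL points at a YAML configuration file instead of a database. A minimal sketch, assuming shardingsphere-jdbc-core and the MySQL driver are on the classpath; the file name and table name are illustrative:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public final class ShardingSphereDriverDemo {

    // The rest of the URL after the driver prefix locates the YAML config,
    // here loaded from the classpath (absolute file paths are also supported).
    static String classpathUrl(final String yamlResource) {
        return "jdbc:shardingsphere:classpath:" + yamlResource;
    }

    public static void main(String[] args) throws Exception {
        // "sharding-config.yaml" is an illustrative resource name.
        try (Connection connection = DriverManager.getConnection(classpathUrl("sharding-config.yaml"));
             Statement statement = connection.createStatement();
             ResultSet resultSet = statement.executeQuery("SELECT order_id FROM t_auto_order_mod")) {
            while (resultSet.next()) {
                System.out.println(resultSet.getLong("order_id"));
            }
        }
    }
}
```

Because this is the standard `java.sql` API, no ShardingSphere-specific code appears in the application; swapping configurations only means changing the URL.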
3.2 Upgrade guide for Spring Boot Starter users
Taking the project from my last article as an example: first, replace the Maven dependencies as described in section 2.1.
3.2.1 Configuration file upgrade
3.2.1.1 Before upgrade
application.yml:
server:
  port: 8844
spring:
  application:
    name: @artifactId@
  shardingsphere:
    # Data source configuration
    datasource:
      names: ds1,ds2
      ds1:
        type: com.zaxxer.hikari.HikariDataSource
        driver-class-name: com.mysql.cj.jdbc.Driver
        url: jdbc:mysql://192.168.0.35:3306/db1?useUnicode=true&characterEncoding=UTF-8&rewriteBatchedStatements=true&allowMultiQueries=true&serverTimezone=Asia/Shanghai
        username: root
        password: '1qaz@WSX'
      ds2:
        type: com.zaxxer.hikari.HikariDataSource
        driver-class-name: com.mysql.cj.jdbc.Driver
        url: jdbc:mysql://192.168.0.46:3306/db2?useUnicode=true&characterEncoding=UTF-8&rewriteBatchedStatements=true&allowMultiQueries=true&serverTimezone=Asia/Shanghai
        username: root
        password: '1qaz@WSX'
    # Rule definitions
    rules:
      sharding:
        # Use auto sharding algorithms
        autoTables:
          # Modulo
          t_auto_order_mod:
            actualDataSources: ds$->{1..2}
            sharding-strategy:
              standard:
                sharding-column: order_id
                sharding-algorithm-name: auto_order_mod
            # Distributed key generate strategy
            key-generate-strategy:
              # Generated key column; if omitted, no key generator is used
              column: order_id
              # Key generator algorithm name
              key-generator-name: snowflake
          # Hash modulo
          t_auto_order_hash_mod:
            actualDataSources: ds1
            sharding-strategy:
              standard:
                sharding-column: order_id
                sharding-algorithm-name: auto_order_hash_mod
            # Distributed key generate strategy
            key-generate-strategy:
              # Generated key column; if omitted, no key generator is used
              column: order_id
              # Key generator algorithm name
              key-generator-name: snowflake
          # Volume range
          t_auto_order_volume_range:
            actualDataSources: ds$->{1..2}
            sharding-strategy:
              standard:
                sharding-column: price
                sharding-algorithm-name: auto_order_volume_range
            # Distributed key generate strategy
            key-generate-strategy:
              # Generated key column; if omitted, no key generator is used
              column: order_id
              # Key generator algorithm name
              key-generator-name: snowflake
          # Boundary range
          t_auto_order_boundary_range:
            actualDataSources: ds$->{1..2}
            sharding-strategy:
              standard:
                sharding-column: price
                sharding-algorithm-name: auto_order_boundary_range
            # Distributed key generate strategy
            key-generate-strategy:
              # Generated key column; if omitted, no key generator is used
              column: order_id
              # Key generator algorithm name
              key-generator-name: snowflake
          # Auto date interval
          t_auto_order_auto_interval:
            actualDataSources: ds$->{1..2}
            sharding-strategy:
              standard:
                sharding-column: create_time
                sharding-algorithm-name: auto_order_auto_interval
            # Distributed key generate strategy
            key-generate-strategy:
              # Generated key column; if omitted, no key generator is used
              column: order_id
              # Key generator algorithm name
              key-generator-name: snowflake
        # Sharding algorithm configuration
        sharding-algorithms:
          # Modulo
          auto_order_mod:
            type: MOD
            props:
              sharding-count: 6
          # Hash modulo
          auto_order_hash_mod:
            type: HASH_MOD
            props:
              sharding-count: 6
          # Volume range
          auto_order_volume_range:
            type: VOLUME_RANGE
            props:
              range-lower: 0
              range-upper: 20000
              sharding-volume: 10000
          # Boundary range
          auto_order_boundary_range:
            type: BOUNDARY_RANGE
            props:
              sharding-ranges: 10,15,100,12000,16000
          # Auto date interval
          auto_order_auto_interval:
            type: AUTO_INTERVAL
            props:
              datetime-lower: 2023-05-07 00:00:00
              datetime-upper: 2023-05-10 00:00:00
              sharding-seconds: 86400
        # Key generator configuration (with a generated key, do not pass the key column in INSERT statements, not even as null; omit the column entirely)
        keyGenerators:
          # Key generator algorithm name
          snowflake:
            # Key generator algorithm type
            type: SNOWFLAKE
    props:
      sql-show: true # show SQL
mybatis:
  configuration:
    log-impl: org.apache.ibatis.logging.stdout.StdOutImpl
3.2.1.2 After upgrade
application.yml: replace the original ShardingSphere-related configuration with the ShardingSphereDriver configuration items:
server:
  port: 8844
spring:
  application:
    name: @artifactId@
  datasource:
    driver-class-name: org.apache.shardingsphere.driver.ShardingSphereDriver
    url: jdbc:shardingsphere:classpath:shading-auto-tables-algorithm.yaml
mybatis:
  configuration:
    log-impl: org.apache.ibatis.logging.stdout.StdOutImpl
Create a new YAML configuration file under the resources directory, e.g. shading-auto-tables-algorithm.yaml, and rewrite the original configuration content according to User Manual - YAML Configuration:
dataSources:
  ds1:
    dataSourceClassName: com.zaxxer.hikari.HikariDataSource
    driverClassName: com.mysql.cj.jdbc.Driver
    jdbcUrl: jdbc:mysql://192.168.0.35:3306/db1?useUnicode=true&characterEncoding=UTF-8&rewriteBatchedStatements=true&allowMultiQueries=true&serverTimezone=Asia/Shanghai
    username: root
    password: '1qaz@WSX'
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
  ds2:
    dataSourceClassName: com.zaxxer.hikari.HikariDataSource
    driverClassName: com.mysql.cj.jdbc.Driver
    jdbcUrl: jdbc:mysql://192.168.0.46:3306/db2?useUnicode=true&characterEncoding=UTF-8&rewriteBatchedStatements=true&allowMultiQueries=true&serverTimezone=Asia/Shanghai
    username: root
    password: '1qaz@WSX'
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
rules:
- !SHARDING
  autoTables:
    # Modulo
    t_auto_order_mod:
      actualDataSources: ds$->{1..2}
      shardingStrategy:
        standard:
          shardingColumn: order_id
          shardingAlgorithmName: auto_order_mod
      # Distributed key generate strategy
      keyGenerateStrategy:
        # Generated key column; if omitted, no key generator is used
        column: order_id
        # Key generator algorithm name
        keyGeneratorName: snowflake
    # Hash modulo
    t_auto_order_hash_mod:
      actualDataSources: ds1
      shardingStrategy:
        standard:
          shardingColumn: order_id
          shardingAlgorithmName: auto_order_hash_mod
      # Distributed key generate strategy
      keyGenerateStrategy:
        # Generated key column; if omitted, no key generator is used
        column: order_id
        # Key generator algorithm name
        keyGeneratorName: snowflake
    # Volume range
    t_auto_order_volume_range:
      actualDataSources: ds$->{1..2}
      shardingStrategy:
        standard:
          shardingColumn: price
          shardingAlgorithmName: auto_order_volume_range
      # Distributed key generate strategy
      keyGenerateStrategy:
        # Generated key column; if omitted, no key generator is used
        column: order_id
        # Key generator algorithm name
        keyGeneratorName: snowflake
    # Boundary range
    t_auto_order_boundary_range:
      actualDataSources: ds$->{1..2}
      shardingStrategy:
        standard:
          shardingColumn: price
          shardingAlgorithmName: auto_order_boundary_range
      # Distributed key generate strategy
      keyGenerateStrategy:
        # Generated key column; if omitted, no key generator is used
        column: order_id
        # Key generator algorithm name
        keyGeneratorName: snowflake
    # Auto date interval
    t_auto_order_auto_interval:
      actualDataSources: ds$->{1..2}
      shardingStrategy:
        standard:
          shardingColumn: create_time
          shardingAlgorithmName: auto_order_auto_interval
      # Distributed key generate strategy
      keyGenerateStrategy:
        # Generated key column; if omitted, no key generator is used
        column: order_id
        # Key generator algorithm name
        keyGeneratorName: snowflake
  # Sharding algorithm configuration
  shardingAlgorithms:
    # Modulo
    auto_order_mod:
      type: MOD
      props:
        sharding-count: 6
    # Hash modulo
    auto_order_hash_mod:
      type: HASH_MOD
      props:
        sharding-count: 6
    # Volume range
    auto_order_volume_range:
      type: VOLUME_RANGE
      props:
        range-lower: 0
        range-upper: 20000
        sharding-volume: 10000
    # Boundary range
    auto_order_boundary_range:
      type: BOUNDARY_RANGE
      props:
        sharding-ranges: 10,15,100,12000,16000
    # Auto date interval
    auto_order_auto_interval:
      type: AUTO_INTERVAL
      props:
        datetime-lower: "2023-05-07 00:00:00"
        datetime-upper: "2023-05-10 00:00:00"
        sharding-seconds: 86400
  # Key generator configuration (with a generated key, do not pass the key column in INSERT statements, not even as null; omit the column entirely)
  keyGenerators:
    # Key generator algorithm name
    snowflake:
      # Key generator algorithm type
      type: SNOWFLAKE
props:
  sql-show: true
4. Conclusion
This upgrade greatly reduces the configuration differences between ShardingSphere-JDBC and ShardingSphere-Proxy, lays a solid foundation for users to transition smoothly to a ShardingSphere cluster architecture, and takes a solid step toward standardizing the API and improving configuration compatibility.
For new ShardingSphere users, the unified ShardingSphereDriver configuration method also makes configuration less intrusive and easier to get started with.