[Microservices] Design and implementation of multi-data-source adaptation in Spring Boot

Table of contents

1. Problem background

1.1 MySQL read/write separation

1.2 Adapting to different types of databases

1.3 Multiple data sources

2. Multi-data-source adaptation scenarios and problems

2.1 Supporting quick switching between data sources

2.2 Minimal changes at the code level

2.3 Data Migration Issues

2.4 Cross-database transaction issues

3. Multi-data-source adaptation solutions

3.1 Roll your own solution

3.2 providerId-based approach

3.3 dynamic-datasource-based approach

3.3.1 Introduction to dynamic-datasource

3.4 Custom SDK embedding method

4. Hands-on case demonstration

4.1 Preparation

4.2 Adaptation scheme based on providerId

4.2.1 Import project dependencies

4.2.2 Three core configuration files

4.2.3 providerId core configuration class

4.2.4 Custom test interface

4.2.5 mybatis implementation

4.2.6 Effect Demonstration

4.3 Adaptation scheme based on dynamic-datasource

4.3.1 Import basic dependencies

4.3.2 Core configuration files

4.3.3 dao interface

4.3.4 Custom test interface

4.3.5 Interface effect test

4.3.6 Notes on using dynamic-datasource

4.4 Adaptation scheme based on SDK

4.4.1 Common configuration class

4.4.2 Release jar

4.4.3 Import SDK into other modules

4.4.4 Effect verification

5. Closing thoughts


1. Problem background

As the business grows and changes, a Spring Boot project that connects to a single data source, or to a single type of database, may need to be adjusted. In scenarios like the following, you may need to adapt to multiple data sources.

1.1 MySQL read/write separation

For example, your project may start out connected to a single MySQL instance, which is enough at first. As the business grows, one database can no longer carry the high-traffic reads and writes, so the database layer has to be scaled out. At that point the project may need a read/write separation setup, which means configuring connections to multiple data sources.

1.2 Adapting to different types of databases

In recent years more and more projects have paid attention to data security, and domestic databases have been on the rise over the past couple of years. If a project starts out on MySQL, it may later have to introduce other databases to meet regulatory requirements or customer demands, which means it must be compatible with MySQL, Oracle, PostgreSQL and so on at the same time.

1.3 Multiple data sources

You may also run into the situation where the data returned by one interface comes from several different database instances, or is aggregated from MySQL and Oracle at the same time. In that case the project must support connecting to multiple data sources simultaneously, which is fairly common in tooling projects.

2. Multi-data-source adaptation scenarios and problems

Depending on the business forms above, the actual requirements can be broken down into different usage scenarios. Based on hands-on experience, here are some common multi-data-source adaptation scenarios and the problems that come with them.

2.1 Supporting quick switching between data sources

For example, your system originally uses MySQL as the underlying storage, but a new deployment needs to run on PostgreSQL or another domestic database. This requires the project to have the ability to switch its underlying storage dynamically. Imagine that one deployment needs MySQL while another needs PostgreSQL: the fastest solution is to switch the data source through dynamic parameter configuration.

2.2 Minimal changes at the code level

An unavoidable topic during adaptation is that the existing project or architecture has to be partially adjusted. How much depends on the project's technical architecture: with a Spring Boot + MyBatis stack the amount of change is fairly controllable. An even better way is to develop an external plug-in or SDK and have all modules in the microservice system integrate it uniformly, which suits platform-type projects with many microservices.

2.3 Data Migration Issues

When a stable production project needs to migrate from MySQL to PostgreSQL or another database, switching is supported at the technical level, but what about the historical data? It cannot simply be abandoned; data migration is a problem that must be considered when adapting to multiple data sources.

2.4 Cross-database transaction issues

When the program only operates on MySQL, its own transaction mechanism is enough to keep the data safe. If an interface now needs to operate on MySQL and PostgreSQL at the same time, a MySQL-local transaction alone is no longer enough; cross-database transactions are another problem that multi-data-source adaptation has to solve.

3. Multi-data-source adaptation solutions

Take the familiar Spring Boot + MySQL + MyBatis stack as an example, which is also the base framework of many microservice projects. Combined with the problems mentioned above, several practical solutions are listed below.

3.1 Roll your own solution

As the name implies, this means encapsulating a multi-data-source component yourself. The most common way is a custom annotation combined with AOP to switch dynamically between multiple data sources (there is plenty of reference material online for this approach).

For example, your project may need to connect to two data sources, DS1 and DS2, at the same time, with DS1 as the master library and DS2 as the slave library. When SQL is executed, each statement can be routed to the designated database instance. A minimal sketch of this approach follows.
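
The sketch below is illustrative rather than the original author's code: the annotation name, the lookup keys ds1/ds2, and the holder and aspect classes are assumptions, and the underlying DataSource beans are expected to be defined elsewhere in the application. It relies on Spring's AbstractRoutingDataSource, which asks for a lookup key every time a connection is obtained.

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;
import org.springframework.stereotype.Component;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// 1. Custom annotation marking which data source a method should run against
@Target({ElementType.METHOD, ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@interface TargetDataSource {
    String value() default "ds1";
}

// 2. ThreadLocal holder carrying the lookup key of the current thread
class DataSourceContextHolder {
    private static final ThreadLocal<String> HOLDER = new ThreadLocal<>();
    static void set(String key) { HOLDER.set(key); }
    static String get() { return HOLDER.get(); }
    static void clear() { HOLDER.remove(); }
}

// 3. Routing data source: Spring calls determineCurrentLookupKey() whenever a connection is needed.
//    Register it as the primary DataSource bean, populate it with a Map of lookup key -> DataSource
//    via setTargetDataSources(...), and set a fallback via setDefaultTargetDataSource(...).
class RoutingDataSource extends AbstractRoutingDataSource {
    @Override
    protected Object determineCurrentLookupKey() {
        return DataSourceContextHolder.get();
    }
}

// 4. Aspect that switches the key around annotated methods and always cleans it up afterwards
@Aspect
@Component
class DataSourceSwitchAspect {

    @Around("@annotation(targetDataSource)")
    public Object around(ProceedingJoinPoint pjp, TargetDataSource targetDataSource) throws Throwable {
        DataSourceContextHolder.set(targetDataSource.value());
        try {
            return pjp.proceed();
        } finally {
            DataSourceContextHolder.clear();
        }
    }
}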

3.2 providerId-based approach

providerId here refers to MyBatis's built-in databaseIdProvider mechanism, which the framework consults when parsing SQL statements. Simply put, it lets mapper statements be distinguished per database vendor. A few notes on it:

  • databaseId attribute: if a databaseIdProvider is configured, MyBatis loads all statements that either have no databaseId or whose databaseId matches the current one;
  • if the same statement exists both with and without a databaseId, the one without is ignored. insert, update and delete statements all support this attribute as well;

3.3 dynamic-datasource-based approach

dynamic-datasource is a very practical third-party component for dynamically switching between multiple data sources and is also a good solution. It is introduced below in a few points.

3.3.1 Introduction to dynamic-datasource

Official document: document address

dynamic-datasource-spring-boot-starter is a starter for quickly integrating multiple data sources with Spring Boot.

Features:

  • Supports data source grouping, suitable for scenarios such as pure multi-database, read/write separation, one master with multiple slaves, and mixed modes;
  • Supports encryption of sensitive database configuration information with ENC();
  • Supports independent initialization of the table schema and database for each data source;
  • Supports starting without any data source and lazy-loading data sources (connections are created only when needed);
  • Supports custom annotations, which need to inherit @DS (3.2.0+);
  • Provides fast, simplified integration with Druid, HikariCP, BeeCp and Dbcp2;
  • Provides integration solutions for Mybatis-Plus, Quartz, ShardingJdbc, P6spy, Jndi and other components;
  • Provides a custom data source source scheme (such as loading data sources from a database);
  • Provides a solution for dynamically adding and removing data sources after the project has started;
  • Provides a pure read/write separation solution in a MyBatis environment;
  • Provides data source resolution from dynamic SpEL parameters; SpEL, session and header resolvers are built in, and custom resolvers are supported;
  • Supports nested switching of data sources across multiple layers (ServiceA >>> ServiceB >>> ServiceC);
  • Provides a distributed transaction solution based on Seata;
  • Provides a local multi-data-source transaction solution (note: it cannot be mixed with native Spring transactions);
     

Conventions:

  • This framework only does the core job of switching data sources and does not restrict your specific operations; any CRUD can be performed after switching;
  • In the configuration file, the part of a data source name before an underscore (_) is its group name; data sources with the same group name are placed in the same group;
  • The target of a switch can be a group name or a concrete data source name; when a group name is used, a load-balancing algorithm selects a member of the group (see the sketch after this list);
  • The default data source name is master; it can be changed via spring.datasource.dynamic.primary;
  • Annotations on methods take precedence over annotations on classes;
  • @DS is inherited when placed on abstract classes, but not when placed on interfaces;
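
A minimal sketch of the grouping convention above, assuming a configuration that declares data sources named master, slave_1 and slave_2 (the service and method names are illustrative, not part of the original article):

import com.baomidou.dynamic.datasource.annotation.DS;
import org.springframework.stereotype.Service;

// Assumes spring.datasource.dynamic.datasource declares master, slave_1 and slave_2;
// slave_1 and slave_2 share the group name "slave".
@Service
public class OrderQueryService {

    // Group name: the framework load-balances between slave_1 and slave_2.
    @DS("slave")
    public void queryFromAnySlave() {
        // read-only query ...
    }

    // Concrete data source name: always routed to slave_1.
    @DS("slave_1")
    public void queryFromSlaveOne() {
        // read-only query ...
    }

    // No annotation: falls back to the primary data source (master by default).
    public void writeToMaster() {
        // write ...
    }
}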

3.4 Custom SDK embedding method

For platform-level projects with many microservices, to reduce each application's adaptation workload as much as possible, you can consider packaging the multi-data-source adaptation uniformly into an SDK, importing that SDK into each microservice, and then making only a few changes at the code level. This is a fairly common practice in the industry; for example, when a middleware version is upgraded in such a project, the applications are barely affected.

4. Hands-on case demonstration

4.1 Preparation

For the demonstrations below, two database environments need to be prepared, MySQL and PostgreSQL, and a test table tb_user is created in each of them. The DDL is shown below; note that the ENGINE/CHARSET clause and the backticks are MySQL-specific and should be dropped when creating the table on PostgreSQL.

create table tb_user(
	id varchar(32) NOT NULL,
	user_name varchar(32) NOT NULL,
	email varchar(32) NOT NULL,
	pass_word varchar(32) NOT NULL,
	PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

Insert one row into the same table in MySQL and PostgreSQL respectively (string literals use single quotes so the statements are valid on both databases):

-- insert a row in MySQL
insert into tb_user(id,user_name,email,pass_word) values('001','jerry','[email protected]','123456');


-- insert a row in PostgreSQL
insert into tb_user(id,user_name,email,pass_word) values('001','jerry','[email protected]','1234567');

4.2 Adaptation scheme based on providerId

From a practical point of view, the providerId-based approach is relatively elegant, and the code changes required during adaptation are fairly small. The principle was briefly introduced above, so let's look directly at the implementation. The project structure is as follows:

4.2.1 Import project dependencies

Just import the necessary dependencies

     <dependencies>

        <!-- PostgreSQL driver -->
        <dependency>
            <groupId>org.postgresql</groupId>
            <artifactId>postgresql</artifactId>
            <version>42.3.2</version>
        </dependency>

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>

        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
        </dependency>

        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>5.1.46</version>
        </dependency>

        <dependency>
            <groupId>com.baomidou</groupId>
            <artifactId>mybatis-plus-boot-starter</artifactId>
            <version>3.1.0</version>
        </dependency>

        <!-- MyBatis integration -->
        <dependency>
            <groupId>org.mybatis.spring.boot</groupId>
            <artifactId>mybatis-spring-boot-starter</artifactId>
            <version>1.3.1</version>
        </dependency>

        <!--fastjson-->
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>fastjson</artifactId>
            <version>1.2.16</version>
        </dependency>

        <!-- Druid connection pool -->
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>druid-spring-boot-starter</artifactId>
            <version>1.1.10</version>
        </dependency>


    </dependencies>

4.2.2 Three core configuration files

Note that the providerId-based implementation is meant to support fast switching from one database to another in a production environment, so the data source switch is achieved here by loading different external configuration files.

application.yml, the common configuration

# switch the active profile: postgresql or mysql
spring:
  profiles:
    active: postgresql

# MyBatis configuration
mybatis:
  mapper-locations: classpath:mapper/**/*.xml
  type-aliases-package: com.congge.entity
  configuration:
    map-underscore-to-camel-case: true

# print SQL logs to the console
logging:
  level:
    com:
      congge:
        dao: debug

system:
  sql:
    types: mysql,postgresql

application-mysql.yml, the MySQL configuration


# port
server:
  port: 8081

# data source configuration
spring:
  datasource:
    hikari:
      jdbc-url: jdbc:mysql://IP:3306/biz-db?&useSSL=false
      driver-class-name: com.mysql.jdbc.Driver
      username: root
      password: 123456

application-postgresql.yml, the PostgreSQL configuration

# port
server:
  port: 8081

# data source configuration
spring:
  datasource:
    hikari:
      jdbc-url: jdbc:postgresql://IP:5432/biz-db
      driver-class-name: org.postgresql.Driver
      username: postgres
      password: password

4.2.3 providerId core configuration class

This class mainly does the following:

  • creates the DataSource;
  • tells MyBatis which database types are supported, so that statements in the mapper XML files can be resolved correctly via their databaseId attribute;

import org.apache.ibatis.mapping.DatabaseIdProvider;
import org.apache.ibatis.mapping.VendorDatabaseIdProvider;
import org.mybatis.spring.SqlSessionFactoryBean;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.core.env.Environment;
import org.springframework.core.io.support.PathMatchingResourcePatternResolver;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.util.StringUtils;

import javax.sql.DataSource;
import java.util.*;

@Configuration
public class DbDataSourceConfig {

    @Value("${mybatis.mapper-locations}")
    private String mapperLocations;

    @Autowired
    private Environment environment;

    static Map<String,String> sqlTypeMap = new HashMap<>();
    static {
        sqlTypeMap.put("Oracle", "oracle");
        sqlTypeMap.put("MySQL", "mysql");
        sqlTypeMap.put("PostgreSQL", "postgresql");
        sqlTypeMap.put("DB2", "db2");
        sqlTypeMap.put("SQL Server", "sqlserver");
    }

    @Primary
    @Bean(name = "dataSource")
    @ConfigurationProperties("spring.datasource.hikari")
    public DataSource dataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean
    public JdbcTemplate jdbcTemplate() {
        return new JdbcTemplate(dataSource());
    }

    @Bean
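    // Maps the database product name reported by the JDBC driver to the databaseId values
    // used in the mapper XML files; the supported types can be narrowed down through the
    // system.sql.types property.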
    public DatabaseIdProvider databaseIdProvider() {
        DatabaseIdProvider databaseIdProvider = new VendorDatabaseIdProvider();
        Properties p = new Properties();
        String sqlTypeCollections = environment.getProperty("system.sql.types");
        if(StringUtils.isEmpty(sqlTypeCollections)){
            sqlTypeMap.forEach((key,val) ->{
                p.setProperty(key, val);
            });
        }else {
            List<String> sqlTypeList = Arrays.asList(sqlTypeCollections.split(","));
            for(String sqlType : sqlTypeList){
                if("mysql".equals(sqlType)){
                    p.setProperty("MySQL", "mysql");
                }else if("oracle".equals(sqlType)){
                    p.setProperty("Oracle", "oracle");
                }else if("postgresql".equals(sqlType)){
                    p.setProperty("PostgreSQL", "postgresql");
                }else if("db2".equals(sqlType)){
                    p.setProperty("DB2", "db2");
                }else if("sqlserver".equals(sqlType)){
                    p.setProperty("SQL Server", "sqlserver");
                }
            }
        }
        databaseIdProvider.setProperties(p);
        return databaseIdProvider;
    }

    @Primary
    @Bean
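    // Registers the SqlSessionFactory with the databaseIdProvider above so that MyBatis
    // executes the statement whose databaseId matches the database currently connected to.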
    public SqlSessionFactoryBean sqlSessionFactoryBean(@Qualifier("dataSource") DataSource dataSource) throws Exception {
        SqlSessionFactoryBean factoryBean = new SqlSessionFactoryBean();
        factoryBean.setDataSource(dataSource);
        factoryBean.setDatabaseIdProvider(databaseIdProvider());
        if(StringUtils.isEmpty(mapperLocations)){
            mapperLocations = "classpath*:*.xml";
        }
        factoryBean.setMapperLocations(new PathMatchingResourcePatternResolver().getResources(mapperLocations));
        return factoryBean;
    }
}

4.2.4 Custom test interface

Once the providerId configuration class is in place, everything else is coded exactly as before. To keep the project directory manageable, it is recommended to create a separate XML directory per database type, for example one directory for the MySQL XML files and one for PostgreSQL, leaving room for an Oracle directory later. Why? Adaptation experience shows that different database types still differ in certain SQL syntax, even though statements are only distinguished by the databaseId attribute inside the XML. Next, add the following interface (the service and mapper it relies on are sketched after the controller):

@RestController
public class UserController {


    @Autowired
    private UserService userService;

    //localhost:8081/getById?id=001
    @GetMapping("/getById")
    public TbUser getTbUser(String id){
        return userService.getTbUser(id);
    }

}
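
The original does not show the service and mapper interface sitting between this controller and the XML in the next section. The sketch below is inferred from the mapper namespace com.congge.dao.TbUserMapper and the statement id selectById; the classes are shown together for brevity and are assumptions, not the original author's code.

import com.congge.entity.TbUser;
import org.apache.ibatis.annotations.Mapper;
import org.apache.ibatis.annotations.Param;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

// Must live in the com.congge.dao package to match the mapper XML namespace;
// MyBatis picks the <select id="selectById"> statement whose databaseId matches the connected database.
// Depending on the setup, a @MapperScan on the application class may be needed.
@Mapper
interface TbUserMapper {
    TbUser selectById(@Param("id") String id);
}

interface UserService {
    TbUser getTbUser(String id);
}

@Service
class UserServiceImpl implements UserService {

    @Autowired
    private TbUserMapper tbUserMapper;

    @Override
    public TbUser getTbUser(String id) {
        return tbUserMapper.selectById(id);
    }
}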

4.2.5 mybatis implementation

Taking MySQL as an example, the query statement behind the interface above is as follows:

<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN" "http://mybatis.org/dtd/mybatis-3-mapper.dtd" >
<mapper namespace="com.congge.dao.TbUserMapper">

    <resultMap id="mysqlResultMap" type="com.congge.entity.TbUser">
        <result property="id" column="id"/>
        <result property="userName" column="user_name"/>
        <result property="email" column="email"/>
        <result property="passWord" column="pass_word"/>
    </resultMap>

    <select id="selectById" parameterType="java.lang.String" resultMap="mysqlResultMap" databaseId="mysql">
        select
        *
        from
        tb_user
        where id = #{id}
    </select>

</mapper>

For the PostgreSQL environment, the statement is as follows:

<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN" "http://mybatis.org/dtd/mybatis-3-mapper.dtd" >
<mapper namespace="com.congge.dao.TbUserMapper">

    <resultMap id="pgResultMap" type="com.congge.entity.TbUser">
        <result property="id" column="id"/>
        <result property="userName" column="user_name"/>
        <result property="email" column="email"/>
        <result property="passWord" column="pass_word"/>
    </resultMap>

    <select id="selectById" parameterType="java.lang.String" resultMap="pgResultMap" databaseId="postgresql">
        select
        *
        from
        tb_user
        where id = #{id}
    </select>

</mapper>

4.2.6 Effect Demonstration

First point the following configuration in application.yml at the PostgreSQL environment and start the project:

spring:
  profiles:
    active: postgresql

Call the test interface above from a browser and observe the following result.

Switch the configuration to mysql, restart, and query again to see the following result.

This demonstrates quickly switching a project between different types of databases purely through configuration (the active profile can also be supplied at startup with --spring.profiles.active=mysql).

4.3 Adaptation scheme based on dynamic-datasource

This approach applies when the project needs to use multiple data sources at the same time, or mix them within a single request. The project directory structure is as follows:

4.3.1 Import basic dependencies

Other dependencies, such as MyBatis and the MySQL driver, stay the same as above; only the following needs to be added:

        <dependency>
            <groupId>com.baomidou</groupId>
            <artifactId>dynamic-datasource-spring-boot-starter</artifactId>
            <version>3.1.0</version>
        </dependency>

4.3.2 Core configuration files

spring:
  datasource:
    dynamic:
      # the default data source; must match one of the names (pg / mysql) declared below, change as needed
      primary: mysql
      strict: false
      datasource:
        pg:
          url: jdbc:postgresql://IP:5432/biz-db
          username: postgres
          password: postgres
          driver-class-name: org.postgresql.Driver
          type: com.alibaba.druid.pool.DruidDataSource
        mysql:
          url: jdbc:mysql://IP:3306/biz-db?&useSSL=false
          username: root
          password: root
          driver-class-name: com.mysql.jdbc.Driver
          type: com.alibaba.druid.pool.DruidDataSource

      druid:
        # initial pool size
        initial-size: 1
        # minimum idle connections
        min-idle: 1
        # maximum active connections
        max-active: 100
        # maximum wait time in milliseconds when acquiring a connection
        max-wait: 60000
        filters: config,stat
        connect-properties: druid.stat.mergeSql=true;druid.stat.logSlowSql=true;druid.stat.slowSqlMillis=500
        filter:
          commons-log:
            enabled: true
            statement-log-enabled: false
            statement-log-error-enabled: true
            statement-executable-sql-log-enable: true

server:
  port: 8082

mybatis:
  mapper-locations: classpath:mybatis/*/*.xml
  type-aliases-package: com.congge.entity
  configuration:
    map-underscore-to-camel-case: true

4.3.3 dao interface

With this approach, since the dynamic-datasource starter can switch data sources automatically through annotations, the custom configuration class can be left out (there are also dynamic-datasource examples online that add one). Add a dao interface containing two queries; the @DS annotation distinguishes the data source each method uses, and its value is the data source name declared in the configuration file above (an annotation-based alternative to the mapper XML is sketched after the interface):

import com.baomidou.dynamic.datasource.annotation.DS;
import com.congge.entity.TbUser;
import org.apache.ibatis.annotations.Mapper;
import org.apache.ibatis.annotations.Param;

@Mapper
public interface DbUserMapper {

    @DS(value = "mysql")
    TbUser queryById(@Param("id") String id);

    @DS(value = "pg")
    TbUser getById(@Param("id") String id);

}
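
The XML statements that queryById and getById map to are not shown in the original. As a self-contained alternative sketch (not the original author's code), the same two queries can also be written with MyBatis annotations, and @DS still routes each method to its own data source; map-underscore-to-camel-case takes care of the column mapping.

import com.baomidou.dynamic.datasource.annotation.DS;
import com.congge.entity.TbUser;
import org.apache.ibatis.annotations.Mapper;
import org.apache.ibatis.annotations.Param;
import org.apache.ibatis.annotations.Select;

@Mapper
public interface DbUserAnnotatedMapper {

    // Routed to the data source named "mysql" in the configuration above.
    @DS("mysql")
    @Select("select id, user_name, email, pass_word from tb_user where id = #{id}")
    TbUser queryById(@Param("id") String id);

    // Routed to the data source named "pg".
    @DS("pg")
    @Select("select id, user_name, email, pass_word from tb_user where id = #{id}")
    TbUser getById(@Param("id") String id);
}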

4.3.4 Custom test interface


@RestController
public class UserController {

    @Autowired
    private UserService userService;

    //localhost:8082/getById?id=001
    @GetMapping("/getById")
    public TbUser getTbUser(String id){
        return userService.getTbUser(id);
    }

}

@Service
public class UserServiceImpl implements UserService {

    @Autowired
    private DbUserMapper dbUserMapper;

    @Override
    public TbUser getTbUser(String id) {
        return dbUserMapper.getById(id);
    }

}
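
To illustrate the aggregation scenario from section 1.3, a hypothetical service (not in the original) could query both databases in one call through the two @DS-annotated mapper methods:

import com.congge.entity.TbUser;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import java.util.HashMap;
import java.util.Map;

// Illustrative only: aggregates the same record from both data sources.
// Assumes this class sits in the same package as the DbUserMapper from section 4.3.3.
@Service
public class UserCompareService {

    @Autowired
    private DbUserMapper dbUserMapper;

    public Map<String, TbUser> compareById(String id) {
        Map<String, TbUser> result = new HashMap<>();
        result.put("mysql", dbUserMapper.queryById(id)); // @DS("mysql") routes to MySQL
        result.put("pg", dbUserMapper.getById(id));      // @DS("pg") routes to PostgreSQL
        return result;
    }
}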

4.3.5 Interface effect test

After starting the project, call the interface in a browser and observe the following result.

4.3.6 Notes on using dynamic-datasource

When actually integrating with this approach, pay attention to the following points:

  • if a method needs to operate on several databases at the same time, it is best not to wrap those operations in a single transaction (see the sketch after this list);
  • to keep the different dao interfaces and xml files manageable, splitting them into separate packages is recommended;
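
A sketch of the first point above (class and method names are illustrative): once a Spring transaction has bound a connection, a later @DS switch on a nested mapper call may be ignored, and a rollback can never cover both databases, so cross-database calls are better kept outside a single @Transactional method.

import com.congge.entity.TbUser;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class CrossDbService {

    @Autowired
    private DbUserMapper dbUserMapper; // assumes the mapper from section 4.3.3

    // Problematic: both calls run on the connection bound when the transaction starts,
    // so the second @DS("pg") switch may not take effect.
    @Transactional
    public void readBothInOneTransaction(String id) {
        TbUser fromMysql = dbUserMapper.queryById(id); // @DS("mysql")
        TbUser fromPg = dbUserMapper.getById(id);      // @DS("pg"), may still hit MySQL here
    }

    // Safer: no surrounding transaction, so each mapper call obtains its own connection
    // from the data source declared by its @DS annotation.
    public void readBothSeparately(String id) {
        TbUser fromMysql = dbUserMapper.queryById(id);
        TbUser fromPg = dbUserMapper.getById(id);
    }
}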

4.4 Adaptation scheme based on SDK

In a multi-service setup such as a SaaS platform, to reduce the adaptation cost for the upper-layer applications, a general-purpose SDK can be considered. The SDK defines some basic rules, such as the scan path for the MyBatis XML files and the data source types supported by the upper-layer applications. Taking the providerId mode above as an example, the most important piece of the whole adaptation is the DbDataSourceConfig configuration class, which mainly does the following:

  • initializes the database types supported by default, such as oracle, mysql, postgresql, etc.;
  • also allows the database types to be configured through an external parameter, so that additional databases can be supported simply by passing them in;
  • defines the scan path for the xml files: if one is supplied externally it is used, otherwise the default applies;

The point of an SDK is to reduce repetitive work by pulling the public, common, system-level configuration into one place, so that other projects no longer need to provide the configuration class themselves once they import it. Based on this idea, we create a separate project module, move the core logic of DbDataSourceConfig into it, and finally publish the jar so that other modules can reference it. The module structure is as follows:

4.4.1 Common configuration class

In a real project, the logic abstracted into the SDK may need more thought: whether certain MyBatis configuration classes have to be overridden, whether SQL syntax differences across database types need to be handled up front, and how to deal with a client that imports the SDK but does not follow the agreed conventions. The more of this you think through, the more robust and extensible the SDK will be.

@Configuration
public class DbDataSourceConfig {

    @Value("${mybatis.mapper-locations}")
    private String mapperLocations;

    @Autowired
    private Environment environment;

    static Map<String,String> sqlTypeMap = new HashMap<>();
    static {
        sqlTypeMap.put("Oracle", "oracle");
        sqlTypeMap.put("MySQL", "mysql");
        sqlTypeMap.put("PostgreSQL", "postgresql");
        sqlTypeMap.put("DB2", "db2");
        sqlTypeMap.put("SQL Server", "sqlserver");
    }

    @Primary
    @Bean(name = "dataSource")
    @ConfigurationProperties("spring.datasource.hikari")
    public DataSource dataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean
    public JdbcTemplate jdbcTemplate() {
        return new JdbcTemplate(dataSource());
    }

    @Bean
    public DatabaseIdProvider databaseIdProvider() {
        DatabaseIdProvider databaseIdProvider = new VendorDatabaseIdProvider();
        Properties p = new Properties();
        String sqlTypeCollections = environment.getProperty("system.sql.types");
        if(StringUtils.isEmpty(sqlTypeCollections)){
            sqlTypeMap.forEach((key,val) ->{
                p.setProperty(key, val);
            });
        }else {
            List<String> sqlTypeList = Arrays.asList(sqlTypeCollections.split(","));
            for(String sqlType : sqlTypeList){
                if("mysql".equals(sqlType)){
                    p.setProperty("MySQL", "mysql");
                }else if("oracle".equals(sqlType)){
                    p.setProperty("Oracle", "oracle");
                }else if("postgresql".equals(sqlType)){
                    p.setProperty("PostgreSQL", "postgresql");
                }else if("db2".equals(sqlType)){
                    p.setProperty("DB2", "db2");
                }else if("sqlserver".equals(sqlType)){
                    p.setProperty("SQL Server", "sqlserver");
                }
            }
        }
        databaseIdProvider.setProperties(p);
        return databaseIdProvider;
    }

    @Primary
    @Bean
    public SqlSessionFactoryBean sqlSessionFactoryBean(@Qualifier("dataSource") DataSource dataSource) throws Exception {
        SqlSessionFactoryBean factoryBean = new SqlSessionFactoryBean();
        factoryBean.setDataSource(dataSource);
        factoryBean.setDatabaseIdProvider(databaseIdProvider());
        if(StringUtils.isEmpty(mapperLocations)){
            mapperLocations = "classpath*:*.xml";
        }
        factoryBean.setMapperLocations(new PathMatchingResourcePatternResolver().getResources(mapperLocations));
        return factoryBean;
    }
}

4.4.2 Release jar

This step is straightforward (for example, mvn install for local use, or mvn deploy to a private repository), so it is not covered in detail here.

4.4.3 Import SDK into other modules

Import the SDK published above into the biz-diff module mentioned earlier, and comment out the module's local DbDataSourceConfig.
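
One practical detail not covered in the original: if the SDK's DbDataSourceConfig lives in a package outside the consuming application's component-scan path, Spring will not pick it up automatically. In that case it can be imported explicitly, as in the sketch below (the application class name and the SDK package are assumptions); alternatively, the SDK can register the class as an auto-configuration via META-INF/spring.factories so that no code change is required.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Import;

import com.congge.sdk.DbDataSourceConfig; // the SDK package name is illustrative

@SpringBootApplication
@Import(DbDataSourceConfig.class) // pulls the shared data source configuration out of the SDK jar
public class BizApplication {

    public static void main(String[] args) {
        SpringApplication.run(BizApplication.class, args);
    }
}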

4.4.4 Effect verification

After the project starts, call the interface above again; in the PostgreSQL environment you get the following result.

Switch back to MySQL and call the interface again to see the following result.

5. Closing thoughts

Multi-data-source adaptation is a very common requirement in day-to-day project work. Giving your project the ability to switch quickly with minimal changes is something every development engineer needs to think about. In practice there may be many possible solutions, but they have to be chosen according to your own situation; completing the adaptation at the smallest possible cost is the ultimate goal.

Origin blog.csdn.net/zhangcongyi420/article/details/131444677