Configuring multiple data sources in Spring (Spring MVC / Spring Boot)

Foreword

At work I often run into scenarios that require multiple data sources. The blog posts I found on the topic are either overly long-winded or copied mechanically and rather chaotic. Based on two projects at my company, I have therefore organized three solutions, all of which passed product testing and work correctly.
The first one, shared by a colleague, is for SSM and switches data sources with an annotation.
The second and third are Spring Boot configurations:
The second ports the SSM annotation-based switching to a Spring Boot version.
The third is the plan we currently use. We gave up annotation-driven switching because, in our business scenario, the AOP advice that processes the annotation cannot be applied directly to the DAO layer; the annotation can only be placed on service-layer methods. With multi-layered business logic, writing a separate service for every data source just to switch is obviously not very wise.
In short, each approach has its strengths, and the choice depends on the concrete scenario. The complete configurations follow.

1. Annotation-based multiple data source configuration (SSM)

jdbc.properties

jdbc.url=jdbc:mysql://ip:3306/db?useUnicode=true&characterEncoding=utf-8
jdbc.username=root
jdbc.password=******

jdbc.url.mdm=jdbc:mysql://ip2:3306/db2?useUnicode=true&characterEncoding=utf-8
jdbc.username.mdm=root
jdbc.password.mdm=******

jdbc.url.pki=jdbc:mysql://ip3:3306/db3?useUnicode=true&characterEncoding=utf-8
jdbc.username.pki=root
jdbc.password.pki=******
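
The Druid bean definitions below also reference a few pool-tuning placeholders that are not listed above. A minimal sketch of those extra keys, with assumed example values you should adjust for your own environment:

jdbc.initialSize=5
jdbc.maxActive=20
jdbc.minIdle=3
jdbc.maxWait=60000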

STEP 1. Configure the data sources in the Spring/MyBatis configuration file (spring-common.xml)

<!-- 1. MyBatis configuration starts here -->
    <!-- Load the JDBC properties file; the benefit is that the JDBC settings can be swapped dynamically at deployment time
    <context:property-placeholder
        location="classpath:configures/jdbc/jdbc.properties" />-->
    <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="locations">
            <list>
                <value>classpath:conf/configures/jdbc/jdbc.properties</value>
            </list>
        </property>
        <property name="ignoreUnresolvablePlaceholders" value="true" /> 
    </bean>

<!-- Data sources; Druid is used as the connection pool -->
    <bean name="dataSourceDefault" class="com.alibaba.druid.pool.DruidDataSource"
        init-method="init" destroy-method="close">
        <property name="url" value="${jdbc.url}" />
        <property name="username" value="${jdbc.username}" />
        <property name="password" value="${jdbc.password}" />
        <!-- initial pool size -->
        <property name="initialSize" value="${jdbc.initialSize}" />
        <!-- maximum number of active connections -->
        <property name="maxActive" value="${jdbc.maxActive}" />
        <!-- minimum number of idle connections -->
        <property name="minIdle" value="${jdbc.minIdle}" />
        <!-- maximum wait time when acquiring a connection -->
        <property name="maxWait" value="${jdbc.maxWait}" />
        <property name="poolPreparedStatements" value="true" />
        <property name="maxPoolPreparedStatementPerConnectionSize" value="33" />
        <!-- SQL used to validate connections -->
        <property name="validationQuery" value="SELECT 1 FROM DUAL" />
        <property name="testOnBorrow" value="true" />
        <property name="testOnReturn" value="false" />
        <property name="testWhileIdle" value="true" />
        <!-- interval between checks for idle connections that should be closed, in milliseconds -->
        <property name="timeBetweenEvictionRunsMillis" value="60000" />
        <!-- minimum time a connection stays in the pool, in milliseconds -->
        <property name="minEvictableIdleTimeMillis" value="25200000" />
        <!-- enable the removeAbandoned feature -->
        <property name="removeAbandoned" value="true" />
        <!-- 1800 seconds, i.e. 30 minutes -->
        <property name="removeAbandonedTimeout" value="1800" />
        <!-- log an error when an abandoned connection is closed -->
        <property name="logAbandoned" value="true" />
        <!-- monitoring filters -->
        <property name="filters" value="mergeStat" />
    </bean>
    <!-- MDM data source (views) -->
    <bean name="dataSourceMDM" class="com.alibaba.druid.pool.DruidDataSource"
        init-method="init" destroy-method="close">
        <property name="url" value="${jdbc.url.mdm}" />
        <property name="username" value="${jdbc.username.mdm}" />
        <property name="password" value="${jdbc.password.mdm}" />
        <!-- initial pool size -->
        <property name="initialSize" value="${jdbc.initialSize}" />
        <!-- maximum number of active connections -->
        <property name="maxActive" value="${jdbc.maxActive}" />
        <!-- minimum number of idle connections -->
        <property name="minIdle" value="${jdbc.minIdle}" />
        <!-- maximum wait time when acquiring a connection -->
        <property name="maxWait" value="${jdbc.maxWait}" />
        <property name="poolPreparedStatements" value="true" />
        <property name="maxPoolPreparedStatementPerConnectionSize" value="33" />
        <!-- SQL used to validate connections -->
        <property name="validationQuery" value="SELECT 1 FROM DUAL" />
        <property name="testOnBorrow" value="true" />
        <property name="testOnReturn" value="false" />
        <property name="testWhileIdle" value="true" />
        <!-- interval between checks for idle connections that should be closed, in milliseconds -->
        <property name="timeBetweenEvictionRunsMillis" value="60000" />
        <!-- minimum time a connection stays in the pool, in milliseconds -->
        <property name="minEvictableIdleTimeMillis" value="25200000" />
        <!-- enable the removeAbandoned feature -->
        <property name="removeAbandoned" value="true" />
        <!-- 1800 seconds, i.e. 30 minutes -->
        <property name="removeAbandonedTimeout" value="1800" />
        <!-- log an error when an abandoned connection is closed -->
        <property name="logAbandoned" value="true" />
        <!-- monitoring filters -->
        <property name="filters" value="mergeStat" />
    </bean>
    <!-- PKI data source (views) -->
    <bean name="dataSourcePKI" class="com.alibaba.druid.pool.DruidDataSource"
        init-method="init" destroy-method="close">
        <property name="url" value="${jdbc.url.pki}" />
        <property name="username" value="${jdbc.username.pki}" />
        <property name="password" value="${jdbc.password.pki}" />
        <!-- initial pool size -->
        <property name="initialSize" value="${jdbc.initialSize}" />
        <!-- maximum number of active connections -->
        <property name="maxActive" value="${jdbc.maxActive}" />
        <!-- minimum number of idle connections -->
        <property name="minIdle" value="${jdbc.minIdle}" />
        <!-- maximum wait time when acquiring a connection -->
        <property name="maxWait" value="${jdbc.maxWait}" />
        <property name="poolPreparedStatements" value="true" />
        <property name="maxPoolPreparedStatementPerConnectionSize" value="33" />
        <!-- SQL used to validate connections -->
        <property name="validationQuery" value="SELECT 1 FROM DUAL" />
        <property name="testOnBorrow" value="true" />
        <property name="testOnReturn" value="false" />
        <property name="testWhileIdle" value="true" />
        <!-- interval between checks for idle connections that should be closed, in milliseconds -->
        <property name="timeBetweenEvictionRunsMillis" value="60000" />
        <!-- minimum time a connection stays in the pool, in milliseconds -->
        <property name="minEvictableIdleTimeMillis" value="25200000" />
        <!-- enable the removeAbandoned feature -->
        <property name="removeAbandoned" value="true" />
        <!-- 1800 seconds, i.e. 30 minutes -->
        <property name="removeAbandonedTimeout" value="1800" />
        <!-- log an error when an abandoned connection is closed -->
        <property name="logAbandoned" value="true" />
        <!-- monitoring filters -->
        <property name="filters" value="mergeStat" />
    </bean>

    <!-- Our own routing DataSource implementation (DataSourceRouter) -->
    <bean id="multipleDataSource" class="com.ai.util.resource.DataSourceRouter" lazy-init="true">
        <!-- default data source -->
        <property name="defaultTargetDataSource" ref="dataSourceDefault"/>
        <property name="targetDataSources">
            <map>
                <!-- each key is an alias for a data source; the routing key set by the aspect must match it, and value-ref points to the corresponding data source bean above -->
                <entry key="default" value-ref="dataSourceDefault"/>
                <entry key="mdm" value-ref="dataSourceMDM"/>
                <entry key="pki" value-ref="dataSourcePKI"/>
            </map>
        </property>
    </bean>
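
For completeness, the SqlSessionFactory and transaction manager should point at the routing bean rather than at any single pool, so that every mapper call goes through determineCurrentLookupKey(). A minimal sketch, assuming the mapper XML files live under classpath:mapper/ (the bean names and paths here are illustrative, not taken from the original project):

    <bean id="sqlSessionFactory" class="org.mybatis.spring.SqlSessionFactoryBean">
        <!-- the routing data source defined above, not one of the concrete pools -->
        <property name="dataSource" ref="multipleDataSource" />
        <property name="mapperLocations" value="classpath:mapper/**/*.xml" />
    </bean>
    <bean id="transactionManager"
        class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
        <property name="dataSource" ref="multipleDataSource" />
    </bean>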

STEP 2. Code configuration

The DataSource annotation

package com.ai.util.resource;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Target({ ElementType.TYPE, ElementType.METHOD })
@Retention(RetentionPolicy.RUNTIME)
public @interface DataSource {
    
    String value();
    
}

Class DataSourceRouter

package com.ai.util.resource;

import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

public class DataSourceRouter extends AbstractRoutingDataSource{
    
    @Override    
    protected Object determineCurrentLookupKey() {        
        return HandleDataSource.getDataSource();
    }
}

Class HandleDataSource

package com.ai.util.resource;

public class HandleDataSource {

    private static final ThreadLocal<String> holder = new ThreadLocal<String>();

    public static void setDataSource(String datasource) {
        holder.set(datasource);
    }

    public static String getDataSource() {
        return holder.get();
    }

    public static void clearDataSource() {
        holder.remove();
    }
}

Class DataSourceAspect

package com.ai.util.resource;

import java.lang.reflect.Method;
import java.text.MessageFormat;

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.After;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.aspectj.lang.annotation.Pointcut;
import org.aspectj.lang.reflect.MethodSignature;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.context.annotation.EnableAspectJAutoProxy;
import org.springframework.core.annotation.Order;
import org.springframework.stereotype.Component;
import org.springframework.util.StringUtils;


@Aspect
@Component
@Order(1)
@EnableAspectJAutoProxy(proxyTargetClass=true)
public class DataSourceAspect {
    static Logger logger = LoggerFactory.getLogger(DataSourceAspect.class);
    
    /*@Pointcut("execution(* com.ai..*.service..*.*(..))")*/
    @Pointcut("execution(* com.ai.system.service..*.*(..))")
    public void aspect() {}
    
    @Before("aspect()")
    public void before(JoinPoint point) {
        Class<?> target = point.getTarget().getClass();
        MethodSignature signature = (MethodSignature) point.getSignature();
        Method method = signature.getMethod() ;
        DataSource dataSource = null ;
        dataSource = this.getDataSource(target, method) ;
        if(dataSource == null){
            for (Class<?> clazz : target.getInterfaces()) {
                dataSource = getDataSource(clazz, method);
                if(dataSource != null){
                    break ;
                }
            }
        }
        if(dataSource != null && !"".equals(dataSource.value()) ){
            HandleDataSource.setDataSource(dataSource.value());
        }
    }
    
    @After("aspect()")
    public void after(JoinPoint point) {
        HandleDataSource.setDataSource(null);
    }
    
    public DataSource getDataSource(Class<?> target, Method method){
        try {
            Class<?>[] types = method.getParameterTypes();
            Method m = target.getMethod(method.getName(), types);
            if (m != null && m.isAnnotationPresent(DataSource.class)) {
                return m.getAnnotation(DataSource.class);
            }
            if (target.isAnnotationPresent(DataSource.class)) {
                return target.getAnnotation(DataSource.class);
            }
            
        } catch (Exception e) {
            e.printStackTrace();
            logger.error(MessageFormat.format("Exception occurred while switching the data source via annotation [class={0}, method={1}]:"
                    , target.getName(), method.getName()), e);
        }
        return null ;
    }
}
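
One side note: the after-advice above stores a null key instead of removing the ThreadLocal entry. If you prefer to drop the entry entirely, a minimal variant (same behavior for routing purposes, since the router falls back to defaultTargetDataSource either way) would be:

    @After("aspect()")
    public void after(JoinPoint point) {
        // remove the ThreadLocal entry; the router then falls back to defaultTargetDataSource
        HandleDataSource.clearDataSource();
    }

With the after-advice in place, the explicit HandleDataSource.setDataSource(null) calls in the service methods below are not strictly required, but they do no harm.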

STEP 3. Usage

A service method that uses the mdm data source:

// Terminal control audit query
@Override
@DataSource("mdm")
public List<TerminalControl> getTerminalControlList() {
    // Fetch terminal control audit records from the MDM source
    List<TerminalControl> list = auditDao.selectTerminalControlAuditFromMDM();
    HandleDataSource.setDataSource(null);
    return list;
}

A service method that uses the pki data source:

// Certificate audit detail page
@Override
@DataSource("pki")
public List<CertificateAudit> getZssjDataList(CertificateAudit certificateAudit) {
    // Fetch all certificate info from the PKI source
    List<CertificateAudit> list = auditDao.selectCertificateAuditDataListFromPKI(certificateAudit);
    HandleDataSource.setDataSource(null);
    return list;
}

Methods without the annotation use the default data source.
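
For illustration only (the entity and DAO names here are hypothetical, not from the project):

    // No @DataSource annotation: determineCurrentLookupKey() returns null,
    // so the router falls back to dataSourceDefault.
    @Override
    public List<User> getUserList() {
        return userDao.selectUserList();
    }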

2. Annotation-based multiple data source configuration (Spring Boot)

1. YAML configuration (application.yml)

spring:
  datasource:
    type: com.zaxxer.hikari.HikariDataSource
    driver-class-name: com.mysql.jdbc.Driver
    url: jdbc:mysql://ip1:3306/db1?characterEncoding=utf-8&useSSL=false&serverTimezone=UTC
    username: root
    password: ******
    hikari:
      minimum-idle: 5
      idle-timeout: 600000
      maximum-pool-size: 10
      auto-commit: true
      pool-name: MyHikariCP
      max-lifetime: 1800000
      connection-timeout: 30000
      connection-test-query: SELECT 1

opcl:
  datasource:
    url: jdbc:mysql://ip2:3306/db2?characterEncoding=utf-8&useSSL=false&serverTimezone=UTC
    username: root
    password: ******

# whether to enable the multiple data source configuration
myconfig:
  muti-datasource-open: true

2. Two data source property classes

package com.ai.api.config.datasource.properties;

import com.alibaba.druid.pool.DruidDataSource;
import com.alibaba.druid.util.JdbcConstants;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.stereotype.Component;

import java.sql.SQLException;

/**
 * @Author yonyong
 * @Description Database data source configuration.
 * Note: this class contains many defaults. If they suit your environment you can leave them as-is;
 * if not, do not modify this class but override the values in application.yml instead.
 * @Date 2020/3/4 18:03
 **/
@Component
@ConfigurationProperties(prefix = "spring.datasource")
public class DruidProperties {

    private String url;

    private String username;

    private String password;

    private String driverClassName = "com.mysql.jdbc.Driver";

    private Integer initialSize = 10;

    private Integer minIdle = 3;

    private Integer maxActive = 60;

    private Integer maxWait = 60000;

    private Boolean removeAbandoned = true;

    private Integer removeAbandonedTimeout = 180;

    private Integer timeBetweenEvictionRunsMillis = 60000;

    private Integer minEvictableIdleTimeMillis = 300000;

    private String validationQuery = "SELECT 'x'";

    private Boolean testWhileIdle = true;

    private Boolean testOnBorrow = false;

    private Boolean testOnReturn = false;

    private Boolean poolPreparedStatements = true;

    private Integer maxPoolPreparedStatementPerConnectionSize = 50;

    private String filters = "stat";

    public void config(DruidDataSource dataSource) {
        dataSource.setDbType(JdbcConstants.MYSQL);
        dataSource.setUrl(url);
        dataSource.setUsername(username);
        dataSource.setPassword(password);
        dataSource.setDriverClassName(driverClassName);
        dataSource.setInitialSize(initialSize);     // initial number of connections
        dataSource.setMinIdle(minIdle);             // minimum number of idle connections
        dataSource.setMaxActive(maxActive);         // maximum number of connections
        dataSource.setMaxWait(maxWait);             // maximum wait time when acquiring a connection
        dataSource.setRemoveAbandoned(removeAbandoned); // reclaim connections that exceed the time limit
        dataSource.setRemoveAbandonedTimeout(removeAbandonedTimeout); // the time limit, in seconds

        // interval between checks for idle connections that should be closed, in milliseconds
        dataSource.setTimeBetweenEvictionRunsMillis(timeBetweenEvictionRunsMillis);
        // minimum time a connection stays in the pool, in milliseconds
        dataSource.setMinEvictableIdleTimeMillis(minEvictableIdleTimeMillis);
        // SQL used to check whether a connection is still valid; must be a query statement
        dataSource.setValidationQuery(validationQuery);
        // validate idle connections during eviction runs
        dataSource.setTestWhileIdle(testWhileIdle);
        // run validationQuery when borrowing a connection; true degrades performance
        dataSource.setTestOnBorrow(testOnBorrow);
        // run validationQuery when returning a connection; true degrades performance
        dataSource.setTestOnReturn(testOnReturn);
        // enable PSCache and set its size per connection
        dataSource.setPoolPreparedStatements(poolPreparedStatements);
        dataSource.setMaxPoolPreparedStatementPerConnectionSize(maxPoolPreparedStatementPerConnectionSize);
        // extension filters configured by alias; common ones are:
        // stat  - monitoring statistics
        // log4j - logging
        // wall  - SQL injection defense
        try {
            dataSource.setFilters(filters);
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }

    public String getUrl() {
        return url;
    }

    public void setUrl(String url) {
        this.url = url;
    }

    public String getUsername() {
        return username;
    }

    public void setUsername(String username) {
        this.username = username;
    }

    public String getPassword() {
        return password;
    }

    public void setPassword(String password) {
        this.password = password;
    }

    public String getDriverClassName() {
        return driverClassName;
    }

    public void setDriverClassName(String driverClassName) {
        this.driverClassName = driverClassName;
    }

    public Integer getInitialSize() {
        return initialSize;
    }

    public void setInitialSize(Integer initialSize) {
        this.initialSize = initialSize;
    }

    public Integer getMinIdle() {
        return minIdle;
    }

    public void setMinIdle(Integer minIdle) {
        this.minIdle = minIdle;
    }

    public Integer getMaxActive() {
        return maxActive;
    }

    public void setMaxActive(Integer maxActive) {
        this.maxActive = maxActive;
    }

    public Integer getMaxWait() {
        return maxWait;
    }

    public void setMaxWait(Integer maxWait) {
        this.maxWait = maxWait;
    }

    public Integer getTimeBetweenEvictionRunsMillis() {
        return timeBetweenEvictionRunsMillis;
    }

    public void setTimeBetweenEvictionRunsMillis(Integer timeBetweenEvictionRunsMillis) {
        this.timeBetweenEvictionRunsMillis = timeBetweenEvictionRunsMillis;
    }

    public Integer getMinEvictableIdleTimeMillis() {
        return minEvictableIdleTimeMillis;
    }

    public void setMinEvictableIdleTimeMillis(Integer minEvictableIdleTimeMillis) {
        this.minEvictableIdleTimeMillis = minEvictableIdleTimeMillis;
    }

    public String getValidationQuery() {
        return validationQuery;
    }

    public void setValidationQuery(String validationQuery) {
        this.validationQuery = validationQuery;
    }

    public Boolean getTestWhileIdle() {
        return testWhileIdle;
    }

    public void setTestWhileIdle(Boolean testWhileIdle) {
        this.testWhileIdle = testWhileIdle;
    }

    public Boolean getTestOnBorrow() {
        return testOnBorrow;
    }

    public void setTestOnBorrow(Boolean testOnBorrow) {
        this.testOnBorrow = testOnBorrow;
    }

    public Boolean getTestOnReturn() {
        return testOnReturn;
    }

    public void setTestOnReturn(Boolean testOnReturn) {
        this.testOnReturn = testOnReturn;
    }

    public Boolean getPoolPreparedStatements() {
        return poolPreparedStatements;
    }

    public void setPoolPreparedStatements(Boolean poolPreparedStatements) {
        this.poolPreparedStatements = poolPreparedStatements;
    }

    public Integer getMaxPoolPreparedStatementPerConnectionSize() {
        return maxPoolPreparedStatementPerConnectionSize;
    }

    public void setMaxPoolPreparedStatementPerConnectionSize(Integer maxPoolPreparedStatementPerConnectionSize) {
        this.maxPoolPreparedStatementPerConnectionSize = maxPoolPreparedStatementPerConnectionSize;
    }

    public String getFilters() {
        return filters;
    }

    public void setFilters(String filters) {
        this.filters = filters;
    }

    public Boolean getRemoveAbandoned() {
        return removeAbandoned;
    }

    public void setRemoveAbandoned(Boolean removeAbandoned) {
        this.removeAbandoned = removeAbandoned;
    }

    public Integer getRemoveAbandonedTimeout() {
        return removeAbandonedTimeout;
    }

    public void setRemoveAbandonedTimeout(Integer removeAbandonedTimeout) {
        this.removeAbandonedTimeout = removeAbandonedTimeout;
    }
}
package com.ai.api.config.datasource.properties;

import com.alibaba.druid.pool.DruidDataSource;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.PropertySource;
import org.springframework.stereotype.Component;

/**
 * @Author yonyong
 * @Description Configuration properties for the additional data source
 * @Date 2020/3/4 18:03
 **/
@Component
@ConfigurationProperties(prefix = "opcl.datasource")
public class MutiDataSourceProperties {

    private String url;

    private String username;

    private String password;

    public void config(DruidDataSource dataSource) {
        dataSource.setUrl(url);
        dataSource.setUsername(username);
        dataSource.setPassword(password);
    }

    public String getUrl() {
        return url;
    }

    public void setUrl(String url) {
        this.url = url;
    }

    public String getUsername() {
        return username;
    }

    public void setUsername(String username) {
        this.username = username;
    }

    public String getPassword() {
        return password;
    }

    public void setPassword(String password) {
        this.password = password;
    }
}

3. The annotation and supporting classes

package com.ai.api.config.datasource;

import java.lang.annotation.*;
/**
 * @Author yonyong
 * @Description Marker annotation identifying which data source a method should use
 * @Date 2020/3/4 17:12
 **/
@Inherited
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.METHOD})
public @interface DataSource {
    String name() default "";
}
package com.ai.api.config.datasource;

public class DataSourceContextHolder {

    private static final ThreadLocal<String> contextHolder = new ThreadLocal<String>();

    /**
     * @param dataSourceType the data source key
     * @Description: set the data source type
     */
    public static void setDataSourceType(String dataSourceType) {
        contextHolder.set(dataSourceType);
    }

    /**
     * @Description: get the data source type
     */
    public static String getDataSourceType() {
        return contextHolder.get();
    }

    /**
     * @Description: clear the data source type
     */
    public static void clearDataSourceType() {
        contextHolder.remove();
    }
}
package com.ai.api.config.datasource;

public interface DSEnum {
    String DATA_SOURCE_DEFAULT = "dataSourceCore";         // core data source
    String DATA_SOURCE_OPCL = "dataSourceOpcl";            // opcl data source
}
package com.ai.api.config.datasource;

import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

public class DynamicDataSource extends AbstractRoutingDataSource {

    @Override
    protected Object determineCurrentLookupKey() {
        return DataSourceContextHolder.getDataSourceType();
    }
}

4. The core aspect

package com.ai.api.config.datasource;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.Signature;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;
import org.aspectj.lang.reflect.MethodSignature;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.core.Ordered;
import org.springframework.stereotype.Component;

import java.lang.reflect.Method;


/**
 * @Author yonyong
 * @Description AOP advice that switches between data sources
 * @Date 2020/3/4 18:02
 **/
@Aspect
@Component
@ConditionalOnProperty(prefix = "myconfig", name = "muti-datasource-open", havingValue = "true")
public class MultiSourceExAop implements Ordered {

    private Logger log = LoggerFactory.getLogger(MultiSourceExAop.class);

    @Pointcut(value = "@annotation( com.ai.api.config.datasource.DataSource)")
    private void cut() {

    }

    @Around("cut()")
    public Object around(ProceedingJoinPoint point) throws Throwable {

        Signature signature = point.getSignature();
        MethodSignature methodSignature = null;
        if (!(signature instanceof MethodSignature)) {
            throw new IllegalArgumentException("This annotation can only be used on methods");
        }
        methodSignature = (MethodSignature) signature;

        Object target = point.getTarget();
        Method currentMethod = target.getClass().getMethod(methodSignature.getName(), methodSignature.getParameterTypes());

        DataSource datasource = currentMethod.getAnnotation(DataSource.class);
        if (datasource != null) {
            DataSourceContextHolder.setDataSourceType(datasource.name());
            log.debug("Data source set to: " + datasource.name());
        } else {
            DataSourceContextHolder.setDataSourceType(DSEnum.DATA_SOURCE_DEFAULT);
            log.debug("Data source set to the default: " + DSEnum.DATA_SOURCE_DEFAULT);
        }
        try {
            return point.proceed();
        } finally {
            log.debug("清空数据源信息!");
            DataSourceContextHolder.clearDataSourceType();
        }
    }


    /**
     * The aspect must run before Spring's transaction advice
     * (see @EnableTransactionManagement(order = 2) in MybatisConfig).
     */
    @Override
    public int getOrder() {
        return 1;
    }

}

5. MyBatis configuration (this part is required)

package com.ai.api.config.datasource;

import com.ai.api.config.datasource.properties.DruidProperties;
import com.ai.api.config.datasource.properties.MutiDataSourceProperties;
import com.alibaba.druid.pool.DruidDataSource;
import org.mybatis.spring.annotation.MapperScan;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.annotation.EnableTransactionManagement;

import java.sql.SQLException;
import java.util.HashMap;

@Configuration
@EnableTransactionManagement(order = 2)
@MapperScan(basePackages = {"com.ai.api.mapper"})
public class MybatisConfig {

    @Autowired
    DruidProperties druidProperties;

    @Autowired
    MutiDataSourceProperties mutiDataSourceProperties;

    /**
     * Core data source
     */
    private DruidDataSource defaultDataSource() {
        DruidDataSource dataSource = new DruidDataSource();
        druidProperties.config(dataSource);
        return dataSource;
    }

    /**
     * The other (opcl) data source
     */
    private DruidDataSource opclDataSource() {
        DruidDataSource dataSource = new DruidDataSource();
        druidProperties.config(dataSource);
        mutiDataSourceProperties.config(dataSource);
        return dataSource;
    }

    /**
     * Connection pool configuration when only a single data source is used
     */
    @Bean
    @ConditionalOnProperty(prefix = "myconfig", name = "muti-datasource-open", havingValue = "false")
    public DruidDataSource singleDatasource() {
        return defaultDataSource();
    }

    /**
     * Connection pool configuration for multiple data sources
     */
    @Bean
    @ConditionalOnProperty(prefix = "myconfig", name = "muti-datasource-open", havingValue = "true")
    public DynamicDataSource mutiDataSource() {

        DruidDataSource defaultDataSource = defaultDataSource();
        DruidDataSource opclDataSource = opclDataSource();

        try {
            defaultDataSource.init();
            opclDataSource.init();
        } catch (SQLException sql) {
            sql.printStackTrace();
        }

        DynamicDataSource dynamicDataSource = new DynamicDataSource();
        HashMap<Object, Object> hashMap = new HashMap<>();
        hashMap.put(DSEnum.DATA_SOURCE_DEFAULT, defaultDataSource);
        hashMap.put(DSEnum.DATA_SOURCE_OPCL, opclDataSource);
        dynamicDataSource.setTargetDataSources(hashMap);
        dynamicDataSource.setDefaultTargetDataSource(defaultDataSource);
        return dynamicDataSource;
    }
}
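
Usage then looks much like the SSM version. A sketch of a service, with a hypothetical mapper and method names (only the @DataSource annotation and the DSEnum constant come from the configuration above):

    import java.util.List;
    import java.util.Map;
    import javax.annotation.Resource;
    import org.springframework.stereotype.Service;
    import com.ai.api.config.datasource.DSEnum;
    import com.ai.api.config.datasource.DataSource;

    @Service
    public class ReportService {

        @Resource
        private ReportMapper reportMapper; // hypothetical mapper

        // Intercepted by MultiSourceExAop and routed to the "dataSourceOpcl" entry of DynamicDataSource
        @DataSource(name = DSEnum.DATA_SOURCE_OPCL)
        public List<Map<String, Object>> listOpclRecords() {
            return reportMapper.selectRecords();
        }

        // No annotation: the aspect does not run, the lookup key stays empty,
        // and the router uses the default target (the core data source)
        public List<Map<String, Object>> listCoreRecords() {
            return reportMapper.selectCoreRecords();
        }
    }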

3. Non-annotation-based multiple data source configuration (Spring Boot)

application.yml

# DataSource Config
hikari:
  eoms:
     jdbc-url: jdbc:mysql://ip:3306/db?characterEncoding=utf-8&useSSL=false&serverTimezone=UTC
     username: root
     password: ******
     driver-class-name: com.mysql.cj.jdbc.Driver
     type: com.zaxxer.hikari.HikariDataSource
     minimum-idle: 5
     idle-timeout: 600000
     maximum-pool-size: 10
     auto-commit: true
     pool-name: MyHikariCPOfEoms
     max-lifetime: 1800000
     connection-timeout: 30000
     connection-test-query: SELECT 1
     validation-timeout: 5000
  opcl:
     jdbc-url: jdbc:mysql://ip2:3306/db2?characterEncoding=utf-8&useSSL=false&serverTimezone=UTC
     username: root
     password: ******
     driver-class-name: com.mysql.cj.jdbc.Driver
     type: com.zaxxer.hikari.HikariDataSource
     minimum-idle: 5
     idle-timeout: 600000
     maximum-pool-size: 10
     auto-commit: true
     pool-name: MyHikariCPOfOpcl
     max-lifetime: 1800000
     connection-timeout: 30000
     connection-test-query: SELECT 1
     validation-timeout: 5000
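
Note that these keys are bound directly onto the HikariDataSource instance that DataSourceBuilder creates, which is why the URL key is jdbc-url rather than url (Hikari exposes setJdbcUrl, not setUrl). If you would rather pin the pool type in code instead of relying on classpath detection, a hedged variant of the createDataSource() method shown below (everything else, including @Primary where present, stays the same) could be:

    @Bean(name = "eomsDs")
    @ConfigurationProperties(prefix = "hikari.eoms")
    public DataSource createDataSource() {
        // explicit pool type; otherwise DataSourceBuilder picks the first pool found on the classpath
        return DataSourceBuilder.create().type(HikariDataSource.class).build();
    }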

STEP 1. Create a new configuration class for each data source

eoms

package com.ai.api.config.datasource;

import org.apache.ibatis.session.SqlSessionFactory;
import org.mybatis.spring.SqlSessionFactoryBean;
import org.mybatis.spring.SqlSessionTemplate;
import org.mybatis.spring.annotation.MapperScan;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.core.io.support.PathMatchingResourcePatternResolver;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;

import javax.sql.DataSource;

@Configuration
@MapperScan(basePackages = "com.ai.api.mapper.eoms",
        sqlSessionTemplateRef = "eomsSqlSessionTemplate")
public class EomsDataSourceConfig {

    /* Create the data source */
    @Bean(name = "eomsDs")
    @ConfigurationProperties(prefix = "hikari.eoms")
    @Primary
    public DataSource createDataSource() {
        return DataSourceBuilder.create().build();
    }

    /* Create the SqlSessionFactory */
    @Bean(name = "eomsSqlSessionFactory")
    @Primary
    public SqlSessionFactory createSqlSessionFactory(@Qualifier("eomsDs") DataSource dataSource) throws Exception {
        SqlSessionFactoryBean bean = new SqlSessionFactoryBean();
        bean.setDataSource(dataSource);
        bean.setMapperLocations(new PathMatchingResourcePatternResolver().getResources("classpath:mapper/eoms/*.xml"));
        return bean.getObject();
    }

    /* Configure transaction management */
    @Bean(name = "eomsTransactionManager")
    @Primary
    public DataSourceTransactionManager createTransactionManager(@Qualifier("eomsDs") DataSource dataSource) {
        return new DataSourceTransactionManager(dataSource);
    }

    /* SqlSessionTemplate is the core of MyBatis-Spring: it manages the MyBatis SqlSession and executes the mapped SQL statements */
    @Bean(name = "eomsSqlSessionTemplate")
    @Primary
    public SqlSessionTemplate createSqlSessionTemplate(@Qualifier("eomsSqlSessionFactory") SqlSessionFactory sqlSessionFactory) throws Exception {
        return new SqlSessionTemplate(sqlSessionFactory);
    }

}

opcl

package com.ai.api.config.datasource;

import org.apache.ibatis.session.SqlSessionFactory;
import org.mybatis.spring.SqlSessionFactoryBean;
import org.mybatis.spring.SqlSessionTemplate;
import org.mybatis.spring.annotation.MapperScan;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.support.PathMatchingResourcePatternResolver;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;

import javax.sql.DataSource;

@Configuration
@MapperScan(basePackages = "com.ai.api.mapper.opcl",
        sqlSessionTemplateRef = "opclSqlSessionTemplate")
public class OpclDataSourceConfig {

    @Bean(name = "opclDs")
    @ConfigurationProperties(prefix = "hikari.opcl")
    public DataSource createDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean(name = "opclSqlSessionFactory")
    public SqlSessionFactory createSqlSessionFactory(@Qualifier("opclDs") DataSource dataSource) throws Exception {
        SqlSessionFactoryBean bean = new SqlSessionFactoryBean();
        bean.setDataSource(dataSource);
        bean.setMapperLocations(new PathMatchingResourcePatternResolver().getResources("classpath:mapper/opcl/*.xml"));
        return bean.getObject();
    }

    @Bean(name = "opclTransactionManager")
    public DataSourceTransactionManager createTransactionManager(@Qualifier("opclDs") DataSource dataSource) {
        return new DataSourceTransactionManager(dataSource);
    }

    @Bean(name = "opclSqlSessionTemplate")
    public SqlSessionTemplate createSqlSessionTemplate(@Qualifier("opclSqlSessionFactory") SqlSessionFactory sqlSessionFactory) throws Exception {
        return new SqlSessionTemplate(sqlSessionFactory);
    }
}

STEP 2. Store the mapper XML files for the two data sources separately, in folders that match the configuration classes above: a mapper/eoms folder and a mapper/opcl folder (an illustrative layout follows).
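
An illustrative layout with example file names (only the package and folder names come from the configuration classes above):

    src/main/java/com/ai/api/mapper/eoms/AuditMapper.java
    src/main/java/com/ai/api/mapper/opcl/LoginMapper.java
    src/main/resources/mapper/eoms/AuditMapper.xml
    src/main/resources/mapper/opcl/LoginMapper.xml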

STEP 3. When creating a new mapper interface, add the corresponding qualifier annotation

eoms

package com.ai.api.mapper.eoms;

import com.ai.api.model.BatchHandlerRecords;
import com.ai.api.model.GateWay;
import com.ai.api.model.TerminalControl;
import org.springframework.beans.factory.annotation.Qualifier;
import com.ai.api.model.UserAction;
import org.apache.ibatis.annotations.Param;

import java.util.List;

/**
 * @Author yonyong
 * @Description Audit management
 * @Date 2020/3/3 10:40
 **/
@Qualifier("eomsSqlSessionTemplate")
public interface AuditMapper {
    // query the total record count
    Integer selectTerminalControlCount(@Param("TerminalControl")TerminalControl TerminalControl);
}

opcl

package com.ai.api.mapper.opcl;

import org.springframework.beans.factory.annotation.Qualifier;

import java.util.Map;

@Qualifier("opclSqlSessionTemplate")
public interface LoginMapper {
    Map<String,Object> getAdmin(int id);
}
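
With this approach there is nothing to switch at runtime: each mapper interface is bound to its own SqlSessionTemplate by the @MapperScan declarations, so a service can simply inject both. A sketch, assuming a hypothetical service class:

    import java.util.Map;
    import javax.annotation.Resource;
    import org.springframework.stereotype.Service;
    import com.ai.api.mapper.eoms.AuditMapper;
    import com.ai.api.mapper.opcl.LoginMapper;
    import com.ai.api.model.TerminalControl;

    @Service
    public class AuditService {

        @Resource
        private AuditMapper auditMapper; // executes against the eoms data source

        @Resource
        private LoginMapper loginMapper; // executes against the opcl data source

        public Integer countTerminalControl(TerminalControl filter) {
            return auditMapper.selectTerminalControlCount(filter);
        }

        public Map<String, Object> getAdmin(int id) {
            return loginMapper.getAdmin(id);
        }
    }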


Source: www.cnblogs.com/yonyong/p/12426948.html