Why do you need read-write separation?
As a project grows and concurrency rises, the load on a single database server keeps climbing until the database becomes the performance bottleneck; and as the data volume grows, queries take longer and longer. When the data set is too large you can shard it across databases and tables, and when the load is too high you can also reduce pressure with caching technologies such as Redis, but no single technique is a silver bullet. This article focuses on one of them: speeding up database reads through read-write separation.
Implementation approaches
There are many ways to implement read-write separation, but most of them require configuring master-slave replication for the database first. There may be approaches that do not need it, but I am not aware of any.
Approach 1
Use database middleware such as Mycat. From the project's point of view there is only one data source, which connects to Mycat; Mycat then decides, according to its rules, which library to fetch data from.
Approach 2
Configure multiple data sources in the code and control in code which one is used. This article focuses on this approach.
The pros and cons of read-write separation
Advantages
1. Reduces the read pressure on the database, especially for near-real-time reporting applications that require heavy computation.
2. Improves data safety: a side benefit of read-write separation is a near-real-time backup, since the slave's data stays very close to the master's.
3. Makes high availability possible. Possible, that is: configuring read-write separation alone does not give you high availability; failover still has to be handled separately.
Disadvantages
1. Higher cost: one database server and several database servers are obviously not the same expense.
2. Although the read load is spread out, the write load is not reduced at all; every slave still has to pull and replay all writes from the master.
MySQL master-slave replication configuration
MySQL master-slave replication is the prerequisite for read-write separation. For the concrete setup, see my earlier article on binlog-based MySQL master-slave replication.
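For orientation only, a binlog-based replication setup boils down to roughly the following steps (the server IDs, replication user, log file name, and position below are placeholders, not values from this article; follow the dedicated replication article for the real procedure):

```sql
-- my.cnf on the master (placeholder values):
--   [mysqld]
--   server-id = 1
--   log-bin   = mysql-bin

-- On the master: create a replication user and note the current binlog position.
CREATE USER 'repl'@'%' IDENTIFIED BY 'repl_password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
SHOW MASTER STATUS;  -- gives File (e.g. mysql-bin.000001) and Position

-- On the slave (its server-id in my.cnf must differ from the master's):
CHANGE MASTER TO
  MASTER_HOST = '192.168.1.126',
  MASTER_PORT = 3307,
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'repl_password',
  MASTER_LOG_FILE = 'mysql-bin.000001',
  MASTER_LOG_POS = 4;
START SLAVE;
SHOW SLAVE STATUS\G  -- Slave_IO_Running and Slave_SQL_Running should both be Yes
```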
Data source configuration
spring:
  application:
    name: separate
  master:
    url: jdbc:mysql://192.168.1.126:3307/test?useUnicode=true&characterEncoding=utf8&emptyStringsConvertToZero=true
    username: root
    password: 123456
    driver-class-name: com.mysql.jdbc.Driver
    type: com.alibaba.druid.pool.DruidDataSource
    max-active: 20
    initial-size: 1
    min-idle: 3
    max-wait: 600
    time-between-eviction-runs-millis: 60000
    min-evictable-idle-time-millis: 300000
    test-while-idle: true
    test-on-borrow: false
    test-on-return: false
    poolPreparedStatements: true
  slave:
    url: jdbc:mysql://192.168.1.126:3309/test?useUnicode=true&characterEncoding=utf8&emptyStringsConvertToZero=true
    username: test
    password: 123456
    driver-class-name: com.mysql.jdbc.Driver
    type: com.alibaba.druid.pool.DruidDataSource
    max-active: 20
    initial-size: 1
    min-idle: 3
    max-wait: 600
    time-between-eviction-runs-millis: 60000
    min-evictable-idle-time-millis: 300000
    test-while-idle: true
    test-on-borrow: false
    test-on-return: false
    poolPreparedStatements: true
Two data sources are configured here: master is the write library and slave is the read library. To prevent accidental writes to the slave, the slave's database user has read-only permissions. Because the data source must be chosen dynamically at runtime, we need a class that extends AbstractRoutingDataSource.
/**
 * Dynamic data source
 * @author Raye
 * @since 2016-10-25 15:20:40
 */
public class DynamicDataSource extends AbstractRoutingDataSource {

    private static final ThreadLocal<DatabaseType> contextHolder = new ThreadLocal<DatabaseType>();

    @Override
    protected Object determineCurrentLookupKey() {
        return contextHolder.get();
    }

    public static enum DatabaseType {
        Master, Slave
    }

    public static void master() {
        contextHolder.set(DatabaseType.Master);
    }

    public static void slave() {
        contextHolder.set(DatabaseType.Slave);
    }

    public static void setDatabaseType(DatabaseType type) {
        contextHolder.set(type);
    }

    public static DatabaseType getType() {
        return contextHolder.get();
    }
}
contextHolder is a ThreadLocal. Since each request is handled by its own thread, this lets each request independently decide which library it uses.
determineCurrentLookupKey overrides the AbstractRoutingDataSource method that determines which data source key to use for the current call; AbstractRoutingDataSource holds its multiple data sources in a Map keyed by these values.
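The routing mechanism can be illustrated outside of Spring with a minimal, self-contained sketch (the class name, keys, and URLs below are illustrative, not taken from the article): each thread stores its own key in a ThreadLocal, and the router resolves that key against a Map of targets, falling back to a default when no key is set.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal illustration of AbstractRoutingDataSource-style lookup:
// a per-thread key, resolved against a Map of targets.
public class RoutingSketch {
    private static final ThreadLocal<String> contextHolder = new ThreadLocal<>();
    private static final Map<String, String> targets = new HashMap<>();

    static {
        targets.put("master", "jdbc:mysql://master-host/test");
        targets.put("slave", "jdbc:mysql://slave-host/test");
    }

    public static void use(String key) {
        contextHolder.set(key);
    }

    // Plays the role of determineCurrentLookupKey() plus the Map lookup;
    // "master" acts as the default target when no key has been set.
    public static String resolve() {
        String key = contextHolder.get();
        return targets.get(key == null ? "master" : key);
    }

    public static void main(String[] args) throws InterruptedException {
        use("slave");
        System.out.println("main thread -> " + resolve());

        // A different thread has its own (unset) ThreadLocal slot, so it
        // falls back to the default target, independently of the main thread.
        Thread other = new Thread(() -> System.out.println("other thread -> " + resolve()));
        other.start();
        other.join();
    }
}
```

Running this shows the main thread resolving to the slave URL while the freshly started thread, whose ThreadLocal is unset, resolves to the default master URL.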
Instantiating the data sources
/**
 * Druid DataSource configuration class
 * @author Raye
 * @since 2016-10-07 14:14:18
 */
@Configuration
@EnableTransactionManagement
public class DataBaseConfiguration implements EnvironmentAware {

    private RelaxedPropertyResolver propertyResolver1;
    private RelaxedPropertyResolver propertyResolver2;

    public DataBaseConfiguration() {
        System.out.println("#################### DataBaseConfiguration");
    }

    @Override
    public void setEnvironment(Environment env) {
        this.propertyResolver1 = new RelaxedPropertyResolver(env, "spring.master.");
        this.propertyResolver2 = new RelaxedPropertyResolver(env, "spring.slave.");
    }

    public DataSource master() {
        System.out.println("Creating the master Druid data source");
        DruidDataSource datasource = new DruidDataSource();
        datasource.setUrl(propertyResolver1.getProperty("url"));
        datasource.setDriverClassName(propertyResolver1.getProperty("driver-class-name"));
        datasource.setUsername(propertyResolver1.getProperty("username"));
        datasource.setPassword(propertyResolver1.getProperty("password"));
        datasource.setInitialSize(Integer.valueOf(propertyResolver1.getProperty("initial-size")));
        datasource.setMinIdle(Integer.valueOf(propertyResolver1.getProperty("min-idle")));
        datasource.setMaxWait(Long.valueOf(propertyResolver1.getProperty("max-wait")));
        datasource.setMaxActive(Integer.valueOf(propertyResolver1.getProperty("max-active")));
        datasource.setMinEvictableIdleTimeMillis(Long.valueOf(propertyResolver1.getProperty("min-evictable-idle-time-millis")));
        try {
            datasource.setFilters("stat,wall");
        } catch (SQLException e) {
            e.printStackTrace();
        }
        return datasource;
    }

    public DataSource slave() {
        System.out.println("Creating the slave Druid data source");
        DruidDataSource datasource = new DruidDataSource();
        datasource.setUrl(propertyResolver2.getProperty("url"));
        datasource.setDriverClassName(propertyResolver2.getProperty("driver-class-name"));
        datasource.setUsername(propertyResolver2.getProperty("username"));
        datasource.setPassword(propertyResolver2.getProperty("password"));
        datasource.setInitialSize(Integer.valueOf(propertyResolver2.getProperty("initial-size")));
        datasource.setMinIdle(Integer.valueOf(propertyResolver2.getProperty("min-idle")));
        datasource.setMaxWait(Long.valueOf(propertyResolver2.getProperty("max-wait")));
        datasource.setMaxActive(Integer.valueOf(propertyResolver2.getProperty("max-active")));
        datasource.setMinEvictableIdleTimeMillis(Long.valueOf(propertyResolver2.getProperty("min-evictable-idle-time-millis")));
        try {
            datasource.setFilters("stat,wall");
        } catch (SQLException e) {
            e.printStackTrace();
        }
        return datasource;
    }

    @Bean
    public DynamicDataSource dynamicDataSource() {
        DataSource master = master();
        DataSource slave = slave();
        Map<Object, Object> targetDataSources = new HashMap<Object, Object>();
        targetDataSources.put(DynamicDataSource.DatabaseType.Master, master);
        targetDataSources.put(DynamicDataSource.DatabaseType.Slave, slave);

        DynamicDataSource dataSource = new DynamicDataSource();
        dataSource.setTargetDataSources(targetDataSources); // setTargetDataSources comes from AbstractRoutingDataSource
        dataSource.setDefaultTargetDataSource(master);
        return dataSource;
    }
}
There are three data sources in total: the master, the slave, and the dynamic data source that holds the other two. To avoid Spring injection problems, master and slave are instantiated directly rather than registered as Spring-managed beans.

dataSource.setDefaultTargetDataSource(master);

This line sets the default data source, used whenever no routing key has been set for the current thread. I originally intended to make the slave the default, but because of the transaction pitfall described below, the master is configured instead.
Mybatis configuration
/**
 * MyBatis configuration class
 *
 * @author Raye
 * @since 2016-10-07 14:13:39
 */
@Configuration
@AutoConfigureAfter({ DataBaseConfiguration.class })
@Slf4j
public class MybatisConfiguration {

    @Bean(name = "sqlSessionFactory")
    @Autowired
    public SqlSessionFactory sqlSessionFactory(DynamicDataSource dynamicDataSource) {
        SqlSessionFactoryBean bean = new SqlSessionFactoryBean();
        bean.setDataSource(dynamicDataSource);
        try {
            SqlSessionFactory session = bean.getObject();
            MapperHelper mapperHelper = new MapperHelper();
            // Extra configuration; see the Mapper documentation for the supported parameters
            Config config = new Config();
            config.setNotEmpty(true);
            mapperHelper.setConfig(config);
            // Register the generic Mapper interface used in this project;
            // there is no default value, it must be registered manually
            mapperHelper.registerMapper(Mapper.class);
            // Apply the configuration
            mapperHelper.processConfiguration(session.getConfiguration());
            return session;
        } catch (Exception e) {
            e.printStackTrace();
        }
        return null;
    }

    @Bean(name = "sqlSessionTemplate")
    @Autowired
    public SqlSessionTemplate sqlSessionTemplate(SqlSessionFactory sqlSessionFactory) {
        return new SqlSessionTemplate(sqlSessionFactory);
    }

    @Bean
    public MapperScannerConfigurer scannerConfigurer() {
        MapperScannerConfigurer configurer = new MapperScannerConfigurer();
        configurer.setSqlSessionFactoryBeanName("sqlSessionFactory");
        configurer.setSqlSessionTemplateBeanName("sqlSessionTemplate");
        configurer.setBasePackage("wang.raye.**.mapper");
        configurer.setMarkerInterface(Mapper.class);
        return configurer;
    }
}
MybatisConfiguration mainly sets up the sqlSessionFactory and sqlSessionTemplate, plus the configuration for the Mapper extension framework for MyBatis. If you are not using Mapper, the Mapper-related parts of sqlSessionFactory and scannerConfigurer can be omitted.
Transaction configuration
@Configuration
@EnableTransactionManagement
@Slf4j
@AutoConfigureAfter({ MybatisConfiguration.class })
public class TransactionConfiguration extends DataSourceTransactionManagerAutoConfiguration {
    @Bean
    @Autowired
    public DataSourceTransactionManager transactionManager(DynamicDataSource dynamicDataSource) {
        log.info("Configuring the transaction manager");
        return new DataSourceTransactionManager(dynamicDataSource);
    }
}
There is a pitfall in the transaction configuration. Once a transaction is opened, the transaction manager binds a connection before the routing advice takes effect, so the data source set for the current request is ignored and the default data source is used instead. The workaround used here is to make the master the default data source.
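A workaround sometimes used instead of changing the default (a sketch, not from this article, and whether it suffices depends on how the advice is declared) is to raise the routing aspect's precedence so its @Before advice runs before the transaction interceptor, which by default has the lowest precedence. The routing key is then already set when the transaction manager borrows a connection:

```java
@Aspect
@Component
@Order(0) // lower value = higher precedence; @Transactional advice defaults to Ordered.LOWEST_PRECEDENCE
public class DataSourceAOP {
    // ... the same @Before advice methods as in the aspect below ...
}
```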
Setting the data source with an AOP aspect
/**
 * Aspect that switches the data source
 *
 */
@Aspect
@Component
@Slf4j
public class DataSourceAOP {

    @Before("execution(* wang.raye.separate.service..*.select*(..)) || execution(* wang.raye.separate.service..*.get*(..))")
    public void setReadDataSourceType() {
        DynamicDataSource.slave();
        log.info("dataSource switched to slave");
    }

    @Before("execution(* wang.raye.separate.service..*.insert*(..)) || execution(* wang.raye.separate.service..*.update*(..)) || execution(* wang.raye.separate.service..*.delete*(..)) || execution(* wang.raye.separate.service..*.add*(..))")
    public void setWriteDataSourceType() {
        DynamicDataSource.master();
        log.info("dataSource switched to master");
    }
}
This aspect routes by method name; adjust the pointcuts to match your own naming conventions.
You can also switch data sources explicitly with annotations. Create two annotation classes, Master and Slave.
Master.class
/**
 * Annotation that forces use of the master (write) library
 */
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME) // RUNTIME retention is required for the @annotation pointcut to match
public @interface Master {
}
Slave.class
/**
 * Annotation that forces use of the slave (read) library
 */
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME) // RUNTIME retention is required for the @annotation pointcut to match
public @interface Slave {
}
Modified AOP aspect
/**
 * Aspect that switches the data source
 *
 */
@Aspect
@Component
@Slf4j
public class DataSourceAOP {

    @Before("(@annotation(wang.raye.separate.annotation.Master) || execution(* wang.raye.separate.service..*.insert*(..)) || "
            + "execution(* wang.raye.separate.service..*.update*(..)) || execution(* wang.raye.separate.service..*.delete*(..)) || "
            + "execution(* wang.raye.separate.service..*.add*(..))) && !@annotation(wang.raye.separate.annotation.Slave)")
    public void setWriteDataSourceType() {
        DynamicDataSource.master();
        log.info("dataSource switched to master");
    }

    @Before("(@annotation(wang.raye.separate.annotation.Slave) || execution(* wang.raye.separate.service..*.select*(..)) || "
            + "execution(* wang.raye.separate.service..*.get*(..))) && !@annotation(wang.raye.separate.annotation.Master)")
    public void setReadDataSourceType() {
        DynamicDataSource.slave();
        log.info("dataSource switched to slave");
    }
}
Note: these pointcut rules only cover basic naming conventions; extend them to fit your project. Next, a simple service-layer class:
/**
 * Implementation of the user-related business interface
 */
@Service
@Slf4j
public class UserServiceImpl implements UserService {

    @Autowired
    private UserMapper mapper;

    @Master
    @Override
    public List<User> selectAll() {
        return mapper.selectAll();
    }

    @Override
    public boolean addUser(User user) {
        return mapper.insertSelective(user) > 0;
    }

    @Override
    public boolean updateUser(User user) {
        return mapper.updateByPrimaryKey(user) > 0;
    }

    @Override
    public boolean deleteByid(int id) {
        return mapper.deleteByPrimaryKey(id) > 0;
    }

    @Transactional(rollbackFor = Exception.class)
    @Override
    public boolean insertAndUpdate(User user) {
        log.info("current key: " + DynamicDataSource.getType().name());
        int count = 0;
        count += mapper.insertSelective(user);
        // Deliberately trigger a NullPointerException so the transaction's
        // rollback behavior can be verified
        user = null;
        user.getId();
        count += mapper.updateByPrimaryKey(user);
        return count > 1;
    }
}
All methods in this class use the master data source. If the @Master annotation is removed from selectAll, it will use the slave data source instead. The insertAndUpdate method mainly tests two things: that the write goes to the master data source, and that the transaction rolls back correctly when an exception is thrown.
Source code
The complete code is available in my demo project, read-write separation demo.