How to Rewrite SQL with a MyBatis Plugin or a Druid Filter

Background
At work we occasionally need to rewrite SQL uniformly. For example, given the following table structure:

```sql
CREATE TABLE test_user (
  id int(11) NOT NULL AUTO_INCREMENT,
  account varchar(70) NOT NULL COMMENT 'account',
  user_name varchar(60) NOT NULL COMMENT 'name',
  age int(11) NOT NULL COMMENT 'age',
  sex bit(1) NOT NULL COMMENT 'gender: 0 male, 1 female',
  create_time timestamp NOT NULL DEFAULT '2019-01-01 00:00:00' COMMENT 'creation time',
  update_time timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'update time',
  PRIMARY KEY (id),
  UNIQUE KEY uk_account (account)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='user information table';
```
and assume the following Mapper SQL:

```sql
insert into test_user (account, user_name, age, sex, create_time)
values ('test1', 'test_user_1', 1, 0, now())
on duplicate key update user_name = 'test_user_1', age = 1, sex = 0;
```


The service layer determines whether the SQL executed successfully by checking that the affected-row count returned by the Mapper equals 1. However, if the values already in the database record are exactly the same as the values supplied in the `on duplicate key update` clause, MySQL does not perform the update, so JDBC reports zero affected rows, and the service layer misreads this as a failure.
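MySQL documents exactly this behavior: for `INSERT ... ON DUPLICATE KEY UPDATE`, the affected-row count is 1 for a fresh insert, 2 for an update that changes the row, and 0 when the update would set the row to its current values (with the default client flags). As a sanity check, here is a toy in-memory simulation of that contract; the `Table` class is an invented stand-in for illustration, not JDBC:

```java
import java.util.HashMap;
import java.util.Map;

// Toy in-memory table mirroring MySQL's affected-row contract for
// INSERT ... ON DUPLICATE KEY UPDATE: 1 = inserted, 2 = updated, 0 = no-op.
public class UpsertRowCountDemo {

    static class Table {
        private final Map<String, String> rows = new HashMap<>();

        int upsert(String key, String value) {
            String old = rows.put(key, value);
            if (old == null) return 1;       // fresh insert
            if (old.equals(value)) return 0; // identical values: MySQL skips the update
            return 2;                        // existing row changed
        }
    }

    public static void main(String[] args) {
        Table t = new Table();
        System.out.println(t.upsert("test1", "test_user_1")); // 1: insert
        System.out.println(t.upsert("test1", "renamed"));     // 2: real update
        System.out.println(t.upsert("test1", "renamed"));     // 0: no-op, the bug trigger
    }
}
```

The third call is the case that breaks an `affectedRows == 1` check in the service layer.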

The fix itself is simple: just add `update_time = now()` to the `duplicate key update` clause. But if such statements are widespread, the least invasive approach is SQL rewriting.

Design & Selection

Where to modify the SQL

Our system uses MyBatis as the ORM and Alibaba Druid as the database connection pool.

MyBatis provides a Plugin mechanism for modifying SQL; for example, Mybatis-PageHelper uses the Plugin mechanism to append paging clauses and to generate Count statements.

Druid offers a Filter mechanism for modifying SQL; for example, EncodingConvertFilter uses the Filter mechanism to transcode parameters before actual execution.
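Under the hood, both mechanisms interpose on method calls by wrapping the target object in a proxy; MyBatis' `Plugin.wrap`, for instance, is built on JDK dynamic proxies. The following minimal sketch shows the idea, with an invented `SqlRunner` interface standing in for a component like `Executor`:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ProxyInterceptDemo {

    // Hypothetical stand-in for the component being proxied (e.g. MyBatis' Executor).
    interface SqlRunner {
        String run(String sql);
    }

    // Wraps a target SqlRunner so every SQL string passes through a rewrite hook first,
    // mirroring how Plugin.wrap / Druid's FilterChain interpose on calls.
    static SqlRunner wrap(SqlRunner target) {
        InvocationHandler handler = (proxy, method, args) -> {
            if (method.getName().equals("run")) {
                args[0] = ((String) args[0]) + " /* rewritten */"; // the "rewrite" step
            }
            return method.invoke(target, args); // delegate to the real object
        };
        return (SqlRunner) Proxy.newProxyInstance(
                SqlRunner.class.getClassLoader(), new Class<?>[]{SqlRunner.class}, handler);
    }

    public static void main(String[] args) {
        SqlRunner raw = sql -> "executed: " + sql;
        SqlRunner proxied = wrap(raw);
        System.out.println(proxied.run("insert into test_user values (1)"));
        // prints: executed: insert into test_user values (1) /* rewritten */
    }
}
```

The real mechanisms differ in where the proxy sits: MyBatis proxies its own ORM components, Druid proxies the JDBC objects.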

Since both can modify SQL, which one should we choose? In fact there is no decisive winner; I personally weigh the two approaches as follows:

- Portability. If the JDBC connection pool is DBCP or HikariCP, the modification is better placed in the MyBatis layer; conversely, if the ORM is Hibernate, the Druid layer is the better fit.
- Workload. Because the ORM and JDBC layers sit at different levels of abstraction, the rewriting effort differs considerably: a rewrite in the MyBatis ORM layer is much less work than one in the Druid JDBC layer, because at the JDBC level there is more to consider, such as whether execution goes through Statement, PreparedStatement, or CallableStatement, each of which the rewrite must cover, whereas a rewrite at the ORM layer needs no such case analysis.

SQL Parser selection
To rewrite SQL you first have to parse it: semantic analysis of the SQL determines whether a statement needs rewriting and which part to rewrite, and lexical analysis has always been time-consuming, so the choice of SQL Parser framework matters a great deal. The more popular SQL Parsers in the Java ecosystem are:

- fdb-sql-parser is the SQL Parser open-sourced by FoundationDB before its acquisition by Apple; it is currently unmaintained.
- jsqlparser is an open-source SQL Parser based on JavaCC; it is essentially a Java implementation of General SQL Parser.
- Apache Calcite is an open-source dynamic data management framework covering SQL parsing, SQL validation, query optimization, SQL generation, and data connectivity. It is often used to add SQL capabilities to big-data tools such as Hive and Flink. Calcite supports standard SQL well, but its support for the dialects of traditional relational databases is poor.
- Alibaba druid is an open-source JDBC database connection pool, but because it was conceived with monitoring in mind, it naturally comes with a capable SQL Parser. Its built-in WallFilter and StatFilter are both based on the AST produced by that parser, and it supports a variety of database dialects.
In fact, when it comes to SQL rewriting, database middleware for sharding comes to mind immediately, so in choosing a SQL Parser we can look at what the well-known middleware projects use. Apache ShardingSphere (formerly Dangdang's Sharding-JDBC) and Mycat are the most widely used domestic open-source database middleware projects, and both adopted Alibaba druid's SQL Parser module; Mycat even published its comparative analysis in the document "Mycat new route parser selection analysis and results.docx".

Note: Apache ShardingSphere switched to a self-developed SQL Parser in version 1.5.x, because ShardingSphere does not need the full SQL AST; parsing only what is necessary sacrifices AST completeness to improve sharding efficiency. For details see "In-depth understanding of Sharding-JDBC: building the most lightweight database middle layer".


In summary, we can confidently choose the SQL Parser that Alibaba druid provides; the only question is how to use it. Druid has no detailed official API documentation for the SQL Parser and its Visitors (a gripe: domestic open-source projects are chronically weak on documentation and code comments, and the druid source has essentially none), so we can only rely on the related documents and the existing Visitor implementations as references. Below is all of druid's official documentation on the SQL Parser and Visitors:

SQL Parser
MySQL SQL Parser
Druid_SQL_AST
WallVisitor
Configuration - WallFilter
EvalVisitor
SchemaStatVisitor
ExportParameterVisitor_demo_cn
ParameterizedOutputVisitor
SQL_Format
SQL_Parser_Demo_visitor (custom Visitor)
SQL_Parser_Parameterize
SQL_RemoveCondition_demo
SQL_Schema_Repository
TableMapping_cn
How to modify SQL to add conditions
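All of the Visitors above follow the same shape: walk the AST and mutate a node when a predicate matches. Before diving into druid's real classes, here is a toy, self-contained illustration of that pattern; the `InsertStatement` node and `Visitor` interface are invented stand-ins, not druid's API:

```java
import java.util.ArrayList;
import java.util.List;

public class ToyVisitorDemo {

    // Minimal stand-in for an AST node such as druid's MySqlInsertStatement.
    static class InsertStatement {
        final List<String> duplicateKeyUpdate = new ArrayList<>();

        void accept(Visitor v) {
            v.visit(this);
        }
    }

    interface Visitor {
        void visit(InsertStatement stmt);
    }

    // Appends "update_time = now()" unless the column is already assigned,
    // mirroring the idea behind a rewrite visitor.
    static class AppendUpdateTimeVisitor implements Visitor {
        @Override
        public void visit(InsertStatement stmt) {
            boolean present = stmt.duplicateKeyUpdate.stream()
                    .anyMatch(expr -> expr.contains("update_time"));
            if (!present) {
                stmt.duplicateKeyUpdate.add("update_time = now()");
            }
        }
    }

    public static void main(String[] args) {
        InsertStatement stmt = new InsertStatement();
        stmt.duplicateKeyUpdate.add("user_name = 'test_user_1'");
        stmt.accept(new AppendUpdateTimeVisitor());
        System.out.println(stmt.duplicateKeyUpdate);
        // prints: [user_name = 'test_user_1', update_time = now()]
    }
}
```

The guard against an existing `update_time` assignment makes the visitor idempotent, which matters when the same statement is visited more than once.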

Demo
I implemented a Demo of both approaches, MyBatis Plugin and Druid Filter. The function is very simple: append `update_time = now()` to the `insert ... on duplicate key update` SQL shown at the beginning.

The Demo is at mybatis-plugin-or-druid-filter-rewrite-sql.

The Demo uses H2 to simulate MySQL; for the H2 table-creation statement see src/test/resources/schema-h2.sql.

MyBatis Plugin
The Plugin code is src/main/java/com/github/larva/zhang/problems/SimpleRewriteSqlMybatisPlugin.java.

```java
@Slf4j
@Intercepts({@Signature(type = Executor.class, method = "update",
        args = {MappedStatement.class, Object.class})})
public class SimpleRewriteSqlMybatisPlugin implements Interceptor {

    private final SimpleAppendUpdateTimeVisitor visitor = new SimpleAppendUpdateTimeVisitor();

    @Override
    public Object intercept(Invocation invocation) throws Throwable {
        Object[] args = invocation.getArgs();
        MappedStatement mappedStatement = (MappedStatement) args[0];
        SqlCommandType sqlCommandType = mappedStatement.getSqlCommandType();
        if (sqlCommandType != SqlCommandType.INSERT) {
            // only handle inserts
            return invocation.proceed();
        }
        BoundSql boundSql = mappedStatement.getBoundSql(args[1]);
        String sql = boundSql.getSql();
        List<SQLStatement> sqlStatements = SQLUtils.parseStatements(sql, JdbcConstants.MYSQL);
        if (CollectionUtils.isNotEmpty(sqlStatements)) {
            for (SQLStatement sqlStatement : sqlStatements) {
                sqlStatement.accept(visitor);
            }
        }
        if (visitor.getAndResetRewriteStatus()) {
            // the SQL was rewritten, so the MappedStatement must be replaced
            String newSql = SQLUtils.toSQLString(sqlStatements, JdbcConstants.MYSQL);
            log.info("rewrite sql, origin sql: [{}], new sql: [{}]", sql, newSql);
            BoundSql newBoundSql = new BoundSql(mappedStatement.getConfiguration(), newSql,
                    boundSql.getParameterMappings(), boundSql.getParameterObject());
            // copy all properties of the original MappedStatement
            MappedStatement.Builder builder = new MappedStatement.Builder(
                    mappedStatement.getConfiguration(), mappedStatement.getId(),
                    new WarpBoundSqlSqlSource(newBoundSql), mappedStatement.getSqlCommandType());
            builder.cache(mappedStatement.getCache()).databaseId(mappedStatement.getDatabaseId())
                    .fetchSize(mappedStatement.getFetchSize())
                    .flushCacheRequired(mappedStatement.isFlushCacheRequired())
                    .keyColumn(StringUtils.join(mappedStatement.getKeyColumns(), ','))
                    .keyGenerator(mappedStatement.getKeyGenerator())
                    .keyProperty(StringUtils.join(mappedStatement.getKeyProperties(), ','))
                    .lang(mappedStatement.getLang()).parameterMap(mappedStatement.getParameterMap())
                    .resource(mappedStatement.getResource()).resultMaps(mappedStatement.getResultMaps())
                    .resultOrdered(mappedStatement.isResultOrdered())
                    .resultSets(StringUtils.join(mappedStatement.getResultSets(), ','))
                    .resultSetType(mappedStatement.getResultSetType())
                    .statementType(mappedStatement.getStatementType())
                    .timeout(mappedStatement.getTimeout()).useCache(mappedStatement.isUseCache());
            MappedStatement newMappedStatement = builder.build();
            // put the newly built MappedStatement back into the argument list
            args[0] = newMappedStatement;
        }
        return invocation.proceed();
    }

    /**
     * Adds this interceptor to the {@link InterceptorChain} that generates the proxy.
     * The MyBatis {@link Executor} depends on the following components:
     * <ol>
     * <li>{@link StatementHandler}, responsible for creating the JDBC {@link java.sql.Statement} object</li>
     * <li>{@link ParameterHandler}, responsible for filling the actual parameters into the JDBC
     * {@link java.sql.Statement} object</li>
     * <li>{@link ResultSetHandler}, responsible for processing the {@link java.sql.ResultSet} returned by
     * {@link java.sql.Statement#execute(String)}</li>
     * </ol>
     * Since this Plugin only needs to proxy the {@link Executor}, it only takes effect on
     * {@link Executor} objects.
     */
    @Override
    public Object plugin(Object target) {
        if (target instanceof Executor) {
            return Plugin.wrap(target, this);
        }
        return target;
    }

    @Override
    public void setProperties(Properties properties) {}

    static class WarpBoundSqlSqlSource implements SqlSource {

        private final BoundSql boundSql;

        public WarpBoundSqlSqlSource(BoundSql boundSql) {
            this.boundSql = boundSql;
        }

        @Override
        public BoundSql getBoundSql(Object parameterObject) {
            return boundSql;
        }
    }
}
```
To use the Plugin, simply declare the instance in the MyBatis Configuration bean's interceptor list; see src/test/java/com/github/larva/zhang/problems/mybatis/TestMybatisPluginRewriteSqlConfig.java.

```java
@Bean
@Scope(scopeName = ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public org.apache.ibatis.session.Configuration mybatisConfiguration() {
    org.apache.ibatis.session.Configuration configuration = new org.apache.ibatis.session.Configuration();
    // various property settings
    ...
    // rewrite SQL via the MyBatis Plugin mechanism
    configuration.addInterceptor(mybatisPlugin());
    return configuration;
}

@Bean
public SimpleRewriteSqlMybatisPlugin mybatisPlugin() {
    return new SimpleRewriteSqlMybatisPlugin();
}
```


Druid Filter
The Filter code is src/main/java/com/github/larva/zhang/problems/SimpleRewriteSqlDruidFilter.java.

```java
@Slf4j
public class SimpleRewriteSqlDruidFilter extends FilterAdapter {

    private final SimpleAppendUpdateTimeVisitor visitor = new SimpleAppendUpdateTimeVisitor();

    @Override
    public boolean statement_execute(FilterChain chain, StatementProxy statement, String sql)
            throws SQLException {
        String dbType = chain.getDataSource().getDbType();
        List<SQLStatement> sqlStatements = SQLUtils.parseStatements(sql, dbType);
        sqlStatements.forEach(sqlStatement -> sqlStatement.accept(visitor));
        if (visitor.getAndResetRewriteStatus()) {
            // the SQL was rewritten and must be replaced
            String newSql = SQLUtils.toSQLString(sqlStatements, dbType);
            log.info("rewrite sql, origin sql: [{}], new sql: [{}]", sql, newSql);
            return super.statement_execute(chain, statement, newSql);
        }
        return super.statement_execute(chain, statement, sql);
    }

    @Override
    public PreparedStatementProxy connection_prepareStatement(FilterChain chain,
            ConnectionProxy connection, String sql, int autoGeneratedKeys) throws SQLException {
        List<SQLStatement> sqlStatements = SQLUtils.parseStatements(sql, JdbcConstants.MYSQL);
        sqlStatements.forEach(sqlStatement -> sqlStatement.accept(visitor));
        if (visitor.getAndResetRewriteStatus()) {
            // the SQL was rewritten and must be replaced
            String newSql = SQLUtils.toSQLString(sqlStatements, JdbcConstants.MYSQL);
            log.info("rewrite sql, origin sql: [{}], new sql: [{}]", sql, newSql);
            return super.connection_prepareStatement(chain, connection, newSql, autoGeneratedKeys);
        }
        return super.connection_prepareStatement(chain, connection, sql, autoGeneratedKeys);
    }
}
```
This Filter supports SQL rewriting for both the Statement and PreparedStatement execution modes, but lacks support for the other statement types.

Compared to the MyBatis Plugin, the downside is that every SQL statement must go through the SQL Parser to build an AST; of course, it is also possible to rewrite the SQL in the prepareStatement_execute stage instead of the connection_prepareStatement stage.

Rewriting at the prepareStatement_execute stage, however, requires rebuilding the PreparedStatementProxy and resetting the JdbcParameters, which makes it more troublesome than rewriting at the connection_prepareStatement stage.

To use it, simply declare the Filter instance in the Druid DataSource's filter list, the same way Druid's own WallFilter is used; see src/test/java/com/github/larva/zhang/problems/druid/DruidFilterRewriteSqlConfig.java.

```java
@Bean(initMethod = "init", destroyMethod = "close")
public DruidDataSource dataSource(@Value("${spring.datasource.url}") String url,
        @Value("${spring.datasource.username}") String username,
        @Value("${spring.datasource.password}") String password) throws SQLException {
    DruidDataSource druidDataSource = new DruidDataSource();
    // various property settings
    ...
    // add the SQL-rewriting Filter
    druidDataSource.setProxyFilters(Collections.singletonList(simpleRewriteSqlDruidFilter()));
    return druidDataSource;
}

@Bean
public FilterAdapter simpleRewriteSqlDruidFilter() {
    return new SimpleRewriteSqlDruidFilter();
}
```

Druid Visitor
As the Plugin and Filter code above shows, the actual SQL rewriting is done by src/main/java/com/github/larva/zhang/problems/SimpleAppendUpdateTimeVisitor.java.

```java
@Slf4j
public class SimpleAppendUpdateTimeVisitor extends MySqlASTVisitorAdapter {

    private static final ThreadLocal<Boolean> REWRITE_STATUS_CACHE = new ThreadLocal<>();

    private static final String UPDATE_TIME_COLUMN = "update_time";

    @Override
    public boolean visit(MySqlInsertStatement x) {
        boolean hasUpdateTimeCol = false;
        // the duplicate key update clause yields SQLBinaryOpExpr nodes
        List<SQLExpr> duplicateKeyUpdate = x.getDuplicateKeyUpdate();
        if (CollectionUtils.isNotEmpty(duplicateKeyUpdate)) {
            for (SQLExpr sqlExpr : duplicateKeyUpdate) {
                if (sqlExpr instanceof SQLBinaryOpExpr
                        && ((SQLBinaryOpExpr) sqlExpr).conditionContainsColumn(UPDATE_TIME_COLUMN)) {
                    hasUpdateTimeCol = true;
                    break;
                }
            }
            if (!hasUpdateTimeCol) {
                // append the update-time column
                String tableAlias = x.getTableSource().getAlias();
                StringBuilder setUpdateTimeBuilder = new StringBuilder();
                if (!StringUtils.isEmpty(tableAlias)) {
                    setUpdateTimeBuilder.append(tableAlias).append('.');
                }
                setUpdateTimeBuilder.append(UPDATE_TIME_COLUMN).append(" = now()");
                SQLExpr sqlExpr = SQLUtils.toMySqlExpr(setUpdateTimeBuilder.toString());
                duplicateKeyUpdate.add(sqlExpr);
                // record the rewrite status
                REWRITE_STATUS_CACHE.set(Boolean.TRUE);
            }
        }
        return super.visit(x);
    }

    /**
     * Returns the rewrite status and resets it.
     *
     * @return the rewrite status: {@code true} means the SQL was rewritten, {@code false} means it was not
     */
    public boolean getAndResetRewriteStatus() {
        boolean rewriteStatus = Optional.ofNullable(REWRITE_STATUS_CACHE.get()).orElse(Boolean.FALSE);
        // reset the rewrite status
        REWRITE_STATUS_CACHE.remove();
        return rewriteStatus;
    }
}
```
Finally, if you found this article helpful, please share it and follow!


Origin blog.csdn.net/WANXT1024/article/details/103992301