[Druid source code walkthrough] What does a query SQL go through in Druid?



Druid's connection pool supports a PreparedStatementCache: SQL statements are precompiled into PreparedStatement objects and stored in the cache, so repeated executions can reuse the compiled statement instead of asking the database to parse it again. For Oracle this brings a clear improvement; for MySQL the benefit is much less pronounced.
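To make the caching idea concrete, here is a minimal sketch (not Druid's actual implementation; the class name is illustrative) of an LRU cache keyed by SQL text, using `LinkedHashMap` in access order. Druid's real cache keys additionally include catalog and result-set options, but the eviction idea is the same:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative LRU cache keyed by SQL text: a cache hit lets the pool
// hand back an already-prepared statement instead of re-preparing it.
class SqlStatementCache<V> extends LinkedHashMap<String, V> {
    private final int maxSize;

    SqlStatementCache(int maxSize) {
        super(16, 0.75f, true); // accessOrder = true gives LRU behavior
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<String, V> eldest) {
        return size() > maxSize; // evict the least recently used entry
    }
}
```

With a capacity of 2, touching an entry keeps it alive while the untouched one is evicted on the next insert.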

This article walks through the executeQuery method of the DruidPooledPreparedStatement class, to understand how preprocessing is done, how the SQL is executed, and how monitoring data is collected during execution. The key line in DruidPooledPreparedStatement.executeQuery is ResultSet rs = stmt.executeQuery(), where stmt is an instance of PreparedStatementProxyImpl.

DruidPooledPreparedStatement class diagram


Source code analysis

// Constructor. The core inputs are the pooled connection and the PreparedStatement holder.
public DruidPooledPreparedStatement(DruidPooledConnection conn, PreparedStatementHolder holder) throws SQLException{
    super(conn, holder.statement);
    this.stmt = holder.statement;
    this.holder = holder;
    this.sql = holder.key.sql;
    // whether the poolPreparedStatements option is enabled in the configuration
    pooled = conn.getConnectionHolder().isPoolPreparedStatements();
    // Remember the defaults

    if (pooled) {
        // if statement pooling is enabled, remember the statement's current defaults
        try {
            // maximum field size
            defaultMaxFieldSize = stmt.getMaxFieldSize();
        } catch (SQLException e) {
            LOG.error("getMaxFieldSize error", e);
        }

        try {
            // maximum row count
            defaultMaxRows = stmt.getMaxRows();
        } catch (SQLException e) {
            LOG.error("getMaxRows error", e);
        }

        try {
            // query timeout
            defaultQueryTimeout = stmt.getQueryTimeout();
        } catch (SQLException e) {
            LOG.error("getQueryTimeout error", e);
        }

        try {
            // fetch direction
            defaultFetchDirection = stmt.getFetchDirection();
        } catch (SQLException e) {
            LOG.error("getFetchDirection error", e);
        }

        try {
            // fetch size
            defaultFetchSize = stmt.getFetchSize();
        } catch (SQLException e) {
            LOG.error("getFetchSize error", e);
        }
    }

    currentMaxFieldSize = defaultMaxFieldSize;
    currentMaxRows = defaultMaxRows;
    currentQueryTimeout = defaultQueryTimeout;
    currentFetchDirection = defaultFetchDirection;
    currentFetchSize = defaultFetchSize;
}
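The reason the constructor snapshots all those defaults is the "remember defaults, restore before reuse" pattern: a pooled statement may have its settings changed by one caller, so they must be reset before the statement is recycled. A minimal sketch of the pattern, with hypothetical names (not Druid's actual classes):

```java
// Illustrative: mutable statement settings that a caller may change.
class StatementSettings {
    int maxRows;
    int queryTimeout;
    int fetchSize;
}

// Snapshot the defaults at checkout (as the constructor above does),
// and restore them before the statement goes back into the cache.
class PooledStatementSketch {
    final StatementSettings settings;
    final int defaultMaxRows;
    final int defaultQueryTimeout;
    final int defaultFetchSize;

    PooledStatementSketch(StatementSettings s) {
        this.settings = s;
        this.defaultMaxRows = s.maxRows;
        this.defaultQueryTimeout = s.queryTimeout;
        this.defaultFetchSize = s.fetchSize;
    }

    // called when the statement is returned to the pool
    void restoreDefaults() {
        settings.maxRows = defaultMaxRows;
        settings.queryTimeout = defaultQueryTimeout;
        settings.fetchSize = defaultFetchSize;
    }
}
```

Druid performs the equivalent reset when a pooled statement is closed and handed back to the cache, so the next borrower always sees the original defaults.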

executeQuery sequence diagram


executeQuery source code

@Override
public ResultSet executeQuery() throws SQLException {
    // check that the connection is still open
    checkOpen();
    // increment the executed-query counter
    incrementExecuteQueryCount();
    // record the SQL in the current transaction's log
    transactionRecord(sql);
    // Oracle-specific: set row prefetch
    oracleSetRowPrefetch();
    // mark the connection as running before execution
    conn.beforeExecute();
    try {
        // the actual query runs in PreparedStatementProxyImpl; detailed below
        ResultSet rs = stmt.executeQuery();

        if (rs == null) {
            return null;
        }
        // wrap the raw ResultSet in the pool's result-set wrapper
        DruidPooledResultSet poolableResultSet = new DruidPooledResultSet(this, rs);
        // track the result set, for monitoring and leak detection
        addResultSetTrace(poolableResultSet);

        return poolableResultSet;
    } catch (Throwable t) {
        errorCheck(t);

        throw checkException(t);
    } finally {
        // clear the connection's running state
        conn.afterExecute();
    }
}

preparedStatement_executeQuery

PreparedStatementProxyImpl implements executeQuery by calling createChain() on its parent class StatementProxyImpl, then invoking preparedStatement_executeQuery on the returned filter chain.

@Override
public ResultSet executeQuery() throws SQLException {
    firstResultSet = true;

    updateCount = null;
    lastExecuteSql = sql;
    lastExecuteType = StatementExecuteType.ExecuteQuery;
    lastExecuteStartNano = -1L;
    lastExecuteTimeNano = -1L;
    // call the parent's createChain() to obtain a FilterChainImpl,
    // then run its preparedStatement_executeQuery method
    return createChain().preparedStatement_executeQuery(this);
}

FilterChainImpl

createChain() returns an object of the filter chain class FilterChainImpl:

public FilterChainImpl createChain() {
    // reuse the cached FilterChainImpl if one is available
    FilterChainImpl chain = this.filterChain;
    if (chain == null) {
        chain = new FilterChainImpl(this.getConnectionProxy().getDirectDataSource());
    } else {
        // clear the cached reference while the chain is in use,
        // so a nested call builds its own instance
        this.filterChain = null;
    }

    return chain;
}
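The reuse trick in createChain (hand out the cached chain and null the field while it is in use) can be sketched in isolation. The names here are hypothetical; this only illustrates the single-slot recycling pattern, not Druid's exact code:

```java
// Single-slot object reuse: the field caches one reusable object; while
// it is handed out, the field is nulled so a nested call constructs a
// fresh one instead of sharing in-use state.
class ChainPool {
    private Chain cached;
    int created; // counts how many Chain objects were actually built

    Chain checkout() {
        Chain c = cached;
        if (c == null) {
            created++;
            c = new Chain();
        } else {
            cached = null; // in use: nested callers must not see it
        }
        return c;
    }

    void recycle(Chain c) {
        c.reset();
        cached = c; // available again for the next call
    }

    static class Chain {
        int pos;
        void reset() { pos = 0; }
    }
}
```

Druid does the analogous recycling when the chain finishes, so in the common single-threaded case each statement proxy allocates the chain only once.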

FilterEventAdapter class diagram


FilterEventAdapter source code

When preparedStatement_executeQuery of the FilterChainImpl class runs, it first delegates to the same method on the next filter in the chain; only after all filters have run does it execute the raw statement.

@Override
public ResultSetProxy preparedStatement_executeQuery(PreparedStatementProxy statement) throws SQLException {
    if (this.pos < filterSize) {
        // delegate to the next filter, e.g. the SQL monitoring filter (FilterEventAdapter)
        return nextFilter().preparedStatement_executeQuery(this, statement);
    }

    ResultSet resultSet = statement.getRawObject().executeQuery();
    if (resultSet == null) {
        return null;
    }
    return new ResultSetProxyImpl(statement, resultSet, dataSource.createResultSetId(),
            statement.getLastExecuteSql());
}
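The recursive dispatch above (a position index walks the filter list, each filter calls back into the chain, and the raw operation runs when the list is exhausted) can be sketched with hypothetical names:

```java
import java.util.List;

// Illustrative filter chain: pos advances through the filter list; each
// filter wraps the result of calling back into the chain, and when the
// list is exhausted the "raw" operation executes.
class MiniChain {
    interface Filter {
        String execute(MiniChain chain, String sql);
    }

    private final List<Filter> filters;
    private int pos;

    MiniChain(List<Filter> filters) {
        this.filters = filters;
    }

    String execute(String sql) {
        if (pos < filters.size()) {
            return filters.get(pos++).execute(this, sql); // next filter
        }
        return "result of " + sql; // end of chain: the raw execution
    }
}
```

With a log-style and a stat-style filter installed, the call unwinds inside-out: the raw result is produced last and each filter decorates it on the way back up.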

The SQL monitoring filter class (FilterEventAdapter) saves monitoring data while the SQL executes. This explains where Druid's monitoring data comes from.

//FilterEventAdapter
//The elegant part of this class is its use of the template method pattern: FilterEventAdapter, as the parent class, implements the generic flow, and subclasses override its hooks to add their own behavior. This design is well suited for abstracting a business model in real-world scenarios.
@Override
public ResultSetProxy preparedStatement_executeQuery(FilterChain chain, PreparedStatementProxy statement)
                                                                                                         throws SQLException {
    try {
        // called before the SQL actually runs: a Log filter subclass assembles
        // the execution log here, while a Stat filter records monitoring metrics
        statementExecuteQueryBefore(statement, statement.getSql());

        ResultSetProxy resultSet = chain.preparedStatement_executeQuery(statement);

        if (resultSet != null) {
            // after-hook: the Log filter subclass appends to the execution log,
            // the Stat filter updates its monitoring metrics
            statementExecuteQueryAfter(statement, statement.getSql(), resultSet);
            // same idea for the result-set-open event
            resultSetOpenAfter(resultSet);
        }

        return resultSet;
    } catch (SQLException error) {
        statement_executeErrorAfter(statement, statement.getSql(), error);
        throw error;
    } catch (RuntimeException error) {
        statement_executeErrorAfter(statement, statement.getSql(), error);
        throw error;
    } catch (Error error) {
        statement_executeErrorAfter(statement, statement.getSql(), error);
        throw error;
    }
}
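The template method pattern used here can be reduced to a small sketch (hypothetical names, not Druid's classes): the parent fixes the before/execute/after skeleton, and subclasses override only the hooks.

```java
// Parent class fixes the algorithm skeleton; subclasses override hooks.
abstract class EventAdapter {
    final StringBuilder trace = new StringBuilder();

    // the template method: its structure never changes
    String executeQuery(String sql) {
        before(sql);                  // hook
        String result = "rs:" + sql;  // fixed step: run the query
        after(sql, result);           // hook
        return result;
    }

    protected void before(String sql) {}
    protected void after(String sql, String result) {}
}

// A stat-style subclass that only counts executions and records a trace,
// without touching the execution skeleton.
class StatAdapter extends EventAdapter {
    int executeCount;

    @Override
    protected void before(String sql) {
        executeCount++;
    }

    @Override
    protected void after(String sql, String result) {
        trace.append(sql).append(" -> ").append(result);
    }
}
```

This mirrors how StatFilter and LogFilter each get their own behavior while FilterEventAdapter owns the try/catch and event ordering.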

Summary

Today I focused on how a query's SQL is executed in Druid and, where it is monitored, how the monitoring data is recorded. Through this and the previous days' study, I now understand the saying that "Druid is born for monitoring": monitoring runs through the entire design, covering things like latency spikes, connection counts, and SQL execution time. During SQL execution, the monitoring data is captured through Filter interception. Next, I plan to read the source code of StatFilter, which performs the actual monitoring.


Origin juejin.im/post/7147298686094016520