Druid Source Reading (5): Analysis of the Client Query Process

Contents

1 Query example

2 Obtaining a connection

3 Obtaining a PreparedStatement

4 Executing the query

5 Closing the connection


1 Query example

A MyBatis mapper interface with a simple query method is defined as follows:

public interface DruidTestMapper {
    @Select("SELECT * FROM druid_test")
    List<DruidTest> getAll();
}

A service that calls the mapper to run the query:

@Service
public class DruidTestService {

    @Autowired
    private DruidTestMapper druidTestMapper;

    public List<DruidTest> getAll(){
        return druidTestMapper.getAll();
    }
}

A unit test that calls the service:

@SpringBootTest
class DruidDemoApplicationTests {

	@Autowired
	private DruidTestService druidTestService;

	@Test
	void test() {
		System.out.println(druidTestService.getAll().toString());
	}

}
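
The examples above assume Druid is already the application's DataSource (typically wired in through druid-spring-boot-starter and application.yml). As a purely illustrative sketch, the same pool could also be configured programmatically; the class name, URL, and credentials below are placeholders:

import com.alibaba.druid.pool.DruidDataSource;

// hypothetical setup; in the demo project these values would normally come from application.yml
public class DataSourceConfigSketch {
    public DruidDataSource buildDataSource() {
        DruidDataSource dataSource = new DruidDataSource();
        dataSource.setUrl("jdbc:mysql://localhost:3306/test"); // placeholder URL
        dataSource.setUsername("root");                        // placeholder credentials
        dataSource.setPassword("root");
        dataSource.setInitialSize(5);
        dataSource.setMinIdle(5);
        dataSource.setMaxActive(20);
        dataSource.setMaxWait(60000);                          // becomes the maxWaitMillis seen in getConnection
        return dataSource;
    }
}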

2 Obtaining a connection

Set a breakpoint in the DruidDataSource.getConnection() method, then debug DruidDemoApplicationTests.test(); execution stops at the breakpoint. A quick look at the call stack shows that Spring obtains a connection by calling Druid's getConnection method.

getConnection(long maxWaitMillis) is analyzed below:

public DruidPooledConnection getConnection(long maxWaitMillis) throws SQLException {
        // initialize the pool; returns immediately if already initialized (covered in an earlier article)
        init();
        // if any filters are configured, run through the filter chain (not analyzed here)
        if (filters.size() > 0) {
            FilterChainImpl filterChain = new FilterChainImpl(this);
            return filterChain.dataSource_connect(this, maxWaitMillis);
        } else {
            // obtain a connection directly
            return getConnectionDirect(maxWaitMillis);
        }
    }
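
The filter branch runs whenever filters are configured on the data source. For example, the built-in stat and wall filters can be enabled by their standard aliases; a small illustrative snippet, assuming dataSource is the DruidDataSource from the sketch above:

// with filters configured, getConnection() goes through FilterChainImpl.dataSource_connect
dataSource.setFilters("stat,wall"); // throws SQLException if an alias is unknown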

getConnectionDirect(long maxWaitMillis) is analyzed below:

public DruidPooledConnection getConnectionDirect(long maxWaitMillis) throws SQLException {
        int notFullTimeoutRetryCnt = 0;
        for (;;) {
            // handle notFullTimeoutRetry
            DruidPooledConnection poolableConnection;
            try {
                // delegate to another internal method to obtain the connection
                poolableConnection = getConnectionInternal(maxWaitMillis);
            } catch (GetConnectionTimeoutException ex) {
                if (notFullTimeoutRetryCnt <= this.notFullTimeoutRetryCount && !isFull()) {
                    notFullTimeoutRetryCnt++;
                    if (LOG.isWarnEnabled()) {
                        LOG.warn("get connection timeout retry : " + notFullTimeoutRetryCnt);
                    }
                    continue;
                }
                throw ex;
            }
            // testOnBorrow validation
            if (testOnBorrow) {
                boolean validate = testConnectionInternal(poolableConnection.holder, poolableConnection.conn);
                if (!validate) {
                    if (LOG.isDebugEnabled()) {
                        LOG.debug("skip not validate connection.");
                    }

                    discardConnection(poolableConnection.holder);
                    continue;
                }
            } else {
                if (poolableConnection.conn.isClosed()) {
                    discardConnection(poolableConnection.holder); // discard the dead connection
                    continue;
                }
                // testWhileIdle validation
                if (testWhileIdle) {
                    final DruidConnectionHolder holder = poolableConnection.holder;
                    long currentTimeMillis             = System.currentTimeMillis();
                    long lastActiveTimeMillis          = holder.lastActiveTimeMillis;
                    long lastExecTimeMillis            = holder.lastExecTimeMillis;
                    long lastKeepTimeMillis            = holder.lastKeepTimeMillis;

                    if (checkExecuteTime
                            && lastExecTimeMillis != lastActiveTimeMillis) {
                        lastActiveTimeMillis = lastExecTimeMillis;
                    }

                    if (lastKeepTimeMillis > lastActiveTimeMillis) {
                        lastActiveTimeMillis = lastKeepTimeMillis;
                    }

                    long idleMillis                    = currentTimeMillis - lastActiveTimeMillis;

                    long timeBetweenEvictionRunsMillis = this.timeBetweenEvictionRunsMillis;

                    if (timeBetweenEvictionRunsMillis <= 0) {
                        timeBetweenEvictionRunsMillis = DEFAULT_TIME_BETWEEN_EVICTION_RUNS_MILLIS;
                    }

                    if (idleMillis >= timeBetweenEvictionRunsMillis
                            || idleMillis < 0 // unexpected branch
                            ) {
                        boolean validate = testConnectionInternal(poolableConnection.holder, poolableConnection.conn);
                        if (!validate) {
                            if (LOG.isDebugEnabled()) {
                                LOG.debug("skip not validate connection.");
                            }

                            discardConnection(poolableConnection.holder);
                            continue;
                        }
                    }
                }
            }

            if (removeAbandoned) {
                StackTraceElement[] stackTrace = Thread.currentThread().getStackTrace();
                poolableConnection.connectStackTrace = stackTrace;
                poolableConnection.setConnectedTimeNano();
                poolableConnection.traceEnable = true;

                activeConnectionLock.lock();
                try {
                    activeConnections.put(poolableConnection, PRESENT);
                } finally {
                    activeConnectionLock.unlock();
                }
            }

            if (!this.defaultAutoCommit) {
                poolableConnection.setAutoCommit(false);
            }
            // return the DruidPooledConnection, which is another wrapper, this time around DruidConnectionHolder
            return poolableConnection;
        }
    }
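
Whether the validation branches above are taken is controlled by pool properties. An illustrative combination (again assuming dataSource is the DruidDataSource from section 1) disables the per-borrow check and keeps the cheaper idle-time check:

// testOnBorrow validates on every borrow and is relatively expensive
dataSource.setTestOnBorrow(false);
// testWhileIdle only validates when the idle time exceeds timeBetweenEvictionRunsMillis
dataSource.setTestWhileIdle(true);
dataSource.setValidationQuery("SELECT 1");           // query used by testConnectionInternal
dataSource.setTimeBetweenEvictionRunsMillis(60000);  // the idle threshold checked above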

getConnectionInternal(long maxWait) (trimmed for brevity) is analyzed below:

private DruidPooledConnection getConnectionInternal(long maxWait) throws SQLException {
        // ... (closed/enable checks trimmed)
        final long nanos = TimeUnit.MILLISECONDS.toNanos(maxWait);

        DruidConnectionHolder holder;

        for (boolean createDirect = false;;) {
            try {
                lock.lockInterruptibly();
            } catch (InterruptedException e) {
                connectErrorCountUpdater.incrementAndGet(this);
                throw new SQLException("interrupt", e);
            }

            try {
                if (maxWaitThreadCount > 0
                        && notEmptyWaitThreadCount >= maxWaitThreadCount) {
                    connectErrorCountUpdater.incrementAndGet(this);
                    throw new SQLException("maxWaitThreadCount " + maxWaitThreadCount + ", current wait Thread count "
                            + lock.getQueueLength());
                }

                if (onFatalError
                        && onFatalErrorMaxActive > 0
                        && activeCount >= onFatalErrorMaxActive) {
                    connectErrorCountUpdater.incrementAndGet(this);

                    StringBuilder errorMsg = new StringBuilder();
                    errorMsg.append("onFatalError, activeCount ")
                            .append(activeCount)
                            .append(", onFatalErrorMaxActive ")
                            .append(onFatalErrorMaxActive);

                    if (lastFatalErrorTimeMillis > 0) {
                        errorMsg.append(", time '")
                                .append(StringUtils.formatDateTime19(
                                        lastFatalErrorTimeMillis, TimeZone.getDefault()))
                                .append("'");
                    }

                    if (lastFatalErrorSql != null) {
                        errorMsg.append(", sql \n")
                                .append(lastFatalErrorSql);
                    }

                    throw new SQLException(
                            errorMsg.toString(), lastFatalError);
                }

                connectCount++;

                if (createScheduler != null
                        && poolingCount == 0
                        && activeCount < maxActive
                        && creatingCountUpdater.get(this) == 0
                        && createScheduler instanceof ScheduledThreadPoolExecutor) {
                    ScheduledThreadPoolExecutor executor = (ScheduledThreadPoolExecutor) createScheduler;
                    if (executor.getQueue().size() > 0) {
                        createDirect = true;
                        continue;
                    }
                }

                if (maxWait > 0) {
                    // take a connection from the pool, waiting at most maxWait
                    holder = pollLast(nanos);
                } else {
                    // take a connection from the pool, blocking with no timeout
                    holder = takeLast();
                }

                if (holder != null) {
                    if (holder.discard) {
                        continue;
                    }

                    activeCount++;
                    holder.active = true;
                    if (activeCount > activePeak) {
                        activePeak = activeCount;
                        activePeakTime = System.currentTimeMillis();
                    }
                }
            } catch (InterruptedException e) {
                connectErrorCountUpdater.incrementAndGet(this);
                throw new SQLException(e.getMessage(), e);
            } catch (SQLException e) {
                connectErrorCountUpdater.incrementAndGet(this);
                throw e;
            } finally {
                lock.unlock();
            }

            break;
        }

        holder.incrementUseCount();
        // wrap the holder in a DruidPooledConnection and return it
        DruidPooledConnection poolalbeConnection = new DruidPooledConnection(holder);
        return poolalbeConnection;
    }
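
Which branch is taken here, pollLast(nanos) with a timeout or takeLast() blocking indefinitely, depends on the maxWait setting, and maxWaitThreadCount caps how many threads may queue for a connection. Illustrative values, assuming the same dataSource as before:

dataSource.setMaxWait(6000);          // > 0: use pollLast and fail with GetConnectionTimeoutException on timeout
dataSource.setMaxWaitThreadCount(64); // reject new requests once this many threads are already waiting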

The takeLast method, which takes a connection from the pool, is analyzed below:

DruidConnectionHolder takeLast() throws InterruptedException, SQLException {
        try {
            while (poolingCount == 0) {
                emptySignal(); // send signal to CreateThread create connection

                if (failFast && isFailContinuous()) {
                    throw new DataSourceNotAvailableException(createError);
                }

                notEmptyWaitThreadCount++;
                if (notEmptyWaitThreadCount > notEmptyWaitThreadPeak) {
                    notEmptyWaitThreadPeak = notEmptyWaitThreadCount;
                }
                try {
                    notEmpty.await(); // signal by recycle or creator
                } finally {
                    notEmptyWaitThreadCount--;
                }
                notEmptyWaitCount++;

                if (!enable) {
                    connectErrorCountUpdater.incrementAndGet(this);
                    if (disableException != null) {
                        throw disableException;
                    }

                    throw new DataSourceDisableException();
                }
            }
        } catch (InterruptedException ie) {
            notEmpty.signal(); // propagate to non-interrupted thread
            notEmptySignalCount++;
            throw ie;
        }

        decrementPoolingCount();
        // take the last connection in the pool array
        DruidConnectionHolder last = connections[poolingCount];
        // clear the slot that was just emptied
        connections[poolingCount] = null;
        // return the connection
        return last;
    }
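
takeLast treats the connections array as a simple stack: poolingCount is both the number of pooled connections and the index just past the top, so the most recently returned connection is handed out first. A deliberately stripped-down model of that bookkeeping (illustration only, ignoring the lock and the notEmpty condition):

// toy model of the pool's LIFO array; not Druid code
class TinyPoolSketch {
    private final Object[] connections = new Object[8]; // stands in for DruidConnectionHolder[]
    private int poolingCount;

    void putLast(Object holder) {           // roughly what recycle() and the creator thread do
        connections[poolingCount++] = holder;
    }

    Object takeLast() {                     // mirrors decrementPoolingCount() plus the lines above
        Object last = connections[--poolingCount];
        connections[poolingCount] = null;   // clear the emptied slot
        return last;
    }
}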

3 Obtaining a PreparedStatement

Set a breakpoint in DruidPooledConnection.prepareStatement(String sql, int resultSetType, int resultSetConcurrency). To see why this is the method that gets invoked, step through in the debugger after the connection is obtained; the call eventually lands here.

The prepareStatement method is analyzed below:

public PreparedStatement prepareStatement(String sql, int resultSetType, int resultSetConcurrency)
                                                                                                      throws SQLException {
        // check the connection state
        checkState();
        // this should look familiar: just like DruidConnectionHolder, PreparedStatementHolder is a wrapper around PreparedStatement
        PreparedStatementHolder stmtHolder = null;
        PreparedStatementKey key = new PreparedStatementKey(sql, getCatalog(), MethodType.M2, resultSetType,
                                                            resultSetConcurrency);

        boolean poolPreparedStatements = holder.isPoolPreparedStatements();

        if (poolPreparedStatements) {
            stmtHolder = holder.getStatementPool().get(key);
        }

        if (stmtHolder == null) {
            try {
                // create a PreparedStatementHolder from the underlying connection
                stmtHolder = new PreparedStatementHolder(key, conn.prepareStatement(sql, resultSetType,
                                                                                    resultSetConcurrency));
                holder.getDataSource().incrementPreparedStatementCount();
            } catch (SQLException ex) {
                handleException(ex, sql);
            }
        }

        initStatement(stmtHolder);
        // wrap the PreparedStatementHolder in a DruidPooledPreparedStatement and return it
        DruidPooledPreparedStatement rtnVal = new DruidPooledPreparedStatement(this, stmtHolder);

        holder.addTrace(rtnVal);

        return rtnVal;
    }
The conn.prepareStatement(...) call above lands in the connection proxy (ConnectionProxyImpl, used when filters are enabled), which pushes statement creation through the filter chain:

public PreparedStatement prepareStatement(String sql, int resultSetType, int resultSetConcurrency)
                                                                                                      throws SQLException {
        FilterChainImpl chain = createChain();
        // create the statement through the filter chain
        PreparedStatement stmt = chain.connection_prepareStatement(this, sql, resultSetType, resultSetConcurrency);
        recycleFilterChain(chain);
        return stmt;
    }

connection_prepareStatement is analyzed below:

  public PreparedStatementProxy connection_prepareStatement(
            ConnectionProxy connection,
            String sql,
            int resultSetType,
            int resultSetConcurrency) throws SQLException
    {
        if (this.pos < filterSize) {
            return nextFilter()
                    .connection_prepareStatement(this, connection, sql, resultSetType, resultSetConcurrency);
        }
        // call the underlying JDBC driver to create the statement
        PreparedStatement statement
                = connection.getRawObject()
                .prepareStatement(sql, resultSetType, resultSetConcurrency);

        if (statement == null) {
            return null;
        }
        // return a proxy object wrapping the raw statement
        return new PreparedStatementProxyImpl(connection, statement, sql, dataSource.createStatementId());
    }
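
Back in DruidPooledConnection.prepareStatement, whether stmtHolder is first looked up in a statement pool (holder.isPoolPreparedStatements()) is driven by the PSCache settings. An illustrative configuration, assuming the same dataSource:

dataSource.setPoolPreparedStatements(true);                  // enable the PreparedStatement cache (PSCache)
dataSource.setMaxPoolPreparedStatementPerConnectionSize(20); // cache capacity per connection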

4 Executing the query

After Druid returns the Statement, MyBatis executes handler.query(stmt, resultHandler).

Stepping into it a few times (F7 in the IDE) reaches the ps.execute() call, and stepping in once more enters Druid's execute method.

The execute source is analyzed below:

  @Override
    public boolean execute() throws SQLException {
        // check that the statement is still open
        checkOpen();

        incrementExecuteCount();
        transactionRecord(sql);

        oracleSetRowPrefetch();

        conn.beforeExecute();
        try {
            // delegate to the underlying statement to run the query against the database
            return stmt.execute();
        } catch (Throwable t) {
            errorCheck(t);

            throw checkException(t);
        } finally {
            conn.afterExecute();
        }
    }
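
Stripped of the MyBatis layer, the whole path traced in this article can be exercised with plain JDBC against the pool; a minimal sketch, reusing the druid_test table from the example and the dataSource assumed earlier:

// imports: java.sql.Connection, java.sql.PreparedStatement, java.sql.ResultSet
// getConnection -> prepareStatement -> execute -> close, all through the Druid wrappers
try (Connection conn = dataSource.getConnection();
     PreparedStatement ps = conn.prepareStatement("SELECT * FROM druid_test")) {
    ps.execute();                        // ends up in DruidPooledPreparedStatement.execute() above
    try (ResultSet rs = ps.getResultSet()) {
        while (rs.next()) {
            System.out.println(rs.getObject(1));
        }
    }
}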

5 Closing the connection

After the query returns its results, the last step is to close the connection.

Stepping in again (F7) eventually reaches DruidPooledPreparedStatement's close() method, where Druid performs the close. Note that closing the pooled connection recycles it back into the pool rather than closing the physical connection.
