Connection Pool

MySQL :: MySQL Connector/NET Developer Guide :: 4.3 Managing a Connection Pool in Connector/NET https://dev.mysql.com/doc/connector-net/en/connector-net-connections-pooling.html

Resource Usage

Connector/NET runs a background job every three minutes and removes connections from the pool that have been idle (unused) for more than three minutes. The pool cleanup frees resources on both the client and the server side, because on the client side every connection uses a socket, and on the server side every connection uses a socket and a thread.
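Stripped of driver specifics, this is just an idle-timeout sweep over pooled connections. The sketch below is purely illustrative Python (Connector/NET itself is a .NET library and works differently internally); the class and parameter names are hypothetical, and the three-minute values simply mirror the behavior described above.

import threading
import time
from collections import deque

class IdleEvictingPool:
    """Toy pool that discards connections left idle longer than max_idle seconds."""

    def __init__(self, create_conn, max_idle=180, sweep_interval=180):
        self._create_conn = create_conn      # factory returning a new connection
        self._max_idle = max_idle
        self._idle = deque()                 # (connection, returned_at) pairs
        self._lock = threading.Lock()
        # Background sweeper, analogous to the three-minute cleanup job described above.
        threading.Thread(target=self._sweep, args=(sweep_interval,), daemon=True).start()

    def acquire(self):
        with self._lock:
            if self._idle:
                conn, _ = self._idle.pop()   # reuse a pooled connection if one is available
                return conn
        return self._create_conn()           # otherwise open a new one

    def release(self, conn):
        with self._lock:
            self._idle.append((conn, time.monotonic()))

    def _sweep(self, interval):
        while True:
            time.sleep(interval)
            cutoff = time.monotonic() - self._max_idle
            with self._lock:
                stale = [c for c, ts in self._idle if ts < cutoff]
                self._idle = deque((c, ts) for c, ts in self._idle if ts >= cutoff)
            for conn in stale:
                conn.close()  # frees the client socket; the server reclaims its socket and thread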

MySQL :: MySQL Connector/Python Developer Guide :: 4.2 Installing Connector/Python from a Binary Distribution https://dev.mysql.com/doc/connector-python/en/connector-python-installation-binary.html

MySQL :: MySQL Connector/Python Developer Guide :: 9.1 Connector/Python Connection Pooling https://dev.mysql.com/doc/connector-python/en/connector-python-connection-pooling.html

cnxpool = mysql.connector.pooling.MySQLConnectionPool(pool_name = "mypool", pool_size = 3, **dbconfig)
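The line above comes straight from the Connector/Python pooling chapter. A minimal end-to-end sketch is shown below; the host, credentials, and database name are placeholders, while MySQLConnectionPool, get_connection(), and the pool-returning behavior of close() are part of Connector/Python.

from mysql.connector import pooling

# Placeholder connection settings; adjust for your server.
dbconfig = {
    "host": "127.0.0.1",
    "user": "app",
    "password": "secret",
    "database": "test",
}

# A pool of 3 connections, matching the snippet above.
cnxpool = pooling.MySQLConnectionPool(pool_name="mypool", pool_size=3, **dbconfig)

cnx = cnxpool.get_connection()   # borrow a connection from the pool
try:
    cur = cnx.cursor()
    cur.execute("SELECT 1")
    print(cur.fetchone())
    cur.close()
finally:
    cnx.close()                  # returns the connection to "mypool" rather than closing the session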

Connection Pooling — SQLAlchemy 1.3 Documentation https://docs.sqlalchemy.org/en/13/core/pooling.html?highlight=max_overflow

  • creator – a callable function that returns a DB-API connection object, same as that of Pool.creator.

  • pool_size – The size of the pool to be maintained, defaults to 5. This is the largest number of connections that will be kept persistently in the pool. Note that the pool begins with no connections; once this number of connections is requested, that number of connections will remain. pool_size can be set to 0 to indicate no size limit; to disable pooling, use a NullPool instead.

  • max_overflow – The maximum overflow size of the pool. When the number of checked-out connections reaches the size set in pool_size, additional connections will be returned up to this limit. When those additional connections are returned to the pool, they are disconnected and discarded. It follows then that the total number of simultaneous connections the pool will allow is pool_size + max_overflow, and the total number of “sleeping” connections the pool will allow is pool_size. max_overflow can be set to -1 to indicate no overflow limit; no limit will be placed on the total number of concurrent connections. Defaults to 10.

  • timeout – The number of seconds to wait before giving up on returning a connection. Defaults to 30.

  • use_lifo – use LIFO (last-in-first-out) when retrieving connections instead of FIFO (first-in-first-out). Using LIFO, a server-side timeout scheme can reduce the number of connections used during non-peak periods of use. When planning for server-side timeouts, ensure that a recycle or pre-ping strategy is in use to gracefully handle stale connections. (A short sketch showing how these parameters map onto create_engine keywords follows this list.)
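As a rough illustration of how these constructor arguments surface when an engine is created, the sketch below passes the equivalent create_engine keywords (pool_size, max_overflow, pool_timeout); the MySQL URL, credentials, and pymysql driver are assumptions, not part of the quoted documentation.

from sqlalchemy import create_engine

# Hypothetical DSN; any dialect that defaults to QueuePool behaves the same way.
engine = create_engine(
    "mysql+pymysql://app:secret@localhost/test",
    pool_size=5,        # connections kept persistently in the pool
    max_overflow=10,    # extra connections allowed during bursts, discarded when returned
    pool_timeout=30,    # seconds to wait for a free connection before raising
)

# At most pool_size + max_overflow = 15 connections are open simultaneously;
# only pool_size = 5 of them are retained once the burst subsides.
with engine.connect() as conn:
    print(conn.execute("SELECT 1").scalar())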

Connection Pooling — SQLAlchemy 1.3 Documentation https://docs.sqlalchemy.org/en/13/core/pooling.html?highlight=max_overflow#pool-use-lifo

Using FIFO vs. LIFO

The QueuePool class features a flag called QueuePool.use_lifo, which can also be accessed from create_engine() via the flag create_engine.pool_use_lifo. Setting this flag to True causes the pool’s “queue” behavior to instead be that of a “stack”, e.g. the last connection to be returned to the pool is the first one to be used on the next request. In contrast to the pool’s long-standing behavior of first-in-first-out, which produces a round-robin effect of using each connection in the pool in series, LIFO mode allows excess connections to remain idle in the pool, allowing server-side timeout schemes to close these connections out. The difference between FIFO and LIFO is basically whether or not it’s desirable for the pool to keep a full set of connections ready to go even during idle periods:

engine = create_engine(
    "postgreql://", pool_use_lifo=True, pool_pre_ping=True)

Above, we also make use of the create_engine.pool_pre_ping flag so that connections which are closed from the server side are gracefully handled by the connection pool and replaced with a new connection.

Note that the flag only applies to QueuePool use.
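Since the flag is honored only when the engine is actually backed by a QueuePool, one quick sanity check is to inspect engine.pool after creating the engine. The DSN below is again a placeholder; engine.pool and the QueuePool class are standard SQLAlchemy.

from sqlalchemy import create_engine
from sqlalchemy.pool import QueuePool

# Hypothetical DSN; most dialects default to QueuePool.
engine = create_engine("mysql+pymysql://app:secret@localhost/test",
                       pool_use_lifo=True, pool_pre_ping=True)

print(type(engine.pool))                   # the pool instance backing this engine
assert isinstance(engine.pool, QueuePool)  # confirms pool_use_lifo actually takes effect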

Reposted from www.cnblogs.com/yuanjiangw/p/12930826.html