Flask database connection pool, DBUtils, HTTP connection pool

1. Introduction and use of DBUtils

DBUtils Introduction

DBUtils is a set of Python packages that manage database connection pools for high-frequency, high-concurrency database access. It improves performance, automatically manages the creation and release of connection objects, and provides thread-safe wrappers around non-thread-safe database interfaces, so the pooled connections can be used safely in a variety of multi-threaded environments.

Usage scenario: if you are using one of the popular object-relational mappers, SQLObject or SQLAlchemy, you do not need DBUtils, since they come with their own connection pools. SQLObject 2 (SQL-API) actually borrows some code from DBUtils to separate pooling into its own layer.

DBUtils provides two external interfaces:

  • PersistentDB: Provides thread-specific database connections and automatically manages connections.
  • PooledDB: Provides database connections that can be shared between threads and automatically manages connections.

Which of the two to use also depends on the actual database driver; for example, SQLite connections can only be pooled with PersistentDB. Download address: http://www.webwareforpython.org/downloads/DBUtils/

Use DBUtils database connection pool

Installation: pip install DBUtils
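Note: the import path depends on the installed DBUtils version. DBUtils 1.x exposes CamelCase module names, while DBUtils 2.0 renamed them to lowercase; the examples below mix both styles. A version-tolerant import looks like this:

# Works with either DBUtils 2.0+ (lowercase modules) or DBUtils 1.x (CamelCase modules)
try:
    from dbutils.pooled_db import PooledDB          # DBUtils >= 2.0
    from dbutils.persistent_db import PersistentDB
except ImportError:
    from DBUtils.PooledDB import PooledDB           # DBUtils 1.x
    from DBUtils.PersistentDB import PersistentDB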

Example: MySQLdb module usage

The connection pool object only needs to be initialized once, which is usually ensured by creating it in module-level code. PersistentDB connection example:

import MySQLdb
import DBUtils.PersistentDB

# maxusage is the maximum number of times a single connection may be reused
# kwargs stands for the connection arguments passed on to MySQLdb.connect(), e.g. host, user, passwd, db
persist = DBUtils.PersistentDB.PersistentDB(creator=MySQLdb, maxusage=1000, **kwargs)

# Get the connection held for the current thread
conn = persist.connection()

# "Close" the connection (with the default closeable=False this is a no-op; the connection stays bound to the thread)
conn.close()

The creator parameter (named dbapi in older DBUtils releases) specifies the database module to use, which must be compatible with DB-API 2. The following database modules support the DB-API 2 specification:

pip install pymysql      # MySQL
pip install pymssql      # SQL Server
pip install cx_Oracle    # Oracle
pip install phoenixdb    # HBase (Phoenix)
# sqlite3 ships with the Python standard library; no installation needed

DBUtils only provides connection pool management, and the actual database operations are still completed by the target database module that complies with the DB-API 2 standard.

Example: pymysql module usage

PooledDB is used in the same way as PersistentDB, but it takes different parameters:

  • creator (dbapi in older releases): the DB-API 2 database module to use
  • mincached: the number of idle connections opened at startup
  • maxcached: the maximum number of idle connections kept in the pool
  • maxshared: the maximum number of shareable connections in the pool
  • maxconnections: the maximum number of connections allowed
  • blocking: whether to block and wait when the maximum number of connections is reached
  • maxusage: the maximum number of times a single connection may be reused
  • setsession: a list of SQL commands used to prepare the session, e.g. ["set names utf8"].

# `pooled` is a PooledDB instance (see the full example below)
conn = pooled.connection()
cur = conn.cursor()
cur.execute(sql)
res = cur.fetchone()
cur.close()   # or: del cur
conn.close()  # or: del conn  (returns the connection to the pool)

import pymysql
from dbutils.pooled_db import PooledDB

# Define the connection parameters and create the pool
pool = PooledDB(
    creator=pymysql,
    maxconnections=6,
    mincached=2,
    maxcached=5,
    blocking=True,
    host='localhost',
    user='root',
    passwd='123456',
    db='mydb',
    port=3306,
    charset='utf8mb4'
)


def main():
    # Get a connection from the pool
    conn = pool.connection()
    cursor = conn.cursor()

    # Execute the SQL statement
    sql = "SELECT * FROM students"
    cursor.execute(sql)
    result = cursor.fetchall()

    # Process the query results
    for row in result:
        print(row)

    # Close the cursor and return the connection to the pool
    cursor.close()
    conn.close()


if __name__ == '__main__':
    main()

Example: Object-oriented usage of DBUtils

"""
使用DBUtils数据库连接池中的连接,操作数据库
"""
import json
import datetime

import pymysql
from DBUtils.PooledDB import PooledDB


class MysqlClient(object):
    __pool = None

    def __init__(self, mincached=10, maxcached=20, maxshared=10, maxconnections=200, blocking=True,
                 maxusage=100, setsession=None, reset=True,
                 host='127.0.0.1', port=3306, db='test',
                 user='root', passwd='123456', charset='utf8mb4'):
        """

        :param mincached:连接池中空闲连接的初始数量
        :param maxcached:连接池中空闲连接的最大数量
        :param maxshared:共享连接的最大数量
        :param maxconnections:创建连接池的最大数量
        :param blocking:超过最大连接数量时候的表现,为True等待连接数量下降,为false直接报错处理
        :param maxusage:单个连接的最大重复使用次数
        :param setsession:optional list of SQL commands that may serve to prepare
            the session, e.g. ["set datestyle to ...", "set time zone ..."]
        :param reset:how connections should be reset when returned to the pool
            (False or None to rollback transcations started with begin(),
            True to always issue a rollback for safety's sake)
        :param host:数据库ip地址
        :param port:数据库端口
        :param db:库名
        :param user:用户名
        :param passwd:密码
        :param charset:字符编码
        """

        if not self.__pool:
            self.__class__.__pool = PooledDB(pymysql,
                                             mincached, maxcached,
                                             maxshared, maxconnections, blocking,
                                             maxusage, setsession, reset,
                                             host=host, port=port, db=db,
                                             user=user, passwd=passwd,
                                             charset=charset,
                                             cursorclass=pymysql.cursors.DictCursor
                                             )
        self._conn = None
        self._cursor = None
        self.__get_conn()

    def __get_conn(self):
        self._conn = self.__pool.connection()
        self._cursor = self._conn.cursor()

    def close(self):
        try:
            self._cursor.close()
            self._conn.close()
        except Exception as e:
            print(e)

    def __execute(self, sql, param=()):
        count = self._cursor.execute(sql, param)
        print(count)
        return count

    @staticmethod
    def __dict_datetime_obj_to_str(result_dict):
        """把字典里面的datatime对象转成字符串,使json转换不出错"""
        if result_dict:
            result_replace = {k: v.__str__() for k, v in result_dict.items() if isinstance(v, datetime.datetime)}
            result_dict.update(result_replace)
        return result_dict

    def select_one(self, sql, param=()):
        """查询单个结果"""
        count = self.__execute(sql, param)
        result = self._cursor.fetchone()
        """:type result:dict"""
        result = self.__dict_datetime_obj_to_str(result)
        return count, result

    def select_many(self, sql, param=()):
        """
        查询多个结果
        :param sql: qsl语句
        :param param: sql参数
        :return: 结果数量和查询结果集
        """
        count = self.__execute(sql, param)
        result = self._cursor.fetchall()
        """:type result:list"""
        for row_dict in result:
            self.__dict_datetime_obj_to_str(row_dict)
        return count, result

    def execute(self, sql, param=()):
        count = self.__execute(sql, param)
        return count

    def begin(self):
        """Start a transaction"""
        self._conn.autocommit(0)

    def end(self, option='commit'):
        """End a transaction"""
        if option == 'commit':
            self._conn.commit()
        else:
            self._conn.rollback()


if __name__ == "__main__":
    mc = MysqlClient()
    sql1 = 'SELECT * FROM shiji  WHERE  id = 1'
    result1 = mc.select_one(sql1)
    print(json.dumps(result1[1], ensure_ascii=False))

    sql2 = 'SELECT * FROM shiji  WHERE  id IN (%s,%s,%s)'
    param = (2, 3, 4)
    print(json.dumps(mc.select_many(sql2, param)[1], ensure_ascii=False))
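The begin()/end() helpers are not exercised in the example above; a minimal sketch of how they might be used follows (the orders table and its columns are made-up names for illustration):

    mc = MysqlClient()
    mc.begin()  # turn off autocommit for this connection
    try:
        # `orders` and `order_log` are hypothetical tables
        mc.execute('UPDATE orders SET status = %s WHERE id = %s', ('paid', 1))
        mc.execute('INSERT INTO order_log (order_id, action) VALUES (%s, %s)', (1, 'paid'))
        mc.end('commit')    # commit both statements together
    except Exception:
        mc.end('rollback')  # undo everything on failure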

Without a connection pool

import MySQLdb

conn = MySQLdb.connect(host='localhost', user='root', passwd='pwd', db='myDB', port=3306)
# import pymysql
# conn = pymysql.connect(host='localhost', port=3306, db='game', user='root', password='123456', charset='utf8')
cur = conn.cursor()
SQL = "select * from table1"
r = cur.execute(SQL)
r = cur.fetchall()
cur.close()
conn.close()

Using a connection pool

import MySQLdb
from DBUtils.PooledDB import PooledDB

# 5 is the minimum number of connections kept in the pool
pool = PooledDB(MySQLdb, 5, host='localhost', user='root', passwd='pwd', db='myDB', port=3306)

# From now on, whenever a database connection is needed, just call connection() on the pool
conn = pool.connection()
cur = conn.cursor()
SQL = "select * from table1"
r = cur.execute(SQL)
r = cur.fetchall()
cur.close()
conn.close()

Multithreading with a connection pool

import sys
import threading
import MySQLdb
import DBUtils.PooledDB

connargs = { "host":"localhost", "user":"user1", "passwd":"123456", "db":"test" }
def test(conn):
    try:
        cursor = conn.cursor()
        count = cursor.execute("select * from users")
        rows = cursor.fetchall()
        for r in rows: pass
    finally:
        conn.close()
        
def testloop():
    print ("testloop")
    for i in range(1000):
        conn = MySQLdb.connect(**connargs)
        test(conn)
        
def testpool():
    print ("testpool")
    pooled = DBUtils.PooledDB.PooledDB(MySQLdb, **connargs)
    for i in range(1000):
        conn = pooled.connection()
        test(conn)
        
def main():
    t = testloop if len(sys.argv) == 1 else testpool
    for i in range(10):
        threading.Thread(target = t).start()
        
if __name__ == "__main__":
    main()

Although the test method is not very rigorous, the performance improvement brought by DBUtils can still be felt from the results. Of course, we could also reuse a single unclosed connection inside testloop(), but that does not reflect how real applications are written.
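For a rough comparison, a small timing helper like the one below (a sketch that reuses the testloop/testpool functions above) can print the elapsed time of each variant:

import time

def timed(fn):
    """Run fn once and print how long it took."""
    start = time.perf_counter()
    fn()
    print(f"{fn.__name__}: {time.perf_counter() - start:.2f}s")

# Example: compare one run of direct connections against one run using the pool
# timed(testloop)
# timed(testpool)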

2. Flask configuration, blueprint, database connection pool, context principle

Flask configuration file, blueprint, database connection pool, context principle: https://www.cnblogs.com/yunweixiaoxuesheng/p/8418135.html

Configuration

Method 1: configure via the config dictionary

app.config['SESSION_COOKIE_NAME'] = 'session_liling'

Method 2: load configuration from a file

from flask import Flask

app = Flask(__name__)

app.config.from_pyfile('settings.py')    # loads AAAA from settings.py
print(app.config['AAAA'])        # 123

# settings.py
AAAA = 123

Method 3: point to the settings file through an environment variable (recommended)

from flask import Flask

app = Flask(__name__)

import os

os.environ['FLASK_SETTINGS'] = 'settings.py'

app.config.from_envvar('FLASK_SETTINGS')

Method 4: load the configuration from an object; different configuration classes can be selected for different environments (recommended)

from flask import Flask

app = Flask(__name__)

app.config.from_object('settings.BaseConfig')
print(app.config['NNNN'])  # 123


# settings.py

class BaseConfig(object): # Public configuration
    NNNN = 123


class TestConfig(object):
    DB = '127.0.0.1'


class DevConfig(object):
    DB = '192.168.1.1'


class ProConfig(object):
    DB = '47.18.1.1'

Referencing the configuration from other files

from flask import Flask,current_app

app = Flask(__name__)

app.secret_key = 'adfadsfhjkhakljsdfh'

app.config.from_object('settings.BaseConfig')

@app.route('/index')
def index():
    print(current_app.config['NNNN'])
    return 'xxx'

if __name__ == '__main__':
    app.run()

instance_path and instance_relative_config

from flask import Flask,current_app

app = Flask(__name__,instance_path=None,instance_relative_config=False)
# By default instance_relative_config=False, in which case neither instance_relative_config nor instance_path takes effect
# When instance_relative_config=True, instance_path takes effect and app.config.from_pyfile('settings.py')
#   is resolved relative to the instance path instead of the application root
# By default instance_path=None, which means the `instance` folder under the current path is used as the configuration path
# If a path is set, the configuration file is looked up under that path instead

app.config.from_pyfile('settings.py')

@app.route('/index')
def index():
    print(current_app.config['NNNN'])
    return 'xxx'


if __name__ == '__main__':
    app.run()

Blueprints

Blueprints organize the application's directory structure by module and are generally suitable for small and medium-sized projects.

Code:

# crm/__init__.py
# Create the Flask application and register the different modules with blueprints

from flask import Flask
from .views import account
from .views import order

app = Flask(__name__)

app.register_blueprint(account.account)
app.register_blueprint(order.order)

------------------------------------------------------------------------
# manage.py
# startup file
import crm

if __name__ == '__main__':
    crm.app.run()

------------------------------------------------------------------------
# crm/views/account.py
# View-function module; the Blueprint attaches these functions to the app
from flask import Blueprint

account = Blueprint('account', __name__, url_prefix='/xxx')

@account.route('/login')
def login():
    return 'Login'
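crm/__init__.py also imports a views.order module that is not shown above; a minimal sketch of what it might look like (the route and names are placeholders mirroring account.py):

# crm/views/order.py (hypothetical counterpart to account.py)
from flask import Blueprint

order = Blueprint('order', __name__, url_prefix='/order')

@order.route('/list')
def order_list():
    return 'Order list'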

Flask database connection pool

https://www.cnblogs.com/TheLand/p/9178305.html

ORM (Object Relational Mapping)

ORM (Object-Relational Mapping) is a programming technology used to establish mapping relationships between relational databases and object-oriented programming languages. It allows developers to use an object-oriented approach to manipulate databases without directly writing or executing SQL queries.

ORM provides an abstraction layer that maps database tables into objects, and provides a set of methods and tools to facilitate database addition, deletion, modification, and query operations. Developers can represent and manipulate data by using objects and methods without having to worry about the underlying SQL statements and database details.

Common ORM frameworks include:

  • SQLAlchemy: a powerful Python ORM framework and database toolkit that supports many database backends and provides advanced querying and transaction management. It is designed for efficient, high-performance database access and implements a complete enterprise-level persistence model. SQLAlchemy's philosophy is that the size and performance of a SQL database matter more than a collection of objects, and that the abstraction over object collections matters more than tables and rows.
  • Flask-SQLAlchemy: Flask-SQLAlchemy is a SQLAlchemy extension integrated with the Flask framework, which simplifies the process of using SQLAlchemy for database operations in Flask applications. It provides a simple yet powerful set of tools and features that make interacting with databases easier and more efficient.
  • Django ORM: The ORM that comes with the Django framework provides a simple and easy-to-use interface, supports multiple database backends, and has powerful query and model association functions.
  • Hibernate: The most popular ORM framework in the Java field, providing mapping and management between Java objects and relational databases.

The benefits of using an ORM include:

  • Improve development efficiency: ORM provides an object-oriented programming interface, allowing developers to perform database operations more quickly and reducing the workload of writing and debugging SQL statements.
  • Cross-database platforms: ORM frameworks often support multiple database backends, allowing developers to easily switch or use different database systems simultaneously.
  • Database abstraction and security: ORM hides underlying database details, provides a layer of abstraction that helps maintain and manage database structures, and provides security protections such as parameter binding and preventing SQL injection.
  • Better maintainability and testability: Using an ORM can improve the readability and maintainability of your code, making it easier to conduct unit and integration testing.

Note: ORM cannot solve all database problems. In some cases, complex queries and performance requirements may require direct use of native SQL. Therefore, carefully select and use an appropriate ORM framework based on specific needs and scenarios.
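To make the table-to-object mapping idea concrete, here is a minimal SQLAlchemy sketch (assuming SQLAlchemy 1.4+; the Book table and its columns are made up for illustration):

from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Book(Base):
    __tablename__ = 'book'          # maps the class to the `book` table
    id = Column(Integer, primary_key=True)
    title = Column(String(100))

# SQLite in-memory database, just for the example
engine = create_engine('sqlite://', echo=False)
Base.metadata.create_all(engine)

Session = sessionmaker(bind=engine)
session = Session()
session.add(Book(title='Flask in Action'))   # the ORM generates the INSERT
session.commit()
print(session.query(Book).filter_by(title='Flask in Action').first().id)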

Why use database connection pool

  • A new connection per operation: without a connection pool, the program has to connect to the database for every operation. If too many connections are opened, the database consumes too many resources, becomes overloaded, and the program slows down.
  • A single global connection: the program creates one connection globally and always uses it, which avoids the cost of repeated connections. But with multiple threads this connection must be locked, so access becomes serial and no concurrency is achieved.

Solution:

  • Method 1: create a connection per thread (implemented with thread-local storage, threading.local). Each thread uses its own database connection independently. Closing it is not a real close: when the same thread asks again, the originally created connection is reused, and it is only really closed when the thread terminates. With many threads, many connections are still created. (See Mode 1 below.)
  • Method 2: create a connection pool that provides connections for all threads: a connection is taken when needed and put back into the pool after use. Suppose the maximum number of connections is 10; the pool is essentially a list from which connections are popped and to which they are appended back, and callers queue for them. Connections in the pool are reused and shared, which achieves concurrency while preventing too many connections. (See Mode 2 below.)

Based on DBUtils database connection pool

A connection pool avoids reconnecting to the database for every operation. The alternative shown below is a single global connection shared by all threads: it works, but the connection has to be locked, which makes database access serial.

import pymysql
import threading
from threading import RLock

LOCK = RLock()
CONN = pymysql.connect(
    host='127.0.0.1',
    port=3306,
    user='root',
    password='123',
    database='ok1',
    charset='utf8'
)


def task(arg):
    with LOCK:
        cursor = CONN.cursor()
        cursor.execute('select * from book')
        result = cursor.fetchall()
        cursor.close()
        print(result)


for i in range(10):
    t = threading.Thread(target=task, args=(i,))
    t.start()

Data isolation between threads

"Local threads" can achieve data isolation between threads. Ensure that each thread has only its own copy of data and will not affect others during operation. Even if it is multi-threaded, its own values ​​​​are isolated from each other.

import threading
import time

# thread-local object
local_values = threading.local()


def func(num):
    """
    # 第一个线程进来,本地线程对象会为他创建一个
    # 第二个线程进来,本地线程对象会为他创建一个
    {
        线程1的唯一标识:{name:1},
        线程2的唯一标识:{name:2},
    }
    :param num: 
    :return: 
    """
    local_values.name = num  # 4
    # 线程停下来了
    time.sleep(2)
    # 第二个线程: local_values.name,去local_values中根据自己的唯一标识作为key,获取value中name对应的值
    print(local_values.name, threading.current_thread().name)


for i in range(5):
    th = threading.Thread(target=func, args=(i,), name='thread-%s' % i)
    th.start()

Mode 1: Each thread creates a connection

Each thread's connection is created and stored via the threading.local mechanism.
Each thread creates its own connection, and closing it does not really close it.
When the same thread asks for a connection again, the original connection is reused.
The connection is only really closed when the thread terminates.

from DBUtils.PersistentDB import PersistentDB
import pymysql

POOL = PersistentDB(
    creator=pymysql,  # module used to create the connections
    maxusage=None,  # maximum number of times a single connection may be reused; None means unlimited
    setsession=[],  # list of commands executed before each session starts, e.g. ["set datestyle to ...", "set time zone ..."]
    ping=0,
    # ping the MySQL server to check that the service is available:
    # 0 = None = never, 1 = default = whenever it is requested, 2 = when a cursor is created,
    # 4 = when a query is executed, 7 = always
    closeable=False,
    # If False, conn.close() is effectively ignored so the connection can be reused, and it is only
    # really closed when the thread ends. If True, conn.close() really closes the connection, and
    # calling pool.connection() again will then fail because the connection is really gone
    # (pool.steady_connection() can be used to obtain a new one).
    threadlocal=None,  # thread-local object used to hold the connection object for each thread
    host='127.0.0.1',
    port=3306,
    user='root',
    password='123',
    database='pooldb',
    charset='utf8'
)


def func():
    # conn = SteadyDBConnection()
    conn = POOL.connection()
    cursor = conn.cursor()
    cursor.execute('select * from tb1')
    result = cursor.fetchall()
    cursor.close()
    conn.close()  # not a real close: unlike a plain pymysql.connect() / conn.close(), the connection is kept for reuse

    conn = POOL.connection()
    cursor = conn.cursor()
    cursor.execute('select * from tb1')
    result = cursor.fetchall()
    cursor.close()
    conn.close()

import threading

for i in range(10):
    t = threading.Thread(target=func)
    t.start()

Mode 2: threads share a connection pool (recommended)

Create a connection pool that provides connections for all threads: a thread takes a connection when it needs one and puts it back after use.
Threads continuously reuse the connections in the pool.

import time
import pymysql
import threading
from DBUtils.PooledDB import PooledDB, SharedDBConnection

POOL = PooledDB(
    creator=pymysql,  # module used to create the connections
    maxconnections=6,  # maximum number of connections allowed by the pool; 0 or None means unlimited
    mincached=2,  # number of idle connections created at initialization; 0 means none

    maxcached=5,  # maximum number of idle connections kept in the pool; 0 or None means unlimited
    maxshared=3,
    # maximum number of shared connections in the pool; 0 or None means all connections are dedicated.
    # NOTE: effectively unused here, because modules such as pymysql and MySQLdb report threadsafety=1,
    # so whatever value is set, _maxshared stays 0 and connections are never actually shared.
    blocking=True,  # whether to block and wait when no connection is available; True waits, False raises an error
    maxusage=None,  # maximum number of times a single connection may be reused; None means unlimited
    setsession=[],  # list of commands executed before each session starts, e.g. ["set datestyle to ...", "set time zone ..."]
    # ping the MySQL server to check that the service is available:
    # 0 = None = never, 1 = default = whenever it is requested,
    # 2 = when a cursor is created, 4 = when a query is executed, 7 = always
    ping=0,
    host='127.0.0.1',
    port=3306,
    user='root',
    password='123456',
    database='flask_test',
    charset='utf8'
)


def func():
    # Check whether the number of connections currently in use is below maxconnections;
    # if not, either wait or raise TooManyConnections (depending on blocking).
    # Otherwise, prefer taking a SteadyDBConnection created at initialization time,
    # wrap it in a PooledDedicatedDBConnection and return it.
    # If no idle connection is available, create a new SteadyDBConnection,
    # wrap it in a PooledDedicatedDBConnection and return it.
    # Once the connection is closed, it goes back to the pool for later threads to use.

    # PooledDedicatedDBConnection
    conn = POOL.connection()

    # print(th, 'connection taken', conn1._con)
    # print(th, 'currently idle in the pool', pool._idle_cache, '\r\n')

    cursor = conn.cursor()
    cursor.execute('select * from userinfo')
    result = cursor.fetchall()
    print(result)
    conn.close()

    conn = POOL.connection()

    # print(th, 'connection taken', conn1._con)
    # print(th, 'currently idle in the pool', pool._idle_cache, '\r\n')

    cursor = conn.cursor()
    cursor.execute('select * from userinfo')
    result = cursor.fetchall()
    conn.close()


func()
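Since this section is about using the pool in a Flask application, here is a sketch of how the module-level POOL above is typically used inside a Flask view (the app, the route, and the table name are illustrative):

from flask import Flask, jsonify

app = Flask(__name__)


@app.route('/users')
def users():
    # Take a connection from the pool for this request and return it afterwards
    conn = POOL.connection()
    try:
        cursor = conn.cursor()
        cursor.execute('select * from userinfo')
        rows = cursor.fetchall()
        cursor.close()
    finally:
        conn.close()  # puts the connection back into the pool
    return jsonify(list(rows))


if __name__ == '__main__':
    app.run()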

Context management

The so-called "context" is like the surrounding text you rely on to answer an exam question: in a program it generally refers to the environment the code runs in, such as the WSGI environment of the incoming network request.
In Flask, the context machinery is what powers current_app, session, and request.
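A small illustration of these context-bound proxies (the view name and route are arbitrary):

from flask import Flask, request, current_app

app = Flask(__name__)


@app.route('/ctx')
def ctx():
    # Inside a request, the proxies resolve to the real objects for this request/app
    return '%s handled %s %s' % (current_app.name, request.method, request.path)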

Flask's native thread-local (Local)

from flask import session

try:
    from greenlet import getcurrent as get_ident  # greenlet coroutine module
except ImportError:
    try:
        from thread import get_ident
    except ImportError:
        from _thread import get_ident  # get_ident() returns the unique id of the current thread


class Local(object):  # modelled on the Local class that LocalStack uses internally
    __slots__ = ('__storage__', '__ident_func__')  # only these attributes may be accessed on instances of this class

    def __init__(self):
        # object.__setattr__ sets the value directly on self, equivalent to self.__storage__ = {}
        # Because this class defines its own __setattr__, writing self.__storage__ = {} would trigger
        # __setattr__, which itself reads self.__storage__ and would trigger __getattr__ again,
        # causing infinite recursion. Assigning through object.__setattr__ avoids that recursion.
        object.__setattr__(self, '__storage__', {})

        object.__setattr__(self, '__ident_func__', get_ident)  # use the greenlet/thread id function

    def __iter__(self):
        return iter(self.__storage__.items())

    def __release_local__(self):
        self.__storage__.pop(self.__ident_func__(), None)

    def __getattr__(self, name):
        try:
            return self.__storage__[self.__ident_func__()][name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        ident = self.__ident_func__()  # unique id of the current thread (or coroutine)
        storage = self.__storage__  # {}
        try:
            storage[ident][name] = value  # { 111: {'stack': []}, 222: {'stack': []} }
        except KeyError:
            storage[ident] = {name: value}

    def __delattr__(self, name):
        try:
            del self.__storage__[self.__ident_func__()][name]
        except KeyError:
            raise AttributeError(name)


_local = Local()  # works like a thread-local: every thread that sets values gets its own copy
_local.stack = []  # _local.stack goes through __setattr__, which keys the value by self.__ident_func__()

LocalStack

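Flask (via werkzeug) builds a LocalStack on top of Local: each thread stores its own list under the 'stack' key, and push/pop/top only ever touch the calling thread's list. A simplified sketch, reusing the Local class defined above (not the full werkzeug implementation):

class LocalStack(object):
    """Simplified sketch of werkzeug's LocalStack: a per-thread stack stored inside a Local."""

    def __init__(self):
        self._local = Local()

    def push(self, obj):
        """Push an object onto the calling thread's stack."""
        rv = getattr(self._local, 'stack', None)
        if rv is None:
            self._local.stack = rv = []
        rv.append(obj)
        return rv

    def pop(self):
        """Pop and return the topmost item of the calling thread's stack."""
        stack = getattr(self._local, 'stack', None)
        if stack is None:
            return None
        elif len(stack) == 1:
            self._local.__release_local__()
            return stack[-1]
        else:
            return stack.pop()

    @property
    def top(self):
        """Return the topmost item without removing it."""
        try:
            return self._local.stack[-1]
        except (AttributeError, IndexError):
            return None


_stack = LocalStack()
_stack.push('request context of this thread')  # each thread pushes onto its own stack
print(_stack.top)                              # request context of this thread
_stack.pop()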

Using LocalStack and LocalProxy in Flask

from functools import partial
from flask.globals import LocalStack, LocalProxy

_request_ctx_stack = LocalStack()


class RequestContext(object):
    def __init__(self, environ):
        self.request = environ


def _lookup_req_object(name):
    top = _request_ctx_stack.top
    if top is None:
        raise RuntimeError(_request_ctx_stack)
    return getattr(top, name)


# Instantiate a LocalProxy object; _lookup_req_object is partially applied and passed as the lookup function
session = LocalProxy(partial(_lookup_req_object, 'request'))

"""
local = {
    "thread id": {'stack': [RequestContext(), ]}
}
"""
_request_ctx_stack.push(RequestContext('c1'))  # pushed when a request comes in

print(session)  # resolves RequestContext('c1').request via the top property
print(session)  # resolves RequestContext('c1').request via the top property
_request_ctx_stack.pop()  # popped when the request ends

Example:

from functools import partial
from flask.globals import LocalStack, LocalProxy
 
ls = LocalStack()
 
 
class RequestContext(object):
    def __init__(self, environ):
        self.request = environ
 
 
def _lookup_req_object(name):
    top = ls.top
    if top is None:
        raise RuntimeError(ls)
    return getattr(top, name)
 
 
session = LocalProxy(partial(_lookup_req_object, 'request'))
 
ls.push(RequestContext('c1'))  # pushed when a request comes in
print(session)  # used inside view functions
print(session)  # used inside view functions
ls.pop()  # popped when the request ends


ls.push(RequestContext('c2'))
print(session)

ls.push(RequestContext('c3'))
print(session)

Flask-SQLAlchemy uses connection pooling

Flask-SQLAlchemy provides built-in connection pooling that is easy to configure and use.

In a Flask application, the size of the connection pool can be set with the SQLALCHEMY_POOL_SIZE parameter; it determines how many database connections may be open at the same time. For example, to set the pool size to 10:

app.config['SQLALCHEMY_POOL_SIZE'] = 10
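Note that SQLALCHEMY_POOL_SIZE is the older-style configuration key; in recent Flask-SQLAlchemy releases the pool options are usually passed through SQLALCHEMY_ENGINE_OPTIONS instead (a sketch, assuming Flask-SQLAlchemy 3.x):

# Equivalent pool configuration via engine options (newer Flask-SQLAlchemy versions)
app.config['SQLALCHEMY_ENGINE_OPTIONS'] = {
    'pool_size': 10,        # connections kept open in the pool
    'max_overflow': 5,      # extra connections allowed beyond pool_size
    'pool_recycle': 3600,   # recycle connections after an hour
    'pool_pre_ping': True,  # check connections are alive before using them
}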

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql://user:password@localhost/db_name'
db = SQLAlchemy(app)

# Define a model class
class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(50))

# Add data to the database
user = User(name='John')
db.session.add(user)
db.session.commit()

# Query data
all_users = User.query.all()

# Update data
user = User.query.filter_by(name='John').first()
user.name = 'Jane'
db.session.commit()

# Delete data
user = User.query.filter_by(name='Jane').first()
db.session.delete(user)
db.session.commit()

3. Flask HTTP connection pool

Flask itself does not provide built-in HTTP connection pooling, but third-party libraries can be used to add it. One of the most commonly used is urllib3, which provides connection pool management for outgoing HTTP requests.

Example: use urllib3 to create and manage HTTP connection pools in a Flask application.

Installation: pip install urllib3

from flask import Flask
import urllib3

app = Flask(__name__)

"""
使用 urllib3.PoolManager() 创建了一个连接池管理器对象 http,然后使用 http.request() 方法发送了一个 GET 请求。
您可以根据需要进行配置和自定义,例如设置最大连接数、超时时间、重试策略等。以下是一个示例,展示了如何进行自定义设置:
"""


@app.route('/index_1')
def index_1():
    http = urllib3.PoolManager()
    response = http.request('GET', 'http://api.example.com')
    return response.data


"""
对连接池进行了一些自定义配置,包括最大连接数、每个连接的最大数量、连接和读取的超时时间以及重试策略。
使用 urllib3 可以更好地控制和管理 HTTP 连接,提高 Flask 应用程序的性能和效率。
"""


@app.route('/index_2')
def index_2():
    http = urllib3.PoolManager(
        num_pools=10,  # number of per-host connection pools to keep
        maxsize=100,  # maximum number of connections kept per pool
        timeout=urllib3.Timeout(connect=2.0, read=5.0),  # connect and read timeouts
        retries=urllib3.Retry(total=3, backoff_factor=0.1, status_forcelist=[500, 502, 503, 504])  # retry strategy
    )
    response = http.request('GET', 'http://api.example.com')
    return response.data
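Creating a new PoolManager inside every view, as above, means connections cannot actually be reused between requests. A common pattern (a sketch, with the same placeholder URL) is to create one module-level manager and share it:

# Module-level pool manager shared by all requests, so keep-alive connections are reused
HTTP = urllib3.PoolManager(maxsize=100, timeout=urllib3.Timeout(connect=2.0, read=5.0))


@app.route('/index_3')
def index_3():
    response = HTTP.request('GET', 'http://api.example.com')
    return response.data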

Origin: blog.csdn.net/freeking101/article/details/132876026