Foreword
This post records how to implement database operations in Python.
1. Database connection pool
- A database connection pool can handle highly concurrent database access and is more robust than opening a raw connection for each request;
- It is imported with: from dbutils.pooled_db import PooledDB
- The connection pool configuration options mean the following:
  creator: the DB-API module used to create connections;
  maxconnections: the maximum number of connections the pool allows; 0 or None means no limit;
  mincached: the number of idle connections created at initialization; 0 means create none;
  maxcached: the maximum number of idle connections kept in the pool; 0 or None means no limit;
  maxshared: the maximum number of shared connections; 0 or None means all connections may be shared. In practice this setting has no effect, because modules such as pymssql and MySQLdb report threadsafety=1, so whatever value is set, maxshared is forced to 0 and connections are never shared;
  blocking: whether to block and wait when no connection is available in the pool; True means wait, False means raise an error immediately;
  setsession: a list of SQL commands executed at the start of each session;
  ping: when to ping the server to check whether the connection is still alive;
- DB_pool.connection() returns a thread-safe database connection object; in general, each thread obtains its own connection;
- Release the conn object by calling close() when you are done with it.
import sys
import pymssql
from dbutils.pooled_db import PooledDB
import json
import os
from utils.database_op import DBUtils

if __name__ == '__main__':
    '''
    Startup configuration
    '''
    # argv[1..3] are used below, so four entries (including the script name) are required
    if len(sys.argv) < 4:
        print("error: init argv missed.")
    else:
        doc_uid = sys.argv[1]
        doc_rpath = sys.argv[2]
        config_path = sys.argv[3]
        print(sys.argv)
        # Database connection pool
        DB_pool = PooledDB(
            creator=pymssql,
            maxconnections=50,
            mincached=0,
            maxcached=20,
            maxshared=0,
            blocking=True,
            setsession=[],
            ping=5,
            host=xxx,
            port=xxx,
            user=xxx,
            password=xxx,
            database=xxx)
        # Obtain a connection from the pool
        DB_conn = DB_pool.connection()
        try:
            DBUtils.update_start_process_at(DB_conn, doc_uid)
            DBUtils.update_phase(DB_conn, doc_uid, 300)
            DBUtils.update_process_id(DB_conn, doc_uid, os.getpid())
            DBUtils.update_json_data(DB_conn, doc_uid, pattern_json_data)
            DBUtils.update_finish_process_at(DB_conn, doc_uid)
        except BaseException as e:
            # Print the error; repr converts the exception to a string
            print('error: ' + repr(e))
            print(e)
        finally:
            # Release the database connection
            DB_conn.close()
2. Encapsulate database operation tools
- Encapsulating database operations in a utility class makes database code more concise;
- All functions in the class are defined as static methods;
- First obtain a cursor object with cursor(), then execute the SQL. For insert, update, and delete operations you must call commit() for the changes to take effect in the database; one commit() can submit several uncommitted changes at once;
- A datetime column is also updated as a string; the current time string is obtained with str(datetime.datetime.now())[0:-3];
- JSON data is likewise converted to a string before updating; the corresponding database type is varchar(MAX), a variable-length string with a maximum capacity of 2GB;
- If a value is a string, it must be enclosed in single quotes in the SQL;
- Note that a cursor is not thread-safe and must not be used by multiple threads at the same time; doing so can cause a database deadlock that blocks all database operations;
- For more ways of writing database operations, see the blog post: Python database operation [3] - SQLServer.
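The two string conversions described above can be checked in isolation (the values here are made up for illustration):

```python
import datetime
import json

# A fixed datetime keeps the example deterministic; str() gives
# 'YYYY-MM-DD HH:MM:SS.ffffff', and [0:-3] trims microseconds to
# milliseconds, matching the precision stored by the database
dt = datetime.datetime(2024, 1, 15, 10, 30, 45, 123456)
now_str = str(dt)[0:-3]   # '2024-01-15 10:30:45.123'

# Serialize JSON to a string for a varchar(MAX) column;
# ensure_ascii=False keeps non-ASCII text readable in the database
doc = {"uuid": "abc-123", "phase": 300}
json_str = json.dumps(doc, ensure_ascii=False)
```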
import pymssql
from dbutils.pooled_db import PooledDB
import datetime
import json

class DBUtils:
    @staticmethod
    def update_status(DB_conn, doc_uid, status):
        # Create a cursor object with the cursor() method
        DB_cursor = DB_conn.cursor()
        try:
            DB_sql = "update xxx" + \
                     " set status = " + str(status) + \
                     " where uuid = \'" + doc_uid + '\''
            print(DB_sql)
            # Execute the SQL statement
            DB_cursor.execute(DB_sql)
            DB_conn.commit()
            print("Update successfully.")
        except Exception as e:
            DB_conn.rollback()
            # Print the error; repr converts the exception to a string
            print('error: ' + repr(e))
        finally:
            DB_cursor.close()

    @staticmethod
    def update_json_data(DB_conn, doc_uid, json_data):
        # Create a cursor object with the cursor() method
        DB_cursor = DB_conn.cursor()
        # Convert the JSON data to a string
        json_str = json.dumps(json_data, ensure_ascii=False)
        try:
            DB_sql = "update xxx" + \
                     " set json_data = \'" + json_str + '\'' + \
                     " where uuid = \'" + doc_uid + '\''
            print(DB_sql)
            # Execute the SQL statement
            DB_cursor.execute(DB_sql)
            DB_conn.commit()
            print("Update successfully.")
        except Exception as e:
            DB_conn.rollback()
            # Print the error; repr converts the exception to a string
            print('error: ' + repr(e))
        finally:
            DB_cursor.close()

    @staticmethod
    def update_start_process_at(DB_conn, doc_uid):
        # Create a cursor object with the cursor() method
        DB_cursor = DB_conn.cursor()
        try:
            DB_sql = "update xxx" + \
                     " set start_process_at = \'" + str(datetime.datetime.now())[0:-3] + '\'' + \
                     " where uuid = \'" + doc_uid + '\''
            print(DB_sql)
            # Execute the SQL statement
            DB_cursor.execute(DB_sql)
            DB_conn.commit()
            print("Update successfully.")
        except Exception as e:
            DB_conn.rollback()
            # Print the error; repr converts the exception to a string
            print('error: ' + repr(e))
        finally:
            DB_cursor.close()
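One caveat about the class above: building SQL by string concatenation breaks (and is open to SQL injection) if doc_uid ever contains a single quote. pymssql also supports %s/%d placeholders, letting the driver escape parameters itself. A sketch of update_status rewritten that way (the table name xxx is the placeholder from the text, and the helper is hypothetical):

```python
def build_update_status(doc_uid, status):
    # %d / %s placeholders are filled in by pymssql at execute() time,
    # so parameters are escaped by the driver rather than by hand
    sql = "update xxx set status = %d where uuid = %s"
    return sql, (status, doc_uid)

# With a live connection:
#   DB_cursor.execute(*build_update_status(doc_uid, 300))
```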
Supplement 1: Use Snowflake to generate a unique id
- Install the Python library: pip install pysnowflake
- Start the snowflake server as follows:
  - First find the pip installation directory with: pip show pysnowflake
  - Enter the path c:\users\dell\appdata\roaming\python\python38\Scripts and double-click snowflake_start_server.exe to start the server;
- Then import the client into the program: from snowflake import client
- Use it in the program as follows: client.get_guid()
- In addition, if you are using a conda environment, snowflake_start_server.exe does not seem to be needed; you can start the server directly from cmd with: snowflake_start_server
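For reference, a snowflake id packs a millisecond timestamp, a worker id, and a per-millisecond sequence into one 64-bit integer. The sketch below is a toy illustration of that layout using the common 41/10/12 bit split; it is not the pysnowflake implementation and does not handle clocks moving backwards:

```python
import time

class Snowflake:
    """Toy 64-bit snowflake id: 41-bit timestamp | 10-bit worker | 12-bit sequence."""

    def __init__(self, worker_id, epoch_ms=1288834974657):
        self.worker_id = worker_id & 0x3FF   # 10 bits
        self.epoch_ms = epoch_ms             # custom epoch (Twitter's, here)
        self.last_ms = -1
        self.sequence = 0

    def next_id(self):
        now = int(time.time() * 1000)
        if now == self.last_ms:
            # Same millisecond: bump the 12-bit sequence
            self.sequence = (self.sequence + 1) & 0xFFF
            if self.sequence == 0:
                # Sequence exhausted: spin until the next millisecond
                while now <= self.last_ms:
                    now = int(time.time() * 1000)
        else:
            self.sequence = 0
        self.last_ms = now
        return ((now - self.epoch_ms) << 22) | (self.worker_id << 12) | self.sequence

gen = Snowflake(worker_id=1)
ids = [gen.next_id() for _ in range(1000)]
```

Because the timestamp occupies the high bits, ids generated on one worker are strictly increasing, which is also why they sort well as database keys.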