arq: an asynchronous task queue for Python

Introduction

I’ve been building things with Sanic recently, and any code that blocks on IO has to go through aio-style libraries. Fortunately, the asyncio ecosystem has matured quite a bit over the last few years, so there is an async counterpart for almost everything you need.
The login/registration flow in my current project is fairly complicated (it involves invitations) and requires steps such as unlocking accounts and sending SMS messages. Cramming all of that into a single module is cumbersome, especially since a slide-to-verify captcha will be added later.
So I wanted something like Celery to decouple these tasks, but Celery does not support asyncio yet (official async support is planned for Celery 5).
That led me to arq, a Python module I have since used in production, and it has worked out quite well.

The official introduction is as follows:

  • Non-blocking
  • Delayed execution, timed tasks, retry mechanism
  • Fast
  • Elegant
  • Small

First install it:

$ pip install arq

Now let’s quickly walk through how to use it.

Basic usage

First, take a look at the code below:

import asyncio
from arq import create_pool
from arq.connections import RedisSettings

async def say_hello(ctx, name) -> None:
    """任务函数

    Parameters
    ----------
    ctx: dict
        工作者上下文

    name: string

    Returns
    -------
    dict
    """
    print(ctx)
    print(f"Hello {name}")


async def startup(ctx):
    print("starting...")


async def shutdown(ctx):
    print("ending...")


async def main():
    # Create a Redis connection pool
    redis = await create_pool(RedisSettings(password="root123456"))
    # Enqueue the job
    await redis.enqueue_job('say_hello', name="liuzhichao")


# WorkerSettings defines the settings used to create the worker;
# it is consumed by the arq CLI
class WorkerSettings:
    # The queue uses this Redis configuration; related parameters can be set here
    # (for example, my password is `root123456`)
    redis_settings = RedisSettings(password="root123456")
    # Functions the worker listens for
    functions = [say_hello]
    # Runs when the worker starts
    on_startup = startup
    # Runs after the worker shuts down
    on_shutdown = shutdown


if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
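
As an aside, the on_startup / on_shutdown hooks are the natural place to create and tear down resources that every task can share through ctx. The sketch below follows that pattern but assumes aiohttp as an extra dependency; the fetch_status task and the 'session' key are illustrative names, not part of the example above.

import aiohttp


async def startup(ctx):
    # Create one HTTP session when the worker starts and share it via ctx
    ctx['session'] = aiohttp.ClientSession()


async def shutdown(ctx):
    # Close the shared session when the worker stops
    await ctx['session'].close()


async def fetch_status(ctx, url):
    # Every task receives the same ctx dict, so it can reuse the session
    async with ctx['session'].get(url) as resp:
        return resp.status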

1. Back to the basic example. Start the worker with the arq CLI:

$ arq tasks.WorkerSettings
You should see something like:

10:56:25: Starting worker for 1 functions: say_hello
10:56:25: redis_version=4.0.1 mem_usage=32.00M clients_connected=6 db_keys=19189
starting...

2. In another terminal, run the tasks.py file to enqueue a job:

$ python3 tasks.py
The worker should then log something like:

11:01:04:   0.29s → 5a5ac0edd5ad4b318b9848637b1ae800:say_hello(name='liuzhichao')
{'redis': <ArqRedis <ConnectionsPool [db:0, size:[1:10], free:1]>>, 'job_id': '5a5ac0edd5ad4b318b9848637b1ae800', 'job_try': 1, 'enqueue_time': datetime.datetime(2019, 5, 23, 3, 1, 4, 570000), 'score': 1558580464570}
Hello liuzhichao
11:01:04:   0.00s ← 5a5ac0edd5ad4b318b9848637b1ae800:say_hello ● 

3. That’s it: the task was enqueued and executed by the worker. Pretty simple, right?
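
The feature list at the top mentions delayed execution and a retry mechanism. These come into play when enqueuing: enqueue_job accepts optional keyword arguments such as _defer_by / _defer_until, and it returns a Job handle whose result() coroutine waits for the worker to finish. The sketch below reuses the say_hello task and Redis settings from above; treat the keyword names as something to double-check against the arq version you have installed.

import asyncio
from datetime import timedelta

from arq import create_pool
from arq.connections import RedisSettings


async def demo():
    redis = await create_pool(RedisSettings(password="root123456"))

    # Delayed execution: ask the worker to run `say_hello` about 10 seconds from now
    job = await redis.enqueue_job('say_hello', name="liuzhichao",
                                  _defer_by=timedelta(seconds=10))

    # `enqueue_job` returns a Job handle; `result()` waits until the worker
    # finishes (it prints None here, because say_hello returns nothing)
    print(await job.result(timeout=30))


if __name__ == '__main__':
    asyncio.get_event_loop().run_until_complete(demo())

For retries, the arq documentation shows a task raising arq.worker.Retry to have itself re-queued after a delay; the details vary by version, so check the docs before relying on it.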

Timed tasks

#! /usr/bin/env python
# -*- coding: utf-8 -*-
from arq import cron
from arq.connections import RedisSettings


async def run_regularly(ctx):
    # Fires at second 50 of minutes 10, 11 and 12 of every hour
    print('run job at second 50 of minutes 10, 11 and 12')


class WorkerSettings:
    redis_settings = RedisSettings(password="root123456")

    cron_jobs = [
        cron(run_regularly, minute={10, 11, 12}, second=50)
    ]

1. Run it

$ arq tasks.WorkerSettings
When the scheduled times arrive, you should see output like this:

11:10:25: Starting worker for 1 functions: cron:run_regularly
11:10:25: redis_version=4.0.1 mem_usage=32.00M clients_connected=6 db_keys=19190

11:10:51:   0.51s → cron:run_regularly()
run job at second 50 of minutes 10, 11 and 12
11:10:51:   0.00s ← cron:run_regularly ● 

11:11:51:   0.51s → cron:run_regularly()
run job at second 50 of minutes 10, 11 and 12
11:11:51:   0.00s ← cron:run_regularly ● 

11:12:50:   0.50s → cron:run_regularly()
run job at second 50 of minutes 10, 11 and 12
11:12:50:   0.00s ← cron:run_regularly ● 

The worker will keep firing these jobs on this schedule indefinitely.
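
Going by the arq documentation, cron accepts more schedule fields than just minute and second (for example hour, day, weekday and month) as well as options like run_at_startup. The exact parameter set may differ between versions, so treat the sketch below as an approximation to verify rather than a definitive reference; daily_report and health_check are made-up task names.

from arq import cron
from arq.connections import RedisSettings


async def daily_report(ctx):
    print("generate the daily report")


async def health_check(ctx):
    print("ping external services")


class WorkerSettings:
    redis_settings = RedisSettings(password="root123456")

    cron_jobs = [
        # Every day at 08:30:00 (hour/minute/second fields assumed per the arq docs)
        cron(daily_report, hour=8, minute=30, second=0),
        # On the half hour, every hour; run_at_startup also fires it once when the worker starts
        cron(health_check, minute=30, second=0, run_at_startup=True),
    ]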


Original post: blog.csdn.net/sinat_38682860/article/details/108645646