Scheduled task tool apscheduler

About APScheduler

APScheduler is a Python scheduled-task framework modeled on Java's Quartz, and it implements Quartz's core features. It supports jobs triggered by date, fixed interval, or crontab-style expressions, and can persist jobs across restarts. It also ships with several different schedulers so developers can pick the one that fits their needs, and it integrates easily with third-party persistence back ends such as databases.
Official documentation: https://apscheduler.readthedocs.io/

How it works

Internally, APScheduler is built mainly on Python's threading primitives (Event and Lock). The scheduler's main loop (_main_loop) repeatedly checks whether any job needs to be executed. The check is performed by _process_jobs, which has the following steps:

1. Ask each configured job store whether it has any jobs that are due.

due_jobs = jobstore.get_due_jobs(now)

2. If due_jobs is not empty, compute the run times for each of those jobs and hand the job to the executor via submit_job, so it runs as soon as its time arrives.

run_times = job._get_run_times(now)
...
if run_times:
    try:
        executor.submit_job(job, run_times)

3. If the main loop called _process_jobs continuously even when no job is due, it would waste CPU. So each call to _process_jobs also computes how long it will be until the next job (the one closest to now) must run, and returns that duration to the main loop. The main loop can then go to sleep for exactly that long, wake up, and call _process_jobs again.
For details, see the _process_jobs function in apscheduler/schedulers/base.py.
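The three steps above can be sketched with standard-library primitives only. MiniScheduler, its method names, and the intervals below are illustrative inventions, not APScheduler's real classes:

```python
import threading
from datetime import datetime, timedelta

class MiniScheduler:
    """Toy sketch of the check-then-sleep main loop described above."""

    def __init__(self):
        self._jobs = []                  # each job: [next_run_time, interval, func]
        self._wakeup = threading.Event()

    def add_job(self, func, seconds):
        interval = timedelta(seconds=seconds)
        self._jobs.append([datetime.now() + interval, interval, func])
        self._wakeup.set()               # new job: re-evaluate the sleep time

    def _process_jobs(self):
        """Run due jobs, then return seconds until the next one is due."""
        now = datetime.now()
        next_run = None
        for job in self._jobs:
            if job[0] <= now:            # step 1: job is due
                job[2]()                 # step 2: "submit" it (here: run inline)
                job[0] = now + job[1]    # reschedule for the next interval
            next_run = job[0] if next_run is None else min(next_run, job[0])
        # step 3: tell the main loop how long it may sleep
        return None if next_run is None else (next_run - now).total_seconds()

    def run(self, cycles):
        """Main loop: process jobs, then sleep until the next due time."""
        for _ in range(cycles):
            wait_seconds = self._process_jobs()
            self._wakeup.clear()
            self._wakeup.wait(timeout=wait_seconds)

counter = {'runs': 0}
sched = MiniScheduler()
sched.add_job(lambda: counter.__setitem__('runs', counter['runs'] + 1), 0.05)
sched.run(cycles=4)
```

The Event is what lets add_job interrupt a long sleep: setting it wakes the main loop early so the newly added job's run time is taken into account.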

Components

APScheduler consists of the following four parts:

  1. triggers:
    determine when a scheduled job fires. There are three kinds: date, interval, and cron.
  2. job stores:
    hold the scheduled jobs and can persist them. Mainstream storage back ends are supported, such as Redis, MongoDB, relational databases, and memory; jobs are stored in memory by default.
  3. executors:
    run the job, in a thread pool or a process pool, when it comes due.
  4. schedulers:
    tie the other three parts together. The most commonly used are BackgroundScheduler (runs in the background) and BlockingScheduler (blocks the calling thread).

Examples

Code:

#coding=utf-8

from flask import Flask
from flask_apscheduler import APScheduler
from apscheduler.jobstores.redis import RedisJobStore
from flask import request
import os
import time

app = Flask(__name__)
scheduler = APScheduler()

class Config(object):
    JOBS = []
    SCHEDULER_JOBSTORES = {
        'default': RedisJobStore(host='Your redis host', port=6379, db=0, password='Your redis password')
    }

    SCHEDULER_EXECUTORS = {
        'default': {'type': 'threadpool', 'max_workers': 20}
    }

    SCHEDULER_JOB_DEFAULTS = {
        'coalesce': False,
        'max_instances': 3
    }

    SCHEDULER_API_ENABLED = True

def job1(a, b):
    print(str(a) + ' ' + str(b)+'   '+time.strftime('%Y-%m-%d %H:%M:%S',time.localtime(time.time())))

def job2(a):
    py='python wx_post_test.py '+a
    os.system(py)

def jobfromparm(**jobargs):
    job_id = jobargs['id']
    # 'interval' trigger: run every N seconds
    job = scheduler.add_job(func=job1, id=job_id, args=(1, 2), trigger='interval', seconds=10, replace_existing=True)
    # 'cron' trigger: run at a fixed time, e.g. 20:14 on every day of the week
    # job = scheduler.add_job(func=job1, id=job_id, args=(1, 2), trigger='cron', day_of_week='0-6', hour='20', minute='14', replace_existing=True)
    # 'date' trigger: run exactly once at the given time; the simplest mode
    # job = scheduler.add_job(func=job1, id=job_id, args=(1, 2), trigger='date', run_date='2019-11-26 16:30:05', replace_existing=True)

    return 'success'

@app.route('/pause')
def pausejob():
    job_id = request.args.get('job_id')
    scheduler.pause_job(job_id)
    return "Success!"

@app.route('/resume')
def resumejob():
    job_id = request.args.get('job_id')
    scheduler.resume_job(job_id)
    return "Success!"

@app.route('/addjob', methods=['GET', 'POST'])
def addjob():
    data = request.get_json(force=True)
    print(data)
    job = jobfromparm(**data)
    return 'success'

@app.route('/getjobs', methods=['GET', 'POST'])
def getjobs():
    jobs = scheduler.get_jobs()
    return str(jobs)

@app.route('/delete')
def deletejob():
    job_id = request.args.get('job_id')
    scheduler.remove_job(job_id)
    return "Success!"

if __name__ == '__main__':
    app.config.from_object(Config())

    # it is also possible to enable the API directly
    # scheduler.api_enabled = True
    scheduler.init_app(app)
    scheduler.start()
    # use_reloader=False keeps the debug reloader from starting the scheduler twice
    app.run(debug=True, use_reloader=False)
    

run:

curl -i -X POST -H "Content-Type: application/json" -d '{"id":"1"}' http://127.0.0.1:5000/addjob
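With the server running, the other routes defined above can be exercised the same way (job_id=1 matches the id sent to /addjob):

```shell
# Pause, inspect, resume, and delete the job created above
curl "http://127.0.0.1:5000/pause?job_id=1"
curl "http://127.0.0.1:5000/getjobs"
curl "http://127.0.0.1:5000/resume?job_id=1"
curl "http://127.0.0.1:5000/delete?job_id=1"
```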

result:
(screenshot omitted) job1 prints "1 2 <timestamp>" to the server console every 10 seconds.

Origin blog.csdn.net/longjuanfengzc/article/details/103026353