Python: Django gevent coroutine error; process pools, thread pools, asynchronous calls and the callback mechanism

I. Description of the problem

In a Django view function, import the gevent module:

import gevent
from gevent import monkey; monkey.patch_all()
from gevent.pool import Pool

 

Starting Django then prints warnings:

MonkeyPatchWarning: Monkey-patching outside the main native thread. Some APIs will not be available. Expect a KeyError to be printed at shutdown.
  from gevent import monkey; monkey.patch_all()
MonkeyPatchWarning: Monkey-patching not on the main thread; threading.main_thread().join() will hang from a greenlet
  from gevent import monkey; monkey.patch_all()

 

The warnings are triggered when the line monkey.patch_all() is executed: the patching happens outside the main native thread, which is exactly what the warning complains about.

 

Since gevent coroutines cannot be used like this in Django, how do I get asynchronous execution instead?

See below
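
As a quick preview, the same concurrent.futures pools described in the next section can be used directly inside a view. This is only a minimal sketch of my own, not from the original post; the view name, the task function and the pool size are assumptions, and the view still has to be wired into urls.py:

# views.py -- hypothetical sketch: offload slow work to a thread pool instead of gevent
import time
from concurrent.futures import ThreadPoolExecutor
from django.http import JsonResponse

pool = ThreadPoolExecutor(4)  # module-level pool, created once per process

def slow_task(n):
    # placeholder for a blocking call (I/O, an HTTP request, etc.)
    time.sleep(n)
    return n

def demo_view(request):
    pool.submit(slow_task, 3)  # fire-and-forget: the request returns immediately
    return JsonResponse({'status': 'task submitted'})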

 

II. Process pools, thread pools, asynchronous calls and the callback mechanism

Using process pools and thread pools

Process pools and thread pools are used in almost exactly the same way; the only difference is which executor class you import.

from concurrent.futures import ProcessPoolExecutor  # process pool module
from concurrent.futures import ThreadPoolExecutor   # thread pool module
import os, time, random

# The example below uses a process pool; for a thread pool just swap in ThreadPoolExecutor
def talk(name):
    print('name: %s  pid: %s  run' % (name, os.getpid()))
    time.sleep(random.randint(1, 3))

if __name__ == '__main__':
    pool = ProcessPoolExecutor(4)  # pool size; if omitted it defaults to the number of CPU cores
    for i in range(10):
        pool.submit(talk, 'process %s' % i)  # asynchronous submit: hand over the task, no waiting

    # shutdown() does two things: 1) closes the pool so no new tasks can be submitted;
    # 2) with wait=True, blocks until all running tasks have finished, like join()
    pool.shutdown(wait=True)
    print("main process")
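
As mentioned above, switching to a thread pool only means swapping the executor class. A minimal thread-pool variant of the same example (my own sketch, not from the original post):

from concurrent.futures import ThreadPoolExecutor  # thread pool module
import threading, time, random

def talk(name):
    print('name: %s  thread: %s  run' % (name, threading.current_thread().name))
    time.sleep(random.randint(1, 3))

if __name__ == '__main__':
    pool = ThreadPoolExecutor(4)          # same interface as ProcessPoolExecutor
    for i in range(10):
        pool.submit(talk, 'task %s' % i)  # asynchronous submit, no waiting

    pool.shutdown(wait=True)              # wait for all tasks to finish, like join()
    print("main thread")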

 

Synchronous and asynchronous calls

The concurrent.futures module provides a high-level, well-encapsulated interface for asynchronous calls:
ThreadPoolExecutor: a thread pool that supports asynchronous calls
ProcessPoolExecutor: a process pool that supports asynchronous calls
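
Every submit() call returns a Future object. Besides calling .result() right away (the synchronous style shown below), you can submit everything first and then collect results as tasks finish with as_completed(). A small sketch of my own, not from the original post:

from concurrent.futures import ThreadPoolExecutor, as_completed
import random, time

def work(n):
    time.sleep(random.random())
    return n * n

if __name__ == '__main__':
    pool = ThreadPoolExecutor(4)
    futures = [pool.submit(work, i) for i in range(5)]  # submit all tasks asynchronously
    for fut in as_completed(futures):                   # yields each future as it finishes
        print(fut.result())                             # return value of the finished task
    pool.shutdown(wait=True)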

 

Synchronous call

from concurrent.futures import ProcessPoolExecutor  # process pool module
import os, time, random


# 1. Synchronous call: after submitting a task, wait on the spot for it to finish and
#    fetch the result before moving on to the next line (this makes the program run serially)
def talk(name):
    print('name: %s  pid: %s  run' % (name, os.getpid()))
    time.sleep(random.randint(1, 3))

if __name__ == '__main__':
    pool = ProcessPoolExecutor(4)
    for i in range(10):
        pool.submit(talk, 'process %s' % i).result()  # .result() blocks until done, like join(); execution is serial

    pool.shutdown(wait=True)
    print("main process")

 

Asynchronous call

from concurrent.futures import ProcessPoolExecutor  # process pool module
import os, time, random

# 2. Asynchronous call: submit the task and keep going without waiting for the result
def talk(name):
    print('name: %s  pid: %s  run' % (name, os.getpid()))
    time.sleep(random.randint(1, 3))

if __name__ == '__main__':
    pool = ProcessPoolExecutor(4)
    for i in range(10):
        pool.submit(talk, 'process %s' % i)  # asynchronous call: no waiting

    pool.shutdown(wait=True)
    print("main process")

 

Callback mechanism

For every task in a process pool or thread pool you can bind a function that is triggered automatically when the process or thread finishes the task; it receives the finished task (a Future object) as its argument. Such a function is called a callback.

# What the callback receives is a Future object; use obj.result() to get the actual return value
pool.submit(func, *args).add_done_callback(callback)  # submit asynchronously, then bind the callback

 

Example: downloading and parsing web pages

import time
import requests
from concurrent.futures import ThreadPoolExecutor  # thread pool module

def get(url):
    print('GET %s' % url)
    response = requests.get(url)  # download the page
    time.sleep(3)                 # simulate network delay
    return {'url': url, 'content': response.text}  # page address and page content

def parse(res):
    res = res.result()  # the callback receives a Future; the real return value must be fetched with .result()
    print('%s res is %s' % (res['url'], len(res['content'])))

if __name__ == '__main__':
    urls = {
        'http://www.baidu.com',
        'http://www.360.com',
        'http://www.iqiyi.com'
    }

    pool = ThreadPoolExecutor(2)
    for i in urls:
        pool.submit(get, i).add_done_callback(parse)  # callback: when the thread finishes, run parse on its Future
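
Note that this last example never calls pool.shutdown(). A common alternative is to use the executor as a context manager, which calls shutdown(wait=True) automatically when the with-block exits. A minimal, self-contained sketch of that pattern (the get() here is a stand-in that skips the real download):

from concurrent.futures import ThreadPoolExecutor

def get(url):
    return {'url': url, 'content': 'fake body'}  # stand-in for the real requests.get() call

def parse(fut):
    res = fut.result()
    print('%s -> %s chars' % (res['url'], len(res['content'])))

if __name__ == '__main__':
    urls = ['http://www.baidu.com', 'http://www.360.com', 'http://www.iqiyi.com']
    # leaving the with-block calls pool.shutdown(wait=True) for us
    with ThreadPoolExecutor(2) as pool:
        for u in urls:
            pool.submit(get, u).add_done_callback(parse)
    print('all pages handled')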

 

 

Reference:

https://blog.csdn.net/weixin_42329277/article/details/80741589

 
