Creating process pools and thread pools with the concurrent.futures module

1. Process pools and thread pools

When the number of concurrent tasks far exceeds what the machine can handle, we cannot simply open an unlimited number of processes or threads all at once; their number must be limited so that the server does not become overloaded and unresponsive. Process pools and thread pools exist for exactly this purpose.

2. Introduction to the concurrent.futures module

The concurrent.futures module provides a high-level interface for executing callables asynchronously.

ThreadPoolExecutor: a thread pool that provides asynchronous calls

ProcessPoolExecutor: a process pool that provides asynchronous calls

Both implement the same interface, which is defined by the abstract Executor class.

3. Basic methods

submit(fn, *args, **kwargs): submit a task asynchronously and return a Future object

map(func, *iterables, timeout=None, chunksize=1): equivalent to calling submit() once per item in a loop; results are returned in input order
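As a minimal sketch of the point above (the `square` function and pool size are illustrative, not from the original post), map() replaces the submit-in-a-loop pattern and hands the results back in input order:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def square(n):
    time.sleep(0.1)
    return n ** 2

# map() submits each item of the iterable as a separate task and
# yields the results in input order, so this is equivalent to
# calling submit() once per item and collecting result() in order.
with ThreadPoolExecutor(4) as pool:
    results = list(pool.map(square, range(5)))

print(results)  # [0, 1, 4, 9, 16]
```

Using the executor as a context manager also calls shutdown() automatically when the block exits.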

shutdown(wait=True): the equivalent of the multiprocessing pool's pool.close() + pool.join()

  • wait=True: block until all tasks in the pool have finished and their resources have been reclaimed, then continue
  • wait=False: return immediately, without waiting for the tasks in the pool to finish
  • Regardless of the value of wait, the whole program still waits until all tasks have completed
  • submit() and map() must be called before shutdown()
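A small sketch of the wait=False behavior described above (the `work` function and timings are made up for illustration): shutdown(wait=False) returns immediately, but the queued tasks still run, and calling result() afterwards blocks until each one is done, so the program as a whole still waits.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def work(n):
    time.sleep(0.2)
    return n

pool = ThreadPoolExecutor(2)
futures = [pool.submit(work, i) for i in range(4)]

# wait=False: shutdown() returns right away without blocking,
# but the pool still finishes everything already submitted.
pool.shutdown(wait=False)

# result() blocks per task, so the program waits here anyway.
print([f.result() for f in futures])  # [0, 1, 2, 3]
```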

result(timeout=None): get the task's return value (blocks until it is ready)

add_done_callback(fn): attach a callback that runs when the task finishes

done(): check whether a task has finished

cancel(): cancel a task (only possible if it has not started running yet)
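The interplay of done() and cancel() can be sketched as follows (a hypothetical `slow` function and a one-worker pool, chosen so the second task is guaranteed to still be queued): a task that is already running cannot be cancelled, while a task still waiting in the queue can.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def slow(n):
    time.sleep(0.5)
    return n

pool = ThreadPoolExecutor(1)   # one worker, so the second task queues up
running = pool.submit(slow, 1)
queued = pool.submit(slow, 2)
time.sleep(0.1)                # give the worker time to start the first task

print(running.done())    # False: still executing
print(queued.cancel())   # True: not started yet, so it can be cancelled
print(running.cancel())  # False: already running, cannot be cancelled

pool.shutdown()          # blocks until the running task finishes
print(running.done())    # True
print(running.result())  # 1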

4. Process pool code example: ProcessPoolExecutor

from concurrent.futures import ProcessPoolExecutor
from multiprocessing import current_process
import time

def func(i):
    print(f'process {current_process().name} is running task {i}')
    time.sleep(1)
    return i**2

if __name__ == '__main__':
    pool = ProcessPoolExecutor(4)  # the pool has only 4 processes
    lt = []
    for i in range(20):  # suppose there are 20 tasks to run
        future = pool.submit(func, i)  # func runs 20 times; the 4 processes share these 20 tasks
        # print(future.result())  # blocking here for each result would serialize all the tasks
        lt.append(future)
    pool.shutdown()  # wait defaults to True: closes the pool to new tasks and blocks until all tasks finish
    for fu in lt:
        print(fu.result())  # once all tasks are done, print the return values together

5. Thread pool code example: ThreadPoolExecutor

from concurrent.futures import ThreadPoolExecutor
from threading import current_thread
import time

def func(i):
    print(f'thread {current_thread().name} is running task {i}')
    time.sleep(1)
    return i**2

if __name__ == '__main__':
    pool = ThreadPoolExecutor(4)  # the pool has only 4 threads
    lt = []
    for i in range(20):
        future = pool.submit(func, i)  # func runs 20 times; the 4 threads share these 20 tasks
        lt.append(future)
    pool.shutdown()  # wait defaults to True: closes the pool to new tasks and blocks until all tasks finish
    for fu in lt:
        print(fu.result())  # once all tasks are done, print the return values together

6. The callback function: add_done_callback(fn)

There are two ways to submit tasks:
Synchronous: submit a task and wait until it finishes (to get the return value) before the next line of code runs.
Asynchronous: submit a task and move straight on to the next line of code, without waiting for the task to finish.

Note: the callback examples for processes and threads are written together below; the process version is commented out.

from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor
from threading import current_thread
from multiprocessing import current_process
import time

def task(i):
    print(f'thread {current_thread().name} is running task {i}')
    # print(f'process {current_process().name} is running task {i}')
    time.sleep(1)
    return i**2

def parse(future):
    # handle the result of the finished task
    print(future.result())

if __name__ == '__main__':
    pool = ThreadPoolExecutor(4)  # the pool has only 4 threads
    # pool = ProcessPoolExecutor(4)  # the pool has only 4 processes
    for i in range(20):
        future = pool.submit(task, i)  # task runs 20 times; the 4 threads share these 20 tasks
        future.add_done_callback(parse)
        # bind a function to this task; it is triggered when the task finishes
        # the future object is passed to the function as its argument
        # this is called a callback: when the task is done, the function is called with it

Compared with the thread pool example above: with a callback, there is no need to wait until all tasks have finished before printing the return values. Each task's result is printed as soon as that task completes, which achieves a truly concurrent effect and improves efficiency.


Origin www.cnblogs.com/guapitomjoy/p/11564397.html