Concurrent multi-threaded programming: blocking, non-blocking, synchronous, asynchronous, synchronous calls, asynchronous calls, and asynchronous callbacks

1. Blocking, non-blocking, synchronous, asynchronous

From the angle of execution:

Blocking: while running, the program encounters IO and hangs; the CPU is switched away from it.

Non-blocking: the program either does not encounter IO, or, when it does, uses some means to keep the CPU running the program instead of letting it hang.
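One concrete way to see the difference is the socket module's setblocking(False). This is a minimal sketch (the socket module is not named in the text above; it is used here only as an illustration, and the address 127.0.0.1:9999 is an arbitrary example):

```python
import socket

s = socket.socket()
s.setblocking(False)  # IO operations now return immediately instead of hanging
try:
    # a blocking connect would hang here until the connection settles;
    # a non-blocking connect raises BlockingIOError and returns control at once
    s.connect(('127.0.0.1', 9999))
except BlockingIOError:
    pass  # connection still in progress; the CPU is free to run other code
s.close()
```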

From the angle of submitting tasks:

Synchronous: submit a task and wait from the start of the task until it finishes (there may be IO in between) and returns a value; only then submit the next task.

Asynchronous: submit multiple tasks at once, without waiting for one task to finish before submitting the next.

2. Synchronous calls and asynchronous calls

2.1 Synchronous call:

from concurrent.futures import ProcessPoolExecutor
import time, random, os

def task(i):
    print(f'{os.getpid()} task start')
    time.sleep(random.randint(1, 3))
    print(f'{os.getpid()} task done')
    return i

if __name__ == '__main__':
    pool = ProcessPoolExecutor()
    for i in range(10):
        obj = pool.submit(task, i)
        print(obj.result())
# obj is a Future: a dynamic object reflecting the task's current state,
# which may be pending, running, or finished.
# obj.result() blocks until the task completes and returns its value,
# so the next task is only submitted after the current one finishes.
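The Future's state can also be inspected directly with done(). A small sketch of that, using ThreadPoolExecutor for brevity (the same Future API applies to ProcessPoolExecutor; the function name slow and the 0.5-second sleep are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def slow(x):
    time.sleep(0.5)
    return x * 2

pool = ThreadPoolExecutor(2)
fut = pool.submit(slow, 3)    # returns a Future immediately
print(fut.done())             # False: the task is still pending or running
result = fut.result()         # blocks until the task completes
print(result, fut.done())     # 6 True
pool.shutdown()
```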

2.2 Asynchronous call:

from concurrent.futures import ProcessPoolExecutor
import time, random, os

def task(i):
    print(f'{os.getpid()} task start')
    time.sleep(random.randint(1, 3))
    print(f'{os.getpid()} task done')
    return i

if __name__ == '__main__':
    pool = ProcessPoolExecutor()
    for i in range(10):
        pool.submit(task, i)
    pool.shutdown()
    print('main process')
# shutdown: make the main process wait until all child processes in the pool
# have finished before continuing; once called, no new tasks may be
# submitted to the pool.
# A task is implemented as a function; when the task completes, its result
# is the function's return value.

2.3 How to get the result of an asynchronous call:

Option one: collect all results together at the end

from concurrent.futures import ProcessPoolExecutor
import time, random, os

def task(i):
    print(f'{os.getpid()} task start')
    time.sleep(random.randint(1, 3))
    print(f'{os.getpid()} task done')
    return i

if __name__ == '__main__':
    pool = ProcessPoolExecutor()
    lst = []
    for i in range(10):
        obj = pool.submit(task, i)
        lst.append(obj)
    pool.shutdown(wait=True)
    for k in lst:
        print(k.result())
# Collecting results this way cannot hand back the return value of any
# single task as soon as it completes; all results are only collected
# together after every task has finished.

Option two: an asynchronous call with a callback function

3. Asynchronous call with a callback function

from concurrent.futures import ThreadPoolExecutor
import requests

def task(url):  # fetch a page's content
    ret = requests.get(url)
    if ret.status_code == 200:
        return ret.text
    return ''  # avoid returning None so the callback can always take len()

def parse(obj):  # callback: receives the finished Future
    print(len(obj.result()))

if __name__ == '__main__':
    url_lst = ['http://www.baidu.com',
        'http://www.JD.com',
        'http://www.JD.com',
        'http://www.JD.com',
        'http://www.taobao.com',
        'https://www.cnblogs.com/jin-xin/articles/7459977.html',
        'https://www.luffycity.com/',
        'https://www.cnblogs.com/jin-xin/articles/9811379.html',
        'https://www.cnblogs.com/jin-xin/articles/11245654.html',
        'https://www.sina.com.cn/',]
    pool = ThreadPoolExecutor(4)

    for url in url_lst:
        obj = pool.submit(task, url)
        obj.add_done_callback(parse)
# Asynchronous call: from the angle of submitting tasks (concurrency);
# it handles the IO-bound part.
# Callback function: from the angle of receiving results; it receives each
# task's result as that task finishes and does the follow-up, non-IO
# processing.
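Since the example above depends on the network, the same add_done_callback pattern can be sketched self-contained, with a short sleep standing in for the web request (the names task and parse mirror the code above; the timings and the squaring are purely illustrative):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def task(n):      # stand-in for the IO-bound web request
    time.sleep(0.1)
    return n * n

results = []

def parse(fut):   # callback: runs as soon as its own task finishes
    results.append(fut.result())

pool = ThreadPoolExecutor(4)
for n in range(5):
    pool.submit(task, n).add_done_callback(parse)
pool.shutdown(wait=True)  # all tasks, and hence all callbacks, are done here
print(sorted(results))    # [0, 1, 4, 9, 16]
```

Note that callbacks fire in completion order, which is why the sketch sorts before printing.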


Origin www.cnblogs.com/lav3nder/p/11802328.html