Synchronous vs. Asynchronous

Blocking, non-blocking, synchronous, asynchronous

A process has three states: running, ready, and blocked.

There are two points of view: the execution of the program, and the submission of tasks.

From the execution point of view:

Blocking: the running program encounters IO; the program hangs, and the CPU is switched away from it.

Non-blocking: the program either does not encounter IO, or it does encounter IO but, by some means, keeps the CPU running my program anyway.
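
To make blocking versus non-blocking concrete, here is a minimal sketch using a TCP socket (www.baidu.com is just an illustrative host): in blocking mode recv suspends the program until data arrives; in non-blocking mode it raises BlockingIOError immediately and our own code keeps running.

import socket

sock = socket.socket()
sock.connect(('www.baidu.com', 80))   # illustrative host
sock.sendall(b'GET / HTTP/1.0\r\nHost: www.baidu.com\r\n\r\n')

# Blocking (the default): recv hangs here until the server responds,
# and the OS switches the CPU away from this process while it waits.
data = sock.recv(1024)

# Non-blocking: recv returns immediately; if no data is ready yet it
# raises BlockingIOError instead of hanging.
sock.setblocking(False)
try:
    more = sock.recv(1024)
except BlockingIOError:
    more = b''   # no data yet; the CPU stays on our own code
sock.close()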

From the task-submission point of view:

Synchronous: after submitting a task, I wait from the moment the task starts running until it ends (the task may contain IO); only after it returns a value do I submit the next task.

Asynchronous: I submit several tasks at once and go straight on to the next line of code.

How do we collect the return values?

An example to illustrate synchronous versus asynchronous:

Synchronous:

I tell the first teacher to complete the task of writing a book, and then I wait; two or three days later, when the task is finished, the teacher tells me, and only then do I hand out the next task.

Asynchronous:

I hand all three tasks to the three teachers at once and get on with my own work; as each of the three teachers finishes, they tell me.
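
A minimal sketch of the asynchronous version of this analogy (the names write_book, teacher1, etc. are illustrative): the three tasks are handed out at once, the main thread carries on with its own work, and each result is picked up when it is ready.

from concurrent.futures import ThreadPoolExecutor
import time

def write_book(teacher):
    time.sleep(2)                     # stands in for two or three days of writing
    return f'{teacher} finished the book'

if __name__ == '__main__':
    pool = ThreadPoolExecutor(3)
    futures = [pool.submit(write_book, t)
               for t in ('teacher1', 'teacher2', 'teacher3')]
    print('all three tasks handed out; I get on with my own work')
    for fut in futures:
        print(fut.result())           # each teacher reports back when done
    pool.shutdown(wait=True)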

Synchronous and asynchronous at the code level

1. Asynchronous call

# 1. Asynchronous call
# How do we receive the return values of asynchronous calls? Not solved yet.
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor
import time
import os
import random

def task(i):
    print(f"{os.getpid()} task started")
    time.sleep(random.randint(1, 3))
    print(f"{os.getpid()} task finished")
    return i

if __name__ == '__main__':
    pool = ProcessPoolExecutor()
    for i in range(10):
        pool.submit(task, i)
    pool.shutdown(wait=True)
    # shutdown makes the main process wait until all the child processes in the
    # pool have finished their tasks before continuing, somewhat like join
    # No new tasks may be added before the pool has finished all of its current tasks
    # A task is implemented by a function; when the task completes, its return
    # value is the function's return value

    print('==== main')
    
The results are delivered all at once.

2. Synchronous call

from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor
import time
import random
import os

def task(i):
    print(f"{os.getpid()} task started")
    time.sleep(random.randint(1, 3))
    print(f"{os.getpid()} task finished")
    return i

if __name__ == '__main__':
    pool = ProcessPoolExecutor()
    for i in range(10):
        obj = pool.submit(task, i)
        # obj is a Future object reflecting the task's current state:
        # it may be running, pending (ready/blocked), or already finished
        # obj.result() must wait until this task has completed and returned
        # its result; only then is the next task submitted
        print(f"task result: {obj.result()}")
    pool.shutdown(wait=True)
    print('=== main')
    
    
The results arrive one at a time.

3. How to collect the results of asynchronous calls

Method 1: asynchronous calls, with all results collected at the end

from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor
import time
import os
import random

def task(i):
    print(f"{os.getpid()} task started")
    time.sleep(random.randint(1, 3))
    print(f"{os.getpid()} task finished")
    return i

if __name__ == '__main__':
    pool = ProcessPoolExecutor()
    l1 = []
    for i in range(10):
        obj = pool.submit(task, i)
        l1.append(obj)
    pool.shutdown(wait=True)
    for i in l1:
        print(i.result())
    print('==== main')
# Collecting the results in one batch: I cannot receive the return value of
# any individual finished task right away; I can only collect them all
# together after every task has ended
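
As a middle ground between the two methods, the standard library also offers concurrent.futures.as_completed, which yields each Future as soon as it finishes, so results can be picked up in completion order instead of waiting for the whole batch. A minimal sketch (task is the same function as above):

from concurrent.futures import ProcessPoolExecutor, as_completed
import time
import os
import random

def task(i):
    print(f"{os.getpid()} task started")
    time.sleep(random.randint(1, 3))
    print(f"{os.getpid()} task finished")
    return i

if __name__ == '__main__':
    pool = ProcessPoolExecutor()
    futures = [pool.submit(task, i) for i in range(10)]
    for fut in as_completed(futures):  # yields each Future as soon as it finishes
        print(fut.result())            # so results arrive in completion order
    pool.shutdown(wait=True)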

Method 2: asynchronous call + callback

Web crawlers

How a browser works: it sends a request to the server; the server validates your request and, if it is valid, returns a file to your browser; the browser receives the file and renders the code in it into the good-looking page you see.

What is a web crawler?

  1. Use code to simulate a browser, performing the browser's workflow to obtain a pile of page source code
  2. Clean that source code to extract the data I want
import requests
ret = requests.get('http://www.baidu.com')
if ret.status_code == 200:
    print(ret.text)  # get the page source

Version 1:

The thread pool is set to 4 threads, and 10 tasks are issued asynchronously; each task fetches a page's source code over the network. The tasks execute concurrently, all 10 results are collected together in a list at the end, and the source code is then analyzed serially. (The data itself is not processed any further.) Two drawbacks:

     1. The 10 tasks are issued asynchronously and execute concurrently, but all of their return values are received in a single batch. (Inefficient; results cannot be obtained in real time.)

     2. Analyzing the results is a serial process, which hurts efficiency:

          for res in obj_list:
             print(parse(res.result()))

The code:

from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor
import time
import random
import os
import requests

def task(url):
    ret = requests.get(url)
    if ret.status_code == 200:
        return ret.text

def parse(content):
    return len(content)

if __name__ == '__main__':
    # serial version: fetch and analyze one page at a time
    ret = task('http://www.baidu.com')
    print(parse(ret))

    ret = task('http://www.JD.com')
    print(parse(ret))

    ret = task('http://www.taobao.com')
    print(parse(ret))

    ret = task('https://www.cnblogs.com/jin-xin/articles/7459977.html')
    print(parse(ret))

    # open a thread pool and execute the tasks concurrently
    url_list = [
        'http://www.baidu.com',
        'http://www.JD.com',
        'http://www.JD.com',
        'http://www.JD.com',
        'http://www.taobao.com',
        'https://www.cnblogs.com/jin-xin/articles/7459977.html',
        'https://www.luffycity.com/',
        'https://www.cnblogs.com/jin-xin/articles/9811379.html',
        'https://www.cnblogs.com/jin-xin/articles/11245654.html',
        'https://www.sina.com.cn/',
    ]
    pool = ThreadPoolExecutor(4)
    obj_list = []
    for url in url_list:
        obj = pool.submit(task, url)
        obj_list.append(obj)
    pool.shutdown(wait=True)
    for res in obj_list:
        print(parse(res.result()))

Version 2:

The thread pool is set to 4 threads, and 10 tasks are issued asynchronously; each task fetches a page's source code and analyzes the data, executing concurrently. Finally all of the results are displayed.

Coupling is increased (fetching and analysis are bundled into one task function).

Tasks executed concurrently like this should ideally be IO-bound; that is where the approach pays off most.
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor
import time
import random
import os
import requests

def task(url):
    ret = requests.get(url)
    if ret.status_code == 200:
        return parse(ret.text)

def parse(content):
    return len(content)

if __name__ == '__main__':
    url_list = [
        'http://www.baidu.com',
        'http://www.JD.com',
        'http://www.JD.com',
        'http://www.JD.com',
        'http://www.taobao.com',
        'https://www.cnblogs.com/jin-xin/articles/7459977.html',
        'https://www.luffycity.com/',
        'https://www.cnblogs.com/jin-xin/articles/9811379.html',
        'https://www.cnblogs.com/jin-xin/articles/11245654.html',
        'https://www.sina.com.cn/',
    ]
    pool = ThreadPoolExecutor(4)
    obj_list = []
    for url in url_list:
        obj = pool.submit(task, url)
        obj_list.append(obj)

    pool.shutdown(wait=True)
    for res in obj_list:
        print(res.result())
        
        
Sample output (each number is the length of one page's source):

2381
100180
100180
100180
143902
47067
2708
21634
48110
571508

Version 3:

from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor
import time
import random
import os
import requests

def task(url):
    '''Simulates crawling multiple pages for their source code; always involves IO'''
    ret = requests.get(url)
    if ret.status_code == 200:
        return ret.text

def parse(obj):
    '''Simulates analyzing the data; generally no IO'''
    print(len(obj.result()))

if __name__ == '__main__':
    # open a thread pool and execute the tasks concurrently
    url_list = [
        'http://www.baidu.com',
        'http://www.JD.com',
        'http://www.JD.com',
        'http://www.JD.com',
        'http://www.taobao.com',
        'https://www.cnblogs.com/jin-xin/articles/7459977.html',
        'https://www.luffycity.com/',
        'https://www.cnblogs.com/jin-xin/articles/9811379.html',
        'https://www.cnblogs.com/jin-xin/articles/11245654.html',
        'https://www.sina.com.cn/',
    ]
    pool = ThreadPoolExecutor(4)
    for url in url_list:
        obj = pool.submit(task, url)
        obj.add_done_callback(parse)

    '''
    The thread pool is set to 4 threads, and 10 tasks are issued asynchronously;
    each task fetches a page's source code, and the tasks execute concurrently.
    When a task completes, the parse analysis work is handed off as a callback
    to an idle thread, while this thread goes on to handle other tasks.
    Process pool + callback: the callback function is executed by the main process.
    Thread pool + callback: the callback function is executed by an idle thread.
    '''
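
Where the callback actually runs can be observed directly. A minimal sketch that prints the pid and thread name inside the callback for both pool types: with a process pool the callback fires back in the parent process, while with a thread pool it fires in one of the pool's worker threads.

from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor
import os
import threading

def work(i):
    return os.getpid(), i

def show(fut):
    # report which process and thread run this callback
    pid, i = fut.result()
    print(f'task {i} ran in pid {pid}; '
          f'callback runs in pid {os.getpid()}, '
          f'thread {threading.current_thread().name}')

if __name__ == '__main__':
    for pool_cls in (ProcessPoolExecutor, ThreadPoolExecutor):
        print(pool_cls.__name__)
        pool = pool_cls(4)
        for i in range(3):
            pool.submit(work, i).add_done_callback(show)
        pool.shutdown(wait=True)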



# Are "asynchronous" and "callback" the same thing?

# "Asynchronous" is from the angle of issuing tasks.

# From the angle of receiving results: the callback function receives each
# task's result as it arrives and carries out the next processing step.

# Asynchronous + callback:
# the asynchronous part handles the IO-bound work,
# the callback handles the non-IO work.

