Python concurrent.futures: ThreadPoolExecutor

Executor is an abstract class with two subclasses:

ThreadPoolExecutor (a thread pool) and ProcessPoolExecutor (a process pool).

Future objects represent the result of an operation that will complete at some point in the future.
The submit method returns a Future object immediately; when the thread function finishes, its return value is stored on the Future via set_result.
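The set_result mechanism can be seen with a bare Future and a plain thread (a minimal sketch; submit does this wiring for you):

```python
import threading
from concurrent.futures import Future

f = Future()                      # starts in the PENDING state

def producer(fut):
    fut.set_result(42)            # the worker sets the result when done

t = threading.Thread(target=producer, args=(f,))
t.start()
print(f.result())                 # result() blocks until set_result is called
t.join()
```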

The examples below show the differences between submit, map, and as_completed. They do not use with; in real code you should call shutdown when done, or use with.

import time
import threading
from concurrent.futures import ThreadPoolExecutor

# Function executed by the worker threads
def add(n1, n2):
    v = n1 + n2
    print('add:', v, ', tid:', threading.current_thread().ident)
    time.sleep(n1)
    return v

# submit hands the function to the thread pool and returns a Future immediately
ex = ThreadPoolExecutor(max_workers=3)    # run at most 3 threads
f1 = ex.submit(add, 2, 3)
f2 = ex.submit(add, 2, 2)
print('main thread running')
print(f1.done())      # done() checks whether the task has finished, without blocking
print(f1.result())    # get the result; this is a blocking call
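As mentioned, the with form handles shutdown automatically; a minimal sketch of the same add task using with (sleep shortened for brevity):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def add(n1, n2):
    v = n1 + n2
    time.sleep(0.1)
    return v

# with calls ex.shutdown(wait=True) on exit, so all tasks finish first
with ThreadPoolExecutor(max_workers=3) as ex:
    f1 = ex.submit(add, 2, 3)
    f2 = ex.submit(add, 2, 2)

results = [f1.result(), f2.result()]
print(results)
```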

 

Note that the map method returns results in the same order as the input sequence; the results are ordered.

 

# A simple use of map. Note: map returns a generator, and the results are ordered.
import threading
import requests
from concurrent.futures import ThreadPoolExecutor

URLS = ['http://www.baidu.com', 'http://www.qq.com', 'http://www.sina.com.cn']
def get_html(url):
    print('thread id:', threading.current_thread().ident, 'visited:', url)
    return requests.get(url)              # uses the requests module
ex = ThreadPoolExecutor(max_workers=3)
res_iter = ex.map(get_html, URLS)         # iterates internally; one thread per url
for res in res_iter:                      # blocks here until a thread completes or raises
    print('url: %s, len: %d' % (res.url, len(res.text)))

 


 

Next, as_completed. submit already exists, so why this function?

 

You want a way to know when the tasks submitted via submit are finished, without repeatedly polling future.done or blocking on future.result.

concurrent.futures.as_completed(fs, timeout=None) returns a generator. Iterating over it blocks

until a thread finishes or raises an exception, then it yields the Future whose result has been set.
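The timeout parameter in the signature above bounds the whole iteration; if it expires before all futures finish, a TimeoutError is raised. A sketch using time.sleep in place of network calls:

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed, TimeoutError

ex = ThreadPoolExecutor(max_workers=2)
fast = ex.submit(time.sleep, 0.1)
slow = ex.submit(time.sleep, 2)
finished = []
try:
    # timeout bounds the whole iteration, not each individual future
    for fut in as_completed([fast, slow], timeout=0.5):
        finished.append(fut)
except TimeoutError:
    print('gave up waiting; %d of 2 tasks had finished' % len(finished))
ex.shutdown(wait=True)    # still lets the slow task run to completion
```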

Also note: map returns results in submission order, while as_completed yields each Future as soon as it completes or fails.
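The ordering difference is easy to demonstrate without the network, using sleeps of different lengths (a sketch; with enough workers, shorter tasks come back first from as_completed):

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def work(delay):
    time.sleep(delay)
    return delay

with ThreadPoolExecutor(max_workers=3) as ex:
    delays = [0.3, 0.1, 0.2]
    # map yields results in submission order
    map_order = list(ex.map(work, delays))
    # as_completed yields futures in completion order
    futs = [ex.submit(work, d) for d in delays]
    completed_order = [f.result() for f in as_completed(futs)]

print(map_order)         # submission order: [0.3, 0.1, 0.2]
print(completed_order)   # completion order: [0.1, 0.2, 0.3]
```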

# A simple as_completed example
import time
import threading
import requests
from concurrent.futures import ThreadPoolExecutor, as_completed

URLS = ['http://www.baidu.com', 'http://www.qq.com', 'http://www.sina.com.cn']
def get_html(url):
    time.sleep(3)
    print('thread id:', threading.current_thread().ident, 'visited:', url)
    return requests.get(url)              # uses the requests module
ex = ThreadPoolExecutor(max_workers=3)
f = ex.submit(get_html, URLS[0])          # submit one task to the pool, ready to run
print('main thread running')
for future in as_completed([f]):          # as_completed() takes an iterable of Futures; the generator yields each Future as it completes or raises
    print('one task is done.')
    print(future.result())
# A complete as_completed example
# as_completed returns a generator to iterate over; it yields each Future as soon as its thread completes (or fails)
import time
import threading
import requests
from concurrent.futures import ThreadPoolExecutor, as_completed

URLS = ['http://www.baidu.com', 'http://www.qq.com', 'http://www.sina.com.cn']
def get_html(url):
    time.sleep(1)
    print('thread id:', threading.current_thread().ident, 'visited:', url)
    return requests.get(url)              # uses the requests module
ex = ThreadPoolExecutor(max_workers=3)    # at most 3 threads
future_tasks = [ex.submit(get_html, url) for url in URLS]    # create three Future objects
for future in as_completed(future_tasks):     # iterate over the generator
    try:
        resp = future.result()
    except Exception as e:
        print('%s' % e)
    else:
        print('%s has %d bytes!' % (resp.url, len(resp.text)))
"""
thread id: 5160 visited: http://www.baidu.com
thread id: 7752 visited: http://www.sina.com.cn
thread id: 5928 visited: http://www.qq.com
http://www.qq.com/ has 240668 bytes!
http://www.baidu.com/ has 2381 bytes!
https://www.sina.com.cn/ has 577244 bytes!
"""

 

 

wait is a blocking function. Like as_completed, its first parameter is an iterable of Futures. It returns a tuple of 2 sets: one of completed futures, one of unfinished ones.

 

"" " 
The wait example 
parameters: 
    FIRST_COMPLETED When any complete or cancel future, the function will return. 
    
    FIRST_EXCEPTION when any future by proposing an abnormal completion, the function will return if there is no future raises an exception, then it is equivalent to ALL_COMPLETED.. 
    
    ALL_COMPLETED ( default) when the complete or cancel all future, the function will return. 
"" " 
URLS = [ ' http://www.baidu.com ' , ' http://www.qq.com ' , ' HTTP: // www.sina.com.cn ' ]
 DEF get_html (URL): 
    the time.sleep ( . 1 )
     Print ( ' Thread ID: ' ., threading.currentThread () ident, ' visit: ' ,
    url)
    return requests.get (url)             # This uses requests module 
EX = ThreadPoolExecutor (= max_workers 3)    # Up to three threads 
future_tasks = [ex.submit (get_html, url) for url in URLS]     # create three future target 
the try : 
    Result = the wait (future_tasks, return_when = fu.FIRST_COMPLETED) 
    done_set = Result [0]
     for Future in done_set: 
        RESP = future.result ()
         Print ( ' first task is completed page url:% s, len:% d bytes! ' % (resp.url, len(resp.text)))
except Exception as e:
    print('exception :' , e)
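The unfinished set returned by wait can be used to cancel tasks that are still queued; only tasks that have not started can be cancelled. A sketch with time.sleep standing in for the requests calls:

```python
import time
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def work(delay):
    time.sleep(delay)
    return delay

ex = ThreadPoolExecutor(max_workers=1)       # a single worker, so later tasks queue up
futs = [ex.submit(work, d) for d in (0.1, 0.3, 5)]
done, not_done = wait(futs, return_when=FIRST_COMPLETED)
for f in not_done:
    f.cancel()      # succeeds only for tasks still waiting in the queue
ex.shutdown(wait=True)
print('done:', len(done), 'last task cancelled:', futs[2].cancelled())
```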
 

 

 

Finally, callbacks: add_done_callback(fn). The callback function is invoked after the task completes, in the same thread that executed the task (compare the tids in the output of the example).

 

import os,sys,time,requests,threading
from concurrent import futures
 
 
URLS = [
        'http://baidu.com',
        'http://www.qq.com',
        'http://www.sina.com.cn'
        ]
 
def load_url(url):
    print('tid:', threading.current_thread().ident, ',url:', url)
    with requests.get(url) as resp:
        return resp.content
def call_back(obj):
    print('->>>>>>>>>call_back , tid:', threading.current_thread().ident, ',obj:', obj)
 
with futures.ThreadPoolExecutor(max_workers=3) as ex:
    # mp = {ex.submit(load_url,url) : url for url in URLS}
    mp = dict()
    for url in URLS:
        f = ex.submit(load_url,url)
        mp[f] = url
        f.add_done_callback(call_back)
    for f in futures.as_completed(mp):
        url = mp[f]
        try:
            data = f.result()
        except Exception as exc:
            print(exc, ',url:',url)
        else:
            print('url:', url, ',len:',len(data),',data[:20]:',data[:20])
"""
tid: 7128 ,url: http://baidu.com
tid: 7892 ,url: http://www.qq.com
tid: 3712 ,url: http://www.sina.com.cn
->>>>>>>>>call_back , tid: 7892 ,obj: <Future at 0x2dd64b0 state=finished returned bytes>
url: http://www.qq.com ,len: 251215 ,data[:20]: b'<!DOCTYPE html>\n<htm'
->>>>>>>>>call_back , tid: 3712 ,obj: <Future at 0x2de07b0 state=finished returned bytes>
url: http://www.sina.com.cn ,len: 577333 ,data[:20]: b'<!DOCTYPE html>\n<!--'
->>>>>>>>>call_back , tid: 7128 ,obj: <Future at 0x2d533d0 state=finished returned bytes>
url: http://baidu.com ,len: 81 ,data[:20]: b'<html>\n<meta http-eq'
"""

 


Origin www.cnblogs.com/shuai1991/p/11224919.html