Process pools / thread pools / coroutines

Process pool / thread pool

Opening a process or opening a thread both consume resources; a thread just consumes comparatively fewer resources than a process.

So, within what the machine can afford, we want to make maximum use of the computer.

Pool: 
    guarantees maximum utilization of the computer's hardware resources while keeping the machine safe 
    a pool actually lowers the program's efficiency somewhat, but it guarantees the safety of the computer hardware

 

The concurrent.futures module

pool = ProcessPoolExecutor()  # the argument in the parentheses specifies the number of processes; defaults to the number of CPUs 
pool = ThreadPoolExecutor()  # the argument in the parentheses specifies the number of threads; defaults to the number of CPUs * 5

 

The processes/threads in a pool are created only once: from start to finish the pool reuses the same few default or custom workers, which saves the cost of repeatedly creating processes/threads.
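This reuse can be seen directly (a minimal sketch using only the standard library; the function name is illustrative): submit more tasks than there are workers and record which thread ran each one.

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

def who_ran_me(_):
    time.sleep(0.05)  # keep the worker busy so both pool threads get used
    return threading.current_thread().name  # report the pool thread that ran this task

pool = ThreadPoolExecutor(2)
futures = [pool.submit(who_ran_me, i) for i in range(10)]
names = {f.result() for f in futures}
pool.shutdown()
print(len(names))  # 10 tasks, but only the 2 pre-created pool threads ever ran them
```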

pool.submit(fn, *args, **kwargs) submits a task to the process/thread pool asynchronously.

from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor
import os
import time
# instantiate a process pool 
pool = ProcessPoolExecutor()  # the argument in the parentheses specifies the pool size; defaults to the number of CPUs 
# instantiate a thread pool 
# pool = ThreadPoolExecutor() 
def task(n):
    print('%s running' % n, os.getpid())
    time.sleep(3)
    print('%s over' % n)
    return n ** 2

if __name__ == '__main__':
    pool.submit(task, 1)  # submit a task to the process/thread pool asynchronously 
    print('main process/thread')

 

result() waits in place for the task's return value.

# instantiate a process pool 
# pool = ProcessPoolExecutor() 
# instantiate a thread pool 
pool = ThreadPoolExecutor(5)

def task(n):
    print('%s running' % n, os.getpid())
    time.sleep(3)
    # print('%s over' % n) 
    return n ** 2

if __name__ == '__main__':
    for i in range(3):
        res = pool.submit(task, i)
        print('>>>:', res.result())  # wait in place for the task's return value 

>>> 
0 running 3664 
>>>: 0
1 running 3664 
>>>: 1 
2 running 3664 
>>>: 4
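For the submit-then-result() loop above, the executor also provides map() (part of the standard concurrent.futures API), which submits a whole batch and returns the results in submission order; a minimal sketch:

```python
from concurrent.futures import ThreadPoolExecutor

def task(n):
    return n ** 2

pool = ThreadPoolExecutor(5)
# map submits every item at once and yields the results in order,
# blocking only until each individual result is ready
results = list(pool.map(task, range(3)))
pool.shutdown()
print(results)  # [0, 1, 4]
```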

 

 

pool.shutdown() closes the pool and waits for every task in it to finish before the code below runs.

pool = ThreadPoolExecutor(5)

def task(n):
    print('%s running' % n, os.getpid())
    time.sleep(3)
    # print('%s over' % n) 
    return n ** 2

if __name__ == '__main__':
    lists = []
    for i in range(3):
        res = pool.submit(task, i)  # submit the task asynchronously
        lists.append(res)
    pool.shutdown()  # close the pool and wait for every task in it to finish before running the code below 
    for p in lists:
        print('>>>:', p.result())

>>> 
0 running 12168 
1 running 12168 
2 running 12168 
>>>: 0
>>>: 1 
>>>: 4
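shutdown() is also called automatically when the pool is used as a context manager (standard concurrent.futures behavior), which is the idiomatic way to write the loop above; a sketch with the sleep shortened:

```python
from concurrent.futures import ThreadPoolExecutor

def task(n):
    return n ** 2

lists = []
with ThreadPoolExecutor(5) as pool:  # __exit__ calls pool.shutdown() for us
    for i in range(3):
        lists.append(pool.submit(task, i))
# every future is finished here, so result() returns immediately
print([p.result() for p in lists])  # [0, 1, 4]
```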

 

add_done_callback(func) binds a callback function.

# instantiate a process pool 
# pool = ProcessPoolExecutor() 
# instantiate a thread pool 
pool = ThreadPoolExecutor(5)

def task(n):
    print('%s running' % n, os.getpid())
    time.sleep(3)
    # print('%s over' % n) 
    return n ** 2

def call_back(n):
    print('result returned by the asynchronously submitted task:', n.result())

if __name__ == '__main__':
    for i in range(10):
        pool.submit(task, i).add_done_callback(call_back)  # bind a callback when submitting the task; as soon as the task has a result, the callback runs 

>>> 
0 running 8488 
1 running 8488 
2 running 8488 
3 running 8488 
4 running 8488 
result returned by the asynchronously submitted task: 0
5 running 8488 
result returned by the asynchronously submitted task: 1 
6 running 8488 
result returned by the asynchronously submitted task: 4 
7 running 8488 
result returned by the asynchronously submitted task: 9 
8 running 8488 
result returned by the asynchronously submitted task: 16 
9 running 8488 
result returned by the asynchronously submitted task: 36 
result returned by the asynchronously submitted task: 25 
result returned by the asynchronously submitted task: 64
result returned by the asynchronously submitted task: 49 
result returned by the asynchronously submitted task: 81

 

Coroutine

Process: the unit of resource allocation 
Thread: the unit of execution 
Coroutine: concurrency achieved within a single thread 
Concurrency: switching + saving state 
While the CPU is running a task, it switches away to run other tasks in two cases (both switches are forced by the operating system):
    1. the task hits blocking IO
    2. the task has been computing for too long
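The "switch + save state" idea can be sketched with plain generators (an illustrative toy, no real IO here): yield suspends a function while preserving its local state, and next() resumes it exactly where it left off, which is the mechanism coroutines are built on.

```python
def worker(name, steps):
    for i in range(2):
        steps.append((name, i))  # the local variable i survives each switch
        yield                    # switch point: suspend here and save state

steps = []
a, b = worker('a', steps), worker('b', steps)
for gen in (a, b, a, b):  # a hand-written scheduler: alternate between the two tasks
    next(gen, None)

print(steps)  # [('a', 0), ('b', 0), ('a', 1), ('b', 1)]
```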

 

A coroutine is single-threaded by nature: the user's own code detects when a task blocks on IO and switches to another task, so that from the operating system's point of view the thread never seems to do any IO at all (we switch away and back ourselves). This keeps the program in the ready state as much as possible and thereby improves efficiency.

To implement this, we need a mechanism that can both save a task's state before switching and detect the IO operations of multiple tasks.

 

The gevent module

from gevent import spawn
from gevent import monkey; monkey.patch_all()
import time
'''
gevent by itself cannot detect IO like time.sleep,
so you have to apply monkey.patch_all() manually
'''
def heng(name):
    print('%s' % name, 'heng')
    time.sleep(2)
    print('%s' % name, 'heng')

def ha(name):
    print('%s' % name, 'ha')
    time.sleep(3)
    print('%s' % name, 'ha')

start = time.time()
# spawn creates a coroutine object: the first argument is the function to run, the rest are its arguments 
g1 = spawn(heng, 'x')
g2 = spawn(ha, 'y')
g1.join()  # wait for g1 to finish 
g2.join()
print(time.time() - start)
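gevent is a third-party module; for comparison, the standard library's asyncio achieves the same single-threaded switching with async/await. Below is a rough equivalent of the example above (with much shorter sleeps): the total elapsed time is about the longest sleep, not the sum, because the two coroutines overlap.

```python
import asyncio
import time

async def heng(name):
    print(name, 'heng')
    await asyncio.sleep(0.2)  # an awaitable sleep is a switch point
    print(name, 'heng')

async def ha(name):
    print(name, 'ha')
    await asyncio.sleep(0.3)
    print(name, 'ha')

async def main():
    await asyncio.gather(heng('x'), ha('y'))  # run both coroutines concurrently

start = time.time()
asyncio.run(main())
elapsed = time.time() - start
print('%.2f' % elapsed)  # about 0.3s rather than 0.5s: the sleeps overlap
```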

Origin www.cnblogs.com/waller/p/11359834.html