Thread pools, process pools, and IO models

First, process pools and thread pools

  What is a pool? Simply put, it is a container, a cap on how many processes or threads may exist at once.

   Its purpose is to make the fullest use of the computer while keeping the computer hardware safe.

In fact, a pool slightly reduces a program's efficiency, but it protects the computer hardware while still producing a concurrent effect; hardware development cannot keep pace with the speed at which software is updated, so the software must restrain itself.

Process pools and thread pools

  Opening a process or opening a thread both consume resources, but comparing the two, a thread consumes fewer.

  Creating a process pool: the multiprocessing.Pool module

Import line: from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor
 Pool([numprocess [, initializer [, initargs]]]): creates a process pool

1 numprocess: the number of processes to create; if omitted, it defaults to the value of cpu_count()
2 initializer: a callable object that each worker process executes at startup; defaults to None
3 initargs: the set of arguments to pass to initializer

Methods: p.apply()  p.apply_async()  p.close()  p.join()
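As a minimal sketch of the multiprocessing.Pool interface listed above (the worker function `square` and the pool size are illustrative, not from the original text):

```python
# A minimal sketch of the multiprocessing.Pool interface described above;
# the worker function square() is illustrative.
from multiprocessing import Pool

def square(n):
    return n ** 2

if __name__ == '__main__':
    p = Pool(4)                        # numprocess=4; omit to default to cpu_count()
    res = p.apply_async(square, (5,))  # submit the task asynchronously
    p.close()                          # no further tasks may be submitted
    p.join()                           # wait for all worker processes to finish
    print(res.get())                   # fetch the result: 25
```

apply() would run the same task synchronously, blocking until it returns; apply_async() hands back a result object immediately.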

 

pool = ProcessPoolExecutor()  # create a process pool; with no argument it defaults to the machine's CPU count

 

1. Using a process pool:

from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor
import time
import os

"""
Processes/threads in a pool are created once and then reused;
from beginning to end there are only the few that were defined at the start,
which saves the cost of repeatedly opening processes/threads.
"""

# Process pool usage:
pool = ProcessPoolExecutor()  # create a process pool; with no argument it defaults to the machine's CPU count

def task(n):
    print(n, os.getpid())  # view the current process's PID
    time.sleep(2)
    return n ** 2

def call_back(n):
    print("result returned by the asynchronously submitted task:", n.result())

# Asynchronous callback mechanism: as soon as an asynchronously submitted task
# has a result, the callback function is triggered automatically.
if __name__ == '__main__':
    l_list = []
    for i in range(20):
        res = pool.submit(task, i)        # submit the task asynchronously
        res.add_done_callback(call_back)  # bind a callback at submission time;
        # as soon as the task has a result, the callback runs immediately
        l_list.append(res)

>>>>

0 16128
1 41700
2 24856
3 9876
4 41128
5 40068
6 19288
7 40080
8 16128
result returned by the asynchronously submitted task: 0
9 41700
result returned by the asynchronously submitted task: 1
10 24856
result returned by the asynchronously submitted task: 4
11 9876

Callback mechanism of the process pool

 

2. Creating a thread pool:

Using the thread pool

from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor
import time
import os

pool = ThreadPoolExecutor(5)  # the argument specifies the number of threads in the pool
# it may be omitted; if omitted it defaults to the machine's CPU count multiplied by 5

def task(n):
    print(n, os.getpid())
    time.sleep(2)
    return n ** 2

t_list = []
for i in range(20):
    res = pool.submit(task, i)  # submit the task to the thread pool asynchronously
    # print(res.result())  # would wait in place for the asynchronous task's result
    t_list.append(res)

pool.shutdown()  # close the pool and wait for all its tasks to finish before the code continues
for p in t_list:
    print(">>>>:", p.result())


>>>:

18 32864
19 32864
>>>>: 0
>>>>: 1
>>>>: 4
>>>>: 9
>>>>: 16
>>>>: 25
>>>>: 36
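As a side note, the submit-then-collect loop above can be shortened with Executor.map, which yields results in input order. A small sketch (the pool size and worker function are illustrative):

```python
# Sketch: Executor.map is a shortcut for submit() plus result() in input order.
from concurrent.futures import ThreadPoolExecutor

def task(n):
    return n ** 2

pool = ThreadPoolExecutor(5)
results = list(pool.map(task, range(5)))  # submits every task, yields results in order
pool.shutdown()
print(results)  # [0, 1, 4, 9, 16]
```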

 

Second, coroutines

(A concept invented by programmers: concurrency achieved within a single thread may be called a coroutine)

1. Within a single thread, switching between multiple tasks plus saving their state is controlled at the application level.

Pros: switching at the application level is much faster than switching by the operating system.

Cons: once one task blocks and is not switched away from, the entire thread blocks in place, and no other task in the thread can run.

Process: resource unit
Thread: execution unit
Coroutine: single-threaded concurrency

Conditions for concurrency: multiprogramming technology: spatial multiplexing and temporal multiplexing (switching + saving state)

2. The purpose of coroutines:

    to achieve concurrency within a single thread
    concurrency means that multiple tasks appear to be running simultaneously
    concurrency = switching + saving state

# serial execution
import time

def func1():
    for i in range(10000000):
        i + 1

def func2():
    for i in range(10000000):
        i + 1

start = time.time()
func1()
func2()
stop = time.time()
print(stop - start)  # 1.094691514968872

# concurrency based on yield: a function containing yield becomes a generator when called with brackets
import time

def func1():
    while True:
        yield

def func2():
    g = func1()
    for i in range(10000000):
        i + 1
        next(g)

start = time.time()
func2()
stop = time.time()
print(stop - start)  # 1.3715009689331055

 Switching, case one: when a task encounters IO, switch to the second task, so that one task's blocked time can be used to advance the other task's computation; both tasks finish sooner and efficiency improves (the program switches back and forth between the running and ready states, and the waits on blocking events become very short).

 Switching on IO events is done by your own code,
 so to the operating system it looks as if this thread has no IO at all.
 PS: trick the operating system into mistakenly believing that your program has no IO,
  so the program keeps switching between the running state and the ready state,
  which raises the code's running efficiency.

 

3. Does switching + saving state always improve efficiency?

 It depends on the situation: when the tasks are IO-intensive, it improves efficiency;
but when the tasks are compute-intensive, it lowers efficiency instead.

 For the best efficiency and the greatest resource savings: open multiple processes, open multiple threads per process, and open coroutines per thread.

 

4. yield alone can switch while saving state (yield saves the result of the previous step), but it cannot detect IO; we need a tool that can identify IO, which brings in the gevent module.
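To make the "yield saves state" point concrete, a small illustrative generator (not from the original text) keeps its local variable alive between switches:

```python
# Illustrative sketch: a generator's local state survives across next() calls,
# which is the "saving state" half of switching + saving state.
def counter():
    n = 0
    while True:
        n += 1
        yield n  # pause here and hand control back, keeping n alive

g = counter()    # calling the function only builds the generator
first = next(g)  # run to the first yield -> 1
second = next(g) # resume after the yield; n was saved -> 2
print(first, second)
```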

  Gevent is a third-party library with which you can easily do concurrent synchronous or asynchronous programming. The main pattern used in gevent is the Greenlet, a lightweight coroutine provided to Python as a C extension module.

spawn() monitors IO operations and performs switching + saving state, implementing concurrency within a single thread.

spawn() returns a greenlet object that carries the task's return value.

 

from gevent import monkey; monkey.patch_all()
# since these two are almost always used together, writing them on one line is recommended
from gevent import spawn
import time

# note: the gevent module cannot automatically recognize time.sleep and other IO by itself;
# monkey.patch_all() is the extra configuration that makes it able to

def heng():
    print("right place")
    time.sleep(2)
    print("only the first wind")

def hei():
    print("climb the watchtower")
    time.sleep(2)
    print("sitting alone in the west wing")

def haha():
    print('who are you')
    time.sleep(2)
    print('why')

start = time.time()
g1 = spawn(heng)  # spawn will monitor all submitted tasks
g2 = spawn(hei)
g3 = spawn(haha)

g1.join()
g2.join()
g3.join()

print(time.time() - start)

 spawn detects IO and switches away quickly while saving state, so the operating system never notices the IO, and execution efficiency improves.

5. Using single-threaded concurrency for an FTP-style service

  Use the spawn() function under the gevent module to automatically detect IO operations.

  FTP client:

import socket
from threading import Thread,current_thread

def client(): # written as a function
    client = socket.socket()
    client.connect(('127.0.0.1',8080))
    n = 0
    while True:

        data = '%s %s'%(current_thread().name,n)
        client.send(data.encode('utf-8'))
        res = client.recv(1024)
        print(res.decode('utf-8'))
        n += 1

for i in range(400):
    t = Thread(target=client)
    t.start()

FTP server:

from gevent import monkey;monkey.patch_all()
import socket
from gevent import spawn

server = socket.socket()
server.bind(('127.0.0.1',8080))
server.listen(5)

def talk(conn):
    while True:
        try:
            data = conn.recv(1024)
            if len(data) == 0:break
            print(data.decode('utf-8'))
            conn.send(data.upper())
        except ConnectionResetError as e:
            print(e)
            break
    conn.close()

def server1():
    while True:
        conn, addr = server.accept()
        spawn(talk, conn)  # spawn detects IO in both accepting connections and communicating,
        # achieving rapid switching + saving state; the interval is so short it looks concurrent

if __name__ == '__main__':
    g1 = spawn(server1)
    g1.join()

Third, IO models

To better understand IO models, we first need to review: synchronous, asynchronous, blocking, and non-blocking.

 Stevens compares a total of five IO models in his article:

  * blocking IO
  * nonblocking IO
  * IO multiplexing
  * signal driven IO
  * asynchronous IO
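As a hedged sketch of the second model, a socket can be put into non-blocking mode, in which accept() raises an error instead of waiting (the loopback address and variable names here are illustrative):

```python
# Illustrative sketch of non-blocking IO: accept() returns immediately instead of blocking.
import socket

server = socket.socket()
server.bind(('127.0.0.1', 0))  # port 0: let the OS pick a free port
server.listen(5)
server.setblocking(False)      # switch the socket to non-blocking mode

would_block = False
try:
    conn, addr = server.accept()  # no client is connecting, so this cannot succeed
except BlockingIOError:
    would_block = True            # instead of waiting, we are told to try again later
server.close()
print(would_block)
```

In a real non-blocking program the code would do other work and retry accept() later, which is exactly the polling cost that IO multiplexing (select/epoll) is designed to avoid.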

Synchronous vs. asynchronous: describes the way tasks are submitted:

  Synchronous: after submitting a task, wait in place for its result and do nothing else in the meantime

  Asynchronous: after submitting a task, run the next line of code immediately without waiting for the result; results are handled through the asynchronous callback mechanism
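The two submission styles can be sketched with the ThreadPoolExecutor used earlier (the task and its values are illustrative):

```python
# Illustrative sketch of synchronous vs. asynchronous submission with concurrent.futures.
from concurrent.futures import ThreadPoolExecutor
import time

def task(n):
    time.sleep(0.1)
    return n * 2

pool = ThreadPoolExecutor(2)

# synchronous style: submit, then wait in place for the result
sync_result = pool.submit(task, 1).result()  # .result() blocks until the task finishes

# asynchronous style: submit everything first, keep going, collect results later
futures = [pool.submit(task, i) for i in range(3)]
async_results = [f.result() for f in futures]
pool.shutdown()
print(sync_result, async_results)
```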

 

Blocking vs. non-blocking: describes the running state of the program:

  Blocking: the blocked (waiting) state

  Non-blocking: the ready state or the running state

Origin www.cnblogs.com/Gaimo/p/11359121.html