First, process pools and thread pools
1. What is a pool
A pool is essentially a container. It actually reduces the efficiency of the program, but it protects the computer's hardware, because software development has outpaced hardware development
2. What is the role of a pool
The role of a pool is to maximize the use of the computer while guaranteeing the safety of the hardware
3. Process pools and thread pools
Process pool: the maximum number of processes we are allowed to create
Thread pool: the maximum number of threads we are allowed to create within one process
4. Note:
Opening a process and opening a thread both consume resources, but of the two, opening a thread consumes the fewest resources and has the smallest overhead
The essence of process pools and thread pools is to maximize the use of the computer within the range it can withstand
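To make the "cap" idea concrete, here is a minimal sketch (my own illustration; the pool size and task count are arbitrary): with a thread pool of size 2, six submitted tasks never run more than two at a time.

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

active = 0  # tasks currently running
peak = 0    # highest number of tasks seen running at once
lock = threading.Lock()

def task(n):
    global active, peak
    with lock:
        active += 1
        peak = max(peak, active)
    time.sleep(0.1)  # pretend to do some work
    with lock:
        active -= 1

pool = ThreadPoolExecutor(2)  # at most 2 worker threads
for i in range(6):
    pool.submit(task, i)
pool.shutdown()  # wait for all tasks to finish

print('peak concurrent tasks:', peak)  # never exceeds the pool size
```

The pool queues the extra submissions internally, so the program stays within the limit no matter how many tasks are submitted.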
Second, creating process pools and thread pools; asynchronous callbacks
1. Usage
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor
import time

# Create the pool
pool = ThreadPoolExecutor(5)  # 5 threads; if omitted, defaults to CPU count * 5
# pool = ProcessPoolExecutor()  # process pool; with no argument, defaults to the number of CPUs

def task(n):
    print(n)
    time.sleep(1)

# Submitting tasks: asynchronous submission
# pool.submit(task, 1)  # submit a job to the thread pool
# Callback mechanism: pool.submit(task, 1).add_done_callback(callback_function)

# Collect the futures in a list to achieve asynchronous, concurrent submission.
# The pool has only 5 threads, so tasks run 5 at a time, similar to a buffer.
t_list = []
for i in range(20):
    res = pool.submit(task, i)  # res is the future holding task's return value
    # print(res.result())  # waiting here for each result would turn concurrency into serial execution
    t_list.append(res)

# Requirement: let all 20 tasks finish running, then collect the results
pool.shutdown()  # close the pool and wait for all tasks to complete before continuing
for t in t_list:
    print('>>>', t.result())
2. Thread pool usage
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor
import time
import os

pool = ThreadPoolExecutor(5)  # default thread count is CPU count * 5

def task(n):
    print(n)
    time.sleep(3)
    return n ** 2  # give task a return value

"""
Two ways to submit tasks:
    synchronous: after submitting a task, wait in place for its result and do nothing else in the meantime
    asynchronous: after submitting a task, do not wait for its result (it is obtained another way); run the next line of code directly
"""
# pool.submit(task, 1)  # submit asynchronously: pass the task function and its arguments
# print('main')         # prints immediately, confirming the submission is asynchronous

"""
Question: how do we get the result of an asynchronously submitted task?
Looking at the source, submit() returns an object (a Future):
"""
# for i in range(20):
#     res = pool.submit(task, i)
#     print(res)           # inspect the returned Future object
#     print(res.result())  # .result() gives the result of the task

"""
Analysis of the result:
    1. Calling res.result() inside the loop waits in place for each task's result,
       which turns the concurrent tasks back into serial execution
    2. If task had no return statement, result() would be None, because result()
       waits for the return value of the task function
"""

"""
Although the code above gets the return values, submission has become synchronous.
How do we start all 20 threads concurrently and then collect their results?
Solution: submit all 20 tasks first and store the Future objects in a list:
"""
t_list = []
for i in range(20):
    res = pool.submit(task, i)
    t_list.append(res)
# for t in t_list:
#     print('>>>:', t.result())  # results come back in submission order,
#                                # because the Futures were appended in order

"""
The execution output and the results above are interleaved. If we want to wait
until every task in the pool has finished before collecting results, use:
"""
pool.shutdown()  # close the pool and wait for all tasks to complete before continuing
for t in t_list:
    print('>>>:', t.result())
The code above achieves concurrency through the thread pool and retrieves the results of asynchronously submitted tasks
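As a side note not in the original text, the same module also provides `concurrent.futures.as_completed`, which yields futures in the order they finish rather than the order they were submitted; a small sketch with made-up sleep times:

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def task(n):
    time.sleep(0.05 * (5 - n))  # later submissions finish sooner
    return n ** 2

with ThreadPoolExecutor(5) as pool:  # the with-block calls shutdown() on exit
    futures = [pool.submit(task, i) for i in range(5)]
    # as_completed hands back each future as soon as it is done
    results = [f.result() for f in as_completed(futures)]

print(results)  # completion order, not submission order
```

This avoids the head-of-line blocking of iterating the list in submission order: a slow first task no longer delays reading the results of fast later ones.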
3. Process pool usage and the asynchronous callback mechanism
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor
import time
import os

pool = ProcessPoolExecutor(5)  # create a process pool
"""
Theory: the processes/threads in a pool are created once and reused from start
to finish, which saves the resources of repeatedly opening processes/threads
"""

def task(n):
    print(n, os.getpid())  # verify the theory above via os.getpid()
    time.sleep(3)
    return n ** 2

"""
How do we get the result of an asynchronously submitted task?
Asynchronous callback mechanism: as soon as the asynchronously submitted task
has a result, the callback is triggered automatically
"""
def call_back(n):
    print('Asynchronous callback result:', n.result())  # n is the Future; n.result() is the task's return value

if __name__ == '__main__':
    for i in range(20):
        # bind a callback function to each submitted task; when the task
        # produces its result, the corresponding callback is executed
        pool.submit(task, i).add_done_callback(call_back)
Third, the asynchronous callback mechanism
1. The asynchronous callback mechanism
The asynchronous callback mechanism means we define a callback function; when an asynchronously submitted task returns its result, the callback function is triggered automatically
2. The role of the asynchronous callback mechanism
It obtains the results of asynchronously submitted tasks
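A minimal runnable sketch of this mechanism (using a thread pool so it works without the `__main__` guard; the `collected` list is my own addition for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

collected = []

def task(n):
    return n ** 2

def call_back(fut):
    # fut is the finished Future; its .result() is the task's return value
    collected.append(fut.result())

pool = ThreadPoolExecutor(3)
for i in range(5):
    # the callback fires automatically as soon as each task has a result
    pool.submit(task, i).add_done_callback(call_back)
pool.shutdown()  # wait for all tasks (and their callbacks) to run

print(sorted(collected))
```

Note that callbacks run in the worker thread that finished the task (or in the submitting thread if the future is already done), not in a dedicated callback thread.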
Fourth, coroutines
Process: the unit of resource allocation
Thread: the unit of execution
Coroutine: concurrency within a single thread
Concurrency:
switching + saving state
PS: anything that appears to execute simultaneously can be called concurrent
Coroutine: a concept programmers invented entirely on their own (the operating system has no notion of it)
concurrency within a single thread
Conditions for concurrency:
multi-channel technology
multiplexing in space
multiplexing in time
switching + saving state
1. What is a coroutine
A coroutine is concurrency within a single thread
2. What are the conditions for achieving concurrency
Multi-channel technology:
multiplexing in space and multiplexing in time
where multiplexing in time means: switching + saving state
3. Therefore, we can understand a coroutine as follows
The programmer detects the IO in the program through their own code
and switches away on IO events through their own code
so that the operating system believes this thread has no IO at all
PS: deceive the operating system into believing this program never has any IO
so that the program keeps switching back and forth between the running and ready states
which improves the code's running efficiency
4. Does switching + saving state always improve efficiency?
When the task is IO-bound: it improves efficiency
When the task is CPU-bound: it lowers efficiency
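This claim can be checked with a small experiment (my own sketch; the iteration counts are arbitrary and the timings machine-dependent): interleaving two CPU-bound tasks with yield does the same work plus switching overhead, so it cannot be faster than running them one after another.

```python
import time

def work_plain(n):
    res = 0
    for i in range(n):
        res += i
    return res

def work_switching(n):
    # same computation, but hands control back after every step
    res = 0
    for i in range(n):
        res += i
        yield  # save state and switch; resumed by the loop below

start = time.time()
work_plain(1_000_000)
work_plain(1_000_000)
plain = time.time() - start

start = time.time()
g1, g2 = work_switching(1_000_000), work_switching(1_000_000)
try:
    while True:
        next(g1)  # alternate between the two tasks
        next(g2)
except StopIteration:
    pass
switched = time.time() - start

print('serial: %.3fs  switching: %.3fs' % (plain, switched))
# The work here is pure computation, so every switch is pure overhead
# with no IO wait to hide behind
```

With IO-bound tasks the picture reverses: the time one task spends waiting is used to run another, which is exactly what gevent automates below.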
5. How coroutines achieve concurrency, i.e. how they implement switching + saving state
Saving state: yield saves the previous result and the current state
Switching: switch away whenever an IO operation is encountered
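These two ingredients can be seen with plain generators before bringing in gevent (a sketch of my own; the task names and the `order` list are made up for illustration): yield pauses a function, its local state is saved automatically, and another function can run in the meantime.

```python
order = []  # records the interleaving of the two tasks

def task1():
    order.append('task1 step 1')
    yield  # pause: local state is saved, control returns to the caller
    order.append('task1 step 2')

def task2():
    order.append('task2 step 1')
    yield
    order.append('task2 step 2')

g1, g2 = task1(), task2()
next(g1)  # run task1 until its first yield
next(g2)  # switch to task2 while task1's state is kept
for g in (g1, g2):
    try:
        next(g)  # resume each task exactly where it left off
    except StopIteration:
        pass

print(order)  # the steps of the two tasks interleave
```

What yield cannot do is detect IO: it switches only where the programmer writes it. That is the gap gevent fills.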
"""
We need a tool that can detect IO: the gevent module
"""
from gevent import monkey; monkey.patch_all()  # this module is used so often that writing it on one line is recommended
from gevent import spawn
import time
"""
Note: gevent cannot automatically recognize IO such as time.sleep;
you need to add this extra configuration (monkey.patch_all) manually
"""
A code example of coroutine state switching
from gevent import monkey; monkey.patch_all()  # added manually so that time.sleep() is recognized as IO
from gevent import spawn
import time

def sing():
    print('sing')
    time.sleep(2)
    print('sing')

def jump():
    print('dance')
    time.sleep(3)
    print('dance')

def rap():
    print('rap')
    time.sleep(5)
    print('rap')

start = time.time()
g1 = spawn(sing)  # spawn() takes a function name and calls it for you
g2 = spawn(jump)
g3 = spawn(rap)
g1.join()
g2.join()
g3.join()
print(time.time() - start)
"""
The coroutine keeps switching among these three functions: whenever it hits an
IO operation it switches to run something else, so the total time is about 5s
(the longest single task) rather than the 10s a serial run would take
"""
Fifth, implementing a concurrent TCP server with coroutines
1. The server
from gevent import monkey; monkey.patch_all()
import socket
from gevent import spawn

server = socket.socket()
server.bind(('127.0.0.1', 8080))
server.listen(5)

def talk(conn):
    while True:
        try:
            data = conn.recv(1024)
            if len(data) == 0:
                break
            print(data.decode('utf-8'))
            conn.send(data.upper())
        except ConnectionResetError as e:
            print(e)
            break
    conn.close()

def server1():
    while True:
        conn, addr = server.accept()
        spawn(talk, conn)

if __name__ == '__main__':
    g1 = spawn(server1)
    g1.join()
2. The client
import socket
from threading import Thread, current_thread

def client():
    client = socket.socket()
    client.connect(('127.0.0.1', 8080))
    n = 0
    while True:
        data = '%s %s' % (current_thread().name, n)
        client.send(data.encode('utf-8'))
        res = client.recv(1024)
        print(res.decode('utf-8'))
        n += 1

for i in range(400):
    t = Thread(target=client)
    t.start()
Sixth, IO models
1. Asynchronous IO
2. Blocking IO
3. Non-blocking IO
4. IO multiplexing
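To make "non-blocking IO" and "IO multiplexing" concrete (my own illustration, not part of the original notes): in non-blocking mode, a recv with no data raises BlockingIOError instead of waiting, and select-style multiplexing lets us wait until a socket is actually readable.

```python
import select
import socket

a, b = socket.socketpair()  # a connected pair of sockets, handy for local demos
a.setblocking(False)

# Non-blocking IO: no data has arrived yet, so recv raises instead of blocking
try:
    a.recv(1024)
    got_error = False
except BlockingIOError:
    got_error = True
print('non-blocking recv raised:', got_error)

# IO multiplexing: select() reports when a becomes readable
b.send(b'ping')
readable, _, _ = select.select([a], [], [], 1.0)  # wait up to 1 second
print('a is readable:', a in readable)
msg = a.recv(1024)  # now there is data, so the non-blocking recv succeeds
print(msg)

a.close()
b.close()
```

Blocking IO is the default socket behavior seen in the TCP server above; asynchronous IO goes one step further by having the kernel complete the operation and notify the program afterwards.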