python38: thread queues, Event, coroutines, HTTP

Review
1. GIL lock
2. How to avoid the efficiency cost the GIL lock brings
3. The difference between the GIL and a custom lock
4. Process pools and thread pools
5. Synchronous vs. asynchronous
6. Asynchronous callbacks
 
1. GIL lock
The Global Interpreter Lock is a mutex that locks the interpreter.
Why it exists: CPython's memory management is not thread-safe; the GIL protects the interpreter's data from concurrent modification.
The problem it brings: multiple threads cannot execute in parallel, which reduces efficiency.
They can, of course, still run concurrently.
​ 
2. How to avoid the efficiency cost the GIL lock brings
When does it affect efficiency?
For computationally intensive tasks, opening multiple threads cannot improve efficiency; they are no faster than a single thread.
In CPython, computationally intensive tasks should be handled with multiple processes (only multiple cores bring a real efficiency gain).
Examples: image processing, speech recognition, semantic recognition (computationally heavy, e.g. a 1920*1080 image).
For IO-intensive tasks, the GIL has essentially no impact.
Use multiple threads for tasks that spend most of their time on IO operations; opening multiple processes for them would waste resources.
Examples: chatting, serving web pages, etc.
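As a minimal sketch of the multi-process approach for a compute-intensive task (the `count` function and the pool size here are invented for the demo, not taken from the notes):

```python
from multiprocessing import Pool

def count(n):
    # CPU-bound work: pure computation, no IO
    total = 0
    for i in range(n):
        total += i
    return total

if __name__ == "__main__":
    # two worker processes can really use two cores,
    # which two threads in CPython could not
    with Pool(2) as pool:
        results = pool.map(count, [100000, 100000])
    print(results)  # [4999950000, 4999950000]
```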
3. The difference between the GIL and a custom lock
Both are mutex locks, but they lock different resources.
The GIL locks interpreter resources, such as reference counts and the running state.
A custom lock locks your own shared resources.
​ 
4. Process pools and thread pools
A pool is a container,
a container for storing threads/processes.
It also manages them for us: creating and destroying threads, and distributing tasks.
1. Create a pool.
2. submit: submit jobs.
3. shutdown: wait for all tasks to complete, then destroy the pool.
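The three steps above can be sketched with the standard `concurrent.futures` thread pool (the function name and worker count are arbitrary choices for the demo):

```python
from concurrent.futures import ThreadPoolExecutor

def double(n):
    return n * 2

# 1. create the pool (here: at most 3 worker threads)
pool = ThreadPoolExecutor(max_workers=3)

# 2. submit jobs; submit() returns a Future immediately
futures = [pool.submit(double, i) for i in range(5)]

# 3. wait for all tasks to finish, then destroy the pool
pool.shutdown(wait=True)

results = [f.result() for f in futures]
print(results)  # [0, 2, 4, 6, 8]
```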
 
5. Synchronous vs. asynchronous
Synchronous means that after initiating a task, the initiator must wait in place until the task ends in order to get the result.
Asynchronous means the initiator does not need to wait after launching the task and can continue with other operations.

The problem with asynchronous tasks: the initiator does not know when the task ends, so if it needs the result, it does not know when to fetch it.
Solution:
bind a callback function to the task; it executes automatically when the task finishes.
1. The result can be processed promptly.
2. The initiator does not need to wait for the task to end, which improves efficiency.
​ 
The difference between thread pool and process pool asynchronous callbacks:
a thread pool's callback function is executed in a worker thread;
a process pool's callback function is executed in the parent process. Processes cannot communicate directly, so by the time the callback runs, IPC has already sent the result back to the parent process.
#### Asynchronous tasks usually bind a callback function

#### Asynchronous callback principle:
when launching the task, pass in a function as the callback; when the task completes, that function is called with the task's execution result as its argument.
```python
from threading import Thread
import time
# res = None

def call_back(res):
    print("got task result: %s" % res)

def parser(res):
    print("got task result: %s" % res)

def task(callback):
    # global res
    print("run")
    time.sleep(1)
    # return 100
    res = 100  # represents the task's result
    callback(res)  # call the callback, passing it the task's result
 
t = Thread(target=task,args=(parser,))
t.start()
# t.join()
# print(res)
print("over")
```
 
If you implemented callbacks for a process pool yourself, you would also have to handle data communication, and the hardest part: making the parent process trigger the callback immediately when a result arrives, without the parent blocking while waiting for it.
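The built-in pools already solve this: a `concurrent.futures` Future exposes `add_done_callback`, which fires the callback when the task finishes. A small sketch using a thread pool (so the callback runs in a worker thread; with a process pool it would instead run back in the parent process after IPC returns the result). The names `work`, `on_done`, and the `info` dict are invented for the demo:

```python
from concurrent.futures import ThreadPoolExecutor
import threading
import time

info = {}

def work():
    info["worker"] = threading.current_thread().name
    time.sleep(0.1)
    return 100  # the task's result

def on_done(future):
    # with a thread pool this runs in the worker thread;
    # with a process pool it would run back in the parent process
    info["callback"] = threading.current_thread().name
    info["result"] = future.result()

pool = ThreadPoolExecutor(1)
f = pool.submit(work)
f.add_done_callback(on_done)  # bind the callback; fires when the task finishes
pool.shutdown(wait=True)      # wait for the task (and its callback)
print(info["result"])         # 100
```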
 
 
 
 
Today's content
1. Thread queues
2. Event
3. Coroutines
4. HTTP
 
1. Thread queues
# Queue: used exactly like the process JoinableQueue, but without the IPC
# LifoQueue: last in, first out; simulates a stack
# lq = LifoQueue()
# everything else is the same as Queue except the order
# PriorityQueue: a queue with priorities
# it can store anything comparable; the smaller the value, the higher the priority;
# custom objects can be stored only if they support the comparison operators
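A small runnable sketch of the two queue types described above:

```python
from queue import LifoQueue, PriorityQueue

# LifoQueue: last in, first out, like a stack
lq = LifoQueue()
for x in [1, 2, 3]:
    lq.put(x)
stack_order = [lq.get() for _ in range(3)]
print(stack_order)  # [3, 2, 1]

# PriorityQueue: the smaller the value, the earlier it comes out
pq = PriorityQueue()
for item in [(3, "low"), (1, "high"), (2, "mid")]:
    pq.put(item)
prio_order = [pq.get() for _ in range(3)]
print(prio_order)  # [(1, 'high'), (2, 'mid'), (3, 'low')]
```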
2. Event
An Event indicates that something has happened; we can wait for an event and then take some action.
Essentially, events are used for communication between threads, for state synchronization.

Case:
there are two threads: one starts the server, the other is a client that connects to the server.
The client can connect successfully only if the server has started successfully!
```python
from threading import Thread, Event
import time

boot_event = Event()
# boot_event.clear()   resets the event's state to False
# boot_event.is_set()  returns the event's state
# boot_event.wait()    waits for the event, i.e. for its state to become True
# boot_event.set()     sets the event's state to True

def boot_server():
    print("starting server......")
    time.sleep(3)
    print("server started successfully!")
    boot_event.set()  # mark that the event has occurred

def connect_server():
    boot_event.wait()  # wait for the event
    print("connected to the server successfully!")
t1 = Thread(target=boot_server)
t1.start()
t2 = Thread(target=connect_server)
t2.start()
```
 
# A case of using events: a crawler
100 pages must be opened in total; memory cannot hold them all at once, so open at most 5 windows at a time.
Another thread monitors the number of open windows; if it is below the limit, more windows keep being opened.

The task thread that opens windows executes:
    for i in range(100):
        event.wait()
        open a window
Another thread monitors the number of open windows; when the count drops below 5, new windows may be opened:
    while True:
        if the number of open windows >= 5:
            event.clear()
        else:
            event.set()
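A simplified, deterministic sketch of the window-limit idea, assuming the counter update and the monitoring check are folded into the open/close functions instead of running in a separate monitoring thread (all names here are made up for the demo):

```python
import threading

MAX_WINDOWS = 5
window_count = 0
lock = threading.Lock()
can_open = threading.Event()
can_open.set()  # below the limit at the start, so opening is allowed

def open_window():
    global window_count
    can_open.wait()            # block while the limit is reached
    with lock:
        window_count += 1
        if window_count >= MAX_WINDOWS:
            can_open.clear()   # limit reached: pause the openers

def close_window():
    global window_count
    with lock:
        window_count -= 1
        if window_count < MAX_WINDOWS:
            can_open.set()     # below the limit again: resume opening

for _ in range(5):
    open_window()
print(can_open.is_set())  # False: 5 windows open, limit reached
close_window()
print(can_open.is_set())  # True: one closed, opening allowed again
```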
 
 
3. Coroutines *****
The purpose of coroutines is to implement concurrency within a single thread.

Concurrency:
multiple tasks appear to run simultaneously; in essence it is switching + saving state.

A generator's yield can save the current running state of a function.
 
```python
# using a generator to run multiple tasks concurrently in a single thread
import time

# def func1():
#     a = 1
#     for i in range(10000000):
#         a += 1
#         # print("a run")
#         yield
#
# def func2():
#     res = func1()
#     a = 1
#     for i in range(10000000):
#         a += 1
#         # print("b run")
#         next(res)
#
# st = time.time()
# func2()
# print(time.time() - st)
 
def func1():
    a = 1
    for i in range(10000000):
        a += 1

def func2():
    a = 1
    for i in range(10000000):
        a += 1

st = time.time()
func1()
func2()
print(time.time() - st)
```
After testing: for compute-intensive tasks, single-threaded concurrency does not improve performance.
For IO-bound tasks it must be able to detect IO operations and automatically switch to other tasks; that is where the efficiency comes from.
# greenlet
However, using yield directly for concurrency makes the code structure too chaotic, so it was encapsulated as greenlet.
greenlet switches manually, but cannot detect IO.
```python
import greenlet
import time

def task1():
    print("task1 run")
    g2.switch()
    print("task1 over")
    g2.switch()

def task2():
    print("task2 run")
    g1.switch()
    time.sleep(2)
    print("task2 over")
g1 = greenlet.greenlet(task1)
g2 = greenlet.greenlet(task2)
g1.switch()
# g2.switch()
print("main over")
```
 
# gevent
 
# What is a coroutine
gevent implements coroutines.
"Coroutine" translates as lightweight thread, also called micro-thread;
it is application-level task scheduling.
# Coroutines compared with threads
Application-level scheduling: when an IO operation is detected, it can immediately switch to another of its own tasks to execute;
if there are enough tasks, the CPU time slice can be fully used.
OS-level scheduling: on an IO operation, the OS takes the CPU away, and which process it goes to next is unknown.

# How to improve efficiency
How, in the end, do we improve efficiency in CPython?
Because of the GIL in CPython, multiple threads cannot execute in parallel and the multi-core advantage is lost;
even if you open multiple threads they can only run concurrently, and that concurrency can be achieved with coroutines instead.
Advantage: does not take up extra useless resources.
Disadvantage: using coroutines for computing tasks actually lowers efficiency.
 
 
## The ultimate trick for IO-intensive tasks
Open as many processes as the system can withstand,
and open multiple coroutine tasks under each process.

# Usage scenarios:
IO-intensive tasks in CPython.
For scenarios that genuinely require multi-core parallelism, do not use coroutines.
 
# Monkey patching
Essentially, it quietly replaces the original blocking code with non-blocking code,
e.g. Queue.get(block=False).
When the call fails to get a value it throws an exception; simply catch the exception and switch to another task when it occurs.
This achieves switching tasks when IO is encountered.
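The non-blocking idea can be illustrated with the standard library queue; a real monkey patch (e.g. gevent's) is far more involved, and the "switch" here is just a flag standing in for a scheduler switching to another task:

```python
import queue

q = queue.Queue()  # empty: a blocking get() would wait forever
switched = False
try:
    q.get(block=False)  # non-blocking: raises instead of waiting
except queue.Empty:
    # this is where a coroutine scheduler would switch to another task
    switched = True
print(switched)  # True
```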
 
# The premise: the problem we need to solve is IO-intensive tasks
What is a coroutine:
a coroutine is a lightweight thread, also called a micro-thread; it is a way for the application to schedule its own tasks.
The essence of a coroutine: single-threaded concurrency.
Why coroutines:
Scenario:
in CPython, multithreading cannot run in parallel; with IO-intensive tasks and a large amount of concurrency, you cannot open enough threads,
so subsequent tasks cannot be handled, even while the earlier ones are merely waiting on IO.
Coroutines: concurrency within a single thread; when one task blocks on IO, switch to another task.
This makes full use of the CPU time slice; with enough tasks, the CPU can stay busy until the time slice expires.

The final solution:
multiple processes + a single thread per process + coroutines
 
If that is still not enough:
1. Clustering: all servers do the same job.
2. Distributed: each server does one part of the job.
 
 
 
 
gevent is an encapsulation of greenlet.
It can switch tasks for us automatically, and after monkey patching it can also detect IO operations.
This way it automatically switches to another task when it encounters IO, increasing efficiency.
 
 
4. HTTP
 
Resumable downloads:
if a task in progress did not complete for some reason,
the transfer can continue from the last break position after the program restarts.
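A minimal local sketch of the resume logic, assuming the number of bytes already on disk is the resume position (paths and contents are invented for the demo; a real client would request the remaining bytes from the server starting at `offset`):

```python
import os
import tempfile

# hypothetical partial-download file, just for the demo
path = os.path.join(tempfile.mkdtemp(), "movie.mp4")

# simulate an interrupted transfer: only the first bytes arrived
with open(path, "wb") as f:
    f.write(b"hello ")

# after a restart, the size of what is on disk is the resume position
offset = os.path.getsize(path)  # 6

# continue from the break point in append mode
with open(path, "ab") as f:
    f.write(b"world")

with open(path, "rb") as f:
    data = f.read()
print(data)  # b'hello world'
```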

(progress sketch: a file transfer between client and server, about 50% complete)


1. The client enters the name of the file it needs to download,
then determines whether the file has already been downloaded
and whether the transfer is complete,
i.e. whether a record of the task exists.

Four possible states:
1. No record locally: a new task.
   The file does not exist and no task record exists.
2. The task is unfinished and an incomplete file exists locally.
   The file exists, but the task record marks it as unfinished.
3. The task is completed and the complete data exists locally.
   The file exists and the task record marks it as completed.
4. The task is completed but the data has been deleted.
   The file does not exist, but the task record marks it as completed.

When is the task record modified?
A newly added task is recorded as unfinished;
when the download completes, the record is changed to completed.
What data is needed?
The task name,
the file name,
and the status:
    unfinished
    completed
e.g. { "name": False }

The dictionary storing the tasks requires permanent storage.
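One simple way to make the task dictionary permanent is a JSON file, reloaded at startup and rewritten on every change (the record path and task name here are hypothetical, chosen for the demo):

```python
import json
import os
import tempfile

# hypothetical record file; real code would use a fixed path next to the downloads
record_file = os.path.join(tempfile.mkdtemp(), "tasks.json")

def load_tasks():
    if os.path.exists(record_file):
        with open(record_file) as f:
            return json.load(f)
    return {}  # no record yet: no tasks

def save_tasks(tasks):
    with open(record_file, "w") as f:
        json.dump(tasks, f)

tasks = load_tasks()
tasks["movie.mp4"] = False   # new task starts as unfinished
save_tasks(tasks)

tasks = load_tasks()
tasks["movie.mp4"] = True    # mark completed once the download ends
save_tasks(tasks)
print(load_tasks())  # {'movie.mp4': True}
```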



Usage:
the test files are in the serverFiles directory on the server.
 
Origin www.cnblogs.com/llx--20190411/p/10986609.html