Thread queues, events, and coroutines

Thread-safe queues

The queue module provides several thread-safe queue classes

1. queue.Queue (first-in, first-out queue)

from queue import Queue

q = Queue(maxsize=3)  # instantiate a queue object
    # maxsize sets the maximum number of items the queue can hold
q.put("first")
q.put("second")
q.put("third")  # if the queue is full, put() blocks until a slot frees up again

print(q.get())  # first
print(q.get())  # second
print(q.get())  # third -- if the queue is empty, get() blocks until a value can be taken out
# the results show that a queue.Queue instance is first in, first out
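Beyond the blocking defaults shown above, put() and get() also accept block and timeout arguments (standard behavior of the queue module); a small sketch of the non-blocking variants:

```python
from queue import Queue, Full, Empty

q = Queue(maxsize=1)
q.put("only")

try:
    q.put("extra", block=False)  # the queue is full: raises Full instead of blocking
except Full:
    print("queue is full")

print(q.get())  # only

try:
    q.get(timeout=0.1)  # the queue is empty: waits 0.1 s, then raises Empty
except Empty:
    print("queue is empty")
```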

2. LifoQueue (last-in, first-out queue; LIFO is short for last in, first out)

from queue import LifoQueue

q = LifoQueue()

q.put("first")
q.put("second")
q.put("third")

print(q.get())  # third
print(q.get())  # second
print(q.get())  # first
# the results show that a queue.LifoQueue instance is last in, first out

3. PriorityQueue (a queue whose items come out in priority order)

  - 1. If the queue holds only one element, that element need not be comparable, since there is nothing to compare it against

  - 2. As soon as the queue holds two or more elements, the elements must support size comparison with each other

from queue import PriorityQueue

q = PriorityQueue()

q.put((3, "a"))  # note: once the queue holds more than one element, the elements must support comparison with each other
q.put((2, "a"))

print(q.get())  # (2, 'a')
print(q.get())  # (3, 'a') -- items come out in order, smallest first


# in fact we can define our own class and add comparison methods to it, so that its instances can be compared
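Following the comment above, here is a minimal sketch of such a custom class: giving it a __lt__ method is enough for PriorityQueue to order its instances (the Task class and its fields are illustrative, not from the original post):

```python
from queue import PriorityQueue

class Task:
    def __init__(self, priority, name):
        self.priority = priority
        self.name = name

    def __lt__(self, other):
        # PriorityQueue uses < to order items, so defining __lt__ is enough
        return self.priority < other.priority

q = PriorityQueue()
q.put(Task(3, "low"))
q.put(Task(1, "high"))
q.put(Task(2, "medium"))

print(q.get().name)  # high
print(q.get().name)  # medium
print(q.get().name)  # low
```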

Events (threading.Event)

1. What is an event

  An event is a notification signal indicating that something has happened; it is used for coordinating work between threads

  Threads run independently and their states are unpredictable, so data in one thread is not synchronized with another,

  and when one thread needs another thread's state to decide its own next step, the data between the threads must be kept in sync

2. How Event works

  An Event object contains a signal flag set by a thread; it lets threads wait for something to happen before continuing execution

  Initially, the Event object's signal flag is set to False

  If a thread waits on an Event object whose flag is False, it stays blocked until the flag becomes True, and only then continues

3. Using Event

Available methods:

from threading import Event, Thread

e = Event()

e.set()       # set the event's flag to True; all blocked threads become ready and wait for the OS to schedule them
e.is_set()    # return the current state of the event's flag
e.wait(timeout=2)  # if the event's flag is False, block; if True, do not block
    # a timeout can be set: once it expires, execution continues downward even if the flag is still False
e.clear()     # reset the event's flag to False

Use Cases

# Requirement: run two tasks concurrently, but task2 must wait for task1 to finish before it can complete
import time
from threading import Event, Thread

e = Event()

def task1():
    print("task1 run")
    time.sleep(3)
    print("task1 over")
    e.set()  # task1 is done: set the event's flag to True so the blocked task2 becomes ready and waits for scheduling

def task2():
    print("task2 run")
    time.sleep(1)
    e.wait()   # block here until the event's flag is set to True
    print("task2 over")

Thread(target=task1).start()  # shorthand: create the thread object and start it in one step
Thread(target=task2).start()
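A detail worth noting (documented behavior of threading.Event, not covered above): wait() returns the flag's value, so a timed wait tells you whether the event actually fired or the timeout expired:

```python
from threading import Event

e = Event()

# the flag is still False, so this wait times out and returns False
fired = e.wait(timeout=0.1)
print(fired)  # False

e.set()
# the flag is now True, so wait() returns True immediately
print(e.wait(timeout=0.1))  # True
```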

Coroutines *****

Coroutines achieve concurrency within a single thread

Why concurrency within a single thread?

  1. It lets us control the CPU schedule ourselves: once the CPU comes to us, we can essentially use up the whole time slice, improving the program's efficiency

  2. When the concurrency we need is higher than what threads and the hardware can provide (no more threads can be opened), coroutines offer a way to achieve

    concurrency without taking up many resources

Characteristics of single-threaded concurrency

  1. For compute-intensive tasks, single-threaded concurrency does not improve performance; it actually reduces efficiency

  2. For IO operations, it must be able to detect IO and automatically switch to another task, which is what improves efficiency

Ways to implement single-threaded concurrency

1. yield: manual switching + state saving (for reference)

  Concurrency: multiple tasks appear to run simultaneously; in essence it is saving state + switching

  In a generator, yield saves the function's current running state

  So we can implement simple concurrency within one thread using yield

  The yield approach cannot detect IO operations

def task1():
    print("task1 first")
    yield
    print("task1 second")
    yield

def task2():
    g = task1()    # create the generator once so its saved state persists between switches
    print("task2 first")
    g.__next__()   # run task1 up to its first yield, then execution comes back down here
    print("task2 second")
    g.__next__()   # note: next() must be able to reach a yield in the generator, otherwise StopIteration is raised

task2()  # the tasks interleave: single-threaded concurrency

2. greenlet: packaged switching (for reference)

  Although yield alone can achieve concurrency, the resulting code structure is too messy (full of yield and next);

  greenlet still switches manually and cannot detect IO operations

import greenlet

def task1():
    print("task1 first")
    g2.switch()
    print("task1 second")
    g2.switch()

def task2():
    print("task2 first")
    g1.switch()
    print("task2 second")

g1 = greenlet.greenlet(task1)
g2 = greenlet.greenlet(task2)

g1.switch()  # switch to g1 and start running it; every subsequent switch must still be done manually
print("main over")

3. gevent coroutines

  - 1. What is a coroutine

     gevent implements coroutines

     A coroutine is a lightweight thread, also called a micro-thread

     It is application-level task scheduling

     It can switch between tasks automatically, but it cannot detect IO by itself; it must be used together with a monkey patch (the monkey module)

  - 2. Coroutines vs threads

    Coroutine scheduling is application-level; thread scheduling is operating-system-level

    Application-level scheduling

      When an IO operation is detected, we can immediately switch to another of our own tasks; given enough tasks to run,

      the CPU time slice can be fully used

    OS-level scheduling

      On IO, the operating system takes the CPU away; which thread gets it next depends on the OS's internal algorithm, which we cannot control

    So for a single process, coroutines clearly use the CPU more efficiently than multithreading

  - 3. How this improves efficiency (usage scenarios)

    In CPython, because of the GIL, multithreading cannot run in parallel, losing the advantage of multiple cores

    Since multiple threads can only run concurrently anyway, we can use coroutines to achieve that concurrency and improve the program's efficiency

    Pros: does not take up extra useless resources

    Cons: for compute-intensive tasks, using coroutines actually reduces efficiency

  - 4. Usage with the monkey patch

import time

from gevent import monkey
# import monkey first
monkey.patch_all()  # rewrites blocking calls (e.g. time.sleep, socket operations) into non-blocking versions

import gevent  # then import gevent

def task1():
    print("task1 first")
    time.sleep(1)
    print("task1 second")

def task2():
    print("task2 first")
    print("task2 second")

g1 = gevent.spawn(task1)  # create a coroutine
g2 = gevent.spawn(task2)

gevent.joinall([g1, g2])  # note: the main thread must wait for these tasks to finish; otherwise it ends first and the tasks never run
print("main over")  # the output shows the tasks run concurrently and switch automatically on IO
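gevent is a third-party package; as a point of comparison, the standard library's asyncio offers the same single-threaded, switch-on-IO concurrency with explicit await points. A rough equivalent of the gevent example above (a sketch, not from the original post):

```python
import asyncio

async def task1():
    print("task1 first")
    await asyncio.sleep(1)  # awaiting IO here lets the event loop switch to another task
    print("task1 second")

async def task2():
    print("task2 first")
    print("task2 second")

async def main():
    # run both tasks concurrently in a single thread, analogous to gevent.joinall
    await asyncio.gather(task1(), task2())

asyncio.run(main())
print("main over")
```

The trade-off is that asyncio requires the cooperative code to be written as async functions, while gevent's monkey patch lets ordinary blocking code participate unchanged.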


Origin www.cnblogs.com/hesujian/p/10994684.html