Coroutines and Events

Introduction:

   Because of the GIL, CPython cannot take advantage of multi-core CPUs; threads can only execute concurrently on a single core, which is obviously not efficient.

  For compute-intensive tasks there is nothing we can do to improve efficiency, short of removing the lock so that multiple CPU cores can execute in parallel;

  For IO-intensive tasks, once a thread hits an IO operation the CPU is handed to another thread, and the switching is unpredictable: the application cannot control it, which reduces efficiency. But if a thread could detect IO by itself and make the IO operations non-blocking, switching to another task on its own, we would get concurrency within a single thread.

 

1. Single-threaded concurrency = task switching + saving state

  1. Requirement: to achieve single-threaded concurrency, we need a mechanism that can switch between two tasks and save their state.

  Python's generators have exactly this characteristic, so we can use generators to achieve concurrent execution: each call to next returns control into the generator's code, which means we can switch between tasks, and execution resumes from where it last stopped, which means the generator automatically saves its execution state.

def task1():
    while True:
        yield
        print("task1 run")

def task2():
    g = task1()
    while True:
        next(g)  # switch to task1, which resumes from where it last yielded
        print("task2 run")

task2()

import time

def task1():
    a = 0
    for i in range(10000000):
        a += i
        yield

def task2():
    g = task1()
    b = 0
    for i in range(10000000):
        b += 1
        next(g)

s = time.time()
task2()
print("concurrent execution time", time.time() - s)

# For two pure computation tasks, serial execution is actually more efficient
# than this "concurrency", because switching and saving state has a cost
def task1():
    a = 0
    for i in range(10000000):
        a += i

def task2():
    b = 0
    for i in range(10000000):
        b += 1

s = time.time()
task1()
task2()
print("serial execution time", time.time() - s)

  2. The greenlet module

    Since code that switches tasks with yield is very confusing, and with many tasks becomes unmanageable, people wrapped yield up in a package, and the result is greenlet.

import greenlet

def task1(name):
    print("%s task1 run1" % name)
    g2.switch(name)  # switch to task2
    print("task1 run2")
    g2.switch()  # switch to task2

def task2(name):
    print("%s task2 run1" % name)
    g1.switch()  # switch to task1
    print("task2 run2")

g1 = greenlet.greenlet(task1)
g2 = greenlet.greenlet(task2)
g1.switch("jerry")  # pass a parameter to the task

However greenlet, just like yield, cannot detect IO; it will still block when it hits an IO operation. So we need something that can detect IO and still give us single-threaded concurrency, and that is gevent with its monkey patch (gevent.monkey).

# gevent itself cannot detect IO; it needs the monkey patch to gain that ability
# Note: the patch must be applied at the very top, before the modules it patches are imported
from gevent import monkey
monkey.patch_all()

from threading import current_thread
import gevent, time


def task1():
    print(current_thread(), 1)
    print("task1 run")
    # gevent.sleep(3)
    time.sleep(3)
    print("task1 over")

def task2():
    print(current_thread(), 2)
    print("task2 run")
    print("task2 over")

# spawn is used to create a coroutine task
g1 = gevent.spawn(task1)
g2 = gevent.spawn(task2)

# For the tasks to run, the main thread must be kept alive, because all the
# coroutine tasks execute in the main thread; join must be called to wait for them
# g1.join()
# g2.join()
# In theory it is enough to wait for the longest-running task, but since we can't
# tell which one that is, join them all
gevent.joinall([g1, g2])
print("over")

2. Thread queues

  Queue: used in exactly the same way as the multiprocessing JoinableQueue, except there is no IPC

   LifoQueue: last in, first out (LIFO), i.e. it simulates a stack

   PriorityQueue: a queue with priorities; the smaller the stored value, the higher its priority. Stored objects must be comparable (support the comparison operators); objects that cannot be compared cannot be stored
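As a quick sketch of that ordering rule (values chosen arbitrarily here): smaller items come out first, and (priority, data) tuples also work, because tuples compare element by element:

```python
from queue import PriorityQueue

pq = PriorityQueue()
pq.put(3)
pq.put(1)
pq.put(2)

# smaller values have higher priority and are retrieved first
print(pq.get())  # 1
print(pq.get())  # 2
print(pq.get())  # 3

# (priority, data) tuples also work, since tuples compare element by element
pq.put((2, "write log"))
pq.put((1, "handle request"))
print(pq.get())  # (1, 'handle request')
```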

from queue import Queue,LifoQueue,PriorityQueue
# =====================================================================
q = Queue()
#
# q.put("123")
# q.put("456")
#
# print(q.get())
# print(q.get())
#
# # print(q.get(block=True,timeout=3))
# q.task_done()
# q.task_done()
# q.join()
# print("over")
# ==========================================================================
lq = LifoQueue()
#
# lq.put("123")
# lq.put("456")
#
# print(lq.get())
# print(lq.get())
# ====================================================================
class A(object):
    def __init__(self,age):
        self.age = age

    # def __lt__(self, other):
    #     return self.age < other.age
    #
    # def __gt__(self, other):
    #     return self.age > other.age

    def __eq__(self, other):
        return self.age == other.age
a1 = A(50)
a2 = A(50)



print(a1 == a2)

# print(a1 is a1)

# pq = PriorityQueue()
# pq.put("a")
# pq.put("A")
# pq.put("C")
# print(pq.get())
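Following the class A idea above, here is a hypothetical sketch (the Task class is made up for illustration) showing that once a class defines __lt__, its instances become comparable and can be stored in a PriorityQueue:

```python
from queue import PriorityQueue

class Task:
    def __init__(self, priority, name):
        self.priority = priority
        self.name = name

    # PriorityQueue orders items with <, so defining __lt__ is enough
    def __lt__(self, other):
        return self.priority < other.priority

pq = PriorityQueue()
pq.put(Task(5, "backup"))
pq.put(Task(1, "serve request"))

print(pq.get().name)  # serve request  (lowest priority value comes out first)
print(pq.get().name)  # backup
```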

3. Coroutines: an overview

  Definition: concurrency within a single thread. Coroutines are also called micro-threads or lightweight threads; they are user-level, and the scheduling between them is controlled by the user program itself

  Note: 1. Python's threads are kernel-level, i.e. scheduled and controlled by the operating system (for example, if a single thread runs too long or hits IO, it is forced to give up the CPU so that another thread can run)

    2. When a coroutine inside a single thread encounters IO, the switch is controlled at the application level (not by the operating system), in order to improve efficiency (note: switching that is not caused by IO operations does nothing for efficiency!!!)

Comparison: with threads, the operating system controls the switching; with coroutines, the user controls the switching within a single thread

Advantages:

1. The switching overhead of coroutines is smaller; the switch happens at program level and the operating system is completely unaware of it, so coroutines are more lightweight

2. Concurrency can be achieved within a single thread, making maximum use of the CPU

Shortcomings:

1. A coroutine is essentially single-threaded and cannot use multiple cores; however, a program can open multiple processes, multiple threads within each process, and coroutines within each thread, to maximize efficiency
2. A coroutine is essentially a single thread, so once a coroutine blocks, the entire thread is blocked

import gevent
from gevent import monkey  # import the monkey patch
monkey.patch_all()  # apply the patch
import time

def task1():
    print("task1 run")
    # gevent.sleep(3)
    time.sleep(3)
    print("task1 over")

def task2():
    print("task2 run")
    # gevent.sleep(1)
    time.sleep(1)
    print("task2 over")

g1 = gevent.spawn(task1)
g2 = gevent.spawn(task2)
# gevent.joinall([g1, g2])
g1.join()
g2.join()
# Without the join calls, the code above would print nothing:
# coroutine tasks are submitted asynchronously, so the main thread keeps running;
# once the last line finishes, the main thread is over and the coroutine tasks
# never get a chance to run. join makes the main thread wait until the tasks finish.
# Whenever coroutines are used, the main thread must be kept alive; if the main
# thread naturally outlives the tasks, join is not needed.

Things to note:

1. If the main thread finishes, the coroutine tasks end immediately.

2. The principle of the monkey patch is to replace the original blocking methods with modified non-blocking ones; in other words, it swaps things out behind the scenes, so that switching on IO happens automatically.

The patch must be applied before the corresponding functions are used; to avoid forgetting, it is recommended to put it at the very top of the file.

Monkey patch example:

# myjson.py
def dump():
    print("this dump function is a replacement")

def load():
    print("this load function is a replacement")

# test.py
import myjson
import json

# the patch function
def monkey_patch_json():
    json.dump = myjson.dump
    json.load = myjson.load

# apply the patch
monkey_patch_json()

# test it
json.dump()
json.load()
# output:
# this dump function is a replacement
# this load function is a replacement
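The two-file example above can also be condensed into one runnable script. A minimal sketch (the "patched:" prefix is made up for illustration) that patches json.dumps and then restores it, since keeping a reference to the original lets the patch be undone:

```python
import json

_original_dumps = json.dumps  # keep a reference so the patch can be undone

def patched_dumps(obj, **kwargs):
    # delegate to the real implementation, but tag the result
    return "patched:" + _original_dumps(obj, **kwargs)

json.dumps = patched_dumps  # apply the patch
print(json.dumps({"a": 1}))  # patched:{"a": 1}

json.dumps = _original_dumps  # undo the patch
print(json.dumps({"a": 1}))  # {"a": 1}
```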

4. Event

from threading import Thread, Event
import time


# Without Event, we would have to poll a flag:
# is_running = False
#
# def boot_server():
#     global is_running
#     print("starting the server ......")
#     time.sleep(3)
#     print("server started successfully!")
#     is_running = True
#
#
# def connect_server():
#     while True:
#         if is_running:
#             print("connected to the server successfully!")
#             break
#         else:
#             time.sleep(0.1)
#             print("error: the server has not started!")
#
#
# t1 = Thread(target=boot_server)
# t1.start()
#
# t2 = Thread(target=connect_server)
# t2.start()






boot_event = Event()

# boot_event.clear()   resets the event to False
# boot_event.is_set()  returns the current state of the event
# boot_event.wait()    waits for the event, i.e. waits for it to be set to True
# boot_event.set()     sets the event to True


 
def boot_server():
    print("starting the server ......")
    time.sleep(3)
    print("server started successfully!")
    boot_event.set()  # mark that the event has occurred


def connect_server():
    boot_event.wait()  # wait for the event to occur
    print("connected to the server successfully!")

t1 = Thread(target=boot_server)
t1.start()

t2 = Thread(target=connect_server)
t2.start()
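One detail the example above does not show: wait() also accepts a timeout and returns True if the event was set, or False if the timeout expired, so a client does not have to wait forever for a server that never boots. A small sketch (the delays are chosen arbitrarily):

```python
from threading import Thread, Event
import time

ready = Event()

def boot():
    time.sleep(0.5)
    ready.set()

Thread(target=boot).start()

print(ready.wait(timeout=0.1))  # False: the event was not set within 0.1s
print(ready.wait(timeout=5))    # True: the event was set well within 5s
```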

 


Origin www.cnblogs.com/wyf20190411-/p/10986570.html