Thread queue, event and event coroutine

Thread queue

Multithreading can serialize access to shared resources in two ways:

1. Mutex (lock)

2. Queue
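The two approaches can be sketched side by side; a minimal illustration (the counter and function names here are made up for this example):

```python
import threading
import queue

# Way 1: a mutex serializes direct access to shared state.
counter = 0
lock = threading.Lock()

def add_with_lock():
    global counter
    for _ in range(10000):
        with lock:       # only one thread mutates counter at a time
            counter += 1

# Way 2: a queue serializes access by funneling items through it;
# Queue.put/get are internally locked.
q = queue.Queue()

def add_with_queue():
    for _ in range(10000):
        q.put(1)

threads = [threading.Thread(target=add_with_lock) for _ in range(2)]
threads += [threading.Thread(target=add_with_queue) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

total = 0
while not q.empty():
    total += q.get()

print(counter)  # 20000
print(total)    # 20000
```

Both versions end up with the same count; the difference is whether the locking is explicit (the mutex) or hidden inside the data structure (the queue).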

Thread queues come in the following three kinds:

1. Queue (FIFO, first in first out)

import queue

q = queue.Queue(3)
q.put(1)
q.put(2)
q.put(3)
# q.put(4, block=False)  # block defaults to True: putting into a full queue blocks; with block=False a full queue raises queue.Full immediately
print(q.get())
print(q.get())
print(q.get())
# print(q.get(timeout=2))  # blocks for up to 2 s; raises queue.Empty if no item arrives
# Output:
1
2
3
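Queue is most useful when shared between threads: put blocks on a full queue and get blocks on an empty one, so the two sides pace each other. A minimal producer/consumer sketch (the function names are made up here):

```python
import threading
import queue

q = queue.Queue(3)  # FIFO queue with capacity 3
results = []

def producer():
    for i in range(6):
        q.put(i)   # blocks whenever the queue already holds 3 items

def consumer():
    for _ in range(6):
        results.append(q.get())  # blocks whenever the queue is empty

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()

print(results)  # [0, 1, 2, 3, 4, 5] — FIFO order is preserved
```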

2, LifoQueue (LIFO)

import queue

q = queue.LifoQueue(3)
q.put(1)
q.put(2)
q.put(3)
print(q.get())
print(q.get())
print(q.get())
# Output:
3
2
1

3, PriorityQueue (priority queue)

import queue

q = queue.PriorityQueue(3)
q.put((-1, 'awe'))  # items are tuples; the smaller the first element, the higher the priority
q.put((2,6))
q.put((0,3))
print(q.get())
print(q.get())
print(q.get())
# Output:
(-1, 'awe')
(0, 3)
(2, 6)
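One caveat worth knowing: when two items share the same priority number, PriorityQueue compares the next tuple elements to break the tie, so those elements must themselves be comparable. A small sketch:

```python
import queue

q = queue.PriorityQueue()
q.put((1, 'b'))
q.put((1, 'a'))  # same priority as (1, 'b'): the second element breaks the tie
q.put((0, 'c'))

order = [q.get() for _ in range(3)]
print(order)  # [(0, 'c'), (1, 'a'), (1, 'b')]
```

If the payloads are not comparable (e.g. two dicts behind equal priorities), a common workaround is a three-element tuple with a unique counter in the middle.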

Event

Suppose we start two threads, and one thread must reach a certain stage before the other may proceed. Having the threads signal each other by hand increases their coupling, so the threading module provides the Event object for this kind of two-stage coordination.

Version 1 (polling a global variable):

from threading import Thread
from threading import current_thread
import time

flag = False

def check():
    print(f'{current_thread().name} checking whether the server is up')
    time.sleep(3)
    global flag
    flag = True
    print('Server is up')

def connect():
    while not flag:
        print(f'{current_thread().name} waiting to connect')
        time.sleep(0.5)
    else:
        print(f'{current_thread().name} connected...')

t1 = Thread(target=check)
t2 = Thread(target=connect)
t1.start()
t2.start()
# Output:
Thread-1 checking whether the server is up
Thread-2 waiting to connect
Thread-2 waiting to connect
Thread-2 waiting to connect
Thread-2 waiting to connect
Thread-2 waiting to connect
Thread-2 waiting to connect
Server is up
Thread-2 connected...

Version 2 (using Event):

from threading import Thread
from threading import current_thread
from threading import Event
import time

event = Event()  # create the event object

def check():
    print(f'{current_thread().name} checking whether the server is up')
    time.sleep(3)
    print(event.is_set())  # check whether the event has been set
    event.set()            # set the event
    print(event.is_set())
    print('Server is up')

def connect():
    print(f'{current_thread().name} waiting to connect')
    event.wait()  # block until the event is set
    print(f'{current_thread().name} connected...')

t1 = Thread(target=check)
t2 = Thread(target=connect)
t1.start()
t2.start()
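event.wait also accepts a timeout and returns True if the event was set within it, False otherwise — which is exactly what the exercise below relies on. A minimal sketch:

```python
from threading import Event

event = Event()

# Not set yet: wait gives up after 0.1 s and returns False.
print(event.wait(0.1))  # False

event.set()
# Already set: wait returns True immediately.
print(event.wait(0.1))  # True
```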

Small exercise:

Following the server example above: one thread monitors the server status, while another thread checks it and, once the server is up, reports that the connection succeeded. The connecting thread tries once per second, three times in total; if it still has not connected after that, it reports that the connection failed.

from threading import Thread
from threading import current_thread
from threading import Event
import time

event = Event()  # create the event object

def check():
    print(f'{current_thread().name} checking whether the server is up')
    time.sleep(3)
    event.set()  # set the event
    print('Server is up')

def connect():
    count = 0
    while count < 3:
        count += 1
        print(f'{current_thread().name} attempt {count}...')
        if event.wait(1):  # wait up to 1 s; returns True once the event is set
            print(f'{current_thread().name} connected')
            break
    else:
        print(f'{current_thread().name} connection failed')

t1 = Thread(target=check,)
t2 = Thread(target=connect,)
t1.start()
t2.start()

Coroutine

Coroutine: simply put, a single thread handling multiple tasks concurrently.

Serial: one thread runs a task to completion, then starts the next task.

Parallel: multiple CPUs execute multiple tasks at once, e.g. 4 CPUs executing 4 tasks.

Concurrency: a single CPU executes multiple tasks, switching between them fast enough that they appear to run simultaneously.

The real core of concurrency: switching plus saving state.

Multithreaded concurrency: e.g. 3 threads handling 10 tasks; if thread 1 hits a blocking operation, the operating system switches the CPU to another thread.

Single-thread concurrency: one thread executing, say, three tasks by switching between them.

Coroutine definition: a coroutine is a lightweight, user-level thread; that is, coroutines are scheduled by the user program itself.

A single CPU can execute 10 tasks concurrently in three ways:

1. Multiple processes executing concurrently: the operating system does the switching and state saving.

2. Multiple threads executing concurrently: the operating system does the switching and state saving.

3. Coroutines executing concurrently: the program itself controls the CPU's switching between tasks and saves their state.

Of these three, coroutines are generally preferable, because:

1. Coroutine switching has the smallest overhead: the switch happens at the program level and is completely invisible to the operating system, making coroutines more lightweight.

2. Coroutines therefore run faster.

3. A coroutine keeps the CPU for as long as possible, since all the task switching stays inside the program.

Coroutine features:

  1. Concurrency must be implemented within a single thread
  2. Shared data can be modified without locking
  3. The user program keeps its own stacks of control-flow context (the saved state)
  4. Bonus: when one coroutine hits an IO operation, control switches automatically to another coroutine (given a suitable framework such as gevent)
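User-level switching can be seen in miniature with plain generators: yield suspends a task and saves its local state, and next resumes it, all inside one thread. A simplified sketch (a toy round-robin scheduler, not a full coroutine framework):

```python
def task(name, steps):
    # Each yield suspends this task, saving its local state.
    for i in range(1, steps + 1):
        yield f'{name} step {i}'

# The program itself, not the OS, switches between the tasks.
tasks = [task('A', 2), task('B', 2)]
log = []
while tasks:
    t = tasks.pop(0)
    try:
        log.append(next(t))  # resume the task where it left off
        tasks.append(t)      # re-queue it for the next round
    except StopIteration:
        pass                 # task finished; drop it

print(log)  # ['A step 1', 'B step 1', 'A step 2', 'B step 2']
```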

Greenlet

greenlet is a third-party Python module; the coroutine frameworks that do real switching are built on top of it.

The two cores of concurrency are switching and saving state. The examples below work up to the module step by step.

# Version 1: plain function calls (switching only)
def func1():
    print('in func1')

def func2():
    print('in func2')
    func1()
    print('end')

func2()

# Version 2: switching + saved state, via a generator
import time
def gen():
    while 1:
        yield 1
        time.sleep(0.5)  # simulated IO; a plain generator cannot switch away from it automatically

def func():
    obj = gen()
    for i in range(10):
        next(obj)
func()

# Version 3: switching + saved state, with greenlet
# (the switches are still explicit; greenlet does not switch on IO by itself)
from greenlet import greenlet
import time
def eat(name):
    print('%s eat 1' %name)  # 2
    g2.switch('taibai')  # 3
    time.sleep(3)
    print('%s eat 2' %name)  # 6
    g2.switch()  # 7

def play(name):
    print('%s play 1' %name)  # 4
    g1.switch()  # 5
    print('%s play 2' %name)  # 8

g1=greenlet(eat)
g2=greenlet(play)

g1.switch('taibai')  # 1: switch to the eat task

Coroutine module: gevent

gevent is a third-party library that makes it easy to write concurrent, synchronous-looking or asynchronous programs. The main pattern used in gevent is the greenlet, a lightweight coroutine provided to Python as a C extension module. All greenlets run inside the main operating-system process, but they are scheduled cooperatively.

# Main usages of the gevent module:
# g1 = gevent.spawn(func, 1, 2, 3, x=4, y=5)
#   creates a coroutine object g1. The first argument to spawn is the function,
#   e.g. eat; it can be followed by any number of positional or keyword
#   arguments, all of which are passed on to func. spawn submits the task
#   asynchronously.

g2 = gevent.spawn(func2)

g1.join()  # wait until g1 finishes

g2.join()  # wait until g2 finishes. When testing, some people find g2 runs even
           # without this second join — yes, the coroutine switched to it for you.
           # But if g2's task takes a long time and join is omitted, the program
           # will not wait for the rest of g2's work to complete.

# The two joins can also be combined into one step: gevent.joinall([g1, g2])

Simulating a program hitting a blocking call with time.sleep:

import gevent
import time
from threading import current_thread
def eat(name):
    print('%s eat 1' %name)
    print(current_thread().name)
    # gevent.sleep(2)
    time.sleep(2)
    print('%s eat 2' %name)

def play(name):
    print('%s play 1' %name)
    print(current_thread().name)
    # gevent.sleep(1)  # gevent.sleep(1) simulates an IO block that gevent can recognize
    time.sleep(1)
    # gevent cannot recognize time.sleep(1) or other ordinary blocking calls directly;
    # the monkey patch used in the next version makes them recognizable
    print('%s play 2' %name)


g1 = gevent.spawn(eat,'egon')
g2 = gevent.spawn(play,name='egon')
print(f'主{current_thread().name}')
g1.join()
g2.join()
# Output:
主MainThread
egon eat 1
MainThread
egon eat 2
egon play 1
MainThread
egon play 2

The final version:

import gevent
import time
from gevent import monkey
monkey.patch_all()  # apply the monkey patch: mark all the blocking calls below so gevent can switch on them
def eat(name):
    print('%s eat 1' %name)
    time.sleep(2)
    print('%s eat 2' %name)

def play(name):
    print('%s play 1' %name)
    time.sleep(1)
    print('%s play 2' %name)


g1 = gevent.spawn(eat,'egon')
g2 = gevent.spawn(play,name='egon')

# g1.join()
# g2.join()
gevent.joinall([g1,g2])
# Output:
egon eat 1
egon play 1
egon play 2
egon eat 2

Load balancing: distributing the load (tasks) evenly across multiple operating units.

Nginx: Nginx is a lightweight web server / reverse proxy server and e-mail (IMAP/POP3) proxy server, characterized by low memory usage and high concurrency.

In general, we combine processes + threads + coroutines to get the best concurrency. On a 4-core CPU, a typical setup is 5 processes, each with 20 threads (5× the CPU count), and 500 coroutines per thread. When crawling pages at scale, the time spent waiting on network IO can be used to run other coroutines. The concurrency level is then 5 * 20 * 500 = 50,000, which is roughly the maximum for a 4-CPU machine; 50k is also about the maximum load nginx handles per node when load balancing.

Within single-threaded code, these tasks usually mix computation with blocking operations. When task 1 hits a blocking operation, we can switch to task 2 and put the blocked time to use. That efficiency gain is what the gevent module provides.

Origin www.cnblogs.com/lifangzheng/p/11420036.html