day38 - The Road to a Million-Yuan Salary, Day Thirty-Eight: Thread Queues, Event, Coroutines

day38

Thread Queues

When multiple threads compete for a shared resource, access has to be serialized - either with a mutex, or by handing data off through a thread-safe queue.

Thread queues
  • Queue - First In First Out (FIFO)
import queue
q = queue.Queue(3)
q.put(1)
q.put(2)
q.put(3)
# q.put(4)  # blocks until another thread or process takes an item
print(q.get())
print(q.get())
print(q.get())
# print(q.get(block=False))  # raises queue.Empty immediately if there is no item
# q.get(timeout=2)  # blocks for 2s, then raises queue.Empty if there is still no item
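Blocking put/get is what makes Queue the basis of the producer-consumer pattern. A minimal sketch with one worker thread (the None-as-shutdown-signal convention is an assumption, not from the notes above):

```python
import queue
import threading

q = queue.Queue(2)          # capacity 2: put() blocks when the queue is full
results = []

def worker():
    while True:
        item = q.get()      # blocks until an item is available
        if item is None:    # None used here as a shutdown signal (a common convention)
            break
        results.append(item)
        q.task_done()

t = threading.Thread(target=worker)
t.start()

for i in range(5):
    q.put(i)                # blocks whenever the consumer falls behind
q.put(None)                 # tell the worker to exit
t.join()
print(results)
```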
  • Stack - Last In, First Out (LIFO)

import queue
q = queue.LifoQueue(4)
q.put(1)
q.put(2)
q.put("alex")
q.put("太白")
print(q.get())
print(q.get())
print(q.get())
print(q.get())
Result:
太白
alex
2
1
  • Priority queue - you set the priority of each item yourself
import queue
q = queue.PriorityQueue(4)
q.put((5, "元宝"))
q.put((-2, "狗狗"))
q.put((0, "2李业"))
q.put((0, "1刚哥"))
print(q.get())
print(q.get())
print(q.get())  # smaller numbers come out first; equal numbers are ordered by the second element's character codes
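The ordering above falls out of Python's tuple comparison: the number is compared first, and on a tie the string is compared, which is why "1刚哥" leaves before "2李业". A quick check:

```python
# PriorityQueue simply pops the smallest item; tuples compare element by element
assert (-2, "狗狗") < (0, "1刚哥")   # -2 < 0, so the string is never even looked at
assert (0, "1刚哥") < (0, "2李业")   # tie on 0, so "1" < "2" decides
print("tuple ordering matches the queue output")
```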

Event

Start two threads where one thread, on reaching a certain point, triggers the other to proceed. Doing this with a shared flag couples the two threads together.

Version one - shared flag
from threading import Thread
from threading import current_thread
import time

flag = False


def check():
    print(f"{current_thread().name} 监测服务器是否开启。。。")
    time.sleep(3)
    global flag
    flag = True
    print("服务器已经开启。。。")


def connect():
    while 1:
        print(f"{current_thread().name} 等待连接。。。")
        time.sleep(0.5)
        if flag:
            print(f"{current_thread().name} 连接成功。。。")
            break


t1 = Thread(target=check)
t2 = Thread(target=connect)
t1.start()
t2.start()
Version two - Event
from threading import Thread
from threading import current_thread
from threading import Event
import time

event = Event()


def check():
    print(f"{current_thread().name} 监测服务器是否开启。。。")
    time.sleep(3)
    # print(event.is_set())
    event.set()
    # print(event.is_set())
    print("服务器已经开启。。。")


def connect():
    print(f"{current_thread().name} 等待连接。。。")
    event.wait()  # blocks until event.set() is called
    # event.wait(1)  # blocks at most 1s; if set() still hasn't been called by then, proceed anyway
    print(f"{current_thread().name} 连接成功。。。")


t1 = Thread(target=check)
t2 = Thread(target=connect)
t1.start()
t2.start()

Requirement: one thread monitors whether the server has started; another thread tries to connect, and prints that the connection succeeded once the server is up. The connecting thread may only try three times, once per second; if after three tries it still has not connected, it reports that the connection failed.

from threading import Thread
from threading import current_thread
from threading import Event
import time

event = Event()


def check():
    print(f"{current_thread().name} 监测服务器是否开启")
    time.sleep(2)
    event.set()


def connect():
    print(f"{current_thread().name} 等待连接,,,")
    for i in range(3):
        event.wait(1)
        if event.is_set():
            print("服务器已经开启")
            print(f"{current_thread().name} 连接成功")
            break
        else:
            print(f"{current_thread().name} 连接失败{i+1}次")


t1 = Thread(target=check)
t2 = Thread(target=connect)
t1.start()
t2.start()
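`Event.wait(timeout)` also returns a boolean - `True` if the event was set, `False` on timeout - so the separate `is_set()` check above can be folded into the loop. A sketch of the same retry logic (the short 0.2s/0.5s timings are just to keep the demo fast, and the English messages are illustrative):

```python
from threading import Thread, Event
import time

event = Event()
outcome = []

def check():
    time.sleep(0.5)              # pretend the server takes a while to start
    event.set()

def connect():
    for i in range(3):
        if event.wait(0.2):      # True as soon as set() happens, False on timeout
            outcome.append("connected")
            break
        outcome.append(f"attempt {i+1} failed")
    else:                        # runs only if the loop never hit break
        outcome.append("gave up")

t1 = Thread(target=check)
t2 = Thread(target=connect)
t1.start(); t2.start()
t1.join(); t2.join()
print(outcome)
```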

Coroutine

Coroutine Details: https://www.cnblogs.com/jin-xin/articles/11245654.html

Concurrency in a single thread

  • Serial: one thread executes a task, and only after it finishes starts the next task
  • Parallel: multiple CPUs execute multiple tasks at the same time, e.g. 4 CPUs running 4 tasks
  • Concurrent: a single CPU runs multiple tasks in a way that looks simultaneous

The real core of concurrency: switching the CPU between tasks + saving each task's state

Multi-threaded concurrency: with 3 threads and 10 tasks, if the thread handling task 1 hits a blocking call, the operating system switches the CPU to another thread.

Can a single thread process tasks concurrently??

Yes - one thread can handle, say, three tasks.

Single CPU, 10 tasks - three ways to execute them concurrently:

  • Way 1: open multiple processes; the operating system does the switching + state saving
  • Way 2: open multiple threads; the operating system does the switching + state saving
  • Way 3: open coroutines; the program itself controls switching back and forth between the tasks + state saving

More on way 3: coroutine switching is so fast that it "fools" the operating system - the OS believes the CPU has been running this one thread (and its coroutines) the whole time.

Why are coroutines the best of the three?

Advantages:

  • Very small overhead
  • Fast switching
  • Coroutines keep the CPU occupied for longer with just our own program's tasks

Disadvantages:

Coroutines are, by nature, single-threaded and cannot use multiple cores. The workaround: a program can open multiple processes, multiple threads inside each process, and coroutines inside each thread.

Coroutines are good for IO-bound work; CPU-bound work is better done serially.

What is a coroutine?

Concurrent processing of multiple tasks in a single thread, with the program itself controlling the switching + state saving.

Coroutine features
  • Concurrency is implemented entirely inside a single thread
  • Shared data can be modified without locking
  • The program keeps its own stack of control-flow contexts (state saving)
  • A coroutine that hits IO automatically switches to another task
In practice

In general we combine processes + threads + coroutines to get the best concurrency. On a 4-core CPU, a typical setup is 5 processes, 20 threads per process (5x the CPU count), and 500 coroutines per thread. When crawling pages at scale, the time spent waiting on network latency is exactly when coroutines can run other tasks. Total concurrency = 5 * 20 * 500 = 50000, which is roughly the maximum for a 4-CPU machine (nginx, doing single-threaded load balancing, also maxes out around 50k). In typical code, tasks mix computation with blocking operations; when task 1 blocks, we can use that blocked time to execute task 2, and so on. To get this efficiency, we use the gevent module.

First, code with switching only (plain function calls), no state saving:
def func1():
    print("in func1")


def func2():
    print("in func2")
    func1()
    print("end")


func2()
Switching + state saving with yield (but it does not switch by itself when it meets IO):
def gen():
    while 1:
        yield 1
        print(333)


def func():
    obj = gen()
    for i in range(10):
        print(next(obj))


func()
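`yield` can also receive values via `send()`, which makes the saved state genuinely useful: the generator below keeps a running total between switches (a classic illustration, not from the original post):

```python
def averager():
    total = 0.0
    count = 0
    avg = None
    while True:
        value = yield avg    # pause here; total and count are preserved across switches
        total += value
        count += 1
        avg = total / count

g = averager()
next(g)                      # prime the generator: run it up to the first yield
print(g.send(10))            # 10.0
print(g.send(30))            # 20.0
print(g.send(50))            # 30.0
```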
greenlet - the low-level library underpinning coroutines
from greenlet import greenlet
import time


def eat(name):
    print(f"{name} eat 1")  # 2
    g2.switch("taibai")  # 3
    # time.sleep(3)
    print(f"{name} eat 2")  # 6
    g2.switch()  # 7


def play(name):
    print(f"{name} play 1")  # 4
    g1.switch()  # 5
    print(f"{name} play 2")  # 8


g1 = greenlet(eat)
g2 = greenlet(play)

g1.switch("taibai")  # switch to the eat task  1
gevent - an early version
  • Simulated blocking (gevent.sleep)
import gevent
import time
from threading import current_thread


def eat(name):
    print(f"{name} eat 1")  # 2
    print(current_thread().name)  # 3
    gevent.sleep(2)
    # time.sleep(2)
    print(f"{name} eat 2")  # 7


def play(name):
    print(f"{name} play 1")  # 4
    print(current_thread().name)  # 5
    gevent.sleep(1)
    # time.sleep(1)
    print(f"{name} play 2")  # 6


g1 = gevent.spawn(eat, "egon")
g2 = gevent.spawn(play, "egon")
print(f"主{current_thread().name}")  # 1
g1.join()
g2.join()
Result:
主MainThread
egon eat 1
MainThread
egon play 1
MainThread
egon play 2
egon eat 2
  • Real blocking - gevent does not recognize time.sleep without patching, so the tasks run serially
import gevent
import time
from threading import current_thread


def eat(name):
    print(f"{name} eat 1")  # 2
    print(current_thread().name)  # 3
    # gevent.sleep(2)
    time.sleep(2)
    print(f"{name} eat 2")  # 4


def play(name):
    print(f"{name} play 1")  # 5
    print(current_thread().name)  # 6
    # gevent.sleep(1)
    time.sleep(1)
    print(f"{name} play 2")  # 7


g1 = gevent.spawn(eat, "egon")
g2 = gevent.spawn(play, "egon")
print(f"主{current_thread().name}")  # 1
g1.join()
g2.join()
Result:
主MainThread
egon eat 1
MainThread
egon eat 2
egon play 1
MainThread
egon play 2
The final version
import gevent
import time
from gevent import monkey
monkey.patch_all()  # monkey patch: mark every blocking call below so gevent can switch on it


def eat(name):
    print(f"{name} eat 1")  # 1
    time.sleep(2)
    print(f"{name} eat 2")  # 4


def play(name):
    print(f"{name} play 1")  # 2
    time.sleep(1)
    print(f"{name} play 2")  # 3


g1 = gevent.spawn(eat, "egon")
g2 = gevent.spawn(play, "egon")

# g1.join()
# g2.join()
gevent.joinall([g1, g2])
Result:
egon eat 1
egon play 1
egon play 2
egon eat 2
Coroutine applications

Crawler

from gevent import monkey;monkey.patch_all()
import gevent
import requests
import time

def get_page(url):
    print('GET: %s' %url)
    response=requests.get(url)
    if response.status_code == 200:
        print('%d bytes received from %s' %(len(response.text),url))


start_time=time.time()
gevent.joinall([
    gevent.spawn(get_page,'https://www.python.org/'),
    gevent.spawn(get_page,'https://www.yahoo.com/'),
    gevent.spawn(get_page,'https://github.com/'),
])
stop_time=time.time()
print('run time is %s' %(stop_time-start_time))

Result:
GET: https://www.python.org/
GET: https://www.yahoo.com/
GET: https://github.com/
48919 bytes received from https://www.python.org/
87845 bytes received from https://github.com/
515896 bytes received from https://www.yahoo.com/
run time is 2.729017734527588

gevent achieves single-threaded concurrency for sockets. Note: `from gevent import monkey; monkey.patch_all()` must come before importing the socket module, otherwise gevent cannot recognize the socket's blocking calls.

Most of the elapsed time in a network request is spent waiting on network latency - time in which other requests can run.

server

from gevent import monkey;monkey.patch_all()
from socket import *
import gevent

# If you don't want to patch with monkey.patch_all(), you can use gevent's own socket:
# from gevent import socket
# s=socket.socket()

def server(server_ip,port):
    s=socket(AF_INET,SOCK_STREAM)
    s.setsockopt(SOL_SOCKET,SO_REUSEADDR,1)
    s.bind((server_ip,port))
    s.listen(5)
    while True:
        conn,addr=s.accept()
        gevent.spawn(talk,conn,addr)

def talk(conn,addr):
    try:
        while True:
            res=conn.recv(1024)
            print('client %s:%s msg: %s' %(addr[0],addr[1],res))
            conn.send(res.upper())
    except Exception as e:
        print(e)
    finally:
        conn.close()

if __name__ == '__main__':
    server('127.0.0.1',8080)

client

from socket import *

client=socket(AF_INET,SOCK_STREAM)
client.connect(('127.0.0.1',8080))


while True:
    msg=input('>>: ').strip()
    if not msg:continue

    client.send(msg.encode('utf-8'))
    msg=client.recv(1024)
    print(msg.decode('utf-8'))

The server above can also be exercised by many clients at once, using multiple threads:

from threading import Thread
from socket import *
import threading

def client(server_ip,port):
    c=socket(AF_INET,SOCK_STREAM)  # the socket must be created inside the function (local namespace); created outside, it would be shared by all threads, and every client would use the same port
    c.connect((server_ip,port))

    count=0
    while True:
        c.send(('%s say hello %s' %(threading.current_thread().getName(),count)).encode('utf-8'))
        msg=c.recv(1024)
        print(msg.decode('utf-8'))
        count+=1
if __name__ == '__main__':
    for i in range(500):
        t=Thread(target=client,args=('127.0.0.1',8080))
        t.start()

Another coroutine module: asyncio

#!/usr/bin/env python
# -*- coding:utf-8 -*-

# import asyncio

# Run a single task.
# async def demo():   # coroutine function
#     print('start')
#     await asyncio.sleep(1)  # blocking point
#     print('end')

# loop = asyncio.get_event_loop()  # create an event loop
# loop.run_until_complete(demo())  # run the demo task on the event loop

# Start multiple tasks, with no return values
# async def demo():   # coroutine function
#     print('start')
#     await asyncio.sleep(1)  # blocking point
#     print('end')
#
# loop = asyncio.get_event_loop()  # create an event loop
# wait_obj = asyncio.wait([demo(),demo(),demo()])
# loop.run_until_complete(wait_obj)

# Start multiple tasks and collect return values
# async def demo():   # coroutine function
#     print('start')
#     await asyncio.sleep(1)  # blocking point
#     print('end')
#     return 123
#
# loop = asyncio.get_event_loop()
# t1 = loop.create_task(demo())
# t2 = loop.create_task(demo())
# tasks = [t1,t2]
# wait_obj = asyncio.wait([t1,t2])
# loop.run_until_complete(wait_obj)
# for t in tasks:
#     print(t.result())

# Whichever task finishes first has its result fetched first
# import asyncio
# async def demo(i):   # coroutine function
#     print('start')
#     await asyncio.sleep(10-i)  # blocking point
#     print('end')
#     return i,123
#
# async def main():
#     task_l = []
#     for i in range(10):
#         task = asyncio.ensure_future(demo(i))
#         task_l.append(task)
#     for ret in asyncio.as_completed(task_l):
#         res = await ret
#         print(res)
#
# loop = asyncio.get_event_loop()
# loop.run_until_complete(main())



# import asyncio
#
# async def get_url():
#     reader,writer = await asyncio.open_connection('www.baidu.com',80)
#     writer.write(b'GET / HTTP/1.1\r\nHOST:www.baidu.com\r\nConnection:close\r\n\r\n')
#     all_lines = []
#     async for line in reader:
#         data = line.decode()
#         all_lines.append(data)
#     html = '\n'.join(all_lines)
#     return html
#
# async def main():
#     tasks = []
#     for url in range(20):
#         tasks.append(asyncio.ensure_future(get_url()))
#     for res in asyncio.as_completed(tasks):
#         result = await res
#         print(result)
#
#
# if __name__ == '__main__':
#     loop = asyncio.get_event_loop()
#     loop.run_until_complete(main())  # run the main task


# asyncio: Python's native, low-level coroutine module
    # used for crawlers and web server frameworks
    # improves the efficiency and concurrency of network programming
# Syntax
    # await marks a blocking point: the coroutine switches away here, and is guaranteed to switch back later
    # await must be written inside an async function; an async function is a coroutine function
    # loop is the event loop
    # all coroutine execution and scheduling goes through this loop
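On Python 3.7+, the `get_event_loop` / `run_until_complete` boilerplate above can be replaced by `asyncio.run` and `asyncio.gather` (newer APIs, not covered in the notes above). A minimal sketch:

```python
import asyncio

async def demo(i):
    await asyncio.sleep(0.01)   # any awaitable yields control back to the loop
    return i * 10

async def main():
    # gather schedules all coroutines concurrently and preserves result order
    return await asyncio.gather(*(demo(i) for i in range(3)))

print(asyncio.run(main()))      # asyncio.run creates, runs, and closes the loop
```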


Source: www.cnblogs.com/NiceSnake/p/11432238.html