2019.09.16 Study Notes

Process lock

Problem: when multiple processes operate on the same shared data, the data can end up corrupted or out of order unless access to it is controlled.

Example: simulated ticket grabbing

from multiprocessing import Process,Lock
import json,time,os

def search():
    time.sleep(1) # simulate network I/O
    with open('db.txt',mode='rt',encoding='utf-8') as f:
        res = json.load(f)
        print(f'{res["count"]} tickets left')

def get():
    with open('db.txt',mode='rt',encoding='utf-8') as f:
        res = json.load(f)
        # print(f'{res["count"]} tickets left')
    time.sleep(1) # simulate network I/O
    if res['count'] > 0:
        res['count'] -= 1
        with open('db.txt',mode='wt',encoding='utf-8') as f:
            json.dump(res,f)
            print(f'process {os.getpid()} grabbed a ticket')
        time.sleep(1.5) # simulate network I/O
    else:
        print('Tickets are sold out!')

def task(lock):
    search()

    # acquire the lock
    lock.acquire()
    get()
    lock.release()
    # release the lock

if __name__ == '__main__':
    lock = Lock() # created in the main process so that every child process gets the same lock
    for i in range(15):
        p = Process(target=task,args=(lock,))
        p.start()
        # p.join()

    # The process lock only makes the locked section of code serial.
    # join would make every child process run serially from start to finish.


# To keep the data safe, serial execution sacrifices efficiency.

The lock guarantees that when multiple processes modify the same piece of data, only one task can modify it at a time, i.e. the modifications happen serially.

Queue

IPC: inter-process communication mechanisms

Pipe: based on a shared memory space

Queue: pipe + lock
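The notes mention Pipe but never show it; below is a minimal sketch (the worker function name is my own) of passing one message each way over a multiprocessing.Pipe:

from multiprocessing import Process,Pipe

def worker(conn):
    '''child process: receive a message through the pipe, reply, then close its end'''
    msg = conn.recv()            # blocks until the parent sends something
    conn.send(f'got: {msg}')
    conn.close()

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()   # two connected endpoints
    p = Process(target=worker,args=(child_conn,))
    p.start()
    parent_conn.send('hello')          # send through the pipe
    print(parent_conn.recv())          # -> got: hello
    p.join()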

Queue

Queue creates a shared process queue. It is a multi-process-safe queue and can be used to pass data between multiple processes.

Queue([maxsize]): create a shared process queue.
Parameters: maxsize is the maximum number of items allowed in the queue; if omitted, there is no size limit.

The queue is implemented on top of a pipe plus locks.
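A minimal sketch of passing data between two processes through a Queue (the producer function name here is illustrative):

from multiprocessing import Process,Queue

def producer(q):
    '''child process: put a couple of items on the shared queue'''
    q.put('hello')
    q.put([1,2,3])

if __name__ == '__main__':
    q = Queue(4)                      # shared, process-safe queue, maxsize 4
    p = Process(target=producer,args=(q,))
    p.start()
    print(q.get())                    # -> hello
    print(q.get())                    # -> [1, 2, 3]
    p.join()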

2.1.1 Method Introduction

Queue([maxsize]): creates a shared process queue. maxsize is the maximum number of items allowed in the queue; if omitted, there is no size limit. The queue is implemented on top of a pipe plus locks, and a background feeder thread is used to transfer queued data into the underlying pipe.
A Queue instance q has the following methods:

q.get([block[, timeout]]): returns an item from q. If q is empty, the method blocks until an item becomes available. block controls the blocking behaviour and defaults to True; if it is set to False and the queue is empty, a Queue.Empty exception (defined in the standard queue module) is raised. timeout is optional and only used in blocking mode: if no item becomes available within that interval, Queue.Empty is raised.

q.get_nowait(): same as q.get(False).

q.put(item[, block[, timeout]]): puts item on the queue. If the queue is full, the method blocks until space becomes available. block controls the blocking behaviour and defaults to True; if it is set to False and the queue is full, a Queue.Full exception (defined in the standard queue module) is raised. timeout specifies how long to wait for free space in blocking mode; after the timeout a Queue.Full exception is raised.

q.qsize(): returns the number of items currently in the queue. The result is not reliable, because items may be added or removed between the call and the moment the result is used. On some systems this method may raise NotImplementedError.

q.empty(): returns True if q is empty at the moment of the call. The result is not reliable if other processes or threads are adding items: between the return and the use of the result, new items may already have been put on the queue.

q.full(): returns True if q is full. Because of the feeder thread, the result may be unreliable (see q.empty()).
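A quick sketch exercising these methods on a bounded queue; note that qsize/empty/full are only advisory snapshots:

from multiprocessing import Queue
import queue   # Queue.Empty / Queue.Full are defined in the standard queue module

q = Queue(2)
q.put('a')
q.put('b',block=True,timeout=1)       # would raise queue.Full after 1 s if still full
print(q.qsize(), q.empty(), q.full()) # advisory only; qsize may raise NotImplementedError on some platforms

print(q.get())                        # -> 'a'
print(q.get_nowait())                 # same as q.get(False) -> 'b'
try:
    q.get(block=False)                # queue is empty now, so this raises immediately
except queue.Empty:
    print('queue is empty')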

2.1.2 Other methods (for awareness only)

q.close(): closes the queue, preventing any more data from being added. When this method is called, the background feeder thread keeps writing the data that has already been enqueued but not yet flushed to the pipe, and exits as soon as that is done. The method is called automatically when q is garbage collected. Closing the queue does not produce any end-of-data signal or exception for consumers of the queue; for example, if a consumer is blocked in get(), closing the queue on the producer side does not make get() return an error.

q.cancel_join_thread(): do not automatically join the background feeder thread when the process exits. This prevents join_thread() from blocking.

q.join_thread(): joins the queue's background feeder thread. The method can only be used after q.close() has been called; it waits until all buffered items have been flushed to the pipe. By default it is called on exit by every process that is not the original creator of q. Calling q.cancel_join_thread() disables this behaviour.
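A small sketch (my own, not from the original notes) of the close()/join_thread() pair on the producing side; cancel_join_thread() would be used instead when losing buffered data on exit is acceptable:

from multiprocessing import Process,Queue

def producer(q):
    '''producer side: enqueue items, then flush and close cleanly'''
    for i in range(5):
        q.put(i)
    q.close()        # no more data will be added
    q.join_thread()  # wait for the feeder thread to flush everything to the pipe

if __name__ == '__main__':
    q = Queue()
    p = Process(target=producer,args=(q,))
    p.start()
    for _ in range(5):
        print(q.get())
    p.join()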

Examples

from multiprocessing import Queue

### Example 1
# q = Queue()
# q.put('鲁照山')
# q.put([1,2,4])
# q.put(2)
# print(q.get())
# print(q.get())
# print(q.get())
# # q.put(5)
# # q.put(5)
# print(q.get()) # by default get() keeps waiting for a value



### Example 2
# q = Queue(4)
# q.put('鲁照山')
# q.put([1,2,4])
# q.put([1,2,4])
# q.put(2)
#
# q.put('乔碧萝')  # putting a value into a full queue blocks


### Example 3 (everything from here down is for awareness only)
# q = Queue(3)
# q.put('zhao',block=True,timeout=2) #
# q.put('zhao',block=True,timeout=2) #
# q.put('zhao',block=True,timeout=2) #
#
# q.put('zhao',block=True,timeout=5) # with block=True, put() waits when the queue is full; timeout waits at most n seconds, and if the queue is still full after that an exception is raised

### Example 4
# q = Queue()
# q.put('yyyy')
# q.get()
# q.get(block=True,timeout=5) # block=True blocks and waits, timeout waits at most 5 s; otherwise same as above

### Example 5

# q = Queue(3)
# q.put('qwe')
# q.put('qwe')
# q.put('qwe')
#
# q.put('qwe',block=False) # for put(), block=False raises immediately if the queue is full

# q = Queue(3)
# q.put('qwe')
# q.get()
#
#
# q.get(block=False)
# block=False: if there is nothing to get, don't block, raise immediately

### Example 6
q = Queue(1)
q.put('123')
# q.get()
q.put_nowait('666') # same as block=False: the queue is full, so this raises queue.Full
# q.get_nowait() # same as block=False
#

Producer-consumer model

Producer: the task that produces data
Consumer: the task that processes data
Producer -> queue (the "pot") -> Consumer
The producer never has to stop producing and can run at its maximum production efficiency, and the consumer can keep consuming at its maximum consumption efficiency.
The producer-consumer model therefore greatly improves both the producer's production efficiency and the consumer's consumption efficiency.

Note: a queue is not suitable for transferring large files; it is meant for passing messages between producers and consumers.

The producer-consumer pattern solves the vast majority of concurrency problems in concurrent programming. It improves a program's overall data-processing speed by balancing the working capacity of the producing threads and the consuming threads.

Why use the producer-consumer pattern

In the world of threads, the producer is the thread that produces data and the consumer is the thread that consumes data. In multi-threaded development, if the producer is fast but the consumer is slow, the producer has to wait for the consumer to catch up before it can produce more data; likewise, if the consumer's capacity exceeds the producer's, the consumer has to wait for the producer. The producer-consumer pattern was introduced to solve this problem.

What is the producer-consumer pattern

The producer-consumer pattern decouples producers and consumers through a container. Producers and consumers do not communicate with each other directly; they communicate through a blocking queue. After producing a piece of data, the producer does not wait for a consumer to handle it but simply throws it onto the blocking queue; the consumer does not ask the producer for data but takes it straight from the blocking queue. The blocking queue acts as a buffer that balances the processing capacity of producers and consumers.

from multiprocessing import Process,Queue,JoinableQueue
import time,random

def producer(q,name,food):
    '''producer'''
    for i in range(3):
        print(f'{name} produced {food}{i}')
        time.sleep(random.randint(1, 3))
        res = f'{food}{i}'
        q.put(res)
    # q.put(None)

def consumer(q,name):
    '''consumer'''
    while True:
        res = q.get()
        # if res is None:break
        time.sleep(random.randint(1,3))
        print(f'{name} ate {res}')
        q.task_done() # tell the queue this item has been handled

if __name__ == '__main__':
    q = JoinableQueue()
    p1 = Process(target=producer,args=(q,'rocky','baozi'))
    p2 = Process(target=producer,args=(q,'mac','chives'))
    p3 = Process(target=producer,args=(q,'nick','garlic'))
    c1 = Process(target=consumer,args=(q,'成哥'))
    c2 = Process(target=consumer,args=(q,'浩南哥'))
    p1.start()
    p2.start()
    p3.start()
    c1.daemon = True
    c2.daemon = True
    c1.start()
    c2.start()
    p1.join()
    p2.join()
    p3.join() # all producers have finished producing
    # q.put(None)  # without JoinableQueue: put one None per consumer
    # q.put(None)
    q.join() # wait until every item has been marked done
    # Once the producers are done and q.join() returns, the consumers have drained the
    # queue and have no reason to keep running; since they are daemon processes, they
    # exit together with the main process.

JoinableQueue

Creates a joinable shared process queue. It is like a Queue object, but the queue additionally lets consumers notify producers that an item has been processed successfully. The notification mechanism is implemented with a shared semaphore and a condition variable.

Method introduction

A JoinableQueue instance q has the same methods as a Queue object, plus the following two:

q.task_done(): a consumer uses this method to signal that an item returned by q.get() has been processed. If the method is called more times than there were items removed from the queue, a ValueError is raised.

q.join(): the producer uses this method to block until every item in the queue has been processed. The block lasts until q.task_done() has been called for every item that was put on the queue.
The example above shows consumer processes that run forever, taking and processing items from the queue; the producers put items on the queue and wait for them all to be processed. The small demo below just illustrates the put/task_done counter.

from multiprocessing import Process,Queue,JoinableQueue


q = JoinableQueue()

q.put('zhao') # put one task on the queue
q.put('qian')

print(q.get())
q.task_done() # one task finished
print(q.get())
q.task_done() # one task finished
q.join() # while the counter is not 0, block; pass once the counter reaches 0

# Think of it as a counter: put +1, task_done -1

A first look at threads

In a traditional operating system, every process has its own address space and, by default, one thread of control.
By analogy: every factory has workshops, and every workshop has a default assembly line.

OS ===> factory
process ===> workshop
thread ===> assembly line
cpu ===> electricity

Thread: the smallest unit of execution on the CPU
Process: a collection of resources / a resource unit
A running thread = running code
A running process = threads + resources
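A minimal preview sketch (threads are covered in detail later) showing that a process starts with one main thread and that a child thread runs inside the same process:

from threading import Thread,current_thread
import os

def task():
    # child thread: same process (same pid), different thread name
    print(f'pid={os.getpid()}, thread={current_thread().name}')

if __name__ == '__main__':
    t = Thread(target=task)
    t.start()
    t.join()
    print(f'pid={os.getpid()}, thread={current_thread().name}')  # MainThread, same pid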


Source: www.cnblogs.com/zhangmingyong/p/11527904.html