Concurrent programming: daemon processes, zombie and orphan processes, mutex locks, inter-process communication, and the producer-consumer model

1. Daemon processes

A daemon is a child process tied to the main process: as soon as the main process ends, the daemon child process is terminated as well.

from multiprocessing import Process
import time

def task(name):
    print(f'{name} is running')
    time.sleep(2)
    print(f'{name} is gone')

if __name__ == '__main__':
    p = Process(target=task, args=('小黑',))
    p.daemon = True  # set p as a daemon BEFORE start(); it is killed the moment the main process ends
    p.start()
    time.sleep(1)
    print('main process')
    # output: the daemon is killed during its 2-second sleep,
    # so the 'is gone' line never prints

2. Zombie and orphan processes

In Unix-based environments (Linux, macOS):

  1. The main process waits for its child processes to end before it exits.

    The main process keeps monitoring the running state of its children; shortly after a child process ends, it is reclaimed.

  2. Why doesn't the main process reclaim a child the instant the child ends?

    The main process and its children run asynchronously, so the main process cannot capture the exact moment a child ends.

    And if a child released all of its in-memory resources immediately on exit, the main process would have no way to check the child's final status.

  3. Unix provides a mechanism for this problem:

    When a child process ends, it immediately releases most of its resources (file links, in-memory data) but retains a few items: the process ID, the end time, and the running status, waiting for the main process to monitor and reclaim it.

  4. Zombie process: any child process that has ended but has not yet been reclaimed by the main process is in the zombie state.

    Hazard: if the main process never reclaims its zombies (via wait / waitpid) and keeps producing them, they occupy memory and use up process PID numbers.

    Solution: kill the main process directly; all of its zombies become orphans and are reclaimed by init.

  5. Orphan process: the main process ends for some reason while its children are still running; those children become orphans. The init process adopts all orphans, becomes their parent process, and reclaims them when they finish.
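The reclamation described above can be observed directly with os.fork and os.wait. A Unix-only sketch (assumes Linux/macOS; not part of the original post):

```python
import os
import time

# Unix only: fork a child, let it exit, then reclaim it with os.wait().
pid = os.fork()
if pid == 0:
    os._exit(0)                 # child: exit immediately
else:
    time.sleep(0.1)             # the child is now a zombie, waiting to be reclaimed
    reaped, status = os.wait()  # parent reclaims the zombie here
    print(f'reclaimed child {reaped}')
```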

3. Mutex locks

from multiprocessing import Process, Lock
import time
import random
import os

def task(lock):
    lock.acquire()
    print(f'{os.getpid()} starts printing')
    time.sleep(random.randint(1, 3))
    print(f'{os.getpid()} finished printing')
    lock.release()

if __name__ == '__main__':
    lock = Lock()
    for i in range(3):
        p = Process(target=task, args=(lock,))
        p.start()

When multiple processes compete for one shared resource (data) and we must guarantee ordering (data safety), we add a lock so that access becomes serial. This is the mutex lock.
Mutex: it guarantees both the safety of the data and the fairness of the ordering.

The difference between Lock and join

In common: both turn concurrent execution into serial execution, keeping the program's order deterministic.

Different: join fixes the order artificially, while Lock lets the processes scramble for the lock, which guarantees fairness.

4. Communication between processes

1. File-based communication:

Processes are isolated from one another in memory, but files live on disk, and every process can access them.

from multiprocessing import Process, Lock
import time
import random
import os
import json

def search():
    time.sleep(random.randint(1, 3))
    with open('t1.json', encoding='utf-8') as f:
        dic = json.load(f)
        print(f'{os.getpid()} checked: {dic["count"]} ticket(s) left')

def buy():
    with open('t1.json', encoding='utf-8') as f:
        dic = json.load(f)
    if dic['count'] > 0:
        dic['count'] -= 1
        time.sleep(random.randint(1, 3))
        with open('t1.json', 'w', encoding='utf-8') as f1:
            json.dump(dic, f1)
        print(f'{os.getpid()} bought a ticket')
    else:
        print('sold out')

def task(lock):
    search()
    lock.acquire()
    buy()     # only the buy step needs the lock; searching can stay concurrent
    lock.release()

if __name__ == '__main__':
    # the example assumes a ticket file exists; create it here so the demo runs
    with open('t1.json', 'w', encoding='utf-8') as f:
        json.dump({'count': 1}, f)
    lock = Lock()
    for i in range(6):
        p = Process(target=task, args=(lock,))
        p.start()

File-based inter-process communication is inefficient, forces you to manage your own locks, and is prone to deadlock.

2. Queue-based communication

Characteristic of a queue: it always preserves first-in, first-out (FIFO) ordering of the data.

from multiprocessing import Queue

q = Queue(3)  # maxsize: the queue holds at most 3 items

q.put(1)
q.put('asdr')
q.put([1, 2, 3])
# q.put(666)               # block defaults to True: blocks once the queue is full
# q.put(666, block=False)  # raises queue.Full immediately instead of blocking
# q.put(666, timeout=3)    # raises queue.Full if still full after 3 seconds

print(q.get())
print(q.get())
print(q.get())
# q.get(block=False)  # raises queue.Empty immediately when the queue is empty
# q.get(timeout=3)    # raises queue.Empty if nothing arrives within 3 seconds

5. The producer-consumer model

The three elements of the producer-consumer model:

Producer: generates data.

Consumer: receives the data and processes it further.

Container: a queue.

The role of the queue container: it acts as a buffer that balances the production rate against the consumption rate, and it decouples producers from consumers.

from multiprocessing import Process, Queue
import time
import random
import queue

def producer(q, name):
    for i in range(1, 6):
        time.sleep(random.randint(1, 2))
        res = f'bun #{i}'
        q.put(res)
        print(f'producer {name} produced {res}')

def consumer(q, name):
    while 1:
        try:
            food = q.get(timeout=3)  # give up if nothing arrives within 3 seconds
            time.sleep(random.randint(1, 3))
            print(f'consumer {name} ate {food}')
        except queue.Empty:
            return

if __name__ == '__main__':
    q = Queue()
    p1 = Process(target=producer, args=(q, '孙宇'))
    p2 = Process(target=consumer, args=(q, '海狗'))

    p1.start()
    p2.start()

Origin www.cnblogs.com/lav3nder/p/11802258.html