day34 - zombie and orphan processes, mutex, communication between processes

day34

Zombie and orphan processes

Unix-based environments (Linux, macOS)

  • After starting child processes, the parent needs to wait for them to finish; the parent only ends after its children have ended.

    While its children are running, the parent keeps monitoring them; within a short time after a child ends, the parent reclaims (reaps) it.

  • Why doesn't the parent reclaim a child the moment it ends?

    • The parent and its children run asynchronously, so the parent cannot catch the exact moment a child ends.
    • If a child released all of its resources in memory the instant it ended, the parent would have no way left to check the child's status.
  • For the problem above, Unix provides a mechanism

    As soon as a child process ends, it releases its file links and most of its memory, but retains a small amount of information (process ID, end time, exit status), waiting for the parent to collect it when it reaps the child.

  • Zombie process: after a child process ends, and before it has been reaped by its parent, it is in the zombie state.

  • Are zombie processes harmful?

    If the parent never reaps its zombie children (with wait / waitpid), a large number of zombies accumulates, which wastes memory and uses up process IDs (pids).

  • Orphan process

    • If the parent process ends for some reason while a child is still running, the child becomes an orphan process.
    • When a parent ends, all of its orphaned children are adopted by the init process: init becomes their parent and reaps them when they end.
  • How do you clean up zombie processes?

    If a parent has spawned many children but never reaps them, a large number of zombies builds up. The solution is to kill the parent directly: all of its zombies become orphans and are reaped by the init process.

Mutex

Definition: a mutex makes child processes run serially while still keeping the order in which they execute random, and it keeps shared data safe.

Difference between Lock and join

Common point: both can turn concurrency into serial execution, which guarantees order.

Difference: join fixes the order by hand in the code; with Lock the processes compete for the lock, which guarantees fairness (first come, first served).

Three colleagues print on the same printer at the same time.
Three processes simulate the three colleagues; stdout simulates the printer.

# Version 1
from multiprocessing import Process
import time
import random
import os


def task1():
    print(f"{os.getpid()} starts printing")
    time.sleep(random.randint(1, 3))
    print(f"{os.getpid()} finished printing")


def task2():
    print(f"{os.getpid()} starts printing")
    time.sleep(random.randint(1, 3))
    print(f"{os.getpid()} finished printing")


def task3():
    print(f"{os.getpid()} starts printing")
    time.sleep(random.randint(1, 3))
    print(f"{os.getpid()} finished printing")


if __name__ == '__main__':
    p1 = Process(target=task1)
    p2 = Process(target=task2)
    p3 = Process(target=task3)

    p1.start()
    p2.start()
    p3.start()

# Right now all the processes compete for the printer concurrently.
# Concurrency favors efficiency, but our current requirement is: order first.
# When several processes compete for one resource and order must be guaranteed: serialize, one at a time.


# Version 2
from multiprocessing import Process
import time
import random
import os


def task1(p):
    print(f"{p} starts printing")
    time.sleep(random.randint(1, 3))
    print(f"{p} finished printing")


def task2(p):
    print(f"{p} starts printing")
    time.sleep(random.randint(1, 3))
    print(f"{p} finished printing")


def task3(p):
    print(f"{p} starts printing")
    time.sleep(random.randint(1, 3))
    print(f"{p} finished printing")


if __name__ == '__main__':
    p1 = Process(target=task1, args=("p1",))
    p2 = Process(target=task2, args=("p2",))
    p3 = Process(target=task3, args=("p3",))

    p1.start()
    p1.join()
    p2.start()
    p2.join()
    p3.start()
    p3.join()

# join solves the serialization problem and guarantees order, but who goes first is fixed in the code.
# That is unreasonable: when competing for the same resource it should be first come, first served, to be fair.


# Version 3
from multiprocessing import Process
from multiprocessing import Lock
import time
import random
import os


def task1(p, lock):
    # lock.acquire() must not be called twice in a row by the same process: the lock is not reentrant, so a second acquire blocks forever
    lock.acquire()
    print(f"{p} starts printing")
    time.sleep(random.randint(1, 3))
    print(f"{p} finished printing")
    lock.release()


def task2(p, lock):
    lock.acquire()
    print(f"{p} starts printing")
    time.sleep(random.randint(1, 3))
    print(f"{p} finished printing")
    lock.release()


def task3(p, lock):
    lock.acquire()
    print(f"{p} starts printing")
    time.sleep(random.randint(1, 3))
    print(f"{p} finished printing")
    lock.release()


def task4(p, lock):
    lock.acquire()
    print(f"{p} starts printing")
    time.sleep(random.randint(1, 3))
    print(f"{p} finished printing")
    lock.release()


if __name__ == '__main__':
    mutex = Lock()
    p1 = Process(target=task1, args=("p1", mutex))
    p2 = Process(target=task2, args=("p2", mutex))
    p3 = Process(target=task3, args=("p3", mutex))
    p4 = Process(target=task4, args=("p4", mutex))

    p1.start()
    p2.start()
    p3.start()
    p4.start()

Communication between processes

File-based communication

Processes are isolated from each other in memory, but not through files on disk.

We use a ticket-grabbing system to illustrate this.

Disadvantages:

  • Low efficiency (every exchange goes through disk I/O)
  • Locking by hand is troublesome and easily leads to deadlock
Ticket-grabbing system:
1. Query first: check the number of remaining tickets; this is concurrent.
2. Purchase: send a request to the server; the server receives it, decrements the ticket count in the backend and returns the result to the front end; this must be serial.

from multiprocessing import Process
import random
import json
import os
import time


def search():
    time.sleep(random.randint(1, 3))
    with open("ticket.json", encoding="utf-8") as f1:
        dic = json.load(f1)
        print(f"{os.getpid()} checked the tickets, {dic['count']} left")


def paid():
    with open("ticket.json", encoding="utf-8") as f:
        dic = json.load(f)
    if dic["count"] > 0:
        dic["count"] -= 1
        time.sleep(random.randint(1, 3))
        with open("ticket.json", encoding="utf-8", mode="w") as f1:
            json.dump(dic, f1)
        print(f"{os.getpid()} purchase succeeded")


def task():
    search()
    paid()


if __name__ == '__main__':
    for i in range(5):
        p = Process(target=task)
        p.start()
When several processes compete for one piece of data and the data must stay safe, you have to serialize.
To serialize the purchase step, we have to add a lock.

from multiprocessing import Process
from multiprocessing import Lock
import random
import json
import os
import time


def search():
    time.sleep(random.randint(1,3))
    with open("ticket.json", encoding="utf-8")as f1:
        dic = json.load(f1)
        print(f"{os.getpid()} checked the tickets, {dic['count']} left")


def paid(lock):
    # lock.acquire()
    with open("ticket.json", encoding="utf-8")as f:
        dic = json.load(f)
    if dic["count"] > 0:
        dic["count"] -= 1
        time.sleep(random.randint(1,3))
        with open("ticket.json", encoding="utf-8", mode="w")as f1:
            json.dump(dic,f1)
        print(f"{os.getpid()} purchase succeeded")
    # lock.release()


def task(lock):
    search()
    lock.acquire()
    paid(lock)
    lock.release()


if __name__ == '__main__':
    mutex = Lock()
    for i in range(15):
        p = Process(target=task, args=(mutex,))
        p.start()
When many processes compete for one resource (piece of data) and you must guarantee order (data safety), you must serialize.
Mutex: guarantees both fairness of order and the safety of the data.
Queue-based communication

Queue: a queue can be understood as a container that carries data.

Characteristic of a queue: it always keeps the data in first-in, first-out order. -- FIFO

  • Introduction
from multiprocessing import Queue
q = Queue()


def func():
    print("in func")


q.put(1)
q.put("alex")
q.put([1, 2, 3])
q.put(func)

print(q.get())
print(q.get())
print(q.get())
q.get()()
  • Characteristics
from multiprocessing import Queue
q = Queue(4)  # you can set the queue's maximum length (maxsize)


def func():
    print("in func")


q.put(1)
q.put("alex")
q.put([1, 2, 3])
q.put(func)
q.put(555, block=False)  # when the queue is full, put blocks; with block=False it raises queue.Full instead (so the lines below only run if you remove this line)

print(q.get())
print(q.get())
print(q.get(timeout=3))  # block for up to 3 seconds; if still blocked after 3 seconds, raise queue.Empty
q.get()()
print(q.get(block=False))  # when the queue is empty, get blocks until some process puts data; with block=False it raises queue.Empty instead
Pipe-based communication


Origin www.cnblogs.com/NiceSnake/p/11432169.html