Zombie processes, orphan processes, mutexes, and inter-process communication

Zombie and orphan processes

Unix-like environments (Linux, macOS)

A parent process needs to wait for its child processes to finish, so the parent normally exits last.

The parent keeps track of its running children, and some time after a child ends, the parent reclaims (reaps) it.

Why doesn't the parent reap a child immediately after the child ends?

The parent and child run asynchronously, so the parent cannot detect the exact moment the child terminates.

If the child's in-memory resources were freed the instant it ended, the parent would have no way to check the child's exit status afterwards.

For this problem, Unix provides a mechanism:

After a child process ends, most of its resources (memory, file links, and so on) are released immediately, but a few pieces of information are retained: the process ID, the end time, and the exit status. These wait for the parent to check them, after which the process is reaped.

Zombie process

After a child process ends, and before it is reaped by its parent, it is in the zombie state.

Are zombie processes harmful?

If the parent never reaps its zombie children (via wait / waitpid) and a large number of them build up, they waste memory and use up process IDs (PIDs).

How to deal with zombie processes

If a parent spawns a large number of children but never reaps them, a large number of zombies accumulate. The solution is to kill the parent process directly; the zombies then become orphans and are reaped by the init process.
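
As a minimal sketch of the above (assuming a Unix-like system; os.fork is not available on Windows), the parent below deliberately sleeps before calling os.wait(), so for those few seconds the finished child sits in the zombie state and would show up as <defunct> in ps:

import os
import time

pid = os.fork()
if pid == 0:
    # Child: exit immediately.
    os._exit(0)
else:
    # Parent: while we sleep, the finished child stays a zombie,
    # keeping only its PID, end time, and exit status.
    time.sleep(10)
    reaped_pid, status = os.wait()   # reap the zombie
    print(f"reaped child {reaped_pid}, raw exit status {status}")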

Orphan processes

If the parent process ends for some reason while its child processes are still running, those children become orphan processes. The init process adopts every orphan, becomes its new parent, and reaps it when it ends.
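
A minimal sketch of an orphan, under the same Unix-only assumption: the parent exits first, the child keeps running, and os.getppid() in the child changes to the PID of the adopting init process (typically 1, though some systems use another reaper such as systemd):

import os
import time

pid = os.fork()
if pid == 0:
    # Child: outlive the parent.
    time.sleep(3)
    # The original parent is gone by now; init has adopted this process.
    print(f"child {os.getpid()} now has parent {os.getppid()}")
else:
    # Parent: exit immediately, orphaning the child.
    os._exit(0)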

Mutex

Scenario: three colleagues share one printer.

Simulate the three colleagues with three processes, and simulate the printer with output to the console.

Mutex:

A mutex lets us guarantee both fair ordering and data safety.

# Version 1
from multiprocessing import Process
import time
import random
import os


def task1():
    print(f"{os.getpid()} started printing")
    time.sleep(random.randint(1, 3))
    print(f"{os.getpid()} finished printing")


def task2():
    print(f"{os.getpid()} started printing")
    time.sleep(random.randint(1, 3))
    print(f"{os.getpid()} finished printing")


def task3():
    print(f"{os.getpid()} started printing")
    time.sleep(random.randint(1, 3))
    print(f"{os.getpid()} finished printing")


if __name__ == '__main__':
    p1 = Process(target=task1)
    p2 = Process(target=task2)
    p3 = Process(target=task3)

    p1.start()
    p2.start()
    p3.start()

"""
188412 started printing
188408 started printing
188388 started printing
188412 finished printing
188388 finished printing
188408 finished printing
"""

# Right now every process competes for the printer concurrently.
# Concurrency puts efficiency first, but our current requirement is: order first.
# When several processes compete for one resource, guaranteeing order means running serially, one at a time.
# Version 2
from multiprocessing import Process
import time
import random
import os


def task1():
    print(f"{os.getpid()} started printing")
    time.sleep(random.randint(1, 3))
    print(f"{os.getpid()} finished printing")


def task2():
    print(f"{os.getpid()} started printing")
    time.sleep(random.randint(1, 3))
    print(f"{os.getpid()} finished printing")


def task3():
    print(f"{os.getpid()} started printing")
    time.sleep(random.randint(1, 3))
    print(f"{os.getpid()} finished printing")


if __name__ == '__main__':
    p1 = Process(target=task1)
    p2 = Process(target=task2)
    p3 = Process(target=task3)

    p1.start()
    p1.join()
    p2.start()
    p2.join()
    p3.start()
    p3.join()

"""
160876 started printing
160876 finished printing
188264 started printing
188264 finished printing
188328 started printing
188328 finished printing
"""

# join solves the serialization problem and guarantees order, but who goes first is fixed in advance.
# That is unreasonable: when competing for the same resource, it should be first come, first served, to keep things fair.
# Version 3
from multiprocessing import Process
from multiprocessing import Lock
import time
import random


def task1(p, lock):
    lock.acquire()
    print(f"{p} started printing")
    time.sleep(random.randint(1, 3))
    print(f"{p} finished printing")
    lock.release()


def task2(p, lock):
    lock.acquire()
    print(f"{p} started printing")
    time.sleep(random.randint(1, 3))
    print(f"{p} finished printing")
    lock.release()


def task3(p, lock):
    lock.acquire()
    print(f"{p} started printing")
    time.sleep(random.randint(1, 3))
    print(f"{p} finished printing")
    lock.release()


if __name__ == '__main__':

    mutex = Lock()
    p1 = Process(target=task1, args=("p1", mutex))
    p2 = Process(target=task2, args=("p2", mutex))
    p3 = Process(target=task3, args=("p3", mutex))

    p1.start()
    p2.start()
    p3.start()

"""
p1 started printing
p1 finished printing
p2 started printing
p2 finished printing
p3 started printing
p3 finished printing
"""

The difference between Lock and join

In common

Both turn concurrent execution into serial execution, which guarantees ordering.

Difference

join fixes the order of execution in advance, while a lock lets the processes compete for the order, which keeps it fair (first come, first served).

Communication between processes

Processes are isolated from one another in memory, but they can share files on disk.
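
A small sketch of that isolation (counter and bump are names made up for this example): each process works on its own copy of the variable, so the child's change is never visible in the parent.

from multiprocessing import Process

counter = 100


def bump():
    global counter
    counter -= 1
    print(f"in the child: counter = {counter}")   # 99, but only in the child's memory


if __name__ == '__main__':
    p = Process(target=bump)
    p.start()
    p.join()
    print(f"in the parent: counter = {counter}")  # still 100: memory is isolated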

1. File-based communication

We will use a ticket-grabbing system to illustrate this.

# Ticket-grabbing system
# (assumes a ticket.json file next to the script, e.g. {"count": 3})
# 1. First, query the tickets and check how many remain.    concurrent
# 2. Then purchase: send a request to the server; the server receives it, decrements the count by 1 on the backend, and returns the result to the frontend.    serial
from multiprocessing import Process
import json
import time
import os
import random


def search():
    time.sleep(random.randint(1, 3))   # simulate network latency (the query step)
    with open("ticket.json", "r", encoding="utf-8") as f:
        dic = json.load(f)
        print(f"{os.getpid()} checked the ticket count, {dic['count']} remaining")


def paid():
    with open("ticket.json", "r", encoding="utf-8") as f:
        dic = json.load(f)
        if dic["count"] > 0:
            dic["count"] -= 1
            time.sleep(random.randint(1, 3))
            with open("ticket.json", "w", encoding="utf-8") as f1:
                json.dump(dic, f1)
            print(f"{os.getpid()} purchased successfully")


def task():
    search()
    paid()
    
if __name__ == '__main__':

    for i in range(6):
        p = Process(target=task)
        p.start()

# When multiple processes compete for the same data, keeping the data safe requires serializing access.
# To make the purchase step serial, we have to add a lock.
from multiprocessing import Process
from multiprocessing import Lock
import json
import time
import os
import random


def search():
    time.sleep(random.randint(1, 3))
    with open("ticket.json", encoding="utf-8") as f:
        dic = json.load(f)
        print(f"{os.getpid()} checked the ticket count, {dic['count']} remaining")


def paid():
    with open("ticket.json", encoding="utf-8") as f:
        dic = json.load(f)
    if dic["count"] > 0:
        dic["count"] -= 1
        time.sleep(random.randint(1, 3))
        with open("ticket.json", "w", encoding="utf-8") as f1:
            json.dump(dic, f1)
        print(f"{os.getpid()} purchased successfully")


def task(lock):
    search()
    lock.acquire()
    paid()
    lock.release()

if __name__ == '__main__':
    mutex = Lock()
    for i in range(6):
        p = Process(target=task, args=(mutex,))
        p.start()
"""
203496 checked the ticket count, 3 remaining
203512 checked the ticket count, 3 remaining
203496 purchased successfully
203504 checked the ticket count, 2 remaining
203480 checked the ticket count, 2 remaining
203488 checked the ticket count, 2 remaining
203520 checked the ticket count, 2 remaining
203512 purchased successfully
203504 purchased successfully
"""

# When many processes compete for one resource (piece of data), guaranteeing order (and data safety) requires serial access.
# Mutex: guarantees order fairly as well as data safety.
# File-based inter-process communication:
        # low efficiency
        # managing the lock yourself is tedious, and it is easy to end up with a deadlock

2. Queue-based communication
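
As a preview, a minimal sketch of queue-based communication using multiprocessing.Queue (producer and consumer are names made up for this sketch): the queue does the locking internally, so processes can exchange data safely without sharing a file.

from multiprocessing import Process, Queue


def producer(q):
    q.put("one ticket")        # hand a message to the other process


def consumer(q):
    print(q.get())             # blocks until a message arrives


if __name__ == '__main__':
    q = Queue()
    p1 = Process(target=producer, args=(q,))
    p2 = Process(target=consumer, args=(q,))
    p1.start()
    p2.start()
    p1.join()
    p2.join()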

3. Pipe-based communication
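
And a minimal sketch of pipe-based communication using multiprocessing.Pipe, which returns a pair of connection objects; whatever is sent on one end can be received on the other (child is a name made up for this sketch).

from multiprocessing import Process, Pipe


def child(conn):
    conn.send("hello from the child")   # write into the pipe
    conn.close()


if __name__ == '__main__':
    parent_conn, child_conn = Pipe()
    p = Process(target=child, args=(child_conn,))
    p.start()
    print(parent_conn.recv())           # read from the other end
    p.join()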
