Day 26: Parent and child processes, zombie and orphan processes, daemon processes, mutexes and semaphores

Running a .py program means starting a Python interpreter process, and any child process spawned from that program is itself another Python interpreter process.
Opening a .py file with the Python interpreter starts only one interpreter process: the interpreter starts up, reads the content of the .py file, and then executes that content.

A pid is the unique identifier of a process (task) in the operating system.

On Windows:
    List all processes and their pids from cmd: tasklist
    Look up a process by pid: tasklist | findstr (pid)
    Force-kill a process: taskkill /F /PID (pid)
On Linux:
    List all processes: ps aux
    Look up a process by pid: ps aux | grep (pid)
    Force-kill a process: kill -9 (pid)

Parent process and child process

from multiprocessing import Process
import os
import time

def task():
    print("parent pid: %s  child pid: %s" % (os.getppid(), os.getpid()))
    time.sleep(500)


if __name__ == '__main__':
    p = Process(target=task)
    p.start()

    print("main process pid: %s  pid of its parent: %s" % (os.getpid(), os.getppid()))

main process pid: 7272 (the pid of this script)     pid of its parent: 10208 (PyCharm's pid; when the script is launched from cmd on Windows, the parent shows up as the Python interpreter instead)
parent pid: 7272 (the pid of this script)       child pid: 15684 (the pid of the task child process)

Zombie processes and orphan processes (concepts that apply on Linux)
start() and join() have zombie-process cleanup built in

Zombie process
A zombie is a special data structure in the Linux operating system; every child process on Linux becomes a zombie when it dies.
Zombie process: after the child exits, its heavyweight resources (CPU, memory, open files) are released, but its process descriptor (e.g. its pid) is still kept in the system until the parent collects it.
Zombies only arise when the child ends normally while the parent keeps running. If the parent process is forcibly closed, the operating system removes all of the parent's finished children, so no zombies are left behind.

The harm of zombie processes
The number of pids in a system is limited. If the information held by zombie processes is never released, zombies accumulate until no pid is available and the system cannot create new processes.

How to deal with zombie processes on Linux
If the application's zombies are merely lagging behind, notify the parent process to reap its zombie children with the command: kill -CHLD (pid of the parent process)
If the application has no mechanism for reaping zombies, kill the parent process directly, so that the init process (pid 1) takes over all of the parent's children;
init then automatically reaps the zombies. [Command to kill a process: kill -9 (pid)]

Orphan process (harmless)
If a parent process exits while one or more of its children are still running, those children become orphan processes.
After the parent's death, orphans are adopted by the init process (pid 1), and init collects their exit status for them.

Daemon process (background knowledge)
A daemon process guards the main process's code, not the main process's whole life cycle:

as soon as the main process's code has finished running, the daemon process ends.
from multiprocessing import Process
import os, time


def task():
    print("process %s started" % os.getpid())
    time.sleep(10)
    print("process %s finished" % os.getpid())


if __name__ == '__main__':
    p = Process(target=task)
    p.daemon = True   # this line turns the child into a daemon: once the main process's code has run to completion, the child is terminated too, so the "finished" line is never printed
    p.start()
    print("main: %s" % os.getpid())
    time.sleep(3)

Case study

from multiprocessing import Process
import time


def foo():
    print(123)
    time.sleep(1)
    print("end123")


def bar():
    print(456)
    time.sleep(3)
    print("end456")


if __name__ == '__main__':
    p1 = Process(target=foo)
    p2 = Process(target=bar)

    p1.daemon = True
    p1.start()
    p2.start()
    print("main-------")
    
Possible outputs (the first occurs in the vast majority of runs; because p1 is a daemon, it is killed as soon as the main process's code finishes, so "end123" can never appear):
"""
main-------
456
end456
"""

"""
main-------
123
456
end456
"""

"""
123
main-------
456
end456
"""

Mutex and semaphore

Locking ensures that when multiple processes modify the same piece of data, only one task can modify it at a time, i.e. the modification becomes serial. This reduces speed but guarantees data safety.
Although sharing data through a file can achieve inter-process communication, there are two problems:
1. Low efficiency (the shared data lives in a file, i.e. on the hard disk).
2. You have to do the locking yourself.

A semaphore is a lock with multiple keys: it specifies how many processes may hold the lock concurrently at the same time (a mutex is the special case with a count of 1).

Mutex lock code implementation

db1.json
{"COUNT": 3}
# Simulated ticket-grabbing. Goal: use a mutex to make part of the work serial.
# ("Serial" here is not true serial execution: the CPU still runs the processes in
# parallel, but once one process grabs the lock, all the others block on it.)
from multiprocessing import Process
from multiprocessing import Lock  # calling Lock() returns a lock object; this lock is the mutex
import json
import os
import time


def check():
    with open("db1.json", mode="rt", encoding="utf-8") as f:
        time.sleep(1)        # simulate network latency
        dic = json.load(f)
        print("%s sees %s tickets left" % (os.getpid(), dic["COUNT"]))


def get():
    with open("db1.json", mode="rt", encoding="utf-8") as f:
        time.sleep(1)       # simulate network latency
        dic = json.load(f)

    if dic["COUNT"] > 0:
        dic["COUNT"] -= 1
        time.sleep(3)       # simulate network latency
        with open("db1.json", mode="wt", encoding="utf-8") as f:
            json.dump(dic, f)
            print("%s bought a ticket" % os.getpid())

    else:
        print("failed to buy a ticket")

def func(mutex):
    check()

    mutex.acquire()  # grab the lock
    get()
    mutex.release()  # release the lock
    # the CPU still schedules the processes in parallel, but once one process holds the lock, the others block on acquire

    # with mutex:    # style 2: the context manager does acquire/release automatically
    #     get()


if __name__ == '__main__':
    mutex = Lock()

    for i in range(10):
        p = Process(target=func, args=(mutex,))
        p.start()

    print("main")


Origin blog.csdn.net/Yosigo_/article/details/112913541