[Python] process

Table of contents

1. Process creation and destruction

2. Global data is not shared between processes

3. Message queue

4. Process pool 


1. Process creation and destruction

The multiprocessing module provides the Process class:

Process(group, target, name, args, kwargs)

  • group: the process group; should always be None
  • target: the target function to execute
  • name: the process name
  • args: positional arguments passed to the target function as a tuple
  • kwargs: keyword arguments passed to the target function as a dictionary

Methods available on a Process instance:

        start() : Start the child process

        join(timeout) : Wait for the child process to finish (an optional timeout limits how long to wait)

        terminate() : Terminate the child process immediately

pid : current process id

ppid : parent process id

multiprocessing.current_process() returns the current process object

multiprocessing.current_process().pid  ==  os.getpid()

# Import the process module
import multiprocessing
import time
import os

def child_process(name, age):
    while True:
        # Inspect the current process
        current_process = multiprocessing.current_process()
        print('This is a child process, pid=', current_process.pid, ', ppid=', os.getppid())
        for i in range(5):
            print(f'The child process created by {name}, a {age}-year-old programmer, is running...')
            time.sleep(1)
            # os.kill(os.getppid(), 9)            # kill the parent process; the child process exits directly
        os.kill(current_process.pid, 9)     # kill the child process; the parent process does not exit

if __name__ == '__main__':
    # Create the child process
    # process = multiprocessing.Process(target=child_process, name='ChildPro', args=('张三', 18))
    process = multiprocessing.Process(target=child_process, name='ChildPro',
                                      kwargs={'age': 18, 'name': '张三'})
                                    # Passing parameters as a dictionary binds them by name, so the order can change
    # Start the child process
    process.start()

    cnt = 0
    while True:
        print(f'{cnt}: This is the parent process, pid=', multiprocessing.current_process().pid)
        time.sleep(1)
        cnt += 1
        if cnt == 3:
            process.terminate() # terminate the child process immediately
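
The example above never calls join(). As a complement, here is a minimal sketch (the worker function and its parameters are invented for this illustration) showing args passed as a tuple, join() with a timeout, and the pid accessors:

import multiprocessing
import time
import os

def worker(name, delay):
    # Child process: report its own pid and its parent's pid
    print(f'worker {name}: pid={os.getpid()}, ppid={os.getppid()}')
    time.sleep(delay)

if __name__ == '__main__':
    # args passes the parameters as a tuple, in positional order
    process = multiprocessing.Process(target=worker, name='Worker-1', args=('demo', 5))
    process.start()

    print('parent pid:', multiprocessing.current_process().pid)  # same value as os.getpid()
    process.join(timeout=2)      # wait at most 2 seconds for the child to finish
    if process.is_alive():       # still running after the timeout
        process.terminate()      # stop it immediately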

2. Global data is not shared between processes

import multiprocessing
import time
import os

# Define a global variable
my_list = list()

def ChildProcess():
    for i in range(5):
        print(f'{i}: This is a child process, pid=', os.getpid(), ', ppid=', os.getppid())
        my_list.append(i)
        time.sleep(1)

def Run():
    print(my_list)  # prints [] because each process has its own copy of the global list

if __name__ == '__main__':
    child_process = multiprocessing.Process(target=ChildProcess)
    run_process = multiprocessing.Process(target=Run)

    child_process.start()

    # The main process waits for the child process to finish before continuing
    child_process.join()
    run_process.start()

3. Message queue

        Processes cannot communicate with each other directly; inter-process communication can be implemented with a message queue.

import multiprocessing
import time

# Write data
def WriteProcess(queue):
    for i in range(10):
        if queue.full():
            print('the queue is full')
            break
        queue.put(i)
        time.sleep(1)
        print(f'{i} enqueued')

# Read data
def ReadProcess(queue):
    while True:
        if queue.qsize() == 0:   # note: qsize() may raise NotImplementedError on macOS; empty() is an alternative
            print('the queue is empty')
            break
        value = queue.get()
        print(value, end=' ')

if __name__ == '__main__':
    # Create the message queue
    queue = multiprocessing.Queue(5) # capacity of 5

    # Create the processes
    write_process = multiprocessing.Process(target=WriteProcess, args=(queue,))
    read_process = multiprocessing.Process(target=ReadProcess, args=(queue,))

    write_process.start()
    write_process.join()
    read_process.start()

When creating the Queue object, if no capacity is specified (q = multiprocessing.Queue()) or the capacity is set to a negative value, there is no upper limit on the number of messages the queue can hold (until memory is exhausted).
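
A small sketch of the unbounded case (the message count and timeout below are arbitrary): put() never blocks because there is no capacity limit, and get(timeout=...) raises queue.Empty once the queue has been drained:

import multiprocessing
import queue   # standard-library module that defines the Empty exception raised on timeout

if __name__ == '__main__':
    # No capacity argument: the queue accepts messages until memory runs out
    unbounded = multiprocessing.Queue()

    for i in range(1000):
        unbounded.put(i)          # full() stays False, so put() never blocks

    # Drain the queue; get(timeout=...) raises queue.Empty when nothing is left
    count = 0
    try:
        while True:
            unbounded.get(timeout=0.1)
            count += 1
    except queue.Empty:
        print(f'received {count} messages')   # 1000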

4. Process pool 

        The process pool creates processes on demand according to the workload, creating as few processes as possible and reusing the processes in the pool to complete multiple tasks.

        When only a few child processes are needed, they can be created directly with multiprocessing.Process; when a large number of child processes must be created, the Pool class provided by the multiprocessing module is more convenient.

        When a Pool is initialized, its process capacity can be set. When a new request is submitted to the Pool, a new process is created to execute it if the pool is not yet full; if the pool is already full, the request waits until a process in the pool becomes free.

        apply(func, args, kwds) : synchronous; the call blocks until the task has finished

        apply_async(func, args, kwds) : asynchronous; the call returns immediately without blocking

        Synchronous execution of the process pool: one task must finish before the next one starts

import multiprocessing
import time
import os

def Task():
    print(f'This is a task, executed by: {os.getpid()}')
    time.sleep(0.5)

if __name__ == '__main__':
    pool = multiprocessing.Pool(3)  # set the pool capacity to 3

    for i in range(10):
        pool.apply(Task)

        Asynchronous execution of the process pool: the pool does not wait for each task to finish, so multiple tasks can run at the same time

import multiprocessing
import time
import os

def Task():
    print(f'This is a task, executed by: {os.getpid()}')
    time.sleep(0.5)

if __name__ == '__main__':
    pool = multiprocessing.Pool(3)  # set the pool capacity to 3

    for i in range(10):
        # pool.apply(Task)        # synchronous process pool
        pool.apply_async(Task)  # asynchronous process pool

    pool.close()    # close the pool: no new tasks can be submitted
    pool.join()     # without this wait, the main process would exit immediately and the pool would be terminated right away
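
Both examples above discard the tasks' return values. apply_async() returns an AsyncResult object whose get() method blocks until the corresponding task has finished; a short sketch (the Square task is invented for this illustration):

import multiprocessing
import os

def Square(x):
    print(f'computing {x} * {x} in process {os.getpid()}')
    return x * x

if __name__ == '__main__':
    pool = multiprocessing.Pool(3)

    # Each apply_async() call returns an AsyncResult handle immediately
    results = [pool.apply_async(Square, args=(i,)) for i in range(10)]

    pool.close()
    pool.join()

    # get() blocks until the task is done and yields the function's return value
    print([r.get() for r in results])   # [0, 1, 4, 9, ..., 81]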
