Python's multi-process and multi-threaded usage scenarios (computation-intensive, IO-intensive)

Regarding multi-processing and multi-threading, many people wonder: what is the difference between the two, and why do both exist?
The two examples below compare them:

Computation-intensive: multi-threaded concurrency in a single process VS parallelism across multiple processes

from threading import Thread
from multiprocessing import Process
import time
import random

def task():
    count = 0
    for i in range(200000000):  # The larger the loop count, the more pronounced the gap
        count += 1

if __name__ == "__main__":

    # Multi-process concurrency/parallelism
    start_time = time.time()
    l1 = []
    for i in range(5):
        p = Process(target=task)
        p.start()
        l1.append(p)

    for p in l1:
        p.join()
    print(f"Elapsed: {time.time()-start_time}")  # Elapsed: 9.980311870574951

    # Multi-threaded concurrency (commented out; run separately from the block above)
    # start_time = time.time()
    # l1 = []
    # for i in range(5):
    #     p = Thread(target=task)
    #     p.start()
    #     l1.append(p)
    #
    # for p in l1:
    #     p.join()
    # print(f"Elapsed: {time.time()-start_time}")  # Elapsed: 38.803406953811646

Summary: for computation-intensive work, multi-process parallelism is more efficient.
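The gap comes from CPython's Global Interpreter Lock (GIL): only one thread executes Python bytecode at a time, so CPU-bound threads cannot run in parallel, while separate processes each get their own interpreter and GIL. As a minimal sketch (not from the original post, and with a smaller loop count so it finishes quickly), the same comparison can be written with concurrent.futures, which manages the worker pool for you:

```python
import time
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def task(n):
    # CPU-bound work: pure Python counting, so the GIL serializes threads
    count = 0
    for _ in range(n):
        count += 1
    return count

def timed(executor_cls, n_tasks, n):
    # Run n_tasks copies of task(n) in the given executor and time them
    start = time.time()
    with executor_cls(max_workers=n_tasks) as ex:
        results = list(ex.map(task, [n] * n_tasks))
    return time.time() - start, results

if __name__ == "__main__":
    proc_t, _ = timed(ProcessPoolExecutor, 5, 2_000_000)
    thread_t, _ = timed(ThreadPoolExecutor, 5, 2_000_000)
    print(f"Processes: {proc_t:.2f}s, Threads: {thread_t:.2f}s")
```

On a multi-core machine the process pool should finish noticeably faster, mirroring the hand-rolled Process/Thread timings above.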

Now let's look at IO-intensive work: multi-threaded concurrency in a single process VS multi-process concurrency:

from threading import Thread
from multiprocessing import Process
import time
import random

def task():
    time.sleep(random.randint(1, 3))  # Simulate blocking IO

if __name__ == "__main__":

    # Multi-process concurrency/parallelism
    start_time = time.time()
    l1 = []
    for i in range(1000):
        p = Process(target=task)
        p.start()
        l1.append(p)

    for p in l1:
        p.join()
    print(f"Elapsed: {time.time()-start_time}")  # Elapsed: 13.508974075317383

    # Multi-threaded concurrency
    start_time = time.time()
    l1 = []
    for i in range(1000):
        p = Thread(target=task)
        p.start()
        l1.append(p)

    for p in l1:
        p.join()
    print(f"Elapsed: {time.time()-start_time}")  # Elapsed: 3.1451008319854736

Summary: for IO-intensive work, multi-threaded concurrency in a single process is more efficient. While a thread sleeps on IO it releases the GIL, so threads overlap their waiting, and threads are far cheaper to create than processes.

Origin blog.csdn.net/m0_50481455/article/details/113919330