Memo on Implementing Parallelism in Python

First write the work to be parallelized as a single-parameter function func, then create a multiprocessing.Pool with the desired number of worker processes, and distribute the inputs to func through map. A simple implementation looks like this.

from multiprocessing import Pool

def func(parameter):
    ...  # CPU-bound work on a single input

if __name__ == '__main__':
    parameters = [...]  # iterable of inputs, one element per task
    pool = Pool(4)  # set the pool size to 4 worker processes
    results = pool.map(func, parameters)  # distribute the inputs across the workers
    pool.close()  # stop accepting new tasks
    pool.join()   # wait for all workers to finish

The code above uses 4 cores for the computation. While it runs, top shows four identical Python processes, each with CPU usage of up to 100%. This multiprocessing-based implementation is suited to CPU-bound work.

Parallelizing a function that takes multiple parameters requires wrapping the arguments further; see https://www.rawidn.com/posts/Python-multiprocessing-for-multiple-arguments.html . A sketch of one approach follows.
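As a minimal sketch of one option (the function compute and its two parameters are made up for illustration), Pool.starmap unpacks each tuple of arguments into a separate call:

from multiprocessing import Pool

def compute(x, y):  # hypothetical two-parameter function
    return x * y

if __name__ == '__main__':
    arg_tuples = [(1, 2), (3, 4), (5, 6)]  # one tuple of arguments per task
    with Pool(4) as pool:
        results = pool.starmap(compute, arg_tuples)  # each tuple is unpacked into compute(x, y)
    print(results)  # [2, 12, 30]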

For I/O-bound work, the thread-based dummy submodule of multiprocessing can be used instead. Only the import changes; the rest of the code stays the same.

from multiprocessing.dummy import Pool

At runtime there is only one Python process, since the dummy pool uses threads rather than processes, but its CPU usage can exceed 100%.
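As a minimal sketch, assuming the I/O-bound tasks are simple HTTP downloads (the URLs and the fetch helper below are made up for illustration), the thread pool maps over them with the same interface as the process pool:

from multiprocessing.dummy import Pool  # thread-based pool with the same interface
from urllib.request import urlopen

def fetch(url):  # hypothetical I/O-bound task: download one page
    with urlopen(url) as response:
        return len(response.read())

urls = [
    'https://www.python.org',
    'https://docs.python.org/3/',
]
pool = Pool(4)  # 4 worker threads inside a single Python process
sizes = pool.map(fetch, urls)  # threads overlap while waiting on the network
pool.close()
pool.join()
print(sizes)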

More detailed instructions can be found at https://segmentfault.com/a/1190000000414339 .

Parallelism can also be implemented with the Parallel Python module, although I have not tested it; its usage is described at http://wiki.jikexueyuan.com/project/python-actual-combat/tutorial-25.html .
