Python multi-process exception handling

1. Catching exceptions

Exception hierarchy reference: https://cloud.tencent.com/developer/section/1366492
Reference: https://zhuanlan.zhihu.com/p/321408784

try:
    <statements>
except Exception as e:
    print('description of the exception', e)
1 Catch all exceptions
This includes keyboard interrupts and program-exit requests (sys.exit() can no longer exit the program, because the SystemExit exception is caught), so use it with caution; see the sketch after the template below.

try:
    <statements>
except:
    print('description of the exception')
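
A minimal sketch (not from the original) of why the bare except is risky: it also swallows SystemExit, so sys.exit() no longer terminates the program, whereas except Exception would not catch it:

import sys

try:
    sys.exit(0)          # raises SystemExit
except:                  # bare except also catches SystemExit
    print("SystemExit was swallowed; the program keeps running")

print("still alive")     # 'except Exception' would have let SystemExit propagate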
2 Catch a specified exception
try:
    <statements>
except <ExceptionName>:
    print('description of the exception')
Catch-all exception:

try:
    <statements>
except Exception:
    print('description of the exception')
An example:

try:
    f = open("file-not-exists", "r")
except IOError as e:
    print("open exception: %s: %s" %(e.errno, e.strerror))

2. Capture and determine the exception type
Reference: https://blog.csdn.net/weixin_35757704/article/details/128490868

The complete try/except/else/finally structure is:
try:
    raise Exception("wa")
except:
    print("an error occurred")
else:
    print("no error occurred")
finally:
    print("shutting down")

When an exception occurs, the exception information can be retrieved with sys.exc_info().

import sys
import os

try:
    raise RuntimeError('there is an error here')
except Exception as e:
    except_type, except_value, except_traceback = sys.exc_info()
    except_file = os.path.split(except_traceback.tb_frame.f_code.co_filename)[1]
    exc_dict = {
        "error type": except_type,
        "error message": except_value,
        "error file": except_file,
        "error line": except_traceback.tb_lineno,
    }
    print(exc_dict)

Raising an exception explicitly

raise Exception("0 cannot be used as the denominator")
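
A minimal sketch of raising an exception explicitly; the divide helper and the choice of ValueError are illustrative, not from the original:

def divide(a, b):
    if b == 0:
        # raise explicitly instead of waiting for ZeroDivisionError to surface later
        raise ValueError("0 cannot be used as the denominator")
    return a / b

try:
    divide(1, 0)
except ValueError as e:
    print("caught:", e)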

Re-raising the current exception in Python with a bare raise statement:

>>> def e():
...     try:
...         int('N/A')
...     except Exception as e:
...         # do some handling such as logging, then re-raise the exception
...         raise
... 
>>> e()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 3, in e
ValueError: invalid literal for int() with base 10: 'N/A'
>>> 

3. Exiting the program

https://zhuanlan.zhihu.com/p/426492312

You may have heard of several ways to exit a Python process: exit(), sys.exit(), os._exit(), and quit(). All of them terminate the process, so what are the differences between them?

The following table compares the four methods.

Function     Typical scenario      Raises SystemExit
exit()       interactive shell     yes
quit()       interactive shell     yes
sys.exit()   main process          yes
os._exit()   child process         no

exit() and quit() are meant for the interactive environment; both are available as built-in names and both raise SystemExit.
os._exit() does not raise SystemExit. Once it is executed, the process exits immediately, so there is no chance to do any cleanup work.
sys.exit() raises SystemExit, which you can catch in order to do some cleanup work before exiting.
sys.exit() is the most common and formal way to exit a Python program. You may pass an exit code when calling it: 0 (the default) means a normal exit, anything else means an abnormal exit.

import sys

try:
    sys.exit(3)
except SystemExit as e:
    print(f'process exiting, exit code is {e.code}')

Use e.code to obtain the exit code; the program can then perform the appropriate cleanup work depending on its value.
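
For contrast, a minimal sketch (assuming the default start method and a __main__ guard) of os._exit() in a child process; since no SystemExit is raised, even the finally block never runs:

import os
import multiprocessing

def child():
    try:
        os._exit(1)          # terminates the child immediately, no exception raised
    finally:
        print("cleanup")     # never executed: os._exit skips all cleanup

if __name__ == "__main__":
    p = multiprocessing.Process(target=child)
    p.start()
    p.join()
    print("child exit code:", p.exitcode)   # 1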

Ways to terminate code execution in Python:
Method 1:
import sys
sys.exit()  # exits the current program without restarting the shell
Method 2:
exit()  # exits the current program and restarts the shell
Method 3:
quit()  # same effect as exit(): exit and restart the shell

4. Sharing variables between processes

Value and Array share data through shared memory,
while Manager shares data through a service process.
With the 'spawn' start method, data is copied into each child process.

import multiprocessing
from multiprocessing import Manager

num = multiprocessing.Value('d', 1.0)        # shared double, initial value 1.0
arr = multiprocessing.Array('i', range(10))  # shared int array, initial values 0..9
p = multiprocessing.Process(target=func1, args=(num, arr))

manager = Manager()
list1 = manager.list([1, 2, 3, 4, 5])
dict1 = manager.dict()
array1 = manager.Array('i', range(10))
value1 = manager.Value('i', 1)
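
A runnable sketch of the Manager (service-process) approach; the worker function and the use of os.getpid() are illustrative, not from the original:

import os
from multiprocessing import Manager, Process

def worker(shared_list, shared_dict):
    # the proxies forward each operation to the manager's service process
    shared_list.append(os.getpid())
    shared_dict[os.getpid()] = "done"

if __name__ == "__main__":
    with Manager() as manager:
        shared_list = manager.list()
        shared_dict = manager.dict()
        procs = [Process(target=worker, args=(shared_list, shared_dict)) for _ in range(3)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        print(list(shared_list))   # the three worker PIDs
        print(dict(shared_dict))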

Shared variables between processes
If you want different processes to read and write the same variable, it needs a special declaration. multiprocessing provides two mechanisms: shared memory, and a service process. Shared memory supports only two data structures, Value and Array.
The shared count accessed by the child process and the main process maps to the same memory address. A few points to note:

  • 1. When a multiprocessing.Value object is used with Process, it can be used either as a global variable (as above) or passed in as an argument. When used with Pool, however, it can only be used as a global variable; passing it as an argument raises RuntimeError: Synchronized objects should only be shared between processes through inheritance.
  • 2. When multiple processes read and write a shared variable, check whether the operation is process-safe. The cumulative counter above is a single statement, but it involves a read, a write, and a temporary value local to the process, so it is not process-safe; accumulating from several processes produces incorrect results. The statement cls.count += 1 needs a lock, either an external lock or the Value's own get_lock() method (see the sketch after this list).
  • 3. Shared memory supports only a limited set of data structures. The other way to share variables is to let a service process manage them; other processes operate on the shared variables by interacting with that service process. This approach supports types such as lists and dictionaries and can share variables across machines, but it is slower than shared memory. It can also be passed as an argument to Pool workers. As before, operations that are not process-safe still need locking.
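
A minimal sketch of the locking point in item 2, using Value and its get_lock() method; the counter and the process count are illustrative:

import multiprocessing

def add_many(counter, n):
    for _ in range(n):
        # counter.value += 1 is a read-modify-write, so it is not process-safe on its own
        with counter.get_lock():
            counter.value += 1

if __name__ == "__main__":
    counter = multiprocessing.Value("i", 0)
    procs = [multiprocessing.Process(target=add_many, args=(counter, 10000)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(counter.value)   # 40000 with the lock; typically less without it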

5. Process exception capture (non-blocking)

1. Exceptions raised inside a multiprocessing.Process cannot be caught in the parent:

p = multiprocessing.Process(target=taskimg_execute, args=(start_img,))
p.start()
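
A short sketch of this point: a try/except around start()/join() in the parent never sees the child's exception; the failing task function is illustrative, and only the child's non-zero exit code signals the failure:

import multiprocessing

def task():
    raise RuntimeError("boom inside the child process")

if __name__ == "__main__":
    p = multiprocessing.Process(target=task)
    try:
        p.start()
        p.join()
    except RuntimeError:
        print("never reached: the exception stays in the child")
    print("child exit code:", p.exitcode)   # non-zero, e.g. 1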

2. A Pool cannot propagate the exception directly either; it can only be received through a callback function:

def throw_error(e):
    print("error callback: ", e)
    return {'code': 440, 'msg': 'process creation exception: ' + str(e), 'data': {'status': 400}}

pool = multiprocessing.Pool(processes=3)
pool.apply_async(taskimg_execute, (start_img,), error_callback=throw_error)
pool.close()
pool.join()  # error_callback fires in the parent once the failed task finishes

6. Starting new processes inside processes started by a multiprocessing Pool

As we all know, multiprocessing is Python's built-in multi-process library. A problem with its Pool is that once it has started its worker processes, it is difficult to start new processes inside them, because doing so raises AssertionError: daemonic processes are not allowed to have children. Reference: https://blog.csdn.net/nirendao/article/details/128945428
If you do not use the multiprocessing Pool and just use Process, it is simpler: set the daemon parameter to False when creating the Process, or omit it altogether, since it defaults to False.
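
A minimal sketch of the plain-Process case: with daemon left at its default of False, the child process may start its own child (the function names are illustrative):

import multiprocessing

def grandchild():
    print("grandchild running")

def child():
    # allowed because this process is not daemonic
    gp = multiprocessing.Process(target=grandchild)
    gp.start()
    gp.join()

if __name__ == "__main__":
    p = multiprocessing.Process(target=child)   # daemon defaults to False
    p.start()
    p.join()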
