Catching exceptions in Python multiprocessing

In ordinary Python multiprocessing code, the parent process is only responsible for distributing tasks to the child processes; whether a child process succeeds or fails is not something the parent pays attention to.
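For example, when a task submitted with apply_async raises an exception, the exception is stored on the AsyncResult and only re-raised when the parent calls .get(); if the parent never asks, the failure goes completely unnoticed. A minimal sketch of this default behavior (the work function and the 'table_1' name are just for illustration):

from multiprocessing import Pool

def work(name):
    # always fails, standing in for a task that can go wrong
    raise Exception(name)

if __name__ == '__main__':
    p = Pool(2)
    res = p.apply_async(work, args=('table_1',))
    p.close()
    p.join()
    # nothing has been printed so far: the failure only surfaces on .get()
    try:
        res.get()
    except Exception as e:
        print('caught: %s' % e)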

But in a production environment this is clearly not good enough. Through some research I found that the status of a child process can be obtained through callback methods, so by putting failed tasks back into a queue, a failed task can be retried. The code is as follows:

import queue
import random
import time
from multiprocessing import Pool

q = queue.Queue()
success_count = 0

# the actual task method
def long_time_task(table_name):
    rd = random.randint(1, 10)
    # when the task fails, raise the task name so the callback can capture it
    if rd % 2 == 0:
        raise Exception(table_name)
    print('ok=%s' % table_name)
    time.sleep(1)

# count successful tasks
def success(suc):
    global success_count
    success_count = success_count + 1

# capture failed tasks
def err(error):
    # the exception message is the task name, so put it back for a retry
    q.put(str(error))

if __name__ == '__main__':
    # initialize a pool with 4 worker processes
    p = Pool(4)
    lists = ['table_' + str(i) for i in range(1, 21)]
    lists_num = len(lists)
    # put all tasks into the queue
    for i in lists:
        q.put(i)

    # an empty queue does not necessarily mean all tasks are done: some may
    # still be running, so both conditions must hold before leaving the loop
    while not q.empty() or success_count != lists_num:
        if q.empty():
            time.sleep(1)
        else:
            p.apply_async(long_time_task, args=(q.get(),),
                          callback=success, error_callback=err)
    p.close()
    p.join()
    print('q.size=%d, success_count=%d' % (q.qsize(), success_count))
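Note that callback and error_callback are invoked in the parent process, in the pool's result-handler thread, which is why a plain queue.Queue in the parent's memory is enough here; the worker processes never touch q. One caveat with this pattern is that a task that keeps failing is re-queued forever. A sketch of a bounded-retry variant of the err callback above, assuming the same module-level q and the Exception(table_name) convention (MAX_RETRIES and the attempts dict are illustrative additions, not part of the original):

MAX_RETRIES = 3
attempts = {}

def err(error):
    # the exception message is the task name
    name = str(error)
    attempts[name] = attempts.get(name, 0) + 1
    if attempts[name] < MAX_RETRIES:
        q.put(name)  # re-queue for another attempt
    else:
        print('giving up on %s after %d attempts' % (name, attempts[name]))

With a cap like this, the exit condition of the main loop would also have to account for abandoned tasks, not just successes.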

 

Origin www.cnblogs.com/wangbin2188/p/12627365.html