1. daemon process
Definition: a daemon process is a process that serves another process; it lives and dies with the process it serves.
Example: if b is a daemon of a, then when a finishes, b finishes as well. For instance, QQ receives a video file and opens a child process to download it; if you quit QQ, there is no need for the download task to keep running.
Scenario: the parent process hands a task to a child process; if the parent process ends before the task is complete, there is no point in the child process continuing.
from multiprocessing import Process
import time

# the concubine's life
def task():
    print("entered the palace .....")
    time.sleep(50)
    print("the concubine died ......")

if __name__ == '__main__':
    # Kangxi ascends the throne
    print("ascended the throne .....")
    # take a concubine
    p = Process(target=task)
    # daemon must be set before the process is started
    p.daemon = True
    p.start()
    # Kangxi passes away
    time.sleep(3)
    print("End of story!")
2. mutex
Definition: a mutual exclusion lock; if a resource is locked, other processes cannot execute the locked code.
Note: the lock does not lock the resource itself; it prevents other processes from executing the section of code that you locked.
Why lock: concurrency brings competition for resources; for example, when multiple processes operate on the same resource at the same time, the data can become corrupted.
Solutions: 1. Add join directly.
              Drawbacks: 1. the originally concurrent tasks become serial; data corruption is avoided, but efficiency drops, so there was no point in opening child processes at all
                         2. multiple processes should compete fairly, but join fixes the execution order in advance, which is unreasonable
           2. Add a mutex on the shared resource. This keeps the competition fair: whoever grabs the lock first executes first, and the parent process is free to do other things.
Difference between a lock and join:
join: fixes the execution order, makes the parent process wait for the child, and turns the whole task serial
lock: keeps the competition fair; whoever grabs the lock first executes first. A lock can cover any amount of code, even a single line, so you can adjust its granularity yourself.
Granularity: the more code a lock covers, the coarser the granularity and the lower the efficiency, and vice versa.
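As an illustration of granularity (a sketch, not from the original notes), the code below updates the same shared counter under a coarse-grained lock (the whole loop is locked, so the processes run fully serially) and a fine-grained lock (only the shared increment is locked, so processes can interleave between iterations). Both end with the same correct total; only the amount of overlap differs.

```python
from multiprocessing import Process, Lock, Value

def coarse(counter, lock):
    # coarse granularity: the lock covers the whole loop,
    # so the processes run strictly one after another
    with lock:
        for _ in range(1000):
            counter.value += 1

def fine(counter, lock):
    # fine granularity: only the shared increment is locked,
    # so processes can interleave between iterations
    for _ in range(1000):
        with lock:
            counter.value += 1

if __name__ == '__main__':
    for worker in (coarse, fine):
        counter = Value('i', 0)  # shared integer, starts at 0
        lock = Lock()
        ps = [Process(target=worker, args=(counter, lock)) for _ in range(4)]
        for p in ps:
            p.start()
        for p in ps:
            p.join()
        print(worker.__name__, counter.value)  # 4000 in both cases
```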
Notes: 1. every acquire must be paired with a release; do not acquire a lock twice in a row, or the program will deadlock and cannot proceed
       2. to keep the data safe, make sure all processes use the same lock
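Note 1 can be enforced automatically: a multiprocessing Lock is a context manager, so `with lock:` acquires on entry and always releases on exit, even if the locked code raises an exception. A minimal sketch (the task and process names here are illustrative):

```python
from multiprocessing import Process, Lock
import time

def task(name, lock):
    # "with lock" pairs acquire and release for us: the lock is
    # released on exit even if the body raises an exception
    with lock:
        print(name, "start")
        time.sleep(0.1)
        print(name, "end")

if __name__ == '__main__':
    lock = Lock()
    for name in ("p1", "p2"):
        Process(target=task, args=(name, lock)).start()
```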
from multiprocessing import Process, Lock
import time, random

def task1(lock):
    # acquire the lock before starting; works like an if check
    # that blocks until the lock is free
    lock.acquire()
    print("hello iam jerry")
    time.sleep(random.randint(0, 2))
    print("gender is boy")
    time.sleep(random.randint(0, 2))
    print("age is 15")
    # unlock when done
    lock.release()

def task2(lock):
    lock.acquire()
    print("hello iam owen")
    time.sleep(random.randint(0, 2))
    print("gender is girl")
    time.sleep(random.randint(0, 2))
    print("age is 48")
    lock.release()

def task3(lock):
    lock.acquire()
    print("hello iam jason")
    time.sleep(random.randint(0, 2))
    print("gender is women")
    time.sleep(random.randint(0, 2))
    print("age is 26")
    lock.release()

if __name__ == '__main__':
    lock = Lock()
    p1 = Process(target=task1, args=(lock,))
    p2 = Process(target=task2, args=(lock,))
    p3 = Process(target=task3, args=(lock,))
    p1.start()
    # p1.join()
    p2.start()
    # p2.join()
    p3.start()
    # p3.join()
    # print("End of story!")

# pseudo-code of how a lock works:
# if my_lock == False:
#     my_lock = True
#     ... locked code ...
#     my_lock = False   # unlock
3. IPC
Definition: IPC (inter-process communication) means passing data from one process to another. Process memory is isolated, so a process that wants to exchange data with another must use IPC.
Methods: 1. Pipe: one-way communication only; the data is binary
         2. File: create a shared file on the hard disk; there is almost no limit on the amount of data, but it is slow
         3. Socket: high programming complexity
         4. Shared memory:
            1. Manager: provides many data structures (list, dict, etc.) whose data is shared between processes
               Note: some of the data structures created by Manager are not locked, which may cause problems
from multiprocessing import Process, Manager, Lock
import time

def task(data, l):
    l.acquire()
    num = data["num"]
    # time.sleep(0.1)
    data["num"] = num - 1
    l.release()

if __name__ == '__main__':
    # have the Manager open a shared dict
    m = Manager()
    data = m.dict({"num": 10})
    l = Lock()
    for i in range(10):
        p = Process(target=task, args=(data, l))
        p.start()
    time.sleep(2)
    print(data)
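The pipe method from the IPC list above can be sketched with multiprocessing.Pipe. Note that Pipe() is actually duplex (two-way) by default; passing duplex=False gives the one-way pipe described in the notes, where one end can only receive and the other can only send:

```python
from multiprocessing import Process, Pipe

def child(send_conn):
    # the write end: the object is pickled to bytes and sent through
    send_conn.send("hello from child")
    send_conn.close()

if __name__ == '__main__':
    # duplex=False: recv_conn can only receive, send_conn can only send
    recv_conn, send_conn = Pipe(duplex=False)
    p = Process(target=child, args=(send_conn,))
    p.start()
    print(recv_conn.recv())  # hello from child
    p.join()
```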
2. Queue
Queue: a special data structure, FIFO (first in, first out), just like standing in line
Stack: LIFO (last in, first out), like a wardrobe, or a can of potato chips
Extension: nested function calls are executed with a stack, the call stack:
when a function is called it is pushed onto the stack, and when the function ends it is popped off
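A plain Python list can play the role of the call stack described above. This small demo (not from the original notes) records pushes and pops for a nested call and shows the LIFO order:

```python
# record how nested calls enter and leave the call stack
events = []

def inner():
    events.append("push inner")   # inner is called: pushed last
    events.append("pop inner")    # inner returns: popped first

def outer():
    events.append("push outer")   # outer is called first: pushed first
    inner()                       # the nested call goes on top of the stack
    events.append("pop outer")    # outer returns last: popped last

outer()
print(events)
# ['push outer', 'push inner', 'pop inner', 'pop outer']
```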
from multiprocessing import Queue

# if maxsize is not specified, the number of elements is unlimited
q = Queue(3)

# store elements
q.put("abc")
q.put("hhh")
q.put("kkk")
# if the queue is full, put blocks until someone takes data out
# and a slot becomes free
# q.put("ooo")

# take elements out
print(q.get())
# if the queue is empty, get blocks until someone puts new data in
print(q.get())
print(q.get())
# print(q.get())

# block indicates whether to wait; with block=False an exception is
# raised instead of blocking when the queue is empty (get) or full (put)
# q.get(block=True, timeout=2)
# q.put("123", block=False)
# timeout is the blocking time limit: if there is still no value (get)
# or no free slot (put) when it expires, an exception is raised;
# timeout only takes effect when block=True
4. producer-consumer model
Definition: a routine pattern for solving problems of producing data and processing data
The producer/consumer problem: the two sides process at unbalanced speeds, one fast and one slow, so one of them ends up waiting for the other
Idea of the solution:
Originally the two sides are coupled together: the consumer must wait for the producer to finish producing before it can start processing, and vice versa.
Now decouple them: one side is responsible for producing, the other for processing, and the data is not exchanged directly; instead both sides share a common container. The producer puts data into the container and the consumer takes it out. This solves the speed-imbalance problem: each side works at its own pace without waiting for the other.
from multiprocessing import Process, Queue
import time, random

# consumption task
def eat(q):
    for i in range(10):
        rose = q.get()
        time.sleep(random.randint(0, 2))
        print(rose, "finished!")

# production task
def make_rose(q):
    for i in range(10):
        # producing
        time.sleep(random.randint(0, 2))
        print("plate %s of green pepper pork is done!" % i)
        rose = "plate %s of green pepper pork" % i
        # put the finished data into the queue
        q.put(rose)

if __name__ == '__main__':
    # create a shared queue
    q = Queue()
    make_p = Process(target=make_rose, args=(q,))
    eat_p = Process(target=eat, args=(q,))
    make_p.start()
    eat_p.start()
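One limitation of the example above is that the consumer has to know in advance that exactly 10 items will be produced. A common variation (a sketch, not part of the original notes) is to have the producer put a sentinel value such as None into the queue when it is done, so the consumer can stop on its own:

```python
from multiprocessing import Process, Queue

def eat(q):
    # keep consuming until the producer's end marker arrives
    while True:
        rose = q.get()
        if rose is None:  # sentinel: production is finished
            break
        print(rose, "finished!")

def make_rose(q):
    for i in range(5):
        q.put("plate %s of green pepper pork" % i)
    q.put(None)  # tell the consumer there is nothing more to come

if __name__ == '__main__':
    q = Queue()
    make_p = Process(target=make_rose, args=(q,))
    eat_p = Process(target=eat, args=(q,))
    make_p.start()
    eat_p.start()
```

With one sentinel per consumer, the same idea scales to several consumer processes.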