Python network programming: mutex locks and inter-process communication

First, the mutex lock

Data is isolated between processes, but they share the same file system, so processes can communicate directly through files. The problem is that you then have to handle the locking yourself.

Note: the purpose of locking is to keep data safe: when multiple processes modify the same piece of data, only one is allowed to modify it at a time, that is, the changes are serialized. Yes, this is slow; we sacrifice speed for safety.

1. A small example, the toilet: when you use the toilet you definitely lock the door. Someone who comes along and sees the door locked waits outside; when you open the door and come out, the next person goes in.

from multiprocessing import Process, Lock
import os
import time

def work(mutex):
    mutex.acquire()
    print('task[%s] is using the toilet' % os.getpid())
    time.sleep(3)
    print('task[%s] is done with the toilet' % os.getpid())
    mutex.release()

if __name__ == '__main__':
    mutex = Lock()
    p1 = Process(target=work, args=(mutex,))
    p2 = Process(target=work, args=(mutex,))
    p3 = Process(target=work, args=(mutex,))
    p1.start()
    p2.start()
    p3.start()
    p1.join()
    p2.join()
    p3.join()
    print('main')

Second, simulating a rush for tickets (this also uses the mutex principle: the Lock mutex)

import json
import time
import random
import os
from multiprocessing import Process, Lock

# assumes a file named 'piao' exists containing e.g. {"count": 1}

def chakan():
    dic = json.load(open('piao',))                 # first check the tickets, i.e. open the file
    print('remaining tickets: %s' % dic['count'])  # view the remaining tickets

def buy():
    dic = json.load(open('piao',))
    if dic['count'] > 0:                   # if there are tickets left
        dic['count'] -= 1                  # change the value inside: minus 1
        time.sleep(random.randint(1, 3))   # the series of operations behind buying a ticket; simulate them with a random sleep
        json.dump(dic, open('piao', 'w'))
        print('%s bought a ticket successfully' % os.getpid())  # the pid of the process that got a ticket

def task(mutex):        # grab a ticket
    chakan()            # everyone can look at the same time, no lock needed
    mutex.acquire()     # lock
    buy()               # only one can buy at a time; the others wait for this one to finish
    mutex.release()     # release the lock

if __name__ == '__main__':
    mutex = Lock()
    for i in range(50):   # let 50 people look at the ticket count
        p = Process(target=task, args=(mutex,))
        p.start()

Third, other attributes of the Process object

p.daemon: daemon process (must be set before start()). If the parent process ends, the child p ends with it.

p.join(): the parent process waits for p to finish before moving on; the parent blocks in place while p keeps running in the background.

p.terminate(): force-close the process. (Make sure p has no child processes of its own when you close it; if it does, force-closing it will leave zombie processes. An analogy: if I kill you before you have dealt with your own children, there is nobody left to collect their corpses.)

p.is_alive(): closing a process does not shut it down immediately, so is_alive() checked right away may still report it as alive.

p.name: view the process name.

p.pid: view the process id.
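To make the list above concrete, here is a small sketch (not from the original post) that exercises name, pid, terminate and is_alive; the process name 'worker-1' is just an illustrative choice:

from multiprocessing import Process
import time

def work():
    time.sleep(3)

if __name__ == '__main__':
    p = Process(target=work, name='worker-1')
    p.start()
    print(p.name)          # worker-1
    print(p.pid)           # the child's process id
    p.terminate()          # ask the OS to kill the child
    print(p.is_alive())    # often still True: the termination has not taken effect yet
    time.sleep(1)
    print(p.is_alive())    # False once the child has actually exited
    p.join()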

A brief explanation of zombie processes:

A child process has finished running, but its parent process has not yet reaped it. At this point the child has not really exited and still occupies system resources; such a child process is called a zombie process.

Because the zombie's resources are never reclaimed, system resources are wasted, and too many zombie processes degrade system performance, so zombie processes should be avoided.

from multiprocessing import Process
import os
import time

def work():
    print('%s is working' % os.getpid())
    time.sleep(3)

if __name__ == '__main__':
    p1 = Process(target=work)
    p2 = Process(target=work)
    p3 = Process(target=work)
    p1.daemon = True        # daemon process (it guards its parent)
    # p2.daemon = True
    # p3.daemon = True      # when the main process dies the child dies too (the child does not get to run)
    p1.start()
    p2.start()
    p3.start()

    p3.join()
    p2.join()
    p1.join()               # multiple joins: the wait is just the longest single job, then the main program continues
    print('main')

    # ------ other attributes, just to know about ---------
    # p1.terminate()        # force-close the process
    # time.sleep(3)
    # print(p1.is_alive())  # check whether it is still alive
    # print(p1.name)        # view the process name
    # print(p1.pid)         # view the process id
    # print('main')

Three methods of inter-process communication (IPC):

  Method one: the queue (recommended)

  Processes are isolated from each other. To achieve inter-process communication (IPC), the multiprocessing module supports two forms: the queue and the pipe. Both are based on message passing.

1. Queue: a queue is like a pipeline; elements are FIFO (first in, first out).
Note that a queue lives in memory: when the processes exit, the queue is emptied. Also, a queue blocks by default.
2. Kinds of queue
There are several kinds of queue, most of them provided by the queue module:
queue.Queue()          # FIFO queue
queue.LifoQueue()      # LIFO queue
queue.PriorityQueue()  # priority queue
collections.deque()    # double-ended queue (this one lives in the collections module)
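As a quick illustration (not from the original post), a minimal sketch of the ordering each of these classes gives:

import queue
import collections

# FIFO: items come out in the order they were put in
q = queue.Queue()
for item in ('first', 'second', 'third'):
    q.put(item)
print([q.get() for _ in range(3)])   # ['first', 'second', 'third']

# LIFO: the last item put in comes out first
lq = queue.LifoQueue()
for item in ('first', 'second', 'third'):
    lq.put(item)
print([lq.get() for _ in range(3)])  # ['third', 'second', 'first']

# priority queue: the smallest priority number comes out first
pq = queue.PriorityQueue()
pq.put((2, 'medium'))
pq.put((1, 'urgent'))
pq.put((3, 'low'))
print([pq.get() for _ in range(3)])  # [(1, 'urgent'), (2, 'medium'), (3, 'low')]

# deque: a double-ended queue from collections, usable from both ends
d = collections.deque(['a', 'b'])
d.appendleft('front')
d.append('back')
print(d)                             # deque(['front', 'a', 'b', 'back'])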

Creating the queue class (underneath, it is implemented with a pipe plus locks):

Queue([maxsize]): creates a shared process queue. Queue is a multiprocess-safe queue and can be used to pass data between processes.

Parameter description:

maxsize is the maximum number of items allowed in the queue; if omitted, there is no size limit.

Method introduction:

q.put() inserts data into the queue. put() takes two optional parameters, block and timeout. If block is True (the default) and timeout is a positive number, the call blocks for at most timeout seconds, waiting for the queue to have free space; if it times out, queue.Full is raised. If block is False and the queue is already full, queue.Full is raised immediately.
q.get() reads and removes one element from the queue. get() also takes the optional parameters block and timeout. If block is True (the default), timeout is a positive number, and no element is obtained within the waiting time, queue.Empty is raised. If block is False there are two cases: if a value is available it is returned immediately; otherwise, if the queue is empty, queue.Empty is raised immediately.

q.get_nowait(): the same as q.get(False)
q.put_nowait(obj): the same as q.put(obj, False)

q.empty(): returns True if q is empty at the moment of the call. The result is unreliable: an item may be added to the queue while True is being returned.
q.full(): returns True if q is full at the moment of the call. The result is unreliable: an item may be taken from the queue while True is being returned.
q.qsize(): returns the current number of items in the queue. The result is also unreliable, for the same reasons as q.empty() and q.full().
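A small sketch (not from the original post) showing these methods on a multiprocessing.Queue with maxsize 2; the queue.Full and queue.Empty exceptions come from the standard queue module:

import queue                             # the Full / Empty exceptions live here
from multiprocessing import Queue

q = Queue(2)
q.put('a')
q.put('b')

try:
    q.put('c', block=True, timeout=1)    # queue is full: blocks for 1 second, then raises
except queue.Full:
    print('queue is full')

print(q.get())                           # 'a'
print(q.get())                           # 'b'

try:
    q.get_nowait()                       # same as q.get(False): raises immediately when empty
except queue.Empty:
    print('queue is empty')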

Application:

# 1. Objects of any type can be put into the queue
# 2. FIFO: first in, first out
from multiprocessing import Process, Queue

q = Queue(3)
q.put('first')     # block=True by default
q.put('second')
q.put('third')

print(q.get())
print(q.get())
print(q.get())

The producer-consumer model

Using the producer-consumer pattern in concurrent programming can solve the vast majority of concurrency problems. The pattern improves a program's overall data-processing speed by balancing the working capacity of the producing threads against that of the consuming threads.

Why use the producer-consumer pattern

In the world of threads, the producer is the thread that produces data and the consumer is the thread that consumes data. In multithreaded development, if the producer processes quickly while the consumer processes slowly, the producer has to wait for the consumer to catch up before it can produce more data. By the same token, if the consumer's capacity exceeds the producer's, the consumer has to wait for the producer. The producer-consumer pattern was introduced to solve this problem.

What is the producer-consumer model

The producer-consumer model solves the strong coupling between producer and consumer by means of a container. Producer and consumer do not communicate with each other directly; they communicate through a blocking queue. After producing data, the producer does not wait for the consumer to handle it but throws it straight into the blocking queue, and the consumer does not ask the producer for data but takes it straight from the blocking queue. The blocking queue acts as a buffer, balancing the processing capacity of producers and consumers.

Queue implementation of the producer-consumer model

One producer and one consumer (two ways)

1. q.put(None): the producer puts a None into the queue

from multiprocessing import Process, Queue
import os
import time
import random

# First we need a producer and a consumer
# The producer makes steamed buns
'''Using q.put(None) solves the problem in this case,
but with multiple producers and multiple consumers the box may hold no buns
yet still contain other food, while the None already says "empty",
so this approach does not really scale; it is not perfect.
JoinableQueue can be used instead.'''
def producter(q):
    for i in range(10):
        time.sleep(2)       # producing a bun takes time, so sleep a little (a random sleep also works)
        res = 'bun %s' % i  # produce this bun
        q.put(res)          # put the finished bun into the box
        print('\033[44m%s produced %s\033[0m' % (os.getpid(), res))
    q.put(None)             # only the producer knows when production is finished
                            # (putting None says production is over)

# The consumer eats the buns
def consumer(q):
    while True:             # the consumer keeps eating
        res = q.get()
        if res is None: break   # the box is empty, stop eating
        time.sleep(random.randint(1, 3))
        print('\033[41m%s ate %s\033[0m' % (os.getpid(), res))

if __name__ == '__main__':
    q = Queue()
    p1 = Process(target=producter, args=(q,))
    p2 = Process(target=consumer, args=(q,))
    p1.start()
    p2.start()
    p1.join()
    p2.join()               # wait for the processes above to finish, then run the main line
    print('main')

2. Using JoinableQueue
from multiprocessing import Process, JoinableQueue
import os
import time
import random

# First we need a producer and a consumer
# The consumer eats the buns
def consumer(q):
    '''consumer'''
    while True:             # the consumer keeps eating
        res = q.get()
        time.sleep(random.randint(1, 3))
        print('\033[41m%s ate %s\033[0m' % (os.getpid(), res))
        q.task_done()       # this task is done (the consumer tells the producer "I really took it out")

# The producer makes steamed buns
def producter(q):
    '''producer'''
    for i in range(5):
        time.sleep(2)       # producing a bun takes time, so sleep a little
        res = 'bun %s' % i  # produce this many buns
        q.put(res)          # put the finished bun into the box
        print('\033[44m%s produced %s\033[0m' % (os.getpid(), res))
    q.join()

if __name__ == '__main__':
    q = JoinableQueue()
    p1 = Process(target=producter, args=(q,))
    p2 = Process(target=consumer, args=(q,))
    p2.daemon = True        # set the consumer as a daemon before starting it: when p1 ends, p2 ends too
    p1.start()
    p2.start()
    p1.join()               # wait for the producer to end (once the producer is done and the consumer has
                            # eaten everything, nothing more will be produced, so end the consumer as well)
    # wait for the processes above to finish, then run the main line
    print('main')

Multiple producers and multiple consumers (two ways)

 1. q.put(None): the producer puts a None into the queue (a minimal sketch follows)
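The original post gives no code for this first approach with multiple producers and consumers, so here is a minimal sketch under the same bun/bone/soy-milk setup; it assumes the main process joins all producers first and then puts one None sentinel per consumer (the helper names and food names are illustrative only):

from multiprocessing import Process, Queue
import os
import time
import random

def producer(q, food):
    # illustrative helper: each producer makes 5 items of one food
    for i in range(5):
        time.sleep(random.randint(1, 2))
        res = '%s %s' % (food, i)
        q.put(res)
        print('%s produced %s' % (os.getpid(), res))

def consumer(q):
    while True:
        res = q.get()
        if res is None: break   # one None per consumer ends that consumer
        time.sleep(random.randint(1, 3))
        print('%s ate %s' % (os.getpid(), res))

if __name__ == '__main__':
    q = Queue()
    producers = [Process(target=producer, args=(q, food))
                 for food in ('bun', 'bone', 'soy milk')]
    consumers = [Process(target=consumer, args=(q,)) for _ in range(2)]
    for p in producers + consumers:
        p.start()
    for p in producers:         # wait until every producer has finished...
        p.join()
    for _ in consumers:         # ...then put one None for each consumer
        q.put(None)
    for c in consumers:
        c.join()
    print('main')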

2. Using JoinableQueue

from multiprocessing import Process, JoinableQueue
import os
import time
import random

# First we need producers and consumers
# The consumer eats the buns
def consumer(q):
    while True:
        res = q.get()
        time.sleep(random.randint(1, 3))
        print('\033[41m%s ate %s\033[0m' % (os.getpid(), res))
        q.task_done()   # this task is done (the consumer tells the producer "I really took it out")

def product_baozi(q):
    for i in range(5):
        time.sleep(2)
        res = 'bun %s' % i
        q.put(res)
        print('\033[44m%s produced %s\033[0m' % (os.getpid(), res))
    q.join()            # no put(None); wait until everything in q has been taken out
                        # (if the data is not all taken, the producer does not end)

def product_gutou(q):
    for i in range(5):
        time.sleep(2)
        res = 'bone %s' % i
        q.put(res)
        print('\033[44m%s produced %s\033[0m' % (os.getpid(), res))
    q.join()

def product_doujiang(q):
    for i in range(5):
        time.sleep(2)
        res = 'soy milk %s' % i
        q.put(res)
        print('\033[44m%s produced %s\033[0m' % (os.getpid(), res))
    q.join()

if __name__ == '__main__':
    q = JoinableQueue()
    # the producers: the cooks
    p1 = Process(target=product_baozi, args=(q,))
    p2 = Process(target=product_doujiang, args=(q,))
    p3 = Process(target=product_gutou, args=(q,))

    # the consumers: the foodies
    p4 = Process(target=consumer, args=(q,))
    p5 = Process(target=consumer, args=(q,))
    p4.daemon = True
    p5.daemon = True
    # p1.start()
    # p2.start()
    # p3.start()
    # p4.start()
    # p5.start()
    li = [p1, p2, p3, p4, p5]
    for i in li:
        i.start()
    p1.join()
    p2.join()
    p3.join()
    print('main')

  Method two: the pipe (not recommended, just know about it)

A pipe is like a queue, except that a pipe does not lock automatically.
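A minimal sketch (not from the original post) of communicating through a Pipe; Pipe() returns two connected endpoints, and each end can both send and receive:

from multiprocessing import Process, Pipe

def worker(conn):
    msg = conn.recv()               # read one object from the pipe
    conn.send('got: %s' % msg)      # send a reply back
    conn.close()

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()    # two connected endpoints
    p = Process(target=worker, args=(child_conn,))
    p.start()
    parent_conn.send('hello')
    print(parent_conn.recv())           # got: hello
    p.join()

Unlike the queue, if several processes write to the same end at once there is no built-in lock to protect the data, which is exactly why the queue is preferred.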

  Method three: shared data (not recommended, just know about it)

Shared data has no automatic locking either, so the queue is still the recommended choice. Pipes and shared data are left for interested readers to explore.
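A minimal sketch (not from the original post) of shared data, assuming a Manager dict shared across 50 workers; an explicit Lock is needed precisely because shared data does not lock automatically:

from multiprocessing import Process, Manager, Lock

def work(d, mutex):
    with mutex:                     # without the lock, d['count'] -= 1 would race
        d['count'] -= 1

if __name__ == '__main__':
    mutex = Lock()
    with Manager() as m:
        d = m.dict({'count': 50})   # a dict shared between processes
        ps = [Process(target=work, args=(d, mutex)) for _ in range(50)]
        for p in ps:
            p.start()
        for p in ps:
            p.join()
        print(d)                    # {'count': 0}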


Origin www.cnblogs.com/intruder/p/10936207.html