Day 39: thread locks, deadlocks, recursive locks, semaphores, and the GIL (Global Interpreter Lock)

Thread lock

Threads within the same process share the process's resources, so when multiple threads operate on the same variable at the same time, a thread switch can leave the variable with a wrong value. A lock is needed so that one thread finishes its work before the next thread continues.

from threading import Thread, Lock

x = 0
mutex = Lock()

def task():
    global x
    mutex.acquire()  # comment out acquire()/release() to reproduce the race below
    for i in range(200000):
        x = x + 1
        # Without the lock: t1 reads x == 0 and is switched out before writing;
        # t2 reads x == 0 and writes 0 + 1 = 1;
        # t1 resumes and also writes 0 + 1 = 1.
        # Two increments should have added 2, but x only gained 1 --
        # that is the data-safety problem the lock solves.
    mutex.release()

if __name__ == '__main__':
    t1 = Thread(target=task)
    t2 = Thread(target=task)
    t3 = Thread(target=task)
    t1.start()
    t2.start()
    t3.start()

    t1.join()
    t2.join()
    t3.join()
    print(x)
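As a side note, `Lock` also supports the context-manager protocol, so the acquire/release pair can be written with a `with` statement, which guarantees the release even if the loop raises. A minimal sketch of the same example:

```python
from threading import Thread, Lock

x = 0
mutex = Lock()

def task():
    global x
    # "with mutex" calls acquire() on entry and release() on exit,
    # even if an exception is raised inside the block
    with mutex:
        for i in range(200000):
            x += 1

threads = [Thread(target=task) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(x)  # always 600000: the lock serializes the increments
```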


Deadlock
from threading import Thread, Lock
import time

mutex1 = Lock()
mutex2 = Lock()

class MyThreada(Thread):
    def run(self):
        self.task1()
        self.task2()

    def task1(self):
        mutex1.acquire()
        print(f'{self.name} grabbed lock 1')
        mutex2.acquire()
        print(f'{self.name} grabbed lock 2')
        mutex2.release()
        print(f'{self.name} released lock 2')
        mutex1.release()
        print(f'{self.name} released lock 1')

    def task2(self):
        mutex2.acquire()
        print(f'{self.name} grabbed lock 2')
        time.sleep(1)
        mutex1.acquire()
        print(f'{self.name} grabbed lock 1')
        mutex1.release()
        print(f'{self.name} released lock 1')
        mutex2.release()
        print(f'{self.name} released lock 2')


for i in range(3):
    t = MyThreada()
    t.start()

# With two threads:
# thread 1 holds lock 2 and needs lock 1 to continue;
# thread 2 holds lock 1 and needs lock 2 to continue.
# Each holds exactly the lock the other needs, neither lets go,
# and the program hangs.
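Besides the recursive lock shown next, a standard fix is to make every thread acquire the locks in the same global order; with a single order, the circular wait can never form. A sketch of the ordering rule (the `finished` list is an illustrative addition, used only to show that all threads complete):

```python
from threading import Thread, Lock
import time

mutex1 = Lock()
mutex2 = Lock()
finished = []  # illustrative: records which threads completed

def task(name):
    # Every code path takes mutex1 first, then mutex2.
    # With one global acquisition order no circular wait can form,
    # so this version cannot deadlock.
    with mutex1:
        with mutex2:
            time.sleep(0.1)
    finished.append(name)

threads = [Thread(target=task, args=(f't{i}',)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f'{len(finished)} threads finished')  # all 3 finish instead of hanging
```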


Recursive lock
from threading import Thread, Lock, RLock
import time

# A recursive lock can be acquired repeatedly by the same thread.
# Internally it maintains a counter: the thread that acquired it must
# release it as many times as it acquired it before the lock is free.
# mutex1 = Lock()
# mutex2 = Lock()
mutex1 = RLock()
mutex2 = mutex1

class MyThreada(Thread):
    def run(self):
        self.task1()
        self.task2()

    def task1(self):
        mutex1.acquire()
        print(f'{self.name} grabbed lock 1')
        mutex2.acquire()
        print(f'{self.name} grabbed lock 2')
        mutex2.release()
        print(f'{self.name} released lock 2')
        mutex1.release()
        print(f'{self.name} released lock 1')

    def task2(self):
        mutex2.acquire()
        print(f'{self.name} grabbed lock 2')
        time.sleep(1)
        mutex1.acquire()
        print(f'{self.name} grabbed lock 1')
        mutex1.release()
        print(f'{self.name} released lock 1')
        mutex2.release()
        print(f'{self.name} released lock 2')


for i in range(3):
    t = MyThreada()
    t.start()
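The counter behavior can also be seen in isolation: the same thread may call acquire() on an RLock repeatedly, and the lock only becomes free after an equal number of release() calls. A small sketch:

```python
from threading import RLock

rlock = RLock()

rlock.acquire()   # counter: 1
rlock.acquire()   # counter: 2 -- a plain Lock would block forever here
print('acquired twice in the same thread')

rlock.release()   # counter: 1 -- the lock is still held
rlock.release()   # counter: 0 -- now fully released

# acquire(blocking=False) returns True only if the lock can be taken now
print(rlock.acquire(blocking=False))  # True: the releases above freed it
rlock.release()
```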




Semaphore
# A semaphore is like a lock with five keys: five threads can hold a key
# at the same time, and any further thread must wait for an earlier
# thread to release its key.
from threading import Thread, currentThread, Semaphore
import time

sm = Semaphore(5)

def task():
    sm.acquire()
    print(f'{currentThread().name} is running')
    time.sleep(3)
    sm.release()

for i in range(15):
    t = Thread(target=task)
    t.start()
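The "five keys" claim can be checked by counting how many threads sit inside the guarded section at once; the count should never exceed the semaphore's initial value. A sketch (`active`, `peak`, and `counter_lock` are illustrative names added for the measurement, not part of the original example):

```python
from threading import Thread, Semaphore, Lock
import time

sm = Semaphore(5)
counter_lock = Lock()
active = 0    # threads currently inside the semaphore-guarded section
peak = 0      # highest value of `active` ever observed

def task():
    global active, peak
    with sm:                      # Semaphore also supports the with statement
        with counter_lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.2)
        with counter_lock:
            active -= 1

threads = [Thread(target=task) for _ in range(15)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # never exceeds 5: at most five "keys" are out at once
```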


GIL (Global Interpreter Lock)
The CPython interpreter contains a lock called the GIL (Global Interpreter Lock); the GIL is essentially a mutex.

As a result, within one process only one thread can run at any given moment, so the multi-core advantage cannot be exploited. Multiple threads in the same process can achieve concurrency, but not parallelism.

Why is there a GIL?
Because CPython's own garbage collection is not thread-safe, so the GIL lock is needed.
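The interval at which CPython asks the running thread to give up the GIL is visible and tunable from Python. A quick check:

```python
import sys

# the interval (in seconds) at which CPython asks the running
# thread to drop the GIL so another thread can be scheduled
print(sys.getswitchinterval())    # 0.005 by default

sys.setswitchinterval(0.001)      # switch more often: finer-grained, more overhead
print(sys.getswitchinterval())
sys.setswitchinterval(0.005)      # restore the default
```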


Analysis:
We have four tasks to process and want a concurrent effect. Two options:
Option 1: open four processes
Option 2: within one process, open four threads

Compute-intensive: multiple processes are recommended
    each task needs 10s of computation
    multithreading
        only one thread runs at a time, so nothing is saved; each 10s of computation runs in turn, about 40s total
    multiprocessing
        the four tasks compute in parallel: about 10s + the time to start the processes

IO-intensive: multiple threads are recommended
    most of each task's time (say 90%) is spent on IO
    each task: 10s of IO, 0.5s of computation
    multithreading
        concurrency is achieved; a thread waiting on IO does not occupy the CPU: about 10s + 4 x the computation time
    multiprocessing
        parallelism is achieved: about 10s + 1 task's computation time + the time to start the processes
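These two options map directly onto the standard library's two executors, so in practice choosing the pool type is the whole decision. A sketch using concurrent.futures (`io_task` is an illustrative stand-in for an IO-bound job):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def io_task(n):
    time.sleep(0.2)   # stands in for network / disk IO
    return n

start = time.time()
# IO-bound: a thread pool; threads waiting on IO release the GIL,
# so the four sleeps overlap instead of running one after another
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(io_task, range(4)))
elapsed = time.time() - start
print(results, round(elapsed, 1))  # about 0.2s total, not 0.8s
# Compute-bound work would use ProcessPoolExecutor instead, which needs
# the if __name__ == '__main__' guard on platforms that spawn processes.
```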


Performance comparison
from threading import Thread
from multiprocessing import Process
import time

Compute-intensive:
def work1():
    res = 0
    for i in range(100000000):  # 1 followed by 8 zeros
        res *= i

if __name__ == '__main__':
    t_list = []
    start = time.time()
    for i in range(4):
        # t = Thread(target=work1)
        t = Process(target=work1)
        t_list.append(t)
        t.start()
    for t in t_list:
        t.join()
    end = time.time()
    # print('multi-threaded', end - start)  # multi-threaded 15.413789510726929
    print('multi-process', end - start)  # multi-process 4.711405515670776



IO-intensive:
def work1():
    x = 1 + 1
    time.sleep(5)
    x = 3

if __name__ == '__main__':
    t_list = []
    start = time.time()
    for i in range(4):
        t = Thread(target=work1)
        # t = Process(target=work1)
        t_list.append(t)
        t.start()
    for t in t_list:
        t.join()
    end = time.time()
    print('multi-threaded', end - start)  # multi-threaded 5.002625942230225
    # print('multi-process', end - start)  # multi-process 5.660863399505615







































Origin www.cnblogs.com/wwei4332/p/11542376.html