thread lock, GIL lock, deadlock

thread lock

from threading import Thread,Lock

x = 0
mutex = Lock()
def task():
    global x
    # mutex.acquire()
    for i in range(200000):
        x = x + 1
        # t1 reads x as 0, saves that state, and is switched out
        # t2 reads x as 0 and adds 1 -> x is 1
        # t1 resumes: x = 0 + 1 -> x is 1 again
        # Question: how many additions happened? Two. The value should have
        # grown by 2, but it only grew by 1. This is a data-safety problem.
    # mutex.release()

if __name__ == '__main__':
    t1 = Thread(target=task)
    t2 = Thread(target=task)
    t3 = Thread(target=task)
    t1.start()
    t2.start()
    t3.start()
    t1.join()
    t2.join()
    t3.join()
    print(x)

531388

We saw earlier that processes have process locks; threads likewise have thread locks. Look at the code above. Threads share their process's memory, and all of them increment x, so logically the final result should be 600000, yet it comes out far lower. The reason is that the CPU can switch to another thread in the middle of the increment. For example, a thread may have computed 1000 + 1 but not yet stored the result when it is switched out; no matter how many times the other threads increment x in the meantime, when control returns to this thread it writes 1001 back and overwrites their work.

The fix is to add a thread lock so that the incrementing runs serially; that is what the commented-out mutex.acquire()/mutex.release() lines do. Even if the locked thread runs for a long time and the CPU switches to another thread, the other thread cannot enter because the lock is still held by the first thread, so this small piece of code effectively runs serially.
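As a concrete illustration (a minimal sketch, not part of the original post), here is the same counter with the lock actually taken around the loop; the with statement is equivalent to acquire()/release(), and the final value is reliably 600000:

from threading import Thread, Lock

x = 0
mutex = Lock()

def task():
    global x
    with mutex:              # same as mutex.acquire() ... mutex.release()
        for i in range(200000):
            x = x + 1        # only one thread can be inside this block at a time

if __name__ == '__main__':
    threads = [Thread(target=task) for _ in range(3)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(x)                 # always 600000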

deadlock

What is deadlock:

Two threads each hold a resource that the other needs in order to continue executing, and each insists on obtaining the other's resource before giving up its own. As a result, neither can proceed.

This is my own summary of the concept; the code makes it clearer:

from threading import Thread,Lock
import time
lock1=Lock()
lock2=Lock()
class Mythread1(Thread):
    def run(self):
        self.task1()

    def task1(self):
        lock1.acquire()
        print('a acquired lock 1')
        time.sleep(3)
        lock2.acquire()
        print('a acquired lock 2')
        lock1.release()
        print('a released lock 1')
        lock2.release()
        print('a released lock 2')

class Mythread2(Thread):
    def run(self):
        self.task2()

    def task2(self):
        lock2.acquire()
        print('b acquired lock 2')
        time.sleep(3)
        lock1.acquire()
        print('b acquired lock 1')
        lock2.release()
        print('b released lock 2')
        lock1.release()
        print('b released lock 1')

t1 = Mythread1()
t1.start()
t2 = Mythread2()
t2.start()

a acquired lock 1
b acquired lock 2

t1 has acquired lock1 and now needs lock2, but lock2 is held by t2, so t1 must wait for t2 to release it. Meanwhile t2 must acquire lock1 before it will release lock2, and lock1 is held by t1. This is a deadlock.
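One common way to avoid this particular deadlock (my own addition, not part of the original post) is to make every thread acquire the locks in the same order, so no thread can end up holding one lock while waiting for the other:

from threading import Thread, Lock
import time

lock1 = Lock()
lock2 = Lock()

def worker(name):
    # Both threads take lock1 first, then lock2, so neither can hold
    # one lock while waiting for the other.
    lock1.acquire()
    print(f'{name} acquired lock 1')
    time.sleep(1)
    lock2.acquire()
    print(f'{name} acquired lock 2')
    lock2.release()
    lock1.release()
    print(f'{name} released both locks')

a = Thread(target=worker, args=('a',))
b = Thread(target=worker, args=('b',))
a.start()
b.start()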

Next, let's look at what a deadlock looks like when several threads contend for the same pair of locks.

from threading import Thread,Lock
mutex1 = Lock()
mutex2 = Lock()
import time
class MyThreada(Thread):
    def run(self):
        self.task1()
        self.task2()

    def task1(self):
        mutex1.acquire()
        print(f'{self.name} grabbed lock 1')
        mutex2.acquire()
        print(f'{self.name} grabbed lock 2')
        mutex2.release()
        print(f'{self.name} released lock 2')
        mutex1.release()
        print(f'{self.name} released lock 1')

    def task2(self):
        mutex2.acquire()
        print(f'{self.name} grabbed lock 2')
        time.sleep(1)
        mutex1.acquire()
        print(f'{self.name} grabbed lock 1')
        mutex1.release()
        print(f'{self.name} released lock 1')
        mutex2.release()
        print(f'{self.name} released lock 2')

for i in range(3):
    t = MyThreada()
    t.start()

Thread-1 grabbed lock 1
Thread-1 grabbed lock 2
Thread-1 released lock 2
Thread-1 released lock 1
Thread-1 grabbed lock 2
Thread-2 grabbed lock 1

So how do we solve the deadlock problem? Use a recursive lock.

recursive lock

In Python, to support acquiring the same resource multiple times within the same thread, the threading module provides a reentrant lock, RLock.

Internally, RLock maintains a Lock and a counter; the counter records the number of acquire() calls, so the owning thread can acquire the lock repeatedly. Until that thread has released every acquire, no other thread can take the lock. In the example above, using RLock instead of Lock removes the deadlock.
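A minimal sketch of that counting behaviour (my own addition, not from the original post): the same thread can acquire an RLock twice without blocking, and the lock only becomes free for other threads once it has been released the same number of times.

from threading import RLock

rl = RLock()

rl.acquire()        # counter: 1
rl.acquire()        # counter: 2 -- a plain Lock would block (deadlock) here
print('acquired twice in the same thread')
rl.release()        # counter: 1, still owned by this thread
rl.release()        # counter: 0, other threads may now acquire it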

from threading import Thread, RLock
import time

mutex1 = RLock()
mutex2 = mutex1

class MyThreada(Thread):
    def run(self):
        self.task1()
        self.task2()

    def task1(self):
        mutex1.acquire()
        print(f'{self.name} grabbed lock 1')
        mutex2.acquire()
        print(f'{self.name} grabbed lock 2')
        mutex2.release()
        print(f'{self.name} released lock 2')
        mutex1.release()
        print(f'{self.name} released lock 1')

    def task2(self):
        mutex2.acquire()
        print(f'{self.name} grabbed lock 2')
        time.sleep(1)
        mutex1.acquire()
        print(f'{self.name} grabbed lock 1')
        mutex1.release()
        print(f'{self.name} released lock 1')
        mutex2.release()
        print(f'{self.name} released lock 2')

for i in range(3):
    t = MyThreada()
    t.start()

Thread-1 grabbed lock 1
Thread-1 grabbed lock 2
Thread-1 released lock 2
Thread-1 released lock 1
Thread-1 grabbed lock 2
Thread-1 grabbed lock 1
Thread-1 released lock 1
Thread-1 released lock 2
Thread-2 grabbed lock 1
Thread-2 grabbed lock 2
Thread-2 released lock 2
Thread-2 released lock 1
Thread-2 grabbed lock 2
Thread-2 grabbed lock 1
Thread-2 released lock 1
Thread-2 released lock 2
Thread-3 grabbed lock 1
Thread-3 grabbed lock 2
Thread-3 released lock 2
Thread-3 released lock 1
Thread-3 grabbed lock 2
Thread-3 grabbed lock 1
Thread-3 released lock 1
Thread-3 released lock 2

semaphore

from threading import Thread,currentThread,Semaphore
import time

def task():
    sm.acquire()
    print(f'{currentThread().name} is running')
    time.sleep(3)
    sm.release()

sm = Semaphore(5)
for i in range(15):
    t = Thread(target=task)
    t.start()

A semaphore specifies how many threads may hold the lock at the same time; here, at most 5 threads run the task concurrently.
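Semaphore also works as a context manager; the following is a small sketch of the same idea (my own addition, not from the original post):

from threading import Thread, current_thread, Semaphore
import time

sm = Semaphore(5)          # at most 5 threads inside the block at once

def task():
    with sm:               # equivalent to sm.acquire() ... sm.release()
        print(f'{current_thread().name} is running')
        time.sleep(3)

for i in range(15):
    Thread(target=task).start()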

GIL lock

The CPython interpreter has a GIL (global interpreter lock), which is essentially a mutex.
As a result, within a single process only one thread can execute at a time, so the advantage of multiple cores cannot be used.
Multiple threads under the same process can therefore achieve concurrency but not parallelism.

Why is there a GIL?
Because CPython's built-in garbage collection mechanism is not thread-safe, the GIL is needed to protect it.

Analysis:
Suppose we have four tasks to process and we want a concurrent effect. The options are:
Option 1: open four processes.
Option 2: open four threads under one process.

Computation-intensive work: multiple processes are recommended. Suppose each task needs 10s of pure computation.
Multithreading: only one thread executes at a time, so none of the 10s can be saved; the four tasks compute one after another, roughly 40s in total.
Multiprocessing: the tasks can run in parallel, roughly 10s plus the time it takes to start the processes.

IO-intensive work: multithreading is recommended. Suppose there are 4 tasks, each spending 90% of its time on IO: 10s of IO and 0.5s of computation per task.
Multithreading: achieves concurrency; time spent waiting on IO does not occupy the CPU, so the total is roughly 10s plus the computation time of the 4 tasks.
Multiprocessing: achieves parallelism; roughly 10s plus one task's computation time plus the time to start the processes.
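As a rough illustration of the computation-intensive case (a sketch of my own, not from the original post; the absolute times depend on the machine), the following compares threads and processes on a small CPU-bound task. Because of the GIL, the threaded version takes about as long as running the tasks one after another, while the process version can use multiple cores:

from threading import Thread
from multiprocessing import Process
import time

def count(n=10_000_000):
    # pure-Python CPU-bound work: held back by the GIL when run in threads
    while n > 0:
        n -= 1

def timed(worker_cls):
    start = time.time()
    workers = [worker_cls(target=count) for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return time.time() - start

if __name__ == '__main__':
    print('threads:  ', timed(Thread))    # roughly the serial time (GIL)
    print('processes:', timed(Process))   # can run on multiple cores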

Reprinted from: https://www.cnblogs.com/whnbky/p/11545995.html
