Using locks in thread operations

First, the thread lock

1. Multiple threads competing for the same resource:

Locks are typically used to synchronize access to shared resources. Create a Lock object for each shared resource; when a thread needs to access the resource, it calls the acquire method to take the lock (if another thread has already acquired it, the current thread must wait for it to be released), and after it has finished accessing the resource it calls the release method to let the lock go:

Example 1:

from threading import Thread, Lock
import time
K = Lock()
def func():
    global n
    K.acquire()       # acquire the lock
    team = n
    time.sleep(0.1)   # Analysis: without the lock, the first thread reads team = n = 100,
                      # hits the blocking sleep, and control switches immediately to the next
                      # thread, which is in exactly the same situation. Because thread switching
                      # is fast, every thread computes n = 99, so the final result stays at 99
                      # instead of 0 (unless the thread count is large enough).
                      # Solution: lock the section that processes the data.
    n = team - 1
    K.release()       # release the lock
if __name__ == '__main__':
    n = 100
    l = []
    for i in range(100):
        t = Thread(target=func)
        l.append(t)
        t.start()
    for t in l:
        t.join()
    print(n)
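As a side note not shown in the original post, Lock also supports the context-manager protocol, so the acquire/release pair can be written as a with block, which releases the lock even if the body raises an exception. A minimal sketch of the same countdown:

```python
from threading import Thread, Lock

lock = Lock()
n = 100

def func():
    global n
    with lock:        # acquire on entry, release on exit, even on exceptions
        team = n
        n = team - 1

threads = [Thread(target=func) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(n)   # 0, same result as the explicit acquire()/release() version
```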

Example 2:

from threading import Thread, Lock
x = 0
K = Lock()
def func():
    global x
    K.acquire()
    for i in range(60000):
        x = x + 1
        # Without the lock: t1 reads x = 0, its state is saved, and it is switched out.
        # t2 reads x = 0 and does +1, so x = 1.
        # t1 resumes from its saved state: x = 0 + 1, so x = 1 again.
        # The two increments should have produced +2, but x only grew by 1.
        # This is the data-safety problem (a race condition).
    K.release()
if __name__ == '__main__':
    t1 = Thread(target=func)
    t2 = Thread(target=func)
    t3 = Thread(target=func)

    t1.start()
    t2.start()
    t3.start()

    t1.join()
    t2.join()
    t3.join()
    print(x)

Second, deadlock

Deadlock refers to the phenomenon in which two or more processes or threads, in the course of execution, end up waiting on each other as a result of competing for resources; without outside intervention, none of them can make progress. The system is then said to be in a deadlock state, or to have produced a deadlock, and the processes forever waiting on each other are called deadlocked processes. A deadlock looks like this:

from threading import Lock

mutexA = Lock()
mutexA.acquire()
mutexA.acquire()   # the second acquire by the same thread blocks forever: deadlock
print(123)         # never reached
mutexA.release()
mutexA.release()

The solution is a recursive lock. To let the same thread request the same resource multiple times, Python provides the reentrant lock RLock.

Internally, RLock maintains a Lock and a counter variable; the counter records how many times acquire() has been called, so the owning thread can require the resource many times. Only once that thread has released every acquire (the counter is back to 0) can other threads obtain the lock. If the example above used RLock instead of Lock, no deadlock would occur:
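The point can be seen with a minimal sketch: the same two nested acquire() calls that deadlocked with Lock succeed with RLock, because the internal counter simply goes 1, 2, 1, 0:

```python
from threading import RLock

mutexA = RLock()
mutexA.acquire()   # counter: 1
mutexA.acquire()   # same thread re-enters; counter: 2 (a plain Lock would hang here)
print(123)
mutexA.release()   # counter: 1
mutexA.release()   # counter: 0, other threads may now acquire
```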

from threading import Thread, RLock
import time
k1 = k2 = RLock()   # once a thread takes the lock, counter becomes 1; each further acquire
                    # inside the same thread increments counter again. All other threads must
                    # wait until that thread releases every acquire, i.e. counter drops to 0.
class Myt(Thread):
    def run(self):
        self.task1()
        self.task2()

    def task1(self):
        k1.acquire()
        print(f'{self.name} acquired lock 1')
        k2.acquire()
        print(f'{self.name} acquired lock 2')
        k2.release()
        print(f'{self.name} released lock 2')
        k1.release()
        print(f'{self.name} released lock 1')


    def task2(self):
        k2.acquire()
        print(f'{self.name} acquired lock 2')
        time.sleep(1)     # with two plain Locks, thread 1 would block here holding lock 2
                          # while another thread holds lock 1 and wants lock 2: a deadlock.
        k1.acquire()      # Solution: use the recursive RLock (k1 and k2 are the same RLock).
        print(f'{self.name} acquired lock 1')
        k1.release()
        print(f'{self.name} released lock 1')
        k2.release()
        print(f'{self.name} released lock 2')

for i in range(3):
    t = Myt()
    t.start()

Third, semaphores (Semaphore)

Same as in multiprocessing.

Semaphore manages a built-in counter:
whenever acquire() is called, the built-in counter is decremented by 1;
whenever release() is called, the built-in counter is incremented by 1;
the counter can never drop below 0; when the counter is 0, acquire() blocks until some other thread calls release().

Example (only five threads can hold the semaphore at any one time, which caps the maximum concurrency at 5):

from threading import Thread, currentThread, Semaphore
import time

sm = Semaphore(5)
def func():
    sm.acquire()
    print(f'{currentThread().name} is running')
    time.sleep(3)
    sm.release()

for i in range(15):
    t = Thread(target=func)
    t.start()
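A related standard-library class not used in the example above is BoundedSemaphore: it behaves like Semaphore, but raises ValueError if release() is called more times than acquire(), which catches counter-bookkeeping bugs. A small sketch:

```python
from threading import BoundedSemaphore

sem = BoundedSemaphore(2)
sem.acquire()
sem.release()          # fine: counter is back at its initial value of 2
try:
    sem.release()      # one release too many
except ValueError:
    print('over-release detected')
```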

Fourth, the GIL

GIL: the global interpreter lock. In CPython, every thread must acquire the GIL before it executes, which guarantees that only one thread runs Python code at any moment.

The GIL and Lock are two different locks. Both are, in essence, mutexes, but what they protect is not the same: the GIL works at the interpreter level and protects interpreter-level data (for example, garbage-collection state), while Lock protects the data of the user's own application. The GIL is clearly not responsible for that; only a user-defined lock, i.e. Lock, can handle it.

Process analysis: all threads first compete for the GIL, that is, for the right to execute:

  Thread 1 grabs the GIL, gains the right to execute, runs, and then acquires Lock. Before it finishes, that is, before thread 1 has released Lock, thread 2 may grab the GIL and begin executing; it finds that thread 1 has not released Lock, so thread 2 blocks and its right to execute is taken away. Thread 1 may then get the GIL back, continue running normally, and release Lock... This produces the effect of serial execution.

  Since execution is serial anyway, why not just write:

  t1.start()

  t1.join()

  t2.start()

  t2.join()

  This is also serial execution, so why add Lock at all? Because t1.join() waits for all of t1's code to finish, effectively locking t1's entire body, whereas Lock serializes only the part of the code that operates on shared data.
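The difference can be sketched with a small timing experiment (the sleep stands in for work that does not touch shared data): with Lock, only the shared-data update is serialized, so the unshared parts of the threads still overlap:

```python
from threading import Thread, Lock
import time

lock = Lock()
count = 0

def worker():
    global count
    time.sleep(0.1)        # unshared work: the GIL is released during sleep,
                           # so these waits overlap across threads
    with lock:             # only the shared-data update is serialized
        count += 1

start = time.time()
threads = [Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start
print(count)               # 4
```

If the loop instead did t.start() followed immediately by t.join(), the four 0.1 s sleeps could not overlap and the run would take about 0.4 s instead of about 0.1 s.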

Fifth, compute-intensive tasks

Tasks, such as scientific computing, that need to use the CPU continuously.

from multiprocessing import Process
from threading import Thread
import os,time

res = 0
def func():
    global res
    for i in range(10000000):
        res += i


if __name__ == '__main__':

    print(os.cpu_count())   # number of CPU cores
    start = time.time()
    l = []
    for i in range(4):
        # t = Process(target=func)
        t =Thread(target=func)
        l.append(t)
        t.start()
    for t in l :
        t.join()
    end = time.time()
    print(end-start)   # multiprocessing: 3.4384214878082275   multithreading: 4.417709827423096

Seen from the above: for code that uses the CPU continuously for a long time, multiprocessing is faster than multithreading. Multiprocessing is recommended for this kind of task.

Sixth, IO-intensive tasks

Tasks dominated by IO operations that may block.

from multiprocessing import Process
from threading import Thread
import time

def func():
    time.sleep(5)

if __name__ == '__main__':
    l = []
    start = time.time()
    for i in range(4):
        # t = Process(target=func)  # multiprocessing
        t = Thread(target=func)  # multithreading
        l.append(t)
        t.start()
    for t in l:
        t.join()
    end = time.time()
    print(end-start)   # multiprocessing: 5.8953258991241455  multithreading: 5.002920150756836

As can be seen above: when a task blocks on IO operations, multithreading is noticeably faster than multiprocessing.

From the above two cases, the conclusion:

In Python, opening multiple threads for compute-intensive tasks brings no performance improvement; it can even be worse than serial execution (which has no switching overhead). For IO-intensive tasks, however, the efficiency gain is significant. Use multithreading for IO-intensive work, such as sockets, crawlers, and web services; use multiprocessing for compute-intensive work, such as financial analysis.
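As a present-day footnote, not from the original post: IO-bound thread fan-out like the examples above is often written with concurrent.futures.ThreadPoolExecutor from the standard library, which handles the start/join bookkeeping. A minimal sketch:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def fetch(i):
    time.sleep(0.2)        # stand-in for a blocking IO call (socket, HTTP, disk)
    return i * 2

start = time.time()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(fetch, range(4)))
elapsed = time.time() - start
print(results)             # [0, 2, 4, 6]
```

With four workers the four 0.2 s waits overlap, so the whole run takes roughly 0.2 s rather than 0.8 s.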



Origin www.cnblogs.com/guapitomjoy/p/11546305.html