Chapter XV: Python multi-thread synchronization locks, deadlocks and recursive locks

1. Introduction:

1. Create a thread object:
   t1 = threading.Thread(target=say, args=('tony',))
2. Start the thread:
   t1.start()
After that, two more points were covered: join() and the concept of daemon threads.

These are the basics of using Python multithreading.
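To make these basics concrete, here is a minimal runnable sketch (the say function and its argument are illustrative, not from the original post):

import threading
import time

def say(name):
    time.sleep(0.1)
    print('hello,', name)

t1 = threading.Thread(target=say, args=('tony',))  # 1. create the thread object
t1.daemon = False   # a daemon thread would be killed as soon as the main thread exits
t1.start()          # 2. start the thread
t1.join()           # wait here until the thread has finished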

Description: in the examples so far, the functions run by the threads were independent, did not interfere with each other, and did not share any data or resources. If multiple threads operate on the same data, however, resource contention and locking problems arise; this is unavoidable in any language. So next we talk about synchronization locks, recursive locks and deadlocks.

2. Synchronization lock (Lock)

Locks are typically used to synchronize access to shared resources. Create one Lock object for each shared resource; when you need to access the resource, call the acquire method to obtain the lock (if another thread has already acquired it, the current thread has to wait for it to be released), and once you are done with the resource, call the release method to release the lock.
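The general pattern looks like this (a minimal sketch; the function name and body are placeholders):

import threading

lock = threading.Lock()      # one lock guarding the shared resource

def use_shared_resource():
    lock.acquire()           # block until the lock is available
    try:
        pass                 # ... operate on the shared resource here ...
    finally:
        lock.release()       # always release, even if an exception occurs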

Here is an example that shows why such a lock is needed:

import threading
import time

num = 100

def fun_sub():
    global num
    # num -= 1
    num2 = num          # read the shared variable
    time.sleep(0.001)   # give other threads a chance to run in between
    num = num2-1        # write back the decremented value

if __name__ == '__main__':
    print('start testing the synchronization lock at %s' % time.ctime())

    thread_list = []
    for thread in range(100):
        t = threading.Thread(target=fun_sub)
        t.start()
        thread_list.append(t)

    for t in thread_list:
        t.join()
    print('num is %d' % num)
    print('finish testing the synchronization lock at %s' % time.ctime())
-----------------------------------------------------
start testing the synchronization lock at Sun Apr 28 09:56:45 2019
num is 91
finish testing the synchronization lock at Sun Apr 28 09:56:45 2019

What this example does: we create 100 threads, and each thread performs one subtraction on the shared variable num. Normally, by the time the code finishes executing, printing num should give 0, because the 100 threads each decrement it once.

The problem: the result is not 0, but 91.

Let's walk through what the code actually does:

1. Because of the GIL, only one thread (say thread 1) gets hold of num; it assigns num to num2 and then sleeps for 0.001 seconds. At this point num is still 100.
2. While thread 1 sleeps, it yields, i.e. the CPU switches to another thread (say thread 2 gets the GIL and the right to run). Thread 2 reads the very same num as thread 1, assigns it to num2 and then sleeps; at this point num is, in fact, still 100.
3. While thread 2 sleeps, it also yields; suppose thread 3 gets num and performs the same operations. num may still be 100.
4. When the CPU later switches back to threads 1, 2 and 3 and each of them resumes, they all execute their decrement and write back 99, so num ends up at 99 instead of being decremented three separate times.
The remaining threads follow the same pattern, which is why the final result is larger than 0 (see the bytecode sketch below for why such a read-modify-write is not atomic).
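Even without the artificial sleep, the read-modify-write num -= 1 is not one atomic step: it compiles to several bytecode instructions, and a thread switch can happen between the read and the write. A quick way to see this (a small sketch; the exact bytecode varies by Python version):

import dis

# the load of num, the in-place subtraction and the store back to num
# are separate bytecode instructions, so a thread switch can occur in between
dis.dis(compile('num -= 1', '<example>', 'exec'))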

Solution: here we have to use Python's synchronization lock, so that only one thread at a time can operate on the num variable; only after it has subtracted 1 may the next thread operate on num. Let's look at how to implement this.

import threading
import time

num = 100

def fun_sub():
    global num
    lock.acquire()
    print('---- lock acquired ----')
    # report the thread that actually holds the lock
    print('the thread operating on the shared resource now is:',
          threading.current_thread().name)
    num2 = num
    time.sleep(0.001)
    num = num2-1
    lock.release()
    print('---- lock released ----')

if __name__ == '__main__':
    print('start testing the synchronization lock at %s' % time.ctime())

    lock = threading.Lock()  # create one synchronization lock

    thread_list = []
    for thread in range(100):
        t = threading.Thread(target=fun_sub)
        t.start()
        thread_list.append(t)

    for t in thread_list:
        t.join()
    print('num is %d' % num)
    print('finish testing the synchronization lock at %s' % time.ctime())
------------------------------------------------
.......
---- lock acquired ----
the thread operating on the shared resource now is: Thread-98
---- lock released ----
---- lock acquired ----
the thread operating on the shared resource now is: Thread-100
---- lock released ----
num is 0
finish testing the synchronization lock at Sun Apr 28 12:08:27 2019

The idea: we wrap the read-sleep-write block around the decrement in a synchronization lock, and we get the result we want. That is exactly the role of a synchronization lock: only one thread at a time operates on the shared resource.
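As a side note (not part of the original post), Lock also supports the context manager protocol, so the acquire/release pair can be written with a with statement, which releases the lock even if the block raises an exception. A drop-in replacement for fun_sub in the script above (assuming the same global num, lock and imports):

def fun_sub():
    global num
    with lock:              # acquire() on entry, release() on exit
        num2 = num
        time.sleep(0.001)
        num = num2-1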

3. Deadlock

Introduction:

The concept of a deadlock shows up in many places; here is a brief overview of how a deadlock is produced:

# Thread 1 holds lock 2 and needs lock 1 to continue;
# thread 2 holds lock 1 and needs lock 2 to continue.
# Each holds exactly what the other needs to proceed, and neither lets go of its own lock,
# so a deadlock results.

Cause: when Python threads share multiple resources, if two threads each occupy part of the resources and wait for each other's resources at the same time, a deadlock results. The system considers those resources to be in use, so without outside intervention these threads will wait forever.

from threading import Thread,Lock
import time

mutex1 = Lock()
mutex2 = Lock()

class MyThreada(Thread):
    def run(self):
        self.task1()
        self.task2()

    def task1(self):
        mutex1.acquire()
        print(f'{self.name} grabbed lock 1')
        mutex2.acquire()
        print(f'{self.name} grabbed lock 2')
        mutex2.release()
        print(f'{self.name} released lock 2')
        mutex1.release()
        print(f'{self.name} released lock 1')

    def task2(self):
        mutex2.acquire()
        print(f'{self.name} grabbed lock 2')
        time.sleep(1)
        mutex1.acquire()
        print(f'{self.name} grabbed lock 1')
        mutex1.release()
        print(f'{self.name} released lock 1')
        mutex2.release()
        print(f'{self.name} released lock 2')

for i in range(3):
    t = MyThreada()
    t.start()

When this runs, the first thread finishes task1, enters task2, grabs lock 2 and goes to sleep; meanwhile another thread starts task1 and grabs lock 1. Each now waits forever for the lock the other holds, and the program hangs. To resolve this kind of deadlock, Python introduces the recursive lock.

4. Recursive lock (RLock)

Principle:

To support multiple requests for the same resource from within the same thread, Python provides the "recursive lock": threading.RLock. Internally, an RLock maintains a Lock and a counter variable; the counter records the number of acquire calls, so the same thread can acquire the resource repeatedly. Only when all of a thread's acquire calls have been matched by release calls can other threads obtain the resource.
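A minimal sketch of that counter behaviour (illustrative, not from the original post):

import threading

rlock = threading.RLock()

rlock.acquire()   # counter = 1: this thread now owns the lock
rlock.acquire()   # the same thread may acquire again, counter = 2 (a plain Lock would block here)
rlock.release()   # counter = 1, still owned by this thread
rlock.release()   # counter = 0, the lock is now free for other threads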

Not much more to say; here is the code:

from threading import Thread,Lock,RLock
import time

# mutex1 = Lock()
# mutex2 = Lock()
mutex1 = RLock()   # one recursive lock ...
mutex2 = mutex1    # ... shared under both names

class MyThreada(Thread):
    def run(self):
        self.task1()
        self.task2()

    def task1(self):
        mutex1.acquire()
        print(f'{self.name} grabbed lock 1')
        mutex2.acquire()
        print(f'{self.name} grabbed lock 2')
        mutex2.release()
        print(f'{self.name} released lock 2')
        mutex1.release()
        print(f'{self.name} released lock 1')

    def task2(self):
        mutex2.acquire()
        print(f'{self.name} grabbed lock 2')
        time.sleep(1)
        mutex1.acquire()
        print(f'{self.name} grabbed lock 1')
        mutex1.release()
        print(f'{self.name} released lock 1')
        mutex2.release()
        print(f'{self.name} released lock 2')


for i in range(3):
    t = MyThreada()
    t.start()

To sum up:

Above we used a recursive lock to solve the deadlock caused by using several synchronization locks. You can think of an RLock as one big lock containing small locks: only when all the small locks inside have been released can other threads enter the shared resource.

5. Summary

One more point: not all multi-threaded code suffers from data races or deadlocks, but whenever a shared resource is accessed, a lock is certainly needed. So when we add locks to our code, we should pay attention to where they go so that the impact on performance stays minimal; that depends on understanding the logic of the program.

Origin: www.cnblogs.com/demiao/p/11543888.html