GIL (Global Interpreter Lock) / Event / Semaphore / Deadlock / Recursive Lock

What is the GIL?

First, some background: there are many Python interpreters, and the one we use is CPython, which is written in C. By nature, the GIL is a mutex.

A mutex turns concurrent execution into serial execution, trading efficiency for safety. The GIL is such a lock at the interpreter level: it prevents multiple threads of the same process from executing bytecode at the same time, so threads within one process can run concurrently but never in parallel.

The GIL exists because CPython's memory management is not thread-safe. Because of the garbage collection mechanism, without the GIL a value could be reclaimed by the collector the moment it is generated, before a variable is ever bound to it.
 
So what is the garbage collection mechanism?

1. Reference counting: if a value in memory no longer has any variable name bound to it, it is reclaimed automatically.
2. Mark and sweep: triggered automatically when the application's memory is about to fill up.
3. Generational collection: values are sorted into generations according to how long they have survived; the higher the generation, the less frequently the garbage collector scans it.
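Reference counting and the generational collector can be observed directly from the interpreter. A minimal sketch using the standard sys and gc modules (the exact numbers are CPython implementation details):

import sys
import gc

x = []
y = x                      # a second name bound to the same list
print(sys.getrefcount(x))  # 3: x, y, plus the temporary argument reference

del y                      # unbind one name; the count drops
print(sys.getrefcount(x))  # 2

# Generational collection: CPython keeps three generations, each with its
# own collection threshold; higher generations are scanned less often.
print(gc.get_threshold())  # typically (700, 10, 10)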
Features of the GIL:

The GIL is a feature of the CPython interpreter, not of the Python language itself. Under it, a single process cannot take advantage of multiple cores; this is a problem common to interpreted languages.

An interpreted language reads and executes one line at a time, unlike a compiled one, so this is not a shortcoming of Python the language but of CPython.

Different data should be protected by different locks; one lock cannot be reused for unrelated data. The GIL only guarantees the thread safety of the interpreter's own internal data, not of ours, so even with the GIL we still need to add our own mutex inside the process. The verification example below shows why.


# Verification example
from threading import Thread
import time

n = 100


def task():
    global n
    tmp = n
    time.sleep(0.1)
    # The first thread reads n, then the IO operation makes it release the
    # GIL so the others can run; by the time anyone writes, every thread has
    # already read the original 100, so each writes back 100 - 1 = 99.
    # Without the IO operation, each thread would instead see the value left
    # by the previous one, and n would count down to 0.
    n = tmp - 1


t_list = []
for i in range(100):
    t = Thread(target=task)
    t_list.append(t)
    t.start()

for t in t_list:
    t.join()

print(n)  # 99, not 0
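The fix is exactly the per-data mutex described above: serialize the read-modify-write on n ourselves. A minimal sketch (the same program with an added Lock; the sleep is shortened so the now-serial loop finishes quickly):

from threading import Thread, Lock
import time

n = 100
mutex = Lock()


def task():
    global n
    with mutex:  # the whole read-modify-write runs under the lock
        tmp = n
        time.sleep(0.01)
        n = tmp - 1


t_list = [Thread(target=task) for _ in range(100)]
for t in t_list:
    t.start()
for t in t_list:
    t.join()

print(n)  # 0: each thread sees the value left by the previous one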
Does this mean that multithreading in Python can never take advantage of multiple cores?

Let's look at that for a moment, from two angles.

1. Four compute-intensive tasks
Single core: opening threads saves resources, because opening a process means allocating an address space, which a thread does not need.
Multiple cores: opening processes is roughly four times faster than opening threads; the processes compute simultaneously, while the threads can only compute one at a time and must queue up.

2. Four IO-intensive tasks
Single core: opening threads saves resources.
Multiple cores: opening threads still has the smaller footprint, because the tasks are IO operations and the number of CPU cores has nothing to do with waiting; it is all wait-and-switch.

The conclusion: multithreading is certainly still useful, but it depends on the situation. Under normal circumstances, multiprocessing and multithreading are used in combination.
 
# IO-intensive
from multiprocessing import Process
from threading import Thread
import os, time


def work():
    time.sleep(2)


if __name__ == '__main__':
    l = []
    print(os.cpu_count())
    start = time.time()
    for i in range(300):
        # p = Process(target=work)  # run time: 16.447757959365845
        p = Thread(target=work)  # run time: 2.0386486053466797
        l.append(p)
        p.start()
    for p in l:
        p.join()
    stop = time.time()
    print("run time: %s" % (stop - start))


# Compute-intensive
def work1():
    res = 0
    for i in range(100000000):
        res *= i


if __name__ == '__main__':
    l = []
    print(os.cpu_count())  # this machine has 8 cores
    start = time.time()
    for i in range(6):
        p = Process(target=work1)  # run time: 7.944072008132935
        # p = Thread(target=work1)  # run time: 29.27473521232605
        l.append(p)
        p.start()
    for p in l:
        p.join()
    stop = time.time()
    print("run time: %s" % (stop - start))
 
Deadlock and recursive locks

A deadlock is a situation in which two or more processes or threads, during execution, end up waiting on one another because they are competing for resources. Without outside intervention, none of them can make any progress; the system is then said to be in a deadlocked state, and the processes that wait on each other forever are called deadlocked processes.

Recursive locks

Having analysed deadlock, how does Python deal with it? To support requesting the same resource multiple times from the same thread, Python provides the reentrant lock, RLock. Internally, an RLock maintains a Lock and a counter; the counter records the number of acquire calls, which lets the resource be acquired repeatedly. Only when all of one thread's acquires have been released can other threads obtain the resource. If the deadlock example below used RLock instead of Lock, no deadlock would occur.
 
Deadlock example:
from threading import Thread, Lock
import time

mutexA = Lock()
mutexB = Lock()


class MyThread(Thread):
    def run(self):
        self.func1()
        self.func2()

    def func1(self):
        mutexA.acquire()
        print('\033[41m%s got lock A\033[0m' % self.name)

        mutexB.acquire()
        print('\033[42m%s got lock B\033[0m' % self.name)
        mutexB.release()

        mutexA.release()

    def func2(self):
        mutexB.acquire()
        print('\033[43m%s got lock B\033[0m' % self.name)
        time.sleep(2)

        mutexA.acquire()
        print('\033[44m%s got lock A\033[0m' % self.name)
        mutexA.release()

        mutexB.release()


if __name__ == '__main__':
    for i in range(5):
        t = MyThread()
        t.start()
Result: >>>
Thread-1 got lock A
Thread-1 got lock B
Thread-1 got lock B
Thread-2 got lock A
How the code above produces a deadlock:

Five threads start and execute the run method. Suppose thread1 grabs lock A first. Without releasing A, it immediately executes mutexB.acquire() and grabs lock B; nobody competes with it for B, because A is still held and the other threads can only wait. thread1 then finishes the func1 code, releasing both locks, and moves on to the func2 code, where it executes mutexB.acquire(), grabs lock B, and goes to sleep. Meanwhile, once thread1 released locks A and B at the end of func1, the remaining threads began racing for lock A to run func1. Say thread2 wins lock A; next it must grab lock B. But in this window, thread1, inside func2, has already grabbed lock B and is sitting in sleep(2), holding B without releasing it, because nothing forces it to. So thread1 holds B while thread2 wants B, and thread2 holds A while thread1 wants A: a deadlock.
<<<
# Recursive lock example
from threading import Thread, Lock, RLock
import time

mutexA = mutexB = RLock()


class MyThread(Thread):
    def run(self):
        self.f1()
        self.f2()

    def f1(self):
        mutexA.acquire()
        print('%s got lock A' % self.name)

        mutexB.acquire()
        print('%s got lock B' % self.name)
        mutexB.release()

        mutexA.release()

    def f2(self):
        mutexB.acquire()
        print('%s got lock B' % self.name)
        time.sleep(0.1)
        mutexA.acquire()
        print('%s got lock A' % self.name)
        mutexA.release()

        mutexB.release()


if __name__ == '__main__':
    for i in range(5):
        t = MyThread()
        t.start()

<<<Thread-1 got lock A
Thread-1 got lock B
Thread-1 got lock B
Thread-1 got lock A
Thread-2 got lock A
Thread-2 got lock B
Thread-2 got lock B
Thread-2 got lock A
Thread-4 got lock A
Thread-4 got lock B
Thread-4 got lock B
Thread-4 got lock A
Thread-3 got lock A
Thread-3 got lock B
Thread-3 got lock B
Thread-3 got lock A
Thread-5 got lock A
Thread-5 got lock B
Thread-5 got lock B
Thread-5 got lock A>>>
Since locks A and B are the same recursive lock, thread1 acquires both A and B, the counter records the 2 acquires, and when f1 finishes the recursive lock is fully released. After thread1 has released the recursive lock and finished the f1 code, two things can happen next:
    1. thread1 grabs the recursive lock again and runs the f2 code.
    2. Another thread grabs the recursive lock and runs the f1 task code.
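The counter behaviour is easy to see in isolation. A minimal sketch in which one thread acquires the same RLock twice (with a plain Lock, the second acquire below would block forever):

from threading import RLock

lock = RLock()
lock.acquire()  # counter: 1
lock.acquire()  # counter: 2 -- the owning thread may re-acquire freely
print("re-acquired without blocking")
lock.release()  # counter: 1
lock.release()  # counter: 0 -- only now can another thread acquire it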
Semaphores correspond to different concepts in different fields.

An analogy:
A mutex is like fighting over a toilet with a single stall.
A semaphore is like a public toilet with multiple stalls:
several people can hold the "lock" at the same time, and release it at the same time.
# Code demonstration
from threading import Semaphore, Thread
import time
import random

SM = Semaphore(6)  # a toilet with six stalls


def task(name):
    SM.acquire()  # grab a slot
    print("%s grabbed a stall" % name)
    time.sleep(random.randint(1, 5))  # each visit takes 1 to 5 seconds
    SM.release()  # free the slot
    print()


for i in range(50):  # 50 people queueing
    t = Thread(target=task, args=(i,))
    t.start()

<<<
0 grabbed a stall
1 grabbed a stall
2 grabbed a stall
3 grabbed a stall
4 grabbed a stall
5 grabbed a stall
(six stalls, so six people went straight in)

6 grabbed a stall
(one person came out, so the seventh went in)
7 grabbed a stall
(another came out, so the eighth went in)
8 grabbed a stall
... (and so on, in whatever order people finish, until) ...
49 grabbed a stall
>>>

Event:

As with processes, a key feature of threads is that each one runs independently and its state is unpredictable. If other threads in a program need to determine their next step by judging the state of some thread, thread synchronization becomes very tricky. To solve such problems, we use the Event object from the threading library. An Event object contains a signal flag that threads can set, allowing threads to wait for an event to occur. Initially the flag is set to false. If a thread waits on an Event whose flag is false, it blocks until the flag becomes true. When a thread sets the flag to true, it wakes up all threads waiting on that Event. A thread that waits on an Event whose flag is already true simply ignores the event and keeps running.
from threading import Thread, Event
import time

event = Event()


def light():
    print('the red light is on')
    time.sleep(3)
    event.set()  # turn the light green


def car(name):
    print('car %s is waiting for the green light' % name)
    event.wait()  # the flag is False here, so the thread blocks until
                  # event.set() flips it to True and execution continues
    print('car %s passes' % name)


if __name__ == '__main__':
    # the traffic light
    t1 = Thread(target=light)
    t1.start()
    # the cars
    for i in range(10):
        t = Thread(target=car, args=(i,))
        t.start()
Result:
the red light is on
car 0 is waiting for the green light
car 1 is waiting for the green light
car 2 is waiting for the green light
car 3 is waiting for the green light
car 4 is waiting for the green light
car 5 is waiting for the green light
car 6 is waiting for the green light
car 7 is waiting for the green light
car 8 is waiting for the green light
car 9 is waiting for the green light
car 3 passes
car 5 passes
car 4 passes
car 6 passes
car 9 passes
car 2 passes
car 7 passes
car 1 passes
car 8 passes
car 0 passes
        
Queues (the queue module)

Threads in the same process already share data, so why use a queue at all?

Because a queue is a pipe plus locks: with a queue you never have to manipulate locks by hand yourself.

That matters because mishandled locks very easily produce deadlocks.

PriorityQueue(): the smaller the number, the higher the priority.
import queue

q = queue.Queue()  # FIFO: first in, first out
q.put("come chase me!")
print(q.get())

q = queue.LifoQueue()  # LIFO: last in, first out, like a stack
q.put("come chase me!")
q.put("catch me and I'm yours")
q.put("hehehe!")
print(q.get())  # gets the most recently put item: "hehehe!"

q = queue.PriorityQueue()  # lower number = higher priority
q.put((10, "catch me and I'm yours"))
q.put((-100, "come chase me"))
q.put((90, "papapapa!!!"))
print(q.get())  # (-100, "come chase me") comes out first
print(q.get())
print(q.get())
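Because the queue does its own locking, a producer and a consumer can safely share data with no explicit Lock at all. A minimal sketch (the None sentinel used to stop the consumer is my own convention, not from the original post):

import queue
from threading import Thread

q = queue.Queue()


def producer():
    for i in range(5):
        q.put(i)  # thread-safe: the queue locks internally
    q.put(None)   # sentinel telling the consumer to stop


def consumer():
    while True:
        item = q.get()
        if item is None:
            break
        print("consumed", item)


t1, t2 = Thread(target=producer), Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()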
