GIL deadlock event recursive lock semaphore thread Queue

I. Introduction

''' 
Definition: 
In CPython, the global interpreter lock, or GIL, is a mutex that prevents multiple 
native threads from executing Python bytecodes at once. This lock is necessary mainly 
because CPython's memory management is not thread-safe. (However, since the GIL 
exists, other features have grown to depend on the guarantees that it enforces.) 
''' 
Conclusion: under the CPython interpreter, even if multiple threads are opened within the same process, only one thread can execute at any given moment, so the advantage of multiple cores cannot be exploited.

PS: there are many Python interpreters; the most common one is CPython.

The GIL is essentially also a mutex: it turns concurrent execution into serial execution, sacrificing efficiency to guarantee data safety, by preventing multiple threads in the same process from executing at the same time (so multiple threads within one process can achieve concurrency, but not parallelism).

The GIL exists because the memory management of the CPython interpreter is not thread-safe.

II. GIL Introduction

The GIL is essentially a mutex, and since it is a mutex, its essence is the same as any other mutex: it turns concurrent execution into serial execution, so that shared data can only be modified by one task at a time, thereby guaranteeing data safety.

One thing is certain: to protect different pieces of data safely, you should add different locks.

To understand the GIL, first establish one thing: every time a python program runs, it gets its own separate process. For example, python test.py, python aaa.py and python bbb.py produce three different python processes.

''' 
# Verify that python test.py produces only one process 
# test.py content: 
import os, time 
print(os.getpid()) 
time.sleep(1000) 
''' 
python3 test.py 
# under Windows: 
tasklist | findstr python
# under Linux: 
ps aux | grep python 


Inside one python process, there is not only the main thread of test.py (and any other threads started from the main thread), but also interpreter-level threads such as the garbage-collection thread that the interpreter itself starts. In short, all of these threads run inside this one process; there is no doubt about that.

# 1 All data is shared, and code, as one kind of data, is shared by all threads (all the code of test.py and all the code of the CPython interpreter). 
For example: test.py defines a function work (code as shown below); all threads within the process can access the code of work, so we can start three threads whose target all points to that code, which means all of them can execute it. 

# 2 For every thread to run its task, it must pass the task's code as an argument to the interpreter's code for execution. In other words, before any thread can run its own task, it first needs to obtain access to the interpreter's code.

In summary:

If multiple threads all have target = work, the execution flow is:

the multiple threads first access the interpreter code, i.e. obtain execution permission, and then hand their target code to the interpreter code to execute.

The interpreter's code is shared by all threads, so the garbage-collection thread may also end up executing interpreter code. This leads to a problem: for the same data 100, thread 1 might execute x = 100 at the very moment the garbage collector performs its reclaiming of 100. There is no way around this other than locking, namely the GIL, which guarantees that the python interpreter executes only one task's code at any given time.
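Note that the GIL only protects the interpreter's own internal data (such as the garbage collector's bookkeeping); it does not make your application data safe across a multi-step operation, so you still add your own lock for your own data. A minimal sketch (the counter and names here are illustrative, not from the original text):

```python
from threading import Thread, Lock

counter = 0
counter_lock = Lock()  # our own lock for our own shared data

def increment():
    global counter
    for _ in range(50000):
        with counter_lock:  # protects the read-modify-write as one unit
            counter += 1

threads = [Thread(target=increment) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000 -- without the lock, some updates could be lost
```

The GIL guarantees only that one thread runs bytecode at a time; `counter += 1` is several bytecodes, so a thread switch can still land in the middle of it.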

III. The usefulness of python multithreading

Four compute-intensive tasks of 10s each:
    single-core case:
        open threads (saves more resources)
    multi-core case:
        open processes: 10s
        open threads: 40s

Four IO-intensive tasks:
    single-core case:
        open threads (saves more resources)
    multi-core case:
        open threads (saves more resources)
# compute-intensive
from multiprocessing import Process
from threading import Thread
import os,time
def work():
    res=0
    for i in range(100000000):
        res*=i


if __name__ == '__main__':
    l=[]
    print(os.cpu_count())  # this machine has 8 cores
    start=time.time()
    for i in range(6):
        p=Process(target=work) # takes ~7.551327705383301s
        # p=Thread(target=work) # takes ~27.445183515548706s
        l.append(p)
        p.start()
    for p in l:
        p.join()
    stop=time.time()
    print('run time is %s' %(stop-start))

# IO-intensive
from multiprocessing import Process
from threading import Thread
import threading
import os,time
def work():
    time.sleep(2)


if __name__ == '__main__':
    l=[]
    print(os.cpu_count()) # this machine has 8 cores
    start=time.time()
    for i in range(4000):
        p=Process(target=work) # takes ~9.001083612442017s, most of it spent creating processes
        # p=Thread(target=work) # takes ~2.051966667175293s
        l.append(p)
        p.start()
    for p in l:
        p.join()
    stop=time.time()
    print('run time is %s' %(stop-start))

IV. The GIL and ordinary mutexes

from threading import Thread
import time
n = 100
def task():
    global n
    tmp = n
    # time.sleep(1)  
    n = tmp -1
t_list = []
for i in range(100):
    t = Thread(target=task)
    t.start()
    t_list.append(t)
for t in t_list:
    t.join()
print(n)
1. When the time.sleep(1) in the program is enabled, a thread that reaches it enters the blocked state and the GIL is released, so other threads can grab the lock. During that 1 second of sleep, all 100 started threads get a turn and grab the lock, so every thread reads tmp as 100; after the sleep finishes, each thread executes the remaining code, so the result is 99.
2. When time.sleep(1) is not enabled, of the 100 started threads only one at a time can hold the GIL; the thread holding it operates on the data, while the others must wait for it to release the lock before grabbing it and executing the code, so the result is 0.
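Following the analysis above, adding our own mutex around the whole read-modify-write fixes the race even when the thread sleeps in the middle. A sketch with a shorter sleep and fewer threads so it finishes quickly (the timings are illustrative):

```python
from threading import Thread, Lock
import time

n = 100
mutex = Lock()

def task():
    global n
    with mutex:          # hold our own lock across the whole read-modify-write
        tmp = n
        time.sleep(0.01) # the GIL is released here, but mutex still blocks others
        n = tmp - 1

t_list = [Thread(target=task) for _ in range(10)]
for t in t_list:
    t.start()
for t in t_list:
    t.join()
print(n)  # 90: each thread sees the previous thread's result
```

The price is that the threads now run strictly one after another through the locked section, which is exactly the serialization trade-off described in section II.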

V. Deadlock

A so-called deadlock is a situation where two or more processes or threads wait on each other while competing for resources during execution; without outside intervention, none of them can make progress. The system is then said to be in a deadlocked state, and the processes that wait on each other forever are called deadlocked processes. The following is a deadlock:

from threading import Thread,Lock
import time
mutexA=Lock()
mutexB=Lock()

class MyThread(Thread):
    def run(self):
        self.func1()
        self.func2()
    def func1(self):
        mutexA.acquire()
        print('\033[41m%s grabbed lock A\033[0m' %self.name)

        mutexB.acquire()
        print('\033[42m%s grabbed lock B\033[0m' %self.name)
        mutexB.release()

        mutexA.release()

    def func2(self):
        mutexB.acquire()
        print('\033[43m%s grabbed lock B\033[0m' %self.name)
        time.sleep(2)

        mutexA.acquire()
        print('\033[44m%s grabbed lock A\033[0m' %self.name)
        mutexA.release()

        mutexB.release()

if __name__ == '__main__':
    for i in range(10):
        t=MyThread()
        t.start()

VI. Recursive lock (RLock)

One way out of the deadlock above is a recursive lock: to support acquiring the same resource multiple times within one thread, python provides the reentrant lock RLock.

Internally, an RLock maintains a Lock and a counter variable; the counter records the number of acquire calls, which allows the resource to be acquired multiple times. Only when all of a thread's acquires have been released can other threads obtain the resource. If the example above uses RLock instead of Lock, no deadlock occurs:

mutexA=mutexB=threading.RLock() # when a thread takes the lock, counter becomes 1; each further acquire inside the same thread increments counter; all other threads can only wait until this thread releases every acquire, i.e. until counter drops back to 0
from threading import Thread,Lock,RLock
import time
mutexA=mutexB=RLock()


class MyThread(Thread):
    def run(self):
        self.func1()
        self.func2()
    def func1(self):
        mutexA.acquire()
        print('\033[41m%s grabbed lock A\033[0m' %self.name)

        mutexB.acquire()
        print('\033[42m%s grabbed lock B\033[0m' %self.name)
        mutexB.release()

        mutexA.release()

    def func2(self):
        mutexB.acquire()
        print('\033[43m%s grabbed lock B\033[0m' %self.name)
        time.sleep(2)

        mutexA.acquire()
        print('\033[44m%s grabbed lock A\033[0m' %self.name)
        mutexA.release()

        mutexB.release()

if __name__ == '__main__':
    for i in range(10):
        t=MyThread()
        t.start()
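The counter behaviour described above can be seen directly in a tiny standalone sketch (the function name is illustrative):

```python
from threading import RLock

lock = RLock()

def nested():
    # the same thread may acquire an RLock repeatedly;
    # an ordinary Lock would deadlock on the second acquire()
    lock.acquire()   # counter: 1
    lock.acquire()   # counter: 2
    result = "inside both acquires"
    lock.release()   # counter: 1
    lock.release()   # counter: 0 -- only now is the lock free for other threads
    return result

print(nested())
```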

VII. Semaphore

A mutex is like a toilet with only one stall;
a semaphore is like a public toilet with several stalls.
from threading import Semaphore,Thread
import time
import random


sm = Semaphore(5)  # build a public toilet with five stalls

def task(name):
    sm.acquire()
    print('%s took a stall'%name)
    time.sleep(random.randint(1,3))
    sm.release()

for i in range(10):
    t = Thread(target=task,args=(i,))
    t.start()

VIII. Event

Same as the one for processes.

A key characteristic of threads is that each thread runs independently and its state is unpredictable. If other threads in a program need to determine their next step by judging the state of some thread, thread synchronization becomes very tricky. To solve such problems we use the Event object from the threading library. An Event object contains a signal flag that can be set by threads, allowing threads to wait for certain events to happen. Initially, the flag in an Event object is set to False. If a thread waits on an Event object whose flag is False, the thread blocks until the flag becomes True. When a thread sets an Event object's flag to True, it wakes up all threads waiting on that Event object. A thread that waits on an Event object whose flag is already True simply ignores the event and continues to execute.

event.is_set(): returns the event's flag value;

event.wait(): blocks the thread if event.is_set() == False;

event.set(): sets the event's flag to True; all threads in the blocked pool are woken into the ready state and wait for the operating system to schedule them;

event.clear(): resets the event's flag to False.

from threading import Event,Thread
import time

# first create an event object
e = Event()


def light():
    print('the red light is on')
    time.sleep(3)
    e.set()  # send the signal
    print('the green light is on')

def car(name):
    print('%s is waiting at the red light'%name)
    e.wait()  # wait for the signal
    print('%s hits the gas and speeds off'%name)

t = Thread(target=light)
t.start()

for i in range(10):
    t = Thread(target=car,args=('driver %s'%i,))
    t.start()
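The light/car example above only ever calls set(); for completeness, clear() resets the flag, and wait() also accepts a timeout. A small standalone sketch (not part of the original example):

```python
from threading import Event

e = Event()
print(e.is_set())           # False: a fresh Event starts with its flag unset
print(e.wait(timeout=0.1))  # returns False when the timeout expires first
e.set()
print(e.is_set())           # True: waiters would now pass straight through
e.clear()                   # reset the flag to False so the event can be reused
print(e.is_set())           # False again
```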

IX. Thread Queue

The queue module: use import queue; its usage is the same as the process Queue.

queue is especially useful in threaded programming when information must be exchanged safely between multiple threads.

1. Queue (first in, first out)

import queue

q=queue.Queue()
q.put('first')
q.put('second')
q.put('third')

print(q.get())
print(q.get())
print(q.get())
'''
Result (first in, first out):
first
second
third
'''

2. LifoQueue (last in, first out, like a stack)

import queue

q=queue.LifoQueue()
q.put('first')
q.put('second')
q.put('third')

print(q.get())
print(q.get())
print(q.get())
'''
Result (last in, first out):
third
second
first
'''

3. PriorityQueue (set a priority; the smaller the number, the higher the priority; priorities may be negative)

import queue

q=queue.PriorityQueue()
# put takes a tuple whose first element is the priority (usually a number, though comparable non-numeric values also work); the smaller the number, the higher the priority
q.put((20,'a'))
q.put((-10,'b'))
q.put((30,'c'))

print(q.get())
print(q.get())
print(q.get())
'''
Result (the smaller the number, the higher the priority; higher priority dequeues first):
(-10, 'b')
(20, 'a')
(30, 'c')
'''
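The "exchanged safely between multiple threads" point in the quoted docstring is exactly the producer-consumer pattern. A minimal sketch, using an illustrative None sentinel to tell the consumer to stop:

```python
import queue
from threading import Thread

q = queue.Queue()
results = []

def producer():
    for i in range(5):
        q.put(i)
    q.put(None)  # sentinel: tells the consumer there is nothing more to come

def consumer():
    while True:
        item = q.get()  # blocks until an item is available
        if item is None:
            break
        results.append(item)

t1 = Thread(target=producer)
t2 = Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # [0, 1, 2, 3, 4]
```

Because Queue does its own locking internally, no extra mutex is needed around put() and get().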

 


Origin www.cnblogs.com/z929chongzi/p/11352401.html