GIL, semaphore, Event, thread queue

GIL Global Interpreter Lock

Official explanation:
In CPython, the global interpreter lock, or GIL, is a mutex that prevents multiple
native threads from executing Python bytecodes at once. This lock is necessary mainly
because CPython’s memory management is not thread-safe.

 


In plain terms:
There are many Python interpreters; the most common is the CPython interpreter.
The GIL is essentially a mutex: it turns concurrent execution into serial execution, sacrificing efficiency to guarantee data safety.
It prevents multiple threads within the same process from executing Python bytecode at the same time (so multiple threads in one process can run concurrently, but never in true parallel).
The GIL exists because the CPython interpreter's memory management is not thread-safe; the unsafety comes from the garbage-collection mechanism, which runs as a memory-management thread inside every process.

A Python process therefore contains not only the main thread of test.py (and any threads the main thread opens), but also interpreter-level threads such as the garbage collector; in short, all of these threads run inside the same process.

Every thread's task must be executed by passing its code to the interpreter, so the first thing each thread must obtain is access to the interpreter's code.

 

If multiple threads are started (e.g. target=work), the execution flow is:

The threads first compete for permission to execute the interpreter code, then hand their target code to the interpreter to run. The interpreter code is shared by all threads, and the garbage-collection thread also needs access to the interpreter code in order to run.
This leads to a problem: suppose a piece of data equals
100; while thread 1 executes x = 100, the garbage collector might simultaneously reclaim that 100. The GIL is therefore needed to ensure that the Python interpreter executes only one task's code at any given moment.
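The GIL serializes bytecode execution, but a statement like n += 1 compiles to several bytecodes, so the interpreter can still switch threads in the middle of it; application data needs its own lock. A minimal sketch (the names are illustrative, not from the original post):

```python
from threading import Thread, Lock

counter = 0
lock = Lock()

def add(n):
    global counter
    for _ in range(n):
        with lock:          # protects the read-modify-write of counter
            counter += 1

threads = [Thread(target=add, args=(100000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000 -- correct because the lock, not the GIL, guards the update
```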

 

Question: since Python multithreading cannot take advantage of multiple cores, is it useless?

Answer: whether Python multithreading is useful must be discussed case by case.

Example:

1. Four compute-intensive tasks, 10s each

single-core case:
opening threads saves more resources

multi-core case:
open processes: 10s
open threads: 40s

2. Four IO-intensive tasks, 10s each

single-core case:
opening threads saves more resources

multi-core case:
opening threads still saves more resources

 

from multiprocessing import Process
from threading import Thread
import time, os

# compute-intensive
def func():
    res = 0
    for i in range(10000):
        res *= i

# IO-intensive
# def func():
#     time.sleep(3)

if __name__ == '__main__':
    print(os.cpu_count())
    task_list = []
    start = time.time()
    for i in range(4):
        p = Process(target=func)  # multi-process
        # compute-intensive running time: 0.21994638442993164
        # IO-intensive running time: 3.2253575325012207
        # p = Thread(target=func)  # multi-threading (swap in to compare)
        # compute-intensive running time: 0.003988504409790039
        # IO-intensive running time: 3.0033791065216064
        task_list.append(p)
        p.start()
    for p in task_list:
        p.join()  # wait for all child processes/threads to finish before the main process/thread continues
    end = time.time()
    print('Run time: %s' % (end - start))

 

 

Deadlock / recursive lock

A deadlock is the phenomenon in which two (or more) processes or threads, while executing, end up waiting for each other forever because of competition over resources.

Note: do not handle locks casually.

 

RLock (recursive lock)

An RLock can be acquired and released repeatedly by the first thread that grabs it:
each acquire increments the lock's internal counter by one
each release decrements the counter by one
as long as the counter is not zero, no other process/thread can grab the lock
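The counting behavior described above can be sketched directly (a minimal illustration, not from the original post):

```python
from threading import RLock

rlock = RLock()

# The same thread may acquire an RLock repeatedly; an internal
# counter rises on each acquire and falls on each release.
rlock.acquire()      # counter = 1
rlock.acquire()      # counter = 2 (a plain Lock would block forever here)
rlock.release()      # counter = 1
rlock.release()      # counter = 0; other threads can now grab the lock
reacquired = rlock.acquire(blocking=False)
print(reacquired)    # True, because the lock was fully released
rlock.release()
```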
from threading import Thread, Lock
import time

# create two locks
mutexA = Lock()
mutexB = Lock()

class MyThread(Thread):
    def run(self):
        self.func1()
        self.func2()

    def func1(self):
        mutexA.acquire()
        print('%s grabbed lock A' % self.name)  # self.name is equivalent to current_thread().name
        mutexB.acquire()
        print('%s grabbed lock B' % self.name)
        mutexB.release()
        print('%s released lock B' % self.name)
        mutexA.release()
        print('%s released lock A' % self.name)

    def func2(self):
        mutexB.acquire()
        print('%s grabbed lock B' % self.name)
        time.sleep(1)
        mutexA.acquire()
        print('%s grabbed lock A' % self.name)
        mutexA.release()
        print('%s released lock A' % self.name)
        mutexB.release()
        print('%s released lock B' % self.name)

for i in range(10):
    t = MyThread()
    t.start()
# recursive lock
from threading import Thread, RLock
import time

mutexA = mutexB = RLock()

class MyThread(Thread):
    def run(self):
        self.func1()
        self.func2()

    def func1(self):
        mutexA.acquire()
        print('%s grabbed lock A' % self.name)  # self.name is equivalent to current_thread().name
        mutexB.acquire()
        print('%s grabbed lock B' % self.name)
        mutexB.release()
        print('%s released lock B' % self.name)
        mutexA.release()
        print('%s released lock A' % self.name)

    def func2(self):
        mutexB.acquire()
        print('%s grabbed lock B' % self.name)
        time.sleep(1)
        mutexA.acquire()
        print('%s grabbed lock A' % self.name)
        mutexA.release()
        print('%s released lock A' % self.name)
        mutexB.release()
        print('%s released lock B' % self.name)

for i in range(10):
    t = MyThread()
    t.start()

 

Semaphore

mutex: one lock, one key
semaphore: one lock, multiple keys
from threading import Thread, Semaphore
import time
import random

sem = Semaphore(5)  # a lock with five keys

def func(name):
    sem.acquire()
    print('%s entered' % name)
    time.sleep(random.randint(1, 3))
    sem.release()
    print('%s left' % name)

for i in range(10):
    t = Thread(target=func, args=(i,))
    t.start()

Event

e.set()  send the signal
e.wait() wait for the signal

 

from threading import Thread, Event
import time

e = Event()

def light():
    print('red light')
    time.sleep(3)
    e.set()
    print('green light')

def car(name):
    print('%s waiting at the red light' % name)
    e.wait()
    print('%s started driving' % name)

t = Thread(target=light)
t.start()

for i in range(10):
    t = Thread(target=car, args=('driver %i' % i, ))
    t.start()

 

Thread queue

Multiple threads in the same process already share data, so why use a queue?
Because a queue is a pipe plus a lock: using one means you never have to manage the locking yourself.
import queue

q = queue.Queue()
q.put('one')
q.put('two')
q.put('three')
print(q.get())  # >>> one  FIFO: first in, first out

q = queue.LifoQueue()
q.put('one')
q.put('two')
q.put('three')
print(q.get())  # >>> three  stack: last in, first out

q = queue.PriorityQueue()  # the smaller the number, the higher the priority
q.put((10, 'one'))
q.put((1, 'two'))
q.put((5, 'three'))
print(q.get())  # >>> (1, 'two')
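Because the queue already handles its own locking, a producer-consumer pattern between threads needs no explicit Lock. A minimal sketch (the worker function and the None sentinel are illustrative, not from the original post):

```python
import queue
from threading import Thread

q = queue.Queue()
results = []

def worker():
    while True:
        item = q.get()
        if item is None:          # sentinel: tells the worker to stop
            q.task_done()
            break
        results.append(item * 2)  # "process" the item
        q.task_done()

t = Thread(target=worker)
t.start()
for i in range(5):
    q.put(i)
q.put(None)   # send the stop sentinel
q.join()      # block until every queued item is marked done
t.join()
print(results)  # [0, 2, 4, 6, 8]
```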

 

 


Origin www.cnblogs.com/waller/p/11354046.html