GIL (Global Interpreter Lock), mutexes, and recursive locks

1. The Global Interpreter Lock (GIL)

  The official explanation (from the CPython documentation):

"""
In CPython, the global interpreter lock, or GIL, is a mutex that prevents multiple
native threads from executing Python bytecodes at once. This lock is necessary mainly
because CPython’s memory management is not thread-safe.
"""

 

(1) Python code is executed by a virtual machine (the interpreter). The GIL ensures that only one thread at a time runs bytecode in the interpreter.

 

It therefore prevents threads within the same process from executing in parallel: within a single process, multithreading cannot achieve parallelism, only concurrency.

 

(2) There are many Python interpreters; the most common is CPython, which is implemented in C.

In essence, the GIL is a mutex: it turns concurrent execution into serial execution, sacrificing efficiency to guarantee data safety.

 

The GIL exists because the CPython interpreter's memory management (garbage collection) is not thread-safe.

Garbage collection in CPython:

  1. Reference counting (an object is freed once nothing references it)

  2. Mark and sweep (handles reference cycles)

  3. Generational collection (objects are grouped into young and old generations)
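Reference counting can be observed directly with `sys.getrefcount` (a minimal sketch; note that passing the object to `getrefcount` itself adds one temporary reference):

```python
import sys

x = []                       # one reference, held by the name x
before = sys.getrefcount(x)  # includes the temporary reference made by the call
y = x                        # bind a second name to the same list
after = sys.getrefcount(x)
print(after - before)        # 1: exactly one extra reference was counted
del y                        # dropping the name decrements the count again
```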

 

Does Python multithreading's inability to exploit multiple cores make it useless?
  Look at the discussion below: it is certainly still useful.

(3) Whether Python multithreading helps depends on the workload, which splits into compute-intensive and IO-intensive cases.
Four compute-intensive tasks of 10s each:
    single core: using threads saves more resources
    multiple cores: processes finish in about 10s; threads take about 40s

Four IO-intensive tasks:
    single core: using threads saves more resources
    multiple cores: using threads still saves more resources

Compute-intensive:
1. Features: lots of computation that consumes CPU resources, e.g. calculating pi or decoding high-definition video; performance depends on the CPU's computing power.
# Compute-intensive
from multiprocessing import Process
from threading import Thread
import os, time

def work():
    res = 0
    for i in range(10000000):
        res *= 1

if __name__ == '__main__':
    l = []
    print(os.cpu_count())  # number of CPU cores on this machine
    start = time.time()
    for i in range(6):
        # p = Thread(target=work)   # threads: run time is 0.10372185707092285
        p = Process(target=work)    # processes: run time is 0.8038477897644043
        l.append(p)
        p.start()
    for p in l:
        p.join()
    stop = time.time()
    print('run time is %s' % (stop - start))

IO-intensive:

  Features: little CPU consumption; the task spends most of its time waiting for IO operations to complete (because IO is far slower than the CPU and memory).

# IO-intensive

from multiprocessing import Process
from threading import Thread
import os
import time

def work():
    time.sleep(2)


if __name__ == '__main__':
    l = []
    print(os.cpu_count())
    start = time.time()
    for i in range(40):
        # p = Process(target=work)  # processes: run time is 2.7755727767944336
        p = Thread(target=work)     # threads: run time is 2.005638360977173
        l.append(p)
        p.start()
    for p in l:
        p.join()
    stop = time.time()

    print('run time is %s' % (stop - start))

P.S. There are also data-intensive (Data-Intensive) workloads.

 

2. The GIL vs. an ordinary mutex

 

from threading import Thread
import time

n=100
def task():
    global n
    tmp = n
    time.sleep(2) # simulated blocking IO wait; the GIL is released during the sleep
    n= tmp -1
t_list = []
for i in range(100):
    t = Thread(target=task)
    t.start()
    t_list.append(t)
for t in t_list:
    t.join()

print(n)


>>> 99   (with the sleep; remove it and the result is 0)

"""
time.sleep(2) simulates a blocking IO wait. All 100 threads grab the GIL in turn,
read n == 100, and go to sleep before any of them writes back, so every thread sets
n to 99. Remove that condition and the threads effectively run one after another,
decrementing n all the way to 0.
"""
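For contrast, wrapping the read-modify-write in an ordinary mutex serializes the critical section, so no decrement is lost and the result is 0. A minimal sketch (the sleep is shortened so the demo finishes quickly):

```python
from threading import Thread, Lock
import time

n = 100
mutex = Lock()

def task():
    global n
    with mutex:           # only one thread at a time may read-modify-write n
        tmp = n
        time.sleep(0.01)  # the simulated IO no longer causes a lost update
        n = tmp - 1

t_list = []
for i in range(100):
    t = Thread(target=task)
    t.start()
    t_list.append(t)
for t in t_list:
    t.join()

print(n)  # 0
```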

3. Deadlock and recursive locks

  Deadlock: two or more processes or threads, during execution, each end up waiting for a resource held by the other because of competition for resources, so none of them can proceed.

  One solution to this kind of deadlock is the recursive lock (RLock): change Lock to RLock.

A recursive lock supports multiple acquisitions of the same lock by the same thread. Internally it keeps a counter: each acquire() increments it and each release() decrements it, and only when the counter drops back to zero can another thread acquire the lock.

from threading import Thread, Lock, RLock, current_thread
import time

"""
An RLock can be acquired and released repeatedly by the thread that first grabbed it.
Each acquire increments the lock's internal counter by 1;
each release decrements it by 1.
Only when the counter is zero can other threads grab the lock.
"""
# mutexA = Lock()
# mutexB = Lock()
mutexA = mutexB = RLock()  # A and B are now the same lock

class MyThread(Thread):
    def run(self):  # starting the thread automatically calls run, which here calls func1 and func2
        self.func1()
        self.func2()

    def func1(self):
        mutexA.acquire()
        print('%s grabbed lock A' % self.name)  # self.name is equivalent to current_thread().name
        mutexB.acquire()
        print('%s grabbed lock B' % self.name)
        mutexB.release()
        print('%s released lock B' % self.name)
        mutexA.release()
        print('%s released lock A' % self.name)

    def func2(self):
        mutexB.acquire()
        print('%s grabbed lock B' % self.name)
        time.sleep(1)
        mutexA.acquire()
        print('%s grabbed lock A' % self.name)
        mutexA.release()
        print('%s released lock A' % self.name)
        mutexB.release()
        print('%s released lock B' % self.name)

for i in range(10):
    t = MyThread()
    t.start()

A note on locking: never handle locks casually; careless lock handling easily causes deadlock.
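The counter behaviour described above is easy to observe directly in a single thread:

```python
from threading import RLock

lock = RLock()
lock.acquire()   # counter: 1
lock.acquire()   # same thread re-acquires; counter: 2 (an ordinary Lock would hang here)
lock.release()   # counter: 1
lock.release()   # counter: 0 -- other threads can grab the lock again
ok = lock.acquire(blocking=False)  # succeeds because the counter is back to zero
print(ok)        # True
lock.release()
```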

4. Semaphores

  A semaphore guards a piece of code so that at most N threads (or processes) can execute it at the same time.

"""
Mutex: a toilet with a single stall
Semaphore: a public toilet with N stalls
"""
from threading import Semaphore, Thread
import time
import random


sm = Semaphore(5)  # build a public toilet containing five stalls

def task(name):
    sm.acquire()
    print('%s occupied a stall' % name)
    time.sleep(random.randint(1, 3))  # block for a random time
    sm.release()

for i in range(40):
    t = Thread(target=task, args=(i,))
    t.start()

The 40 threads grab and release the semaphore in waves; at most five hold it at once.
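That at most five threads hold the semaphore at any moment can be checked by counting the holders. A small sketch (the `current`/`peak` counters are illustrative additions, not part of the original example):

```python
from threading import Semaphore, Thread, Lock
import time

sm = Semaphore(5)
counter_lock = Lock()
current = 0   # threads currently holding the semaphore
peak = 0      # highest simultaneous holder count observed

def task():
    global current, peak
    with sm:
        with counter_lock:
            current += 1
            peak = max(peak, current)
        time.sleep(0.05)  # simulate the work done while holding a stall
        with counter_lock:
            current -= 1

threads = [Thread(target=task) for i in range(40)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak)  # never exceeds 5
```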

 

5. Events

  An event is typically used when one child thread needs to wait for another child thread to reach a certain point, much like join() makes the calling thread wait for a child to finish running before continuing.

 

from threading import Thread, Event  # Event lives in the threading module

An Event has several methods:

event.is_set(): returns the event's status flag;

event.wait(): blocks the calling thread if event.is_set() == False;

event.set(): sets the status flag to True and wakes all blocked threads, which then wait for the operating system to schedule them;

event.clear(): resets the status flag to False.
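The four methods above can be exercised in a couple of lines:

```python
from threading import Event

e = Event()
print(e.is_set())  # False: a freshly created event is unset
e.set()            # raise the flag; any waiting threads wake up
print(e.is_set())  # True
e.wait()           # returns immediately because the flag is already set
e.clear()          # lower the flag again
print(e.is_set())  # False
```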

 

A traffic-light example:

from threading import Event, Thread
import time

# create an event object
e = Event()

def light():
    print('red light is on')
    time.sleep(3)
    e.set()  # send the signal
    print('green light is on')

def car(name):
    print('%s is waiting at the red light' % name)
    e.wait()  # wait for the signal
    print('%s hits the gas and races off' % name)

t = Thread(target=light)
t.start()

for i in range(10):
    t = Thread(target=car, args=('paratrooper %s' % i,))
    t.start()

>>>>:
paratrooper 8 is waiting at the red light
paratrooper 9 is waiting at the red light
green light is on
paratrooper 0 hits the gas and races off
paratrooper 3 hits the gas and races off
paratrooper 4 hits the gas and races off
paratrooper 8 hits the gas and races off
paratrooper 1 hits the gas and races off
paratrooper 5 hits the gas and races off
paratrooper 9 hits the gas and races off
paratrooper 6 hits the gas and races off
paratrooper 7 hits the gas and races off
paratrooper 2 hits the gas and races off

(only the last few waiting lines are shown; the interleaving varies from run to run)
 

6. Thread Queues

  queue.Queue: import queue; it is used the same way as the multiprocessing Queue.

class queue.Queue(maxsize=0)          # FIFO: first in, first out

class queue.LifoQueue(maxsize=0)      # LIFO: last in, first out

class queue.PriorityQueue(maxsize=0)  # priority queue: a priority can be attached when data is stored

PriorityQueue example (the lower the number, the higher the priority; higher-priority items are dequeued first):

import queue

q = queue.PriorityQueue()
# put() takes a tuple; the first element is the priority (usually a number,
# though any comparable values work). The lower the number, the higher the priority.
q.put((20, 'a'))
q.put((10, 'b'))
q.put((30, 'c'))

print(q.get())
print(q.get())
print(q.get())
'''
Result (lower number = higher priority; higher priority dequeues first):
(10, 'b')
(20, 'a')
(30, 'c')
'''

Threads in the same process already share data, so why do we still need queues?

  Because a queue is a pipe plus locks: with a queue you never have to manage the locking yourself, and manual lock handling can easily lead to deadlock.
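The three queue classes differ only in dequeue order; a quick comparison of Queue and LifoQueue:

```python
import queue

fifo = queue.Queue()
lifo = queue.LifoQueue()
for i in (1, 2, 3):
    fifo.put(i)
    lifo.put(i)

fifo_order = [fifo.get() for _ in range(3)]
lifo_order = [lifo.get() for _ in range(3)]
print(fifo_order)  # [1, 2, 3]  first in, first out
print(lifo_order)  # [3, 2, 1]  last in, first out
```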


Origin www.cnblogs.com/Gaimo/p/11354010.html