GIL (Global Interpreter Lock)
There are many Python interpreters; the most common is CPython.
The GIL is, in essence, a mutex: it turns concurrent execution into serial execution, sacrificing efficiency to guarantee data safety.
It is used to prevent multiple threads in the same process from executing Python bytecode at the same time
(multiple threads within one process cannot run in parallel, but they can run concurrently).
If Python threads cannot exploit multiple cores, does that make multithreading useless? Not necessarily.
The GIL exists because the CPython interpreter's memory management is not thread-safe.
Garbage collection mechanisms:
a. reference counting
b. mark-and-sweep
c. generational collection
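Reference counting, the first of these mechanisms, can be observed directly with the standard library's `sys.getrefcount` (a minimal illustration, not part of the original notes):

```python
import sys

x = []                      # the new list has one reference: the name x
y = x                       # binding a second name increments the count
# getrefcount reports one extra reference for its own argument,
# so in CPython the value here is typically 3: x, y, and the argument
print(sys.getrefcount(x))
del y                       # dropping a reference decrements the count
print(sys.getrefcount(x))   # one less than before
```

When the count drops to zero the object is freed immediately; mark-and-sweep and generational collection exist to handle reference cycles that counting alone cannot reclaim.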
To judge how useful Python multithreading is, we need to consider the cases separately:
(Assume there are four tasks, each needing 10s of processing.)
1. Compute-intensive:
Single core: opening threads saves more resources (creating a process requires allocating memory space and other setup, which wastes time).
Multiple cores: opening processes takes about 10s; opening threads takes about 40s.
2. I/O-intensive:
Single core: opening threads saves more resources (same reason as the compute-intensive case).
Multiple cores: opening threads still saves more resources (in the I/O-bound case the threads spend most of their time waiting; creating processes and allocating their memory is time-consuming, while creating and switching threads costs almost nothing).
So whether Python multithreading is useful depends on the situation; it certainly is useful, and multiprocessing and multithreading can be used together.
# I/O-intensive
from multiprocessing import Process
from threading import Thread
import os, time

def work():
    time.sleep(2)

if __name__ == '__main__':
    l = []
    print(os.cpu_count())          # 8 cores on this machine
    start = time.time()
    for i in range(400):
        p = Process(target=work)   # takes about 9.5s; most of the time is spent creating processes
        # p = Thread(target=work)  # takes about 2.05s
        l.append(p)
        p.start()
    for p in l:
        p.join()
    stop = time.time()
    print('run time is %s' % (stop - start))
# Compute-intensive
from multiprocessing import Process
from threading import Thread
import os, time

def work():
    res = 0
    for i in range(100000000):
        res *= i

if __name__ == '__main__':
    l = []
    print(os.cpu_count())          # 8 cores on this machine
    start = time.time()
    for i in range(8):
        p = Process(target=work)   # takes about 8.46s
        # p = Thread(target=work)  # takes about 34.8s
        l.append(p)
        p.start()
    for p in l:
        p.join()
    stop = time.time()
    print('run time is %s' % (stop - start))
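The point above that multiprocessing and multithreading can be combined can be sketched as follows (my illustration, not the original author's code): give compute-intensive work to a process pool and I/O-intensive work to a thread pool, here via `concurrent.futures`. The task functions and sizes are made up for the example.

```python
# Sketch: processes for CPU-bound work, threads for I/O-bound work.
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor
import time

def cpu_task(n):          # compute-intensive: benefits from processes
    res = 0
    for i in range(n):
        res += i * i
    return res

def io_task(delay):       # I/O-intensive: benefits from threads
    time.sleep(delay)
    return delay

if __name__ == '__main__':
    with ProcessPoolExecutor(max_workers=4) as pp, \
         ThreadPoolExecutor(max_workers=4) as tp:
        cpu_futures = [pp.submit(cpu_task, 100000) for _ in range(4)]
        io_futures = [tp.submit(io_task, 0.2) for _ in range(4)]
        cpu_results = [f.result() for f in cpu_futures]
        io_results = [f.result() for f in io_futures]
    print(cpu_results, io_results)
```

Each pool plays to its strength: the process pool sidesteps the GIL for computation, while the thread pool overlaps the waiting time of the I/O tasks cheaply.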
Deadlock
In the code below, each thread acquires one lock and then tries to acquire the other, but the two threads end up each holding the lock the other needs, so the program hangs there: this situation is a deadlock.
from threading import Thread, Lock, current_thread
import time

mutexA = Lock()
mutexB = Lock()

class MyThread(Thread):
    # Starting the thread automatically triggers a call to run();
    # inside run() we call func1 and then func2.
    def run(self):
        self.func1()
        self.func2()

    def func1(self):
        mutexA.acquire()
        print('%s grabbed lock A' % self.name)  # self.name is equivalent to current_thread().name
        mutexB.acquire()
        print('%s grabbed lock B' % self.name)
        mutexB.release()
        print('%s released lock B' % self.name)
        mutexA.release()
        print('%s released lock A' % self.name)

    def func2(self):
        mutexB.acquire()
        print('%s grabbed lock B' % self.name)
        time.sleep(1)
        mutexA.acquire()
        print('%s grabbed lock A' % self.name)
        mutexA.release()
        print('%s released lock A' % self.name)
        mutexB.release()
        print('%s released lock B' % self.name)

for i in range(10):
    t = MyThread()
    t.start()

'''
Result of execution:
Thread-1 grabbed lock A
Thread-1 grabbed lock B
Thread-1 released lock B
Thread-1 released lock A
Thread-1 grabbed lock B
Thread-2 grabbed lock A
(the program does not end here; it hangs forever...)
'''
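A common fix for this deadlock (a sketch, not part of the original code) is to replace the two independent locks with a single recursive lock, `threading.RLock`, which the same thread may acquire multiple times. Since both names now refer to the same lock, two threads can no longer each hold the lock the other needs:

```python
from threading import Thread, RLock
import time

# One recursive lock replaces mutexA and mutexB: a thread that already
# holds it can acquire it again; other threads must wait for it entirely.
mutexA = mutexB = RLock()

class MyThread(Thread):
    def run(self):
        self.func1()
        self.func2()

    def func1(self):
        mutexA.acquire()
        print('%s grabbed lock A' % self.name)
        mutexB.acquire()             # same lock: its counter goes 1 -> 2
        print('%s grabbed lock B' % self.name)
        mutexB.release()
        mutexA.release()

    def func2(self):
        mutexB.acquire()
        print('%s grabbed lock B' % self.name)
        time.sleep(0.1)
        mutexA.acquire()
        print('%s grabbed lock A' % self.name)
        mutexA.release()
        mutexB.release()

if __name__ == '__main__':
    threads = [MyThread() for _ in range(10)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print('all threads finished')  # the program now actually terminates
```

The recursive lock keeps an ownership counter per thread; it is only fully released (and handed to another thread) when the counter falls back to zero, so the partial-acquisition state that caused the deadlock can no longer occur.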