Thread (thread)
A thread is the smallest unit scheduled by the operating system: a single sequential stream of instructions. A program begins with a main thread; the main thread can start new threads, and those child threads can start threads of their own. Once started, a child thread runs independently of the main thread: the main thread keeps executing regardless of whether the child has finished, and the program shuts down only when all threads have finished.
Global interpreter lock (GIL)
The order in which threads execute cannot be controlled; to prevent data errors, the GIL allows only one thread to execute at any given moment.
It must be made clear that the GIL is not a feature of the Python language: it is a concept introduced by one interpreter implementation (CPython), and Python code should not rely on the GIL.
threading module
Start a thread
Direct call
Example:
import threading
import time

def run(i):  # the function name can be anything
    print('test', i)
    time.sleep(1)

t1 = threading.Thread(target=run, args=('t1',))
t2 = threading.Thread(target=run, args=('t2',))
t3 = threading.Thread(target=run, args=('t3',))
t1.start()
t2.start()
t3.start()
threading.Thread(target=run, args=('t1',))
target is the function the thread will execute; args is the tuple of arguments passed to it
Call by inheriting from Thread
Example:
import threading

class MyThread(threading.Thread):
    def __init__(self, n):
        super(MyThread, self).__init__()
        self.n = n

    def run(self):  # the method name must be run
        print('class test', self.n)

t1 = MyThread('t1')
t2 = MyThread('t2')
t3 = MyThread('t3')
t1.start()
t2.start()
t3.start()
The difference between multi-threaded and single-threaded
IO operations do not occupy the CPU; computation does.
Because of the GIL, Python multithreading is not suited to CPU-bound tasks, but it works well for IO-bound tasks.
Single-threaded example:
import time

def run(i):
    print('test', i)
    time.sleep(1)

run('t1')
run('t2')
run('t3')
Comparing with the multithreaded version: the multithreaded program prints everything and finishes after about 1 s in total, while the single-threaded version must wait 1 s after each print.
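The claim that Python threads do not help CPU-bound work can be checked with a small timing sketch (a hypothetical example; exact timings depend on the machine, and this assumes a standard CPython build where the GIL is active). With the GIL, two threads doing pure computation take about as long as doing the same work sequentially:

```python
import threading
import time

def count_down(n):
    # pure computation: the thread holds the GIL while this loop runs
    while n > 0:
        n -= 1

N = 5_000_000

# run the work twice sequentially
start = time.time()
count_down(N)
count_down(N)
sequential = time.time() - start

# run the same total work in two threads
start = time.time()
t1 = threading.Thread(target=count_down, args=(N,))
t2 = threading.Thread(target=count_down, args=(N,))
t1.start(); t2.start()
t1.join(); t2.join()
threaded = time.time() - start

# with the GIL, the threaded version is not ~2x faster; it is usually
# about the same as sequential, or slightly slower due to switching overhead
print('sequential: %.2fs, threaded: %.2fs' % (sequential, threaded))
```

An IO-bound task (like the time.sleep examples above) is different: a sleeping thread releases the GIL, so the threads genuinely overlap.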
Other
join
After the main thread creates a child thread, the two are independent of each other: the main thread continues executing regardless of whether the child thread has finished.
Using join() makes the main thread wait until the child thread has finished before continuing.
Example:
import threading
import time

def run(th):
    print('test', th)
    time.sleep(2)

start_time = time.time()
threading_list = []
for i in range(50):
    t = threading.Thread(target=run, args=('t-%s' % i,))
    t.start()
    threading_list.append(t)
for item in threading_list:
    item.join()
print('totally', time.time() - start_time)
Without join, the main thread continues as soon as the child threads are created and prints the elapsed time immediately; the program then ends about two seconds later, when all threads have finished.
With join, the main thread waits for every child thread to finish before printing, so the reported time is about two seconds.
Daemon threads (daemon)
A daemon thread exists to serve the main thread: as soon as all non-daemon threads have finished, the program exits.
When a child thread is set as a daemon thread, the program will not wait for it to finish before exiting.
Example:
import threading
import time

def run(th):
    print('test', th)
    time.sleep(2)

start_time = time.time()
for i in range(50):
    t = threading.Thread(target=run, args=('t-%s' % i,))
    t.setDaemon(True)  # must be set before start(); modern spelling: t.daemon = True
    t.start()
print('totally', time.time() - start_time)
Note: setDaemon(True) must be called before start(); the modern spelling is t.daemon = True.
Thread lock (mutex)
A process can start multiple threads, and those threads share the parent process's memory space, which means every thread can access the same data.
If multiple threads modify the same data at the same time, errors occur.
Why errors occur even with the GIL:
The GIL does guarantee that only one thread executes at a time, but a thread can read the shared value and then be switched out (releasing the GIL) before writing the modified value back; another thread then updates the same value, and the first thread's later write overwrites it, losing an update.
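The lost-update problem can be made visible without a lock (a hypothetical sketch: the sleep between the read and the write is there only to force a thread switch mid-update, exaggerating what normally happens by chance):

```python
import threading
import time

num = 0

def run():
    global num
    tmp = num          # read the shared value
    time.sleep(0.001)  # forced switch: other threads read the same stale value
    num = tmp + 1      # write back a result computed from the stale read

threads = [threading.Thread(target=run) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# typically far less than 50: many increments were lost
print(num)
```

Protecting the read-modify-write with a mutex, as in the following example, prevents the lost updates.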
import threading
import time

def run(th):
    lock.acquire()  # acquire the lock
    global num
    time.sleep(0.01)
    num += 1
    print('test', th)
    lock.release()  # release the lock

lock = threading.Lock()
threading_list = []
num = 0
for i in range(50):
    t = threading.Thread(target=run, args=('t-%s' % i,))
    t.start()
    threading_list.append(t)
for item in threading_list:
    item.join()
print(num)
Note: keep each thread's critical section short; if every thread holds the lock for a long time, execution effectively becomes serial.
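The acquire()/release() pair above can also be written with a context manager, which releases the lock automatically even if the critical section raises an exception (a minimal sketch):

```python
import threading

num = 0
lock = threading.Lock()

def run():
    global num
    with lock:  # acquired on entry, released on exit, even on exceptions
        num += 1

threads = [threading.Thread(target=run) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(num)  # 50: every increment is protected
```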
Deadlock
When mutex acquisitions are nested, a deadlock can occur: in the example below, run3() holds the lock and then calls run1(), which tries to acquire the same lock again, so the program hangs forever.
import threading

def run1():
    lock.acquire()  # deadlock: the calling thread already holds this lock
    global num1
    num1 += 1
    lock.release()
    return num1

def run2():
    lock.acquire()
    global num2
    num2 += 1
    lock.release()
    return num2

def run3():
    lock.acquire()
    res1 = run1()
    res2 = run2()
    lock.release()
    print(res1, res2)

num1, num2 = 0, 0
lock = threading.Lock()
for i in range(10):
    t = threading.Thread(target=run3)
    t.start()
while threading.active_count() != 1:
    print(threading.active_count())
else:
    print('-----finished-----')
print(num1, num2)
RLock (recursive lock)
To avoid this deadlock, use a recursive lock (RLock): the thread that holds it may acquire it again, as long as it releases it the same number of times.
import threading

def run1():
    lock.acquire()  # fine: an RLock can be re-acquired by the thread that holds it
    global num1
    num1 += 1
    lock.release()
    return num1

def run2():
    lock.acquire()
    global num2
    num2 += 1
    lock.release()
    return num2

def run3():
    lock.acquire()
    res1 = run1()
    res2 = run2()
    lock.release()
    print(res1, res2)

num1, num2 = 0, 0
lock = threading.RLock()
for i in range(10):
    t = threading.Thread(target=run3)
    t.start()
while threading.active_count() != 1:
    print(threading.active_count())
else:
    print('-----finished-----')
print(num1, num2)
Semaphore (semaphore)
A mutex allows only one thread at a time to change the data, while a semaphore allows a fixed number of threads to change it at the same time.
import threading
import time
import sys

def run(th):
    semaphore.acquire()
    string = 'threading:' + str(th) + '\n'
    sys.stdout.write(string)
    time.sleep(2)
    semaphore.release()

semaphore = threading.BoundedSemaphore(5)  # allow at most 5 threads to run at once
for i in range(20):
    t = threading.Thread(target=run, args=(i,))
    t.start()
while threading.active_count() != 1:
    pass
else:
    print('Done')
From the program's output you can see that five threads run first; as each one releases the semaphore, another waiting thread is admitted, so at most five run at any one time.
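The "at most N at once" guarantee can be verified by tracking peak concurrency (a hypothetical sketch; the counter names and the second lock protecting them are illustrative additions, not part of the original example):

```python
import threading
import time

semaphore = threading.BoundedSemaphore(2)  # at most 2 threads inside at once
counter_lock = threading.Lock()            # protects the two counters below
current = 0
peak = 0

def worker():
    global current, peak
    with semaphore:            # wait for a free slot; released on exit
        with counter_lock:
            current += 1
            peak = max(peak, current)
        time.sleep(0.1)        # simulate work while holding a slot
        with counter_lock:
            current -= 1

threads = [threading.Thread(target=worker) for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print('peak concurrency:', peak)  # never exceeds the semaphore's limit of 2
```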
Event (Event)
An Event lets threads interact with each other, much like setting a shared global variable.
An Event holds an internal flag with two states, True and False. The flag is changed with set() and clear(); a thread reads it with is_set(), and wait() blocks while the flag is False.
Traffic light and car interaction example:
import threading
import time

event = threading.Event()

def light():
    count = 0
    event.set()  # start with a green light
    while 1:
        if 5 < count < 10:
            event.clear()  # red light: cars must wait
            print('red')
        elif count == 10:
            event.set()  # back to green
            print('green')
            count = 0
        else:
            print('green')
        count += 1
        time.sleep(1)

def car():
    while 1:
        if event.is_set():
            print('running...')
            time.sleep(1)
        else:
            print('waiting')
            event.wait()  # block until the light turns green again

l1 = threading.Thread(target=light,)
l1.start()
c1 = threading.Thread(target=car,)
c1.start()