GIL (Global Interpreter Lock), deadlock, recursive lock, and thread queues

GIL (Global Interpreter Lock)

  1. The GIL is essentially a mutex.
  2. The GIL prevents multiple threads within the same process from executing simultaneously (in parallel).
    • Multiple threads in a single process cannot run in parallel, but they can run concurrently.
  3. This lock exists mainly because CPython's memory management is not thread-safe.

    • It ensures that a thread executing its task is not disturbed by the garbage collection mechanism.
from threading import Thread
import time

num = 100


def task():
    global num
    num2 = num
    time.sleep(1)
    num = num2 - 1
    print(num)


for line in range(100):
    t = Thread(target=task)
    t.start()
    
# Every thread prints 99: the sleep (IO) releases the GIL, so all 100 threads read num == 100 before any of them writes back, and each one writes 99
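
For comparison, here is a minimal sketch (not from the original post) where the read-modify-write is protected by an explicit user-level Lock; the GIL alone does not make this compound operation atomic, but the mutex serializes it, so the final result becomes 0:

from threading import Thread, Lock
import time

num = 100
mutex = Lock()


def task():
    global num
    with mutex:            # only one thread at a time runs the read-modify-write
        num2 = num
        time.sleep(0.01)   # shorter sleep so 100 serialized threads still finish quickly
        num = num2 - 1
        print(num)


threads = [Thread(target=task) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# the last value printed is 0: each decrement now takes effect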

The role of multithreading

  1. Compute-intensive: four tasks, each taking 10s

Single-core:

  • Open process
    • Excessive consumption of resources
    • Four processes: 40s
  • Open thread
    • Consumes far fewer resources than processes
    • Four threads: 40s

Multi-core:

  • Open process
    • Executed in parallel, higher efficiency
    • Four processes: 10s
  • Open thread
    • Concurrent execution, low efficiency
    • Four threads: 40s
  2. IO-intensive: four tasks, each taking 10s

Single-core:

  • Open process
    • Excessive consumption of resources
    • Four processes: 40s
  • Open thread
    • Consumes far fewer resources than processes
    • Four threads: 40s

Multi-core:

  • Open process
    • Parallel execution, but less efficient than multithreading here, because whenever a process hits IO the CPU immediately switches away from it
    • Four processes: 40s + the extra time needed to spawn the processes
  • Open thread
    • Concurrent execution, more efficient than multiple processes for this workload
    • Four threads: 40s

Compute-intensive test

from threading import Thread
from multiprocessing import Process
import time
import os


# Compute-intensive task
def work1():
    number = 0
    for line in range(100000000):
        number += 1

# IO-intensive task
def work2():
    time.sleep(2)


if __name__ == '__main__':
    # Test the compute-intensive task
    print(os.cpu_count())   # 4-core CPU

    start = time.time()
    list1 = []
    for line in range(6):

        p = Process(target=work1)  # execution time 8.756593704223633
        # p = Thread(target=work1)   # execution time 31.78555393218994
        list1.append(p)
        p.start()
    for p in list1:
        p.join()
    end = time.time()
    print(f'Execution time: {end - start}')

IO-intensive test

from threading import Thread
from multiprocessing import Process
import time
import os

# Compute-intensive task
def work1():
    number = 0
    for line in range(100000000):
        number += 1

# IO-intensive task
def work2():
    time.sleep(1)


if __name__ == '__main__':
    # Test the IO-intensive task
    print(os.cpu_count())   # 4-core CPU

    start = time.time()
    list1 = []
    for line in range(100):

        # p = Process(target=work2)  # execution time 15.354223251342773
        p = Thread(target=work2)   # execution time 1.0206732749938965
        list1.append(p)
        p.start()
    for p in list1:
        p.join()
    end = time.time()
    print(f'Execution time: {end - start}')

Conclusion:

  • For compute-intensive work, use multiple processes
  • For IO-intensive work, use multiple threads
  • To run multiple compute-intensive programs efficiently when there are also multiple IO-intensive programs, combine multiprocessing with multithreading (see the sketch below)
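
A minimal sketch of the multiprocessing + multithreading idea (my own illustration with made-up task functions, not code from the original post): each process keeps one core busy with a compute-intensive task, while the threads inside it overlap the IO-intensive waits.

from multiprocessing import Process
from threading import Thread
import time


def io_task():
    time.sleep(1)                     # IO-intensive part: the thread just waits


def cpu_task():
    total = 0
    for _ in range(10_000_000):       # compute-intensive part: keeps a core busy
        total += 1


def worker():
    # per process: threads overlap the IO while the process does the computation
    threads = [Thread(target=io_task) for _ in range(5)]
    for t in threads:
        t.start()
    cpu_task()
    for t in threads:
        t.join()


if __name__ == '__main__':
    start = time.time()
    procs = [Process(target=worker) for _ in range(4)]   # e.g. one process per core
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(f'Execution time: {time.time() - start}')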

Deadlock

A deadlock refers to a situation where two or more processes or threads, while executing, end up waiting for each other because they are competing for resources; without outside intervention none of them can make progress. The system is then said to be in a deadlock state.

The following code produces a deadlock:

from threading import Thread, Lock
from threading import current_thread
import time

mutex_a = Lock()
mutex_b = Lock()


class MyThread(Thread):

    def run(self):
        self.func1()
        self.func2()

    def func1(self):
        mutex_a.acquire()
        print(f'User {self.name} grabbed lock a')
        mutex_b.acquire()
        print(f'User {self.name} grabbed lock b')
        mutex_b.release()
        print(f'User {self.name} released lock b')
        mutex_a.release()
        print(f'User {self.name} released lock a')

    def func2(self):
        mutex_b.acquire()
        print(f'User {self.name} grabbed lock b')
        time.sleep(1)
        mutex_a.acquire()
        print(f'User {self.name} grabbed lock a')
        mutex_a.release()
        print(f'User {self.name} released lock a')
        mutex_b.release()
        print(f'User {self.name} released lock b')


for line in range(10):
    t = MyThread()
    t.start()
    
    
'''
User Thread-1 grabbed lock a
User Thread-1 grabbed lock b
User Thread-1 released lock b
User Thread-1 released lock a
User Thread-1 grabbed lock b
User Thread-2 grabbed lock a
'''
# Hangs forever: Thread-1 (in func2) holds lock b and waits for lock a, while Thread-2 (in func1) holds lock a and waits for lock b

Recursive lock

Used to solve deadlocks.

RLock can be compared to a master key that can be handed out to multiple people.

But once it has been acquired, the lock keeps a reference count of how many times it has been acquired.

Only when the reference count drops back to 0 is the lock truly released so that another thread can grab it.
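
A small sketch (not from the original post) of the reference-count behaviour: the same thread may acquire an RLock repeatedly, and the lock only becomes available to other threads after release() has been called the same number of times.

from threading import RLock

rlock = RLock()

rlock.acquire()    # reference count = 1
rlock.acquire()    # same thread acquires again, count = 2 (a plain Lock would block here)
print('acquired twice')

rlock.release()    # count = 1, the lock is still held
rlock.release()    # count = 0, other threads can now acquire it
print('fully released')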

If the example above uses RLock instead of Lock, the deadlock no longer occurs:

from threading import Thread, Lock, RLock
from threading import current_thread
import time

# mutex_a = Lock()
# mutex_b = Lock()

mutex_a = mutex_b = RLock()

class MyThread(Thread):

    def run(self):
        self.func1()
        self.func2()

    def func1(self):
        mutex_a.acquire()
        print(f'User {self.name} grabbed lock a')
        mutex_b.acquire()
        print(f'User {self.name} grabbed lock b')
        mutex_b.release()
        print(f'User {self.name} released lock b')
        mutex_a.release()
        print(f'User {self.name} released lock a')

    def func2(self):
        mutex_b.acquire()
        print(f'User {self.name} grabbed lock b')
        time.sleep(1)
        mutex_a.acquire()
        print(f'User {self.name} grabbed lock a')
        mutex_a.release()
        print(f'User {self.name} released lock a')
        mutex_b.release()
        print(f'User {self.name} released lock b')


for line in range(10):
    t = MyThread()
    t.start()

Semaphore (just for understanding)

A mutex can be compared to a single household toilet: only one person can use it at a time.

A semaphore can be compared to a public restroom with several stalls: multiple people can use it at the same time.

from threading import Semaphore
from threading import Thread
from threading import current_thread
import time

sm = Semaphore(5)

def task():
    sm.acquire()
    print(f'{current_thread().name} is executing a task')
    time.sleep(1)
    sm.release()


for i in range(20):
    t = Thread(target=task)
    t.start()
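
Semaphore also works as a context manager, so the acquire/release pair above can be written with a with statement; a small equivalent sketch:

from threading import Semaphore, Thread, current_thread
import time

sm = Semaphore(5)   # at most 5 threads inside the with block at the same time


def task():
    with sm:        # acquired on entry, released on exit, even if an exception is raised
        print(f'{current_thread().name} is executing a task')
        time.sleep(1)


for i in range(20):
    Thread(target=task).start()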

Thread queue

Thread queue: a queue used between threads; the default order is first in, first out.

  • Normal queue: FIFO, first in, first out
  • Special queue: LIFO, last in, first out
  • Priority queue: if tuples are passed in, their elements are compared in turn (ASCII values for strings) to decide which comes out first, smallest first
import queue

# Normal thread queue: first in, first out (FIFO)
q = queue.Queue()
q.put(1)
q.put(2)
q.put(3)

print(q.get())  # 1
print(q.get())  # 2


# LIFO queue: last in, first out
q = queue.LifoQueue()
q.put(1)
q.put(2)
q.put(3)
print(q.get())  # 3


# Priority queue: items are ordered by comparing the tuple contents (smallest comes out first)
q = queue.PriorityQueue()
q.put((4, '我'))
q.put((2, '你'))
q.put((3, 'ta'))
print(q.get())   # (2, '你')
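
A small follow-up sketch (my addition) showing the "compared in turn" behaviour: when the first elements of the tuples are equal, the second elements decide the order.

import queue

q = queue.PriorityQueue()
q.put((1, 'b'))
q.put((1, 'a'))   # same priority number, so the second elements are compared
q.put((0, 'c'))

print(q.get())    # (0, 'c')  smallest first element comes out first
print(q.get())    # (1, 'a')  tie on 1, and 'a' < 'b'
print(q.get())    # (1, 'b')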

Origin www.cnblogs.com/setcreed/p/11729265.html