day39: thread locks

First, the thread lock

from threading import Thread, Lock

x = 0
mutex = Lock()

def task():
    global x
    mutex.acquire()  # comment out acquire()/release() to see the race
    for i in range(200000):
        x = x + 1
        # Without the lock: t1 reads x == 0 and is switched out mid-update;
        # t2 also reads 0 and stores 0 + 1 == 1;
        # t1 resumes and likewise stores x = 0 + 1 == 1.
        # Two increments ran, but x grew by only 1 instead of 2:
        # that lost update is the data-safety problem the lock prevents.
    mutex.release()
    
if __name__ == '__main__':
    t1 = Thread(target=task)
    t2 = Thread(target=task)
    t3 = Thread(target=task)
    t1.start()
    t2.start()
    t3.start()
    
    t1.join()
    t2.join()
    t3.join()
    print(x)

Second, deadlock

from threading import Thread, Lock
import time
mutex1 = Lock()
mutex2 = Lock()

class MyThread(Thread):
    def run(self):
        self.task1()
        self.task2()
        
    def task1(self):
        mutex1.acquire()
        print(f'{self.name} grabbed lock1')
        mutex2.acquire()
        print(f'{self.name} grabbed lock2')
        mutex2.release()
        print(f'{self.name} released lock2')
        mutex1.release()
        print(f'{self.name} released lock1')

    def task2(self):
        mutex2.acquire()
        print(f'{self.name} grabbed lock2')
        time.sleep(1)
        mutex1.acquire()
        print(f'{self.name} grabbed lock1')
        mutex1.release()  # fixed: the original had the typo "murex1"
        print(f'{self.name} released lock1')
        mutex2.release()
        print(f'{self.name} released lock2')
        
for i in range(3):
    t = MyThread()
    t.start()
    
# With two threads:
# thread 1 holds lock2 and needs lock1 to continue;
# thread 2 holds lock1 and needs lock2 to continue.
# Each holds exactly what the other is waiting for, and neither
# lets go of its own lock: the program hangs in a deadlock.
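A standard cure for this deadlock is to make every thread acquire the locks in the same fixed order, so no thread can ever hold the second lock while waiting for the first. The sketch below is illustrative (the `lock_a`/`lock_b` names are mine, not from the post):

```python
from threading import Thread, Lock
import time

lock_a = Lock()
lock_b = Lock()

def task():
    # Every thread takes lock_a first, then lock_b, so no thread can
    # hold lock_b while waiting for lock_a: a circular wait cannot form.
    with lock_a:
        with lock_b:
            time.sleep(0.1)  # pretend to work while holding both locks

threads = [Thread(target=task) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print('finished without deadlock')
```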

Third, the recursive lock (RLock)

A recursive lock can be acquired multiple times by the same thread.

How it is released: the lock maintains an internal counter, so the same thread must call release() as many times as it called acquire() before the lock is actually freed.

from threading import Thread, Lock, RLock

mutex1 = RLock()
mutex2 = mutex1  # both names refer to the same recursive lock

import time
class MyThread(Thread):
    def run(self):
        self.task1()
        self.task2()
        
    def task1(self):
        mutex1.acquire()
        print(f'{self.name} grabbed lock1')
        mutex2.acquire()
        print(f'{self.name} grabbed lock2')
        mutex2.release()
        print(f'{self.name} released lock2')
        mutex1.release()
        print(f'{self.name} released lock1')

    def task2(self):
        mutex2.acquire()
        print(f'{self.name} grabbed lock2')
        time.sleep(1)
        mutex1.acquire()
        print(f'{self.name} grabbed lock1')
        mutex1.release()
        print(f'{self.name} released lock1')
        mutex2.release()
        print(f'{self.name} released lock2')
        
for i in range(3):
    t = MyThread()
    t.start()
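The internal counter can be observed directly in a single thread. A minimal sketch (not from the original post):

```python
from threading import RLock

r = RLock()
assert r.acquire(blocking=False)  # counter 0 -> 1
assert r.acquire(blocking=False)  # same thread re-acquires: counter 1 -> 2
r.release()                       # counter 2 -> 1: still held
r.release()                       # counter 1 -> 0: actually released
assert r.acquire(blocking=False)  # the lock is free again
r.release()
print('RLock released after matching release() calls')
```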

Fourth, the semaphore

A Semaphore manages a built-in counter: each call to acquire() decrements the counter by 1, and each call to release() increments it by 1.

The counter can never drop below zero; when it reaches 0, acquire() blocks the calling thread until some other thread calls release().

Example (at most five threads can hold the semaphore at a time, which caps the number of concurrent workers at 5):

from threading import Thread, Semaphore
import threading
import time

def task():
    sm.acquire()
    print(f'{threading.current_thread().name} get sm')
    time.sleep(3)
    sm.release()
    
if __name__ == "__main__":
    sm = Semaphore(5)  # at most 5 threads may run the task body at the same time
    for i in range(20):
        t = Thread(target=task)
        t.start()

A semaphore and a process pool are entirely different concepts: Pool(4) creates at most 4 worker processes and reuses exactly those 4 from start to finish, never spawning new ones, whereas a semaphore merely throttles a crowd of threads/processes that all get created anyway.
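For contrast, a pool really does keep a fixed set of workers. A small sketch using the standard-library `ThreadPoolExecutor` (my addition, not from the post) shows 20 tasks all being served by at most 4 long-lived threads:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

def task(i):
    # Report which worker thread handled this task.
    return threading.current_thread().name

with ThreadPoolExecutor(max_workers=4) as pool:
    worker_names = set(pool.map(task, range(20)))

# All 20 tasks were served by at most 4 reused worker threads.
print(len(worker_names))
```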

Fifth, the GIL

The CPython interpreter contains a lock called the GIL (Global Interpreter Lock); the GIL is essentially a mutex.

As a result, within a single process only one thread can run at any given moment, so Python threads cannot take advantage of multiple cores: threads in the same process can be concurrent, but never truly parallel.

Why does CPython need the GIL? Because CPython's own garbage collection is not thread-safe, the interpreter serializes bytecode execution with this lock.
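The interval at which CPython asks the running thread to give up the GIL can be inspected and tuned. An illustrative snippet (not from the original post):

```python
import sys

# CPython periodically asks the running thread to release the GIL;
# this is the interval, in seconds, between those requests.
print(sys.getswitchinterval())  # 0.005 by default on modern CPython

sys.setswitchinterval(0.01)  # make thread switches less frequent
print(sys.getswitchinterval())
```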

Analysis:

Suppose we have four tasks to handle and want a concurrent effect. The options are:

Option 1: start four processes.

Option 2: inside one process, start four threads.

CPU-bound work: multiple processes are recommended.

Say each task needs 10s of pure computation.

1. Multithreading: at any moment only one thread executes, so no time is saved; the four 10s computations run one after another for roughly 40s in total.

2. Multiprocessing: the four tasks execute in parallel, so the total is about 10s plus the process start-up overhead.

from multiprocessing import Process
from threading import Thread
import time


# CPU-bound task
def work1():
    res = 0
    for i in range(100000000):  # 1 followed by 8 zeros
        res *= i


if __name__ == '__main__':
    t_list = []
    start = time.time()
    for i in range(4):
        # t = Thread(target=work1)
        t = Process(target=work1)
        t_list.append(t)
        t.start()
    for t in t_list:
        t.join()
    end = time.time()
    # print('multithreading', end - start)  # multithreading 15.413789510726929
    print('multiprocessing', end - start)  # multiprocessing 4.711405515670776

IO-bound work: multithreading is recommended.

Say there are four tasks, each spending 90% of its time on IO, roughly 10s of IO per task.

1. Multithreading: the tasks run concurrently; a thread waiting on IO does not occupy the CPU, so the total is about 10s plus the four tasks' (small) computation time.

2. Multiprocessing: the tasks run in parallel; the total is about 10s plus one task's computation time plus the process start-up overhead, which is higher than starting threads.

from threading import Thread
from multiprocessing import Process
import time

# IO-bound task
def work1():
    x = 1 + 1
    time.sleep(5)


if __name__ == '__main__':
    t_list = []
    start = time.time()
    for i in range(4):
        t = Thread(target=work1)
        # t = Process(target=work1)
        t_list.append(t)
        t.start()
    for t in t_list:
        t.join()
    end = time.time()
    print('multithreading', end - start)  # multithreading 5.002625942230225
    # print('multiprocessing', end - start)  # multiprocessing 5.660863399505615

Origin www.cnblogs.com/17vv/p/11543905.html