Lock

Because multiple threads share the process's resources and address space, operations on these shared resources must take thread synchronization and mutual exclusion into account, otherwise the shared data can end up in an inconsistent state. This is where locks come in.

How to use a Lock (both forms are sketched right after this list):

- with lock

- lock.acquire() and lock.release()
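A minimal sketch of the two styles (the counter and function names are purely illustrative, not from the original post); the with statement is generally preferred because it releases the lock even if an exception is raised:

import threading

lock = threading.Lock()
shared = 0  # hypothetical shared counter, for illustration only


def add_with_statement():
    global shared
    # Preferred: the lock is released automatically, even on exception
    with lock:
        shared += 1


def add_with_acquire_release():
    global shared
    # Equivalent manual form; try/finally guarantees the release
    lock.acquire()
    try:
        shared += 1
    finally:
        lock.release()

First, though, here is what happens when two threads modify a shared variable without any lock: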

import threading

# global var
count = 0


# Define a function for the thread
def print_time(threadName):
    global count

    c = 0
    while (c < 3):
        c += 1
        count += 1
        print("{0}: set count to {1}".format(threadName, count))


# Create and run threads as follows
try:
    t1 = threading.Thread(target=print_time, args=("Thread-1", ))
    t1.start()
    t2 = threading.Thread(target=print_time, args=("Thread-2", ))
    t2.start()
except Exception as e:
    print("Error: unable to start thread")

In this example we start two threads at the same time, and each of them modifies the global variable count. The output below shows the two threads taking turns updating count:

Thread-1: set count to 1
Thread-1: set count to 2
Thread-2: set count to 3
Thread-1: set count to 4
Thread-2: set count to 5
Thread-2: set count to 6

Clearly this is not what we wanted: whenever several threads access the same variable at the same time, this kind of shared-variable problem can appear. We can fix it by protecting the threads with a lock:

import threading

# global var
count = 0
lock = threading.Lock()

# Define a function for the thread
def print_time(threadName):
    global count

    c = 0
    with lock:
        while (c < 3):
            c += 1
            count += 1
            print("{0}: set count to {1}".format(threadName, count))


# Create and run threads as follows
try:
    t1 = threading.Thread(target=print_time, args=("Thread-1", ))
    t1.start()
    #t1.join()
    t2 = threading.Thread(target=print_time, args=("Thread-2", ))
    t2.start()
    #t2.join()
except Exception as e:
    print("Error: unable to start thread")

Now the result is what we expect:

Thread-1: set count to 1
Thread-1: set count to 2
Thread-1: set count to 3
Thread-2: set count to 4
Thread-2: set count to 5
Thread-2: set count to 6
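Note that the with lock: block above wraps the entire while loop, so the two threads effectively run one after the other. A variant (my own sketch, not from the original post) that holds the lock only for the shared update keeps count consistent while still letting the threads interleave:

import threading

count = 0
lock = threading.Lock()


def print_time(threadName):
    global count
    c = 0
    while c < 3:
        c += 1
        # Hold the lock only while touching the shared variable
        with lock:
            count += 1
            print("{0}: set count to {1}".format(threadName, count))


t1 = threading.Thread(target=print_time, args=("Thread-1", ))
t2 = threading.Thread(target=print_time, args=("Thread-2", ))
t1.start()
t2.start()
t1.join()
t2.join()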

Deadlock: a situation in which two or more threads (or processes) each wait for a resource held by the other while running, so that none of them can make progress.

import threading
import time

lock_1 = threading.Lock()
lock_2 = threading.Lock()


def func_1():
    print("func_1 starting.........")
    lock_1.acquire()
    print("func_1 acquired lock_1....")
    time.sleep(2)
    print("func_1 waiting for lock_2.......")
    lock_2.acquire()
    print("func_1 acquired lock_2.......")
    lock_2.release()
    print("func_1 released lock_2")
    lock_1.release()
    print("func_1 released lock_1")
    print("func_1 done..........")


def func_2():
    print("func_2 starting.........")
    lock_2.acquire()
    print("func_2 acquired lock_2....")
    time.sleep(4)
    print("func_2 waiting for lock_1.......")
    lock_1.acquire()
    print("func_2 acquired lock_1.......")
    lock_1.release()
    print("func_2 released lock_1")
    lock_2.release()
    print("func_2 released lock_2")
    print("func_2 done..........")


if __name__ == "__main__":
    print("Main program starting..............")
    t1 = threading.Thread(target=func_1, args=())
    t2 = threading.Thread(target=func_2, args=())
    t1.start()
    t2.start()
    t1.join()
    t2.join()
    print("Main program finished..............")

The code above deadlocks. t1 acquires lock_1 and then sleeps; while it is sleeping, t2 acquires lock_2. When t1 wakes up after 2 seconds and tries to acquire lock_2, it has to wait, because t2 is still holding lock_2 and has not released it. After its own sleep, t2 tries to acquire lock_1, which t1 has not released either, so t2 in turn waits for t1. Both threads now wait on each other forever. Here is where the program's output stops:

Main program starting..............
func_1 starting.........
func_1 acquired lock_1....
func_2 starting.........
func_2 acquired lock_2....
func_1 waiting for lock_2.......
func_2 waiting for lock_1.......
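One standard way to avoid this kind of deadlock is to make every thread acquire the locks in the same order, so no thread can hold one lock while waiting for the other. A minimal sketch of that idea (my own rewrite of the example above, not part of the original post):

import threading
import time

lock_1 = threading.Lock()
lock_2 = threading.Lock()


def func_1():
    # Both functions now take the locks in the same order: lock_1, then lock_2
    with lock_1:
        time.sleep(2)
        with lock_2:
            print("func_1 got both locks")


def func_2():
    with lock_1:  # the deadlocking version took lock_2 first
        time.sleep(4)
        with lock_2:
            print("func_2 got both locks")


if __name__ == "__main__":
    t1 = threading.Thread(target=func_1)
    t2 = threading.Thread(target=func_2)
    t1.start()
    t2.start()
    t1.join()
    t2.join()
    print("Both threads finished, no deadlock")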

RLock: within a single thread, calling acquire() several times on an RLock does not block. It is mainly meant for situations where a lock has to be acquired again during recursive or nested calls.

import threading


lock = threading.RLock()

def f():
    with lock:
        g()
        h()

def g():
    with lock:
        h()
        do_something1()

def h():
    with lock:
        do_something2()

def do_something1():
    print('do_something1')

def do_something2():
    print('do_something2')


# Create and run threads as follows
try:
    threading.Thread(target=f).start()
    threading.Thread(target=f).start()
    threading.Thread(target=f).start()
except Exception as e:
    print("Error: unable to start thread")

Every thread runs f(). After f() acquires the lock it calls g(), and g() needs to acquire the same lock again. With a plain Lock, acquiring the lock a second time in the same thread would block forever, i.e. a deadlock.
But because the code uses an RLock, the same thread can call acquire() on it repeatedly without blocking, so we get the following output:

do_something2
do_something1
do_something2
do_something2
do_something1
do_something2
do_something2
do_something1
do_something2
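To see the difference between Lock and RLock directly, here is a small sketch of my own (not from the original post); acquire() is called with a timeout so the plain-Lock case fails quickly instead of hanging:

import threading

plain = threading.Lock()
reentrant = threading.RLock()

# A plain Lock cannot be acquired twice by the same thread
plain.acquire()
print(plain.acquire(timeout=1))      # False: the second acquire times out
plain.release()

# An RLock keeps a per-thread recursion counter, so nested acquires succeed
reentrant.acquire()
print(reentrant.acquire(timeout=1))  # True: the same thread may re-acquire
reentrant.release()
reentrant.release()                  # release once for every acquire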

Reposted from www.cnblogs.com/wjw2018/p/10533116.html