Python concurrent programming (VI): thread deadlock, recursive locks, and semaphores

  1. Deadlock and the recursive lock

    • Threads can deadlock just as processes can:

      A deadlock is the situation in which two or more processes or threads, in the course of execution, each wait for a resource held by the other because of contention for resources. Without outside intervention, none of them can make progress; the system is then said to be in a deadlock state, and the processes forever waiting on one another are called deadlocked processes.

      A deadlock looks like this:

      • Deadlock:

        from threading import Thread, Lock
        import time

        mutexA = Lock()
        mutexB = Lock()

        class MyThread(Thread):
            def run(self):
                self.f1()
                self.f2()
            def f1(self):
                mutexA.acquire()
                print('\033[33m%s got lock A' % self.name)
                mutexB.acquire()
                print('\033[45m%s got lock B' % self.name)
                mutexB.release()
                mutexA.release()
            def f2(self):
                mutexB.acquire()
                print('\033[33m%s got lock B' % self.name)
                time.sleep(1)  # sleep 1s so another thread has time to grab lock A
                mutexA.acquire()
                print('\033[45m%s got lock A' % self.name)
                mutexA.release()
                mutexB.release()

        if __name__ == '__main__':
            for i in range(10):
                t = MyThread()
                t.start()  # start() calls the run method
        
        Deadlock demonstration
    • So how do we resolve the deadlock?

      The solution is the recursive lock. To support acquiring the same resource multiple times within a single thread, Python provides the reentrant lock, RLock.

      Internally, an RLock maintains a Lock plus a counter; the counter records how many times acquire has been called, so the owning thread can acquire the resource repeatedly.

      Only after the owning thread has released every acquire can other threads obtain the resource. If the example above uses an RLock in place of the two Locks, no deadlock occurs.

    • mutexA = mutexB = threading.RLock()  # When a thread takes the lock, counter becomes 1; if that same thread hits another acquire, counter keeps incrementing. All other threads must wait until the owning thread releases every acquire, i.e. until counter drops back to 0.
    • Resolving the deadlock:

    • # 2. Solving the deadlock -------------- the recursive lock
      from threading import Thread, RLock
      import time

      mutexB = mutexA = RLock()

      class MyThread(Thread):
          def run(self):
              self.f1()
              self.f2()
          def f1(self):
              mutexA.acquire()
              print('\033[33m%s got lock A' % self.name)
              mutexB.acquire()
              print('\033[45m%s got lock B' % self.name)
              mutexB.release()
              mutexA.release()
          def f2(self):
              mutexB.acquire()
              print('\033[33m%s got lock B' % self.name)
              time.sleep(1)  # sleep 1s so another thread has time to grab the lock
              mutexA.acquire()
              print('\033[45m%s got lock A' % self.name)
              mutexA.release()
              mutexB.release()

      if __name__ == '__main__':
          for i in range(10):
              t = MyThread()
              t.start()  # start() calls the run method
      
      Solving the deadlock with RLock
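The counter behaviour described above can be seen directly in a few lines (a minimal sketch, independent of the example above):

```python
from threading import RLock

lock = RLock()
lock.acquire()                            # counter -> 1
reentered = lock.acquire(blocking=False)  # same thread re-acquires: succeeds, counter -> 2
lock.release()                            # counter -> 1
lock.release()                           # counter -> 0, the lock is free for other threads
```

With a plain Lock, the second acquire in the same thread would block forever instead of succeeding.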
  2. Semaphore (essentially a lock)

    1. A Semaphore manages a built-in counter.

    2. A Semaphore may look similar to a process pool, but they are completely different concepts.

      • Process pool: Pool(4) creates at most four processes, and those same four processes handle all tasks from start to finish; no new ones are spawned.
      • Semaphore: all of the processes/threads are created up front, and they compete to grab one of a limited number of lock slots.
    3. Semaphore Example:

      from threading import Thread, Semaphore, current_thread
      import time, random

      sm = Semaphore(5)  # at most 5 people inside at a time
      def task():
          sm.acquire()
          print('\033[42m %s is in the toilet' % current_thread().name)
          time.sleep(random.randint(1, 3))
          print('\033[31m %s is done and leaves' % current_thread().name)
          sm.release()

      if __name__ == '__main__':
          for i in range(20):  # 20 threads: all 20 people need the toilet
              t = Thread(target=task)
              t.start()
      
      Semaphore example
    4. Result:

       Thread-1 is in the toilet
       Thread-2 is in the toilet
       Thread-3 is in the toilet
       Thread-4 is in the toilet
       Thread-5 is in the toilet
       Thread-3 is done and leaves
       Thread-6 is in the toilet
       Thread-1 is done and leaves
       Thread-7 is in the toilet
       Thread-2 is done and leaves
       Thread-8 is in the toilet
       Thread-6 is done and leaves
       Thread-5 is done and leaves
       Thread-4 is done and leaves
       Thread-9 is in the toilet
       Thread-10 is in the toilet
       Thread-11 is in the toilet
       Thread-9 is done and leaves
       Thread-12 is in the toilet
       Thread-7 is done and leaves
       Thread-13 is in the toilet
       Thread-10 is done and leaves
       Thread-8 is done and leaves
       Thread-14 is in the toilet
       Thread-15 is in the toilet
       Thread-12 is done and leaves
       Thread-11 is done and leaves
       Thread-16 is in the toilet
       Thread-17 is in the toilet
       Thread-14 is done and leaves
       Thread-15 is done and leaves
       Thread-17 is done and leaves
       Thread-18 is in the toilet
       Thread-19 is in the toilet
       Thread-20 is in the toilet
       Thread-13 is done and leaves
       Thread-20 is done and leaves
       Thread-16 is done and leaves
       Thread-18 is done and leaves
       Thread-19 is done and leaves
      
      Run output
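    Semaphore also supports the with statement, which pairs acquire and release automatically. The sketch below is not from the original post (names such as `peak` are illustrative); it verifies that no more than 2 threads are ever inside the guarded section at once:

```python
import threading
import time

sm = threading.Semaphore(2)      # at most 2 threads inside at once
lock = threading.Lock()          # protects the bookkeeping below
active = 0                       # threads currently inside the guarded section
peak = 0                         # highest value `active` ever reached

def task():
    global active, peak
    with sm:                     # acquire/release handled by the context manager
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.05)         # stand-in for the "toilet" work
        with lock:
            active -= 1

threads = [threading.Thread(target=task) for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

    After the join, `peak` never exceeds the semaphore's initial counter of 2.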
  3. GIL (Global Interpreter Lock):

    • Definition: the GIL is essentially a mutex, and like every mutex it turns concurrent execution into serial execution. It ensures that within the interpreter only one task at a time can modify shared data, thereby keeping that data safe.

    • What happens internally when a .py file runs

      Suppose the computer has 4 CPUs and you run a .py file.

      First a process address space is opened up, and a thread loads the Python interpreter together with the script file into it. The Python interpreter consists of a compiler and a virtual machine:

      the compiler translates the file into bytecode, the virtual machine translates the bytecode into machine code, and the operating system schedules the thread that executes it.

      CPython stipulates that only one thread at a time may run inside the interpreter.
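      The compile-to-bytecode step can be observed with the standard library's `dis` module (a small illustration, not part of the original post):

```python
import dis

def add(a, b):
    return a + b

# Print the bytecode instructions the CPython virtual machine will execute
dis.dis(add)

# The same instructions, collected programmatically
instructions = [ins.opname for ins in dis.Bytecode(add)]
```

      The exact opcode names vary between CPython versions (e.g. BINARY_ADD vs. BINARY_OP), but the listing always shows the instruction stream the interpreter thread runs.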

      Why the lock?

      1. The GIL dates from the single-core era, when CPUs were very expensive.

      2. Without a global interpreter lock, the CPython developers would have had to add locking and unlocking throughout the interpreter's internals, which is very troublesome and prone to deadlocks of all kinds. To keep things simple, a single thread-level lock was added instead.

      3. Advantage: it guarantees the safety of the CPython interpreter's internal resource data.

        Disadvantage: multiple threads in a single process cannot take advantage of multiple cores.

      4. Multithreading in a single process can run concurrently but not in parallel and cannot use multiple cores; multiple processes can run both concurrently and in parallel.

  4. IO-intensive and compute-intensive tasks

    1. IO-intensive

      Even when the computer has multiple cores executing tasks in parallel, every task hits blocking IO waits, so parallel execution still takes a long time.

      When a single-core CPU executes multiple tasks, it switches to a non-IO task whenever one blocks, so efficiency stays high.

      IO-intensive work suits multithreading within a single process.

    2. Compute-intensive

      When the computer has multiple cores, tasks execute in parallel without blocking, so efficiency is high.

      When a single core executes multiple tasks, the CPU must switch between them (like running QQ and WeChat at the same time), which costs switching overhead.

  5. Differences between the GIL and an ordinary mutex

    • Same: both are mutex locks.
    • Different:
      • The GIL protects the interpreter's internal resource data.
      • The GIL is locked and released automatically; you cannot control it manually.
      • A mutex defined in your own code protects your program's shared resource data.
      • A mutex you define yourself must be acquired and released manually.
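    The distinction matters in practice: the GIL does not make your own read-modify-write operations atomic, so a shared counter still needs a user-level mutex. A minimal sketch (names are illustrative):

```python
from threading import Thread, Lock

counter = 0
lock = Lock()

def add():
    global counter
    for _ in range(100000):
        with lock:           # protects the non-atomic read-modify-write
            counter += 1

threads = [Thread(target=add) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With the lock the result is always 400000; without it, it can come up short.
```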
  6. Verifying the efficiency of IO-intensive and compute-intensive workloads

    • Compute-intensive

      • Timing with multiple processes (one thread each)

        from multiprocessing import Process
        import time

        # Compute-intensive: multithreading in one process vs. multiple processes in parallel
        def task():
            count = 0
            for i in range(10000000):
                count += 1

        if __name__ == '__main__':
            # multiple processes, concurrent and parallel
            start_time = time.time()
            l1 = []
            for i in range(4):
                p = Process(target=task)
                l1.append(p)
                p.start()
            for p in l1:
                p.join()
            print(f'{time.time()-start_time}')  # 1.3144586086273193
      • Timing with a single process and multiple threads

        from threading import Thread
        import time

        def task():
            count = 0
            for i in range(10000000):
                count += 1

        if __name__ == '__main__':
            # single process, multiple threads
            start_time = time.time()
            l1 = []
            for i in range(4):
                t = Thread(target=task)
                l1.append(t)
                t.start()
            for t in l1:
                t.join()
            print(f'{time.time()-start_time}')  # 2.4723618030548096
    • IO-intensive

      • Timing with multiple processes (one thread each)

        # IO-intensive: multithreading in one process vs. multiple processes in parallel
        from multiprocessing import Process
        import time
        import random

        def task():
            count = 0
            time.sleep(random.randint(1, 3))
            count += 1

        if __name__ == '__main__':
            start_time = time.time()
            l1 = []
            for i in range(50):
                p = Process(target=task)
                l1.append(p)
                p.start()
            for p in l1:
                p.join()

            print(f'{time.time()-start_time}')  # 4.954715013504028
      • Timing with a single process and multiple threads

        from threading import Thread
        import time
        import random

        def task():
            count = 0
            time.sleep(random.randint(1, 3))
            count += 1

        if __name__ == '__main__':
            start_time = time.time()
            l1 = []
            for i in range(50):
                t = Thread(target=task)
                l1.append(t)
                t.start()
            for t in l1:
                t.join()
            print(f'{time.time()-start_time}')  # 3.013162136077881
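    For comparison, the same IO-bound pattern is often written with the standard library's `concurrent.futures` thread pool, which handles the start/join bookkeeping for you. A sketch under the same assumptions (the short sleep stands in for a blocking IO wait):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def task(_):
    time.sleep(0.1)          # stand-in for a blocking IO wait
    return 1

start = time.time()
with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(task, range(50)))
elapsed = time.time() - start
# The 50 sleeps overlap, so the whole batch finishes in roughly 0.1 s
```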
  7. Multithreaded socket communication

    • Server:

      import socket
      from threading import Thread

      def communicate(conn, addr):
          while 1:
              try:
                  from_client_data = conn.recv(1024)
                  print(f'Message from client {addr}: {from_client_data.decode("utf-8")}')
                  to_client_data = input(">>>").strip()
                  conn.send(to_client_data.encode('utf-8'))
              except Exception:
                  break
          conn.close()

      def _accept():
          server = socket.socket()
          server.bind(('127.0.0.1', 8848))
          server.listen(5)
          while 1:
              conn, addr = server.accept()
              # one thread per connection, so multiple clients can chat at once
              t = Thread(target=communicate, args=(conn, addr))
              t.start()

      if __name__ == '__main__':
          _accept()
    • Client:

      import socket

      client = socket.socket()
      client.connect(('127.0.0.1', 8848))

      while 1:
          to_server_data = input(">>>>").strip()
          client.send(to_server_data.encode('utf-8'))
          from_server_data = client.recv(1024)
          print(f'Message from the server: {from_server_data.decode("utf-8")}')
      client.close()

Origin www.cnblogs.com/zhangdadayou/p/11431960.html