This post covers: a concurrent server, the GIL (Global Interpreter Lock), deadlocks and RLock, semaphores, the Event object, and thread queues.

Concurrent Server implementation:

Server:

import socket
from threading import Thread

"""
server:
    1. has a fixed IP and port
    2. provides service 24 hours a day
    3. supports concurrency
"""

server = socket.socket()
server.bind(('127.0.0.1', 8080))
server.listen(5)


def talk(conn):
    while True:
        try:
            data = conn.recv(1024)
            if len(data) == 0:
                break
            print(data.decode('utf-8'))
            conn.send(data.upper())
        except ConnectionResetError as e:
            print(e)
            break
    conn.close()

"" " 
When there is a client over the connection, the first execution code while loop, 
open a thread to serve the client, when the client has come back connected, 
you can then open a thread to service a client 
"" " 
the while True: 
    conn, addr = server.accept ()   # monitor waits for a connection blocking state client 
    print (addr) 
    T = the Thread (target = Talk, args = (Conn,)) 
    t.start ()

Client:

import socket


client = socket.socket()
client.connect(('127.0.0.1',8080))

while True:
    client.send(b'hello')
    data = client.recv(1024)
    print(data.decode('utf-8'))
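The server and client above are meant to run as two separate programs. As a quick self-contained check, the same echo logic can run in one process; the `serve` helper and the port-0 trick (letting the OS pick a free port) are my additions, not part of the original:

```python
import socket
from threading import Thread

# bind to port 0 so the OS picks a free port (avoids clashing with 8080)
server = socket.socket()
server.bind(('127.0.0.1', 0))
server.listen(5)
port = server.getsockname()[1]


def talk(conn):
    while True:
        data = conn.recv(1024)
        if len(data) == 0:
            break
        conn.send(data.upper())
    conn.close()


def serve():
    conn, addr = server.accept()
    talk(conn)


Thread(target=serve, daemon=True).start()

client = socket.socket()
client.connect(('127.0.0.1', port))
client.send(b'hello')
reply = client.recv(1024)
print(reply)  # b'HELLO'
client.close()
```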

GIL (Global Interpreter Lock):

Python code is executed by the Python virtual machine (also called the interpreter main loop). Early in Python's design it was decided that only one thread may execute in this main loop at a time.

Although the Python interpreter can "run" multiple threads, only one thread is ever running in the interpreter at any given moment.

Access to the Python virtual machine is controlled by the Global Interpreter Lock (GIL); it is this lock that ensures only one thread runs at a time.

The GIL is essentially a mutex: it turns concurrency into serial execution, sacrificing efficiency to guarantee data safety. It prevents multiple threads within the same process from executing at the same time (threads of the same process can be concurrent, but never parallel).

The GIL exists because memory management in the CPython interpreter is not thread-safe.

If threads truly ran at the same time, a value that a thread is about to use could be reclaimed by the garbage-collection mechanism before it is bound. Since the CPython interpreter's memory manager is not thread-safe, the GIL must exist.

Garbage collection:

1. reference counting
2. mark and sweep
3. generational collection

Whether Python multithreading is useful depends on the situation. Consider four tasks:

Compute-intensive tasks:
    single core: opening threads saves more resources
    multiple cores: open processes (threads cannot run in parallel)

IO-intensive tasks:
    single core: opening threads saves more resources
    multiple cores: opening threads still saves more resources

Code validation:

Compute-intensive:

from multiprocessing import Process
from threading import Thread
import os,time
def work():
    res=0
    for i in range(100000000):
        res*=i


if __name__ == '__main__':
    l=[]
    print(os.cpu_count())  # 6 cores on this machine
    start=time.time()
    for i in range(6):
        # p=Process(target=work)  # took about 4.73 s
        p=Thread(target=work)  # took about 22.83 s
        l.append(p)
        p.start()
    for p in l:
        p.join()
    stop=time.time()
    print('run time is %s' %(stop-start))

IO-intensive:

from multiprocessing import Process
from threading import Thread
import os, time

def work():
    time.sleep(2)


if __name__ == '__main__':
    l = []
    print(os.cpu_count())  # 6 cores on this machine
    start = time.time()
    for i in range(40):
        # p = Process(target=work)  # took about 9.00 s; most of the time is spent creating processes
        p = Thread(target=work)  # took about 2.05 s
        l.append(p)
        p.start()
    for p in l:
        p.join()
    stop = time.time()
    print('run time is %s' % (stop - start))

Deadlock and RLock:

A deadlock arises when different threads each hold a lock and wait for the other to release its lock (a mutual stalemate).

Recommendation: do not try to handle locking problems casually on your own.
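As an illustration, here is a minimal sketch of the classic two-lock deadlock (lock and thread names are mine; `acquire` timeouts are used only so the demo cannot hang forever):

```python
from threading import Thread, Lock
import time

# each thread holds one lock and wants the other: without timeouts,
# both would wait forever
mutexA, mutexB = Lock(), Lock()
timed_out = []


def thread1():
    with mutexA:
        time.sleep(0.1)                    # give thread2 time to grab mutexB
        if mutexB.acquire(timeout=0.3):    # blocked: thread2 holds mutexB
            mutexB.release()
        else:
            timed_out.append('thread1')    # would have deadlocked forever


def thread2():
    with mutexB:
        time.sleep(0.1)                    # give thread1 time to grab mutexA
        if mutexA.acquire(timeout=2.0):    # succeeds once thread1 gives up
            mutexA.release()
        else:
            timed_out.append('thread2')


threads = [Thread(target=thread1), Thread(target=thread2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(timed_out)  # ['thread1']
```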

from threading import Thread, Lock, current_thread, RLock
import time
"""
An RLock (reentrant lock) can be acquired again and again by the thread
that first grabbed it. Each acquire increments the lock's counter by
one and each release decrements it by one; other threads can grab the
lock only when the counter is back to zero.
"""
# mutexA = Lock()
# mutexB = Lock()
mutexA = mutexB = RLock()  # A and B are now the same lock


class MyThread(Thread):
    def run(self):  # starting the thread automatically calls run, which here calls func1 and func2
        self.func1()
        self.func2()

    def func1(self):
        mutexA.acquire()
        print('%s grabbed lock A' % self.name)  # self.name is equivalent to current_thread().name
        mutexB.acquire()
        print('%s grabbed lock B' % self.name)
        mutexB.release()
        print('%s released lock B' % self.name)
        mutexA.release()
        print('%s released lock A' % self.name)

    def func2(self):
        mutexB.acquire()
        print('%s grabbed lock B' % self.name)
        time.sleep(1)
        mutexA.acquire()
        print('%s grabbed lock A' % self.name)
        mutexA.release()
        print('%s released lock A' % self.name)
        mutexB.release()
        print('%s released lock B' % self.name)


for i in range(10):
    t = MyThread()
    t.start()
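The counter behaviour described in the docstring can be checked directly with a small sketch (variable names are my own):

```python
from threading import RLock, Thread

lock = RLock()
lock.acquire()   # counter = 1
lock.acquire()   # re-acquired by the same thread: counter = 2
lock.release()   # counter = 1
lock.release()   # counter = 0: the lock is free again

# now a different thread can grab it without blocking
result = []
t = Thread(target=lambda: result.append(lock.acquire(blocking=False)))
t.start()
t.join()
print(result)  # [True]
```

With a plain `Lock`, the second `acquire()` on line two would block the thread against itself forever; reentrancy is exactly what makes `mutexA = mutexB = RLock()` break the deadlock above.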

Semaphore:

How it works:

At first, five threads acquire the semaphore at the same time. Because each thread sleeps for a different random time, they release at different moments, and each release lets one of the other waiting threads grab a slot.

# a semaphore can mean different things in different areas of knowledge
"""
mutex: a single toilet (one stall)
semaphore: a public toilet (several stalls)
"""
from threading import Semaphore, Thread
import time
import random


sm = Semaphore(5)  # build a public toilet with five stalls


def task(name):
    sm.acquire()
    print('%s occupied a stall' % name)
    time.sleep(random.randint(1, 3))
    sm.release()


for i in range(40):
    t = Thread(target=task, args=(i,))
    t.start()
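A rough way to confirm that the semaphore really caps concurrency (here with two slots instead of five, to keep the run short): the `inside`/`peak` bookkeeping is my own illustration, not part of the original.

```python
from threading import Semaphore, Thread
import time

sm = Semaphore(2)   # only two threads may hold the semaphore at once
inside = []         # threads currently "in a stall"
peak = [0]          # highest number of simultaneous holders observed


def task():
    with sm:        # context-manager form of sm.acquire() / sm.release()
        inside.append(1)
        peak[0] = max(peak[0], len(inside))
        time.sleep(0.1)
        inside.pop()


threads = [Thread(target=task) for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak[0])  # never more than 2 holders at any moment
```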

Event:

from threading import Event, Thread
import time

# create an event object
e = Event()


def light():
    print('the red light is on')
    time.sleep(3)
    e.set()  # send the signal
    print('green light')


def car(name):
    print('%s is waiting at the red light' % name)
    e.wait()  # wait for the signal
    print('%s floors the accelerator and races off' % name)


t = Thread(target=light)
t.start()

for i in range(10):
    t = Thread(target=car, args=('car %s' % i,))
    t.start()
"""
First one thread starts, prints that the red light is on, and sleeps.
Then ten threads start and block at e.wait(). Once e.set() sends the
signal, all the waiting threads can continue to run.
"""

Thread queue:

import queue
"""
Threads within the same process already share data, so why use a queue?

Because a queue is pipe + lock: using a queue means you never have to
operate locks by hand, and hand-rolled lock handling very easily
produces deadlocks.
"""

# q = queue.Queue()  # first in, first out
# q.put('hahha')
# print(q.get())


# q = queue.LifoQueue()
# last in, first out
# q.put(1)
# q.put(2)
# q.put(3)
# print(q.get())


# q = queue.PriorityQueue()
# the smaller the number, the higher the priority
# q.put((10, 'haha'))
# q.put((100, 'hehehe'))
# q.put((0, 'xxxx'))
# q.put((-10, 'yyyy'))
# print(q.get())
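The typical use of a thread queue is handing work from one thread to another. A minimal producer/consumer sketch; the `None` sentinel used to stop the consumer is my own convention, not from the original:

```python
import queue
from threading import Thread

q = queue.Queue()
results = []


def producer():
    for i in range(5):
        q.put(i)
    q.put(None)          # sentinel: tells the consumer to stop


def consumer():
    while True:
        item = q.get()   # blocks until the producer puts something
        if item is None:
            break
        results.append(item * 2)


threads = [Thread(target=producer), Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # [0, 2, 4, 6, 8]
```

All the locking happens inside `Queue.put` and `Queue.get`, which is exactly the point made in the docstring above.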


Origin www.cnblogs.com/yangjiaoshou/p/11353187.html