Python Multiprocessing, Multithreading, and the GIL Explained

Python multiprocess programming and the multiprocessing module

Multiprocess programming in Python relies mainly on the multiprocessing module. Let's compare two pieces of code to see the advantage of multiprocess programming. We simulate a time-consuming task: computing 8 to the power of 20. To make the task appear even more expensive, we also let it sleep for 2 seconds. The first piece of code is a single-process version (shown below): we run the calculation twice in sequence and print the total elapsed time.

import os
import time

def long_time_task():
    print('Current process: {}'.format(os.getpid()))
    time.sleep(2)
    print('Result: {}'.format(8 ** 20))

if __name__ == "__main__":
    print('Parent process: {}'.format(os.getpid()))
    start = time.time()
    for i in range(2):
        long_time_task()

    end = time.time()
    print('Total time: {} seconds'.format(end - start))

The output is shown below. The run took about 4 seconds in total, and only one process (14236) appears from start to finish. Evidently, computing 8 to the power of 20 is not actually time-consuming on a modern PC; the sleep dominates.
Parent process: 14236
Current process: 14236
Result: 1152921504606846976
Current process: 14236
Result: 1152921504606846976
Total time: 4.01080060005188 seconds

The second piece of code is the multiprocess version. We use the Process class from the multiprocessing module to create two new processes, p1 and p2, that compute in parallel. Process takes two main parameters: target, usually a function name, and args, the arguments to pass to that function. Once a new process is created, call its start() method to launch it. We can use os.getpid() to print the ID of the current process.

from multiprocessing import Process
import os
import time

def long_time_task(i):
    print('Child process: {} - Task {}'.format(os.getpid(), i))
    time.sleep(2)
    print('Result: {}'.format(8 ** 20))

if __name__ == '__main__':
    print('Parent process: {}'.format(os.getpid()))
    start = time.time()
    p1 = Process(target=long_time_task, args=(1,))
    p2 = Process(target=long_time_task, args=(2,))
    print('Waiting for all child processes to finish.')
    p1.start()
    p2.start()
    p1.join()
    p2.join()
    end = time.time()
    print('Total time: {} seconds'.format(end - start))

The output is shown below. The elapsed time drops to about 2 seconds, half of before, so concurrent execution is clearly faster than sequential execution. Note that although we only created two processes explicitly, three processes take part in the run: the parent process plus the two children. We call join() so that the parent process blocks until both child processes have finished before printing the total time; otherwise it would only measure its own execution time.
Parent process: 6920
Waiting for all child processes to finish.
Child process: 17020 - Task 1
Child process: 5904 - Task 2
Result: 1152921504606846976
Result: 1152921504606846976
Total time: 2.131091356277466 seconds

Key points:

• Creating new processes and switching between processes is resource-intensive, so the number of worker processes should not be too large.


• The number of processes that can run simultaneously is generally limited by the number of CPU cores.


• Besides using the Process class directly, we can also create multiple processes with the Pool class.


 

Creating multiple processes with the multiprocessing module's Pool class

Often a system needs to create many processes to improve CPU utilization. When the number is small, Process instances can be created by hand. When there are many, you could generate them in a loop, but then the programmer must manually manage how many processes run concurrently, which quickly becomes troublesome. This is where the process pool Pool comes in. Pool accepts a parameter that limits the number of concurrent processes; the default value is the number of CPU cores.

The Pool class keeps a specified number of processes available to the caller. When a new request is submitted to the Pool and the pool is not yet full, a new process is created to execute the request. If the pool is full, the request waits until some process in the pool finishes, and only then is a process made available to execute it.

Here are a few of the Pool class's methods in the multiprocessing module:

1. apply_async()

Function prototype: apply_async(func[, args=()[, kwds={}[, callback=None]]])

Submits a function and its arguments to the process pool in a non-blocking (asynchronous) way: each child process just runs on its own, regardless of whether the other processes have finished.

2. map()

Function prototype: map(func, iterable[, chunksize=None])

The Pool class's map method behaves like the built-in map function, but it blocks the calling process until all results have been returned. Note: although the second argument is an iterable, in practice the whole input must be ready before the child processes start running. A short sketch follows this list.

3. map_async()

Function prototype: map_async(func, iterable[, chunksize[, callback]])

Same usage as map(), but non-blocking. The caveats of apply_async() apply here as well.

4. close()

Closes the process pool so that it accepts no new tasks.

5. terminate()

Ends the worker processes immediately; outstanding tasks are not processed.

6. join()

Makes the main process block until the child processes exit. join() must be called after close() or terminate().
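
Here is a minimal sketch of map() (not from the original article): the call distributes the inputs across the pool's processes, blocks until all of them have been processed, and returns the results in input order.

from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == '__main__':
    with Pool(4) as p:
        # map() blocks until every input has been processed, then
        # returns the results in the same order as the inputs
        results = p.map(square, range(10))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]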

The following is a simple example using the multiprocessing.Pool class. Since my machine has a 4-core CPU, at most 4 processes can run simultaneously, so I open a pool with capacity 4. The 4 processes must perform 5 computations; you can imagine that after the 4 processes finish 4 tasks in parallel, one task (Task 4) is still left, and the system waits for one of the 4 processes to finish before reusing it for the remaining computation.

from multiprocessing import Pool, cpu_count
import os
import time

def long_time_task(i):
    print('Child process: {} - Task {}'.format(os.getpid(), i))
    time.sleep(2)
    print('Result: {}'.format(8 ** 20))

if __name__ == '__main__':
    print('Number of CPU cores: {}'.format(cpu_count()))
    print('Parent process: {}'.format(os.getpid()))
    start = time.time()
    p = Pool(4)
    for i in range(5):
        p.apply_async(long_time_task, args=(i,))
    print('Waiting for all child processes to finish.')
    p.close()
    p.join()
    end = time.time()
    print('Total time: {} seconds'.format(end - start))

Key points:

• Calling join() on a Pool object makes the main process wait until all child processes have finished. Before calling join(), you must first call close() or terminate(), after which the pool accepts no new processes.


The output is shown below. The 5 tasks (each taking about 2 seconds) computed in parallel by multiple processes take only about 4.37 seconds in total, a roughly 60% reduction compared with sequential execution.
Number of CPU cores: 4
Parent process: 2556
Waiting for all child processes to finish.
Child process: 16480 - Task 0
Child process: 15216 - Task 1
Child process: 15764 - Task 2
Child process: 10176 - Task 3
Result: 1152921504606846976
Result: 1152921504606846976
Child process: 15216 - Task 4
Result: 1152921504606846976
Result: 1152921504606846976
Result: 1152921504606846976
Total time: 4.377134561538696 seconds

You probably know that the Python interpreter (CPython) has a GIL (Global Interpreter Lock), whose role is to ensure that only one thread executes bytecode at a time. Because of the GIL, many people believe that multithreading in Python is not "real" multithreading, and that in most cases you must use multiple processes to fully exploit the resources of a multicore CPU. However, this does not mean that multithreaded programming in Python is pointless; please read on.

Data sharing and communication between processes

In general, processes are independent of each other, and each process has its own memory space. Through shared memory (the mmap module), objects can be shared between processes, so that multiple processes can access the same variable (the same address; the variable names may differ). Sharing resources across processes inevitably causes contention between them, so shared state should be avoided as much as possible. Another approach is to use a Queue for communication or data sharing between processes, similar to what is done in multithreaded programming.

from multiprocessing import Process, Queue
import os, time, random

# Code executed by the writer process:
def write(q):
    print('Process writing: {}'.format(os.getpid()))
    for value in ['A', 'B', 'C']:
        print('Put %s to queue...' % value)
        q.put(value)
        time.sleep(random.random())

# Code executed by the reader process:
def read(q):
    print('Process reading: {}'.format(os.getpid()))
    while True:
        value = q.get(True)
        print('Get %s from queue.' % value)

if __name__ == '__main__':
    # The parent process creates the Queue and passes it to each child:
    q = Queue()
    pw = Process(target=write, args=(q,))
    pr = Process(target=read, args=(q,))
    # Start pw, which writes:
    pw.start()
    # Start pr, which reads:
    pr.start()
    # Wait for pw to finish:
    pw.join()
    # pr runs an infinite loop, so it cannot be waited on; kill it instead:
    pr.terminate()

The code above creates two independent processes, one for writing (pw) and one for reading (pr), communicating through a shared queue.

The output is as follows:
Process writing: 3036
Put A to queue...
Process reading: 9408
Get A from queue.
Put B to queue...
Get B from queue.
Put C to queue...
Get C from queue.

Python multithreaded programming and the threading module

Multithreaded programming in Python 3 relies mainly on the threading module. Creating a new thread is very similar to creating a new process: the threading.Thread method takes two main parameters, target, usually pointing to a function name, and args, the arguments to pass to that function. Once a new thread is created, call its start() method to launch it. We can also use threading.current_thread().name to print the name of the current thread. In the example below we use multithreading to rework the earlier computation code.

import threading
import time

def long_time_task(i):
    print('Current child thread: {} - Task {}'.format(threading.current_thread().name, i))
    time.sleep(2)
    print('Result: {}'.format(8 ** 20))

if __name__ == '__main__':
    start = time.time()
    print('This is the main thread: {}'.format(threading.current_thread().name))
    t1 = threading.Thread(target=long_time_task, args=(1,))
    t2 = threading.Thread(target=long_time_task, args=(2,))
    t1.start()
    t2.start()
    end = time.time()
    print('Total time: {} seconds'.format(end - start))


Here is the output. Why is the total time essentially 0 seconds? We can clearly see that the main thread and the child threads run independently: the main thread does not wait for the children to finish, but prints its own elapsed time and exits, after which the child threads keep running on their own. That is obviously not what we want.
This is the main thread: MainThread
Current child thread: Thread-1 - Task 1
Current child thread: Thread-2 - Task 2
Total time: 0.0017192363739013672 seconds
Result: 1152921504606846976
Result: 1152921504606846976

To synchronize the main thread with the child threads, we must use the join method (code below).

import threading
import time

def long_time_task(i):
    print('Current child thread: {} - Task {}'.format(threading.current_thread().name, i))
    time.sleep(2)
    print('Result: {}'.format(8 ** 20))

if __name__ == '__main__':
    start = time.time()
    print('This is the main thread: {}'.format(threading.current_thread().name))
    thread_list = []
    for i in range(1, 3):
        t = threading.Thread(target=long_time_task, args=(i,))
        thread_list.append(t)
    for t in thread_list:
        t.start()
    for t in thread_list:
        t.join()
    end = time.time()
    print('Total time: {} seconds'.format(end - start))

The output of the modified code is shown below. Now the main thread waits for the child threads to finish before printing, and the total elapsed time (about 2 seconds) still saves a lot compared with ordinary sequential execution (about 4 seconds).
This is the main thread: MainThread
Current child thread: Thread-1 - Task 1
Current child thread: Thread-2 - Task 2
Result: 1152921504606846976
Result: 1152921504606846976
Total time: 2.0166890621185303 seconds

When the main thread creates multiple child threads in Python, by default the main thread and the child threads run independently without interfering with each other. If we want the main thread to wait for the child threads (thread synchronization), we use the join() method. What if, instead, we want the child threads to stop as soon as the main thread ends? We can mark them as daemon threads with t.setDaemon(True) (in current Python, the equivalent t.daemon = True is preferred), as shown in the code below. With daemon threads, the program exits as soon as the main thread finishes, so the children are killed before they can print their results.

import threading
import time

def long_time_task():
    print('Current child thread: {}'.format(threading.current_thread().name))
    time.sleep(2)
    print('Result: {}'.format(8 ** 20))

if __name__ == '__main__':
    start = time.time()
    print('This is the main thread: {}'.format(threading.current_thread().name))
    for i in range(5):
        t = threading.Thread(target=long_time_task, args=())
        t.setDaemon(True)  # equivalently: t.daemon = True
        t.start()
    end = time.time()
    print('Total time: {} seconds'.format(end - start))

Creating a new thread by subclassing Thread and overriding the run method

Besides creating threads with threading.Thread(target=...), we can also create a new thread by inheriting from the Thread class and overriding its run method; this approach is more flexible. Below we define a custom class MyThread and create two child threads by instantiating it.

# -*- encoding: utf-8 -*-
import threading
import time

def long_time_task(i):
    time.sleep(2)
    return 8 ** 20

class MyThread(threading.Thread):
    def __init__(self, func, args, name=''):
        threading.Thread.__init__(self)
        self.func = func
        self.args = args
        self.name = name
        self.result = None

    def run(self):
        print('Starting child thread {}'.format(self.name))
        self.result = self.func(self.args[0])
        print('Result: {}'.format(self.result))
        print('Finished child thread {}'.format(self.name))

if __name__ == '__main__':
    start = time.time()
    threads = []
    for i in range(1, 3):
        t = MyThread(long_time_task, (i,), str(i))
        threads.append(t)
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    end = time.time()
    print('Total time: {} seconds'.format(end - start))

The output is as follows:
Starting child thread 1
Starting child thread 2
Result: 1152921504606846976
Result: 1152921504606846976
Finished child thread 1
Finished child thread 2
Total time: 2.005445718765259 seconds
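
One advantage of the subclassing approach is that run() can store the function's return value on the instance, so the main thread can collect the results after join(). A small usage note, continuing the example above:

    # Appended to the __main__ block above, after the join() loop:
    for t in threads:
        print('Thread {} returned {}'.format(t.name, t.result))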

Data sharing between threads

The threads inside a process share its memory, which means any thread can modify any variable. The biggest danger in sharing data between threads is therefore that multiple threads change the same variable at the same time and corrupt its contents. If a variable is shared among threads, one remedy is to acquire a lock before modifying it, ensuring that only one thread can change it at a time. The threading.Lock() method makes it easy to lock a shared variable and release the lock after the modification so other threads can proceed. In the example below, the account balance balance is a shared variable; without the lock it could be changed arbitrarily.

# -*- coding: utf-8 -*-
import threading

class Account:
    def __init__(self):
        self.balance = 0

    def add(self, lock):
        # Acquire the lock
        lock.acquire()
        for i in range(0, 100000):
            self.balance += 1
        # Release the lock
        lock.release()

    def delete(self, lock):
        # Acquire the lock
        lock.acquire()
        for i in range(0, 100000):
            self.balance -= 1
        # Release the lock
        lock.release()

if __name__ == "__main__":
    account = Account()
    lock = threading.Lock()
    # Create the threads
    thread_add = threading.Thread(target=account.add, args=(lock,), name='Add')
    thread_delete = threading.Thread(target=account.delete, args=(lock,), name='Delete')
    # Start the threads
    thread_add.start()
    thread_delete.start()
    # Wait for the threads to finish
    thread_add.join()
    thread_delete.join()
    print('The final balance is: {}'.format(account.balance))

Another way to share data between threads is a message queue. Unlike a list, queue.Queue is thread-safe and can be used without extra locking; see below.

Communicating through a queue: the classic producer-consumer model

The following example creates two threads, one responsible for producing and one responsible for consuming. The produced items are stored in a queue, realizing communication between the two threads.

from queue import Queue
import random, threading, time
# Producer class
class Producer(threading.Thread):
    def __init__(self, name, queue):
        threading.Thread.__init__(self, name=name)
        self.queue = queue
    def run(self):
        for i in range(1, 5):
            print("{} is producing {} to the queue!".format(self.getName(), i))
            self.queue.put(i)
            time.sleep(random.randrange(10) / 5)
        print("%s finished!" % self.getName())
# Consumer class
class Consumer(threading.Thread):
    def __init__(self, name, queue):
        threading.Thread.__init__(self, name=name)
        self.queue = queue

    def run(self):
        for i in range(1, 5):
            val = self.queue.get()
            print("{} is consuming {} in the queue.".format(self.getName(), val))
            time.sleep(random.randrange(10))
        print("%s finished!" % self.getName())
def main():
    queue = Queue()
    producer = Producer('Producer', queue)
    consumer = Consumer('Consumer', queue)
    producer.start()
    consumer.start()
    producer.join()
    consumer.join()
    print('All threads finished!')
if __name__ == '__main__':
    main()

The queue's put method places an object obj in the queue; if the queue is full, the method blocks until space becomes available. The get method removes and returns one member of the queue; if the queue is empty, it blocks until a member is available. queue also provides empty(), full() and similar methods for checking whether a queue is empty or full, but these are not reliable: under multithreading or multiprocessing, a member may be added or removed between the time the result is returned and the time it is used. A short illustration follows.
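
Here is a minimal sketch (not from the original article) of the blocking behavior, and of the safer alternative to empty(): calling get() with a timeout and catching the queue.Empty exception.

from queue import Queue, Empty

q = Queue(maxsize=2)
q.put('a')          # would block if the queue were full
print(q.get())      # would block if the queue were empty

# empty()/full() only report a momentary snapshot, which may already be
# stale by the time you act on it; prefer get() with a timeout instead.
try:
    item = q.get(timeout=0.5)
except Empty:
    print('No item arrived within 0.5 seconds')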

Which is faster in Python: multiprocessing or multithreading?

Because of the GIL, many people believe that Python multiprocess programming is faster and, for multicore CPUs, should in theory use resources more effectively. Plenty of people online have run the comparison; the conclusion is:

• For CPU-bound code (e.g. loop-heavy computation), multiprocessing is more efficient.


• For IO-bound code (e.g. file operations, web crawlers), multithreading is more efficient.


Why is this so? It is actually not hard to understand. For IO-bound operations, most of the elapsed time is spent waiting, and the CPU does not need to work during the wait, so supplying two CPUs for that period does not help. Conversely, for CPU-bound code, two CPUs certainly get the work done much faster than one. But then why does multithreading help IO-bound code at all? Because when Python hits a wait, it releases the GIL so that another thread can be switched in. A minimal demonstration follows.
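
This sketch (not from the original article) uses time.sleep() to stand in for an IO wait; CPython releases the GIL while a thread sleeps, so the two waits overlap.

import threading
import time

def fake_io_task():
    # Stands in for a network or disk wait; the GIL is
    # released while the thread sleeps.
    time.sleep(1)

start = time.time()
threads = [threading.Thread(target=fake_io_task) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Roughly 1 second rather than 2, because the waits overlap.
print('Total time: {:.2f} seconds'.format(time.time() - start))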

What is the GIL?

The first thing to be clear about is that the GIL is not a feature of the Python language; it is a concept introduced by one implementation of the Python interpreter, CPython. An analogy: C++ is a language (grammar) standard that different compilers can turn into executable code, with well-known compilers such as GCC, Intel C++, and Visual C++. Python is the same: the same code can be executed by different runtimes such as CPython, PyPy, or Jython, and some of them, like Jython, have no GIL at all. However, because CPython is the default execution environment in most settings, in many people's minds CPython simply is Python, and they attribute the GIL's shortcomings to the Python language itself. So let's be clear first: the GIL is not a Python language feature, and Python does not depend on the GIL. You can check which implementation you are running as shown below.
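
A quick check of the running implementation (a small sketch; the output varies by environment):

import platform
import sys

# Reports the interpreter implementation, e.g. 'CPython',
# 'PyPy', 'Jython', or 'IronPython'.
print(platform.python_implementation())
print(sys.version)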

The GIL itself: a mutex that prevents multiple native threads from executing Python bytecode in parallel. At first glance it looks like a bug-grade global lock! Don't rush to judgment; let's analyze it slowly below.

Why the GIL?

Because of physical limits, the chip makers' race for clock frequency has been replaced by a race for core count. To use the performance of multicore processors more effectively, multithreaded programming emerged, and with it came the difficulty of keeping data consistent and state synchronized between threads. Even the caches inside the CPU are no exception: to keep data synchronized between multiple caches, manufacturers spend a great deal of effort, inevitably at some cost in performance.

Python, naturally, could not escape this: to take advantage of multiple cores, Python began to support multithreading. And the simplest way to preserve data integrity and state synchronization between threads is a lock. Hence the GIL, this one super lock; and as more and more library developers accepted the arrangement, they began to depend heavily on it (that is, on the default assumption that Python's internal objects are thread-safe, with no need to think about extra memory locking and synchronization when implementing them).

Gradually this implementation was found to be inefficient and painful. But when people tried to split up or remove the GIL, they discovered that a large amount of library code already depended on it heavily, making removal extremely difficult. How difficult? By analogy: a "small project" like MySQL took nearly five years, from 5.5 through 5.6 to 5.7, to split the big Buffer Pool Mutex into smaller locks, and the work continues. If MySQL, with a company behind it and a fixed development team, has had such a hard time, what can we expect of Python's core development community and its largely volunteer contributor team?

So the simple truth is that the GIL's continued existence is mostly historical. If the design were redone from scratch, multithreading would still have to be dealt with, but presumably more elegantly than with the current GIL.

The impact of the GIL

From the description and the official definition above, the GIL is undoubtedly a global exclusive lock. There is no question that a global lock has a considerable impact on multithreaded efficiency; it can make Python almost equivalent to a single-threaded program.
At this point readers will say: as long as the global lock is released diligently, efficiency should not be bad. As long as the GIL is released during time-consuming IO operations, threads can still improve throughput; at worst it should be no slower than single-threaded. That holds in theory, but in practice? Python is worse than you think.

Let's compare Python's efficiency in multithreaded versus single-threaded mode. The test method is very simple: a counter function that loops 100 million times, executed twice in total, once on a single thread and once across two threads, and then the total execution times are compared. The test environment is a dual-core Mac Pro. Note: to reduce the performance impact of the thread library itself on the result, the single-threaded code also uses Thread; it just runs the two executions in sequence, simulating a single thread.

Sequential single-threaded execution (single_thread.py)

#! /usr/bin/python
 
from threading import Thread
import time
 
def my_counter():
    i = 0
    for _ in range(100000000):
        i = i + 1
    return True
 
def main():
    thread_array = {}
    start_time = time.time()
    for tid in range(2):
        t = Thread(target=my_counter)
        t.start()
        t.join()
    end_time = time.time()
    print("Total time: {}".format(end_time - start_time))
 
if __name__ == '__main__':
    main()

Two threads executing concurrently (multi_thread.py)

#! /usr/bin/python
 
from threading import Thread
import time
 
def my_counter():
    i = 0
    for _ in range(100000000):
        i = i + 1
    return True
 
def main():
    thread_array = {}
    start_time = time.time()
    for tid in range(2):
        t = Thread(target=my_counter)
        t.start()
        thread_array[tid] = t
    for i in range(2):
        thread_array[i].join()
    end_time = time.time()
    print("Total time: {}".format(end_time - start_time))
 
if __name__ == '__main__':
    main()

With two threads, Python is actually a full 45% slower than single-threaded. By the earlier reasoning, even with the GIL as a global lock, serialized multithreading should be about as efficient as a single thread. How can the result be this bad?

Let's analyze the reason through the GIL's implementation principles.

The design flaw in the current GIL

Scheduling based on opcode (tick) counts

Pseudocode:

while True:
    acquire GIL
    for i in 1000:
        do something
    release GIL
    /* Give Operating System a chance to do thread scheduling */

This model is unproblematic with only one CPU core: any woken thread can successfully acquire the GIL (it is only releases of the GIL that trigger thread scheduling). But problems arrive with multiple cores. As the pseudocode shows, there is almost no gap between release GIL and acquire GIL, so by the time a thread is woken up on another core, the main thread has in most cases already re-acquired the GIL. The freshly woken thread can only waste CPU time watching the other thread cheerfully execute with the GIL, wait until the switch point, get scheduled again, wake up, and wait once more, in a vicious cycle.

PS: Of course this implementation is primitive and ugly, and each Python version has gradually improved the interaction between the GIL and thread scheduling, for example attempting the thread context switch while still holding the GIL, or releasing the GIL while waiting on IO. What cannot change is that the GIL's presence makes the already expensive operation of operating-system thread scheduling even more extravagant.

Further reading on the GIL's impact

To visualize the performance impact of the GIL on multithreading, a test-result chart is borrowed here (the image is not reproduced). It shows the execution of two CPU-intensive compute threads on a dual-core CPU: green segments are time when a thread is running and doing useful computation; red segments are time when a thread has been scheduled awake but cannot acquire the GIL, so it cannot run and can only wait.

The figure shows that the GIL prevents multithreading from getting truly good concurrency out of a multicore CPU.

Can Python's IO-bound threads at least benefit from multithreading? The second test result (same color meaning as above, with white indicating time the IO thread spends waiting) shows that when the IO thread receives a packet and a switch is triggered, a CPU-intensive thread still holds the field, so the IO thread cannot obtain the GIL and is condemned to endless waiting.


The conclusion is simple: on a multicore CPU, Python multithreading only has a positive effect for IO-intensive work; as soon as even one CPU-intensive thread exists, multithreaded efficiency drops sharply because of the GIL.

How to avoid the GIL's impact

After all this, a post without solutions would be mere popular science. The GIL is this bad; is there a way around it? Let's see what ready-made options exist.

Replacing Thread with multiprocessing

The multiprocessing library appeared largely to compensate for the thread library's GIL-induced inefficiency. It fully replicates the interface provided by threading, which makes migration easy. The key difference is that it uses multiple processes instead of multiple threads: each process has its own independent GIL, so there is no GIL contention between processes.

Of course, multiprocessing is not a panacea: its introduction increases the difficulty of data communication and synchronization within the program. Take our counter as an example. With threads, accumulating into the same variable only requires declaring a global variable and wrapping three lines in a threading.Lock context. With multiprocessing, processes cannot see each other's data at all; they can only declare a Queue in the main process and use put and get, or resort to shared memory. This extra implementation cost makes multithreaded programs, already painful to write, even more painful. A sketch of the shared-memory route follows.
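
Here is a minimal sketch (not from the original article) of the shared-memory route, using multiprocessing.Value to allocate the counter in memory visible to all processes:

from multiprocessing import Process, Value, Lock

def add_100(counter, lock):
    for _ in range(100):
        with lock:                 # keep the read-modify-write atomic
            counter.value += 1

if __name__ == '__main__':
    counter = Value('i', 0)        # 'i' means C int, stored in shared memory
    lock = Lock()
    workers = [Process(target=add_100, args=(counter, lock)) for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(counter.value)           # 400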

Using other interpreters

As noted earlier, the GIL is a CPython implementation detail: interpreters such as Jython do not have one, so switching interpreters is one way out, though usually at the cost of CPython's C-extension ecosystem. But is CPython itself beyond saving?

Of course not: the Python community keeps working hard to improve the GIL, and even to remove it, and every version ships with many small improvements.

One ongoing effort is reworking the GIL itself (a snippet illustrating the switch interval follows this list):
- switching from opcode counting to time-slice-based switching
- preventing the thread that just released the GIL from being immediately scheduled again
- adding thread priorities (a high-priority thread can force other threads to release the GIL they hold)
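
The time-slice behavior can be inspected from Python itself; a small sketch (the value shown is the usual CPython default and may differ by version):

import sys

# Since Python 3.2 the GIL is released on a time-slice basis
# rather than by bytecode counting; the interval is tunable.
print(sys.getswitchinterval())   # typically 0.005 seconds
sys.setswitchinterval(0.01)      # request less frequent forced switches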

To sum up

The Python GIL is really the product of a trade-off between functionality and performance. There are good reasons for its existence, and objective factors make it hard to change. From this analysis we can draw a few simple conclusions:
◦ Because of the GIL, only IO-bound scenarios gain much better performance from multithreading.
◦ For high-performance parallel computing, consider writing the core parts of the program as a C module, or simply implementing them in another language.
◦ The GIL will continue to exist for quite some time, but it will also keep being improved.

Source: www.linuxidc.com/Linux/2019-07/159448.htm