The difference between the usage of call and Popen in Python's subprocess module

The purpose of subprocess is to start a new process and communicate with it.

The subprocess module defines only one class, Popen, which is used to create child processes and interact with them in complex ways. Its constructor is as follows:

    subprocess.Popen(args, bufsize=0, executable=None, stdin=None, stdout=None, stderr=None, preexec_fn=None, close_fds=False, shell=False, cwd=None, env=None, universal_newlines=False, startupinfo=None, creationflags=0)

The args argument can be a string or a sequence type (such as a list or tuple), and it specifies the executable to run and its arguments. When a sequence is used, the first element is usually the path to the executable. The path to the executable can also be given explicitly via the executable parameter.

The stdin, stdout, and stderr parameters specify the program's standard input, standard output, and standard error handles, respectively. They can be PIPE, file descriptors, or file objects, or None to inherit the handles from the parent process.

If the shell parameter is set to True, the command is executed through the shell.

The env parameter is a dictionary that specifies the environment variables of the child process. If env is None, the child process inherits the parent's environment variables.
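
For instance, here is a minimal sketch (Python 2 style, like the rest of this article) showing args as a sequence combined with a custom environment; it assumes a Unix-like system where the env command exists, and MY_FLAG is just a placeholder variable:

    import subprocess

    # args as a sequence: the first element is the program, the rest are its arguments
    p = subprocess.Popen(["env"],
                         env={"MY_FLAG": "1"},   # the child sees only this environment
                         stdout=subprocess.PIPE)
    print p.communicate()[0]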

subprocess.PIPE

  When creating a Popen object, subprocess.PIPE can be passed as the stdin, stdout, or stderr argument. It indicates that a pipe to the corresponding standard stream should be opened for communicating with the child process.

subprocess.STDOUT

  When creating a Popen object, subprocess.STDOUT can be passed as the stderr argument, indicating that standard error should be merged into the standard output stream.

Popen's methods:

Popen.poll()

  Used to check if the child process has ended. Sets and returns the returncode property.

Popen.wait()

  Wait for the child process to finish. Sets and returns the returncode property.

Popen.communicate(input=None)

  Interact with the child process: send data to its stdin, and read data from its stdout and stderr. The optional input argument specifies the data to send to the child process. communicate() returns a tuple: (stdoutdata, stderrdata). Note: to send data to the process through its stdin, the stdin parameter must be set to PIPE when the Popen object is created. Likewise, to get data from stdout and stderr, stdout and stderr must be set to PIPE.

Popen.send_signal(signal)

  Send a signal to the child process.

Popen.terminate()

  Stop the child process. On Windows, this method calls the Windows API TerminateProcess() to terminate the child process; on POSIX it sends the SIGTERM signal.

Popen.kill()

  Kill the child process.

Popen.stdin, Popen.stdout, Popen.stderr. The official documentation says:

stdin, stdout and stderr specify the executed program's standard input, standard output and standard error file handles, respectively. Valid values are PIPE, an existing file descriptor (a positive integer), an existing file object, and None.

Popen.pid

  Get the process ID of the child process.

Popen.returncode

  Get the return value of the process. Returns None if the process has not ended.

---------------------------------------------------------------

Simple usage:

    p = subprocess.Popen("dir", shell=True)
    p.wait()

Whether shell=True is needed depends on the command being executed. dir is a shell built-in rather than a standalone executable, so shell=True is required here; p.wait() then returns the command's exit code.

If the last line is written as a = p.wait(), then a holds the return code; printing a will typically show 0, indicating successful execution.
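
For instance, a minimal sketch of capturing the return code directly (still the dir command used above):

    a = subprocess.Popen("dir", shell=True).wait()
    print a   # typically 0 when the command succeeded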

---------------------------------------------------------------------------

Process communication

If you want to get the output of a process, pipes are a very convenient method, like this:

    p = subprocess.Popen("dir", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    (stdoutput, erroutput) = p.communicate()

p.communicate() waits until the process exits and returns its standard output and standard error output, so the child process's output can be collected.

Look at another example of communicate.
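
A minimal sketch of such an example (Python 2 style), assuming the Unix cat command, which simply echoes its stdin back to stdout:

    p = subprocess.Popen("cat", shell=True,
                         stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE)
    (stdoutput, erroutput) = p.communicate("hello subprocess\n")
    print stdoutput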

The above example sends data to stdin via communicate(), and then uses a tuple to receive the command's output.

------------------------------------------------------------------------

Above, standard output and standard error are captured separately; they can also be combined by simply setting the stderr parameter to subprocess.STDOUT, like this:

    p = subprocess.Popen("dir", shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    (stdoutput, erroutput) = p.communicate()

If you want to process the subprocess's output line by line, that is no problem either:

    p = subprocess.Popen("dir", shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    while True:
        buff = p.stdout.readline()
        if buff == '' and p.poll() is not None:
            break

------------------------------------------------------

Deadlock

But be careful if you use a pipe without consuming its output: if the subprocess produces too much data, a deadlock will occur, as in the following usage:

    p = subprocess.Popen("longprint", shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    p.wait()

Here longprint is an imaginary command that produces a lot of output. In my Windows XP, Python 2.5 environment, a deadlock occurs once the output fills the pipe buffer (around 4096 bytes): the child blocks writing to the full pipe while the parent blocks in wait(). If we use p.stdout.readline() or p.communicate() to consume the output, the deadlock never happens, no matter how much is produced. Alternatively, avoid pipes altogether, either by not redirecting at all or by redirecting to a file, which also avoids the deadlock.
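
For example, a minimal sketch of redirecting to a file instead of a pipe (longprint is still the imaginary command from above):

    out = open('longprint.log', 'w')
    p = subprocess.Popen("longprint", shell=True, stdout=out, stderr=subprocess.STDOUT)
    p.wait()
    out.close()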

----------------------------------

The subprocess module can also chain multiple commands together.

In the shell, we can use pipes to connect multiple commands.

In subprocess, the output of one command can be used as the input of the next. An example follows:
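
A minimal sketch of such a pipeline, assuming a Unix-like shell where the equivalent shell command would be "ls | tail -2":

    p1 = subprocess.Popen("ls", shell=True, stdout=subprocess.PIPE)
    p2 = subprocess.Popen("tail -2", shell=True,
                          stdin=p1.stdout,        # p2 reads what p1 writes
                          stdout=subprocess.PIPE)
    print p2.communicate()[0]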

In the example, p2 takes p1's stdout, the output of the first command, as its input data, and then runs the tail command on it.

--------------------------------------------------------

Below is a larger example: it pings a series of IP addresses and reports whether each host is alive. The code is adapted from Python for Unix and Linux System Administration.

    #!/usr/bin/env python

    from threading import Thread
    import subprocess
    from Queue import Queue

    num_threads = 3
    ips = ['127.0.0.1', '116.56.148.187']
    q = Queue()

    def pingme(i, queue):
        while True:
            ip = queue.get()
            print 'Thread %s pinging %s' % (i, ip)
            ret = subprocess.call('ping -c 1 %s' % ip, shell=True,
                                  stdout=open('/dev/null', 'w'),
                                  stderr=subprocess.STDOUT)
            if ret == 0:
                print '%s is alive!' % ip
            elif ret == 1:
                print '%s is down...' % ip
            queue.task_done()

    # start num_threads worker threads
    for i in range(num_threads):
        t = Thread(target=pingme, args=(i, q))
        t.setDaemon(True)
        t.start()

    for ip in ips:
        q.put(ip)
    print 'main thread waiting...'
    q.join()
    print 'Done'

The main benefit of using subprocess in the above code is that multiple threads execute the ping commands in parallel, which saves a lot of time.
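
This is also where the difference between call and Popen shows up: subprocess.call() is a convenience wrapper that creates a Popen object and waits for it to finish, returning its return code, whereas Popen returns immediately and leaves waiting, polling and communication to the caller. Roughly, call() behaves like the following sketch (a simplified approximation, not the library's actual source):

    def call_like(*popenargs, **kwargs):
        # simplified stand-in for subprocess.call(): create a Popen and wait for it
        return subprocess.Popen(*popenargs, **kwargs).wait()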

Suppose we used a single thread: each ping would have to wait for the previous one to finish before the next address could be pinged. With 100 addresses, the total time required would be roughly 100 times the average ping time.

If multiple threads are used, the total running time of the program is roughly the running time of the slowest thread, which saves considerable time compared with a single thread.

The use of the Queue module here is also worth studying.

The pingme function executes as follows:

Each started thread runs the pingme function.

pingme takes an element from the queue (blocking until one is available) and executes the ping command for it.

The queue is shared by multiple threads, which is why a plain list is not used here. [If we used a list, we would have to handle the synchronization ourselves; Queue already does that synchronization internally, saving us the work.]

The q.join() call in the code blocks the current (main) thread until all queued items have been processed. The official documentation says:

 Queue.join()

  Blocks until all items in the queue have been gotten and processed (i.e. task_done() has been called for each one).
