Meaning of >/dev/null 2>&1 in the Linux shell

You may often see >/dev/null 2>&1 in shell commands. Taking it apart piece by piece:

/dev/null is an empty device file: anything written to it is simply discarded.
> means "redirect to", for example: echo "123" > /home/123.txt
1 means stdout (standard output). It is the default file descriptor for output, so ">/dev/null" is equivalent to "1>/dev/null".
2 means stderr (standard error).
& marks the target as a file descriptor rather than a file name, so 2>&1 means "send stderr to wherever stdout currently points".

Reading the expression in the title from left to right: 1>/dev/null redirects standard output to the empty device file, so no normal output reaches the terminal; to put it bluntly, nothing is displayed. 2>&1 then points standard error at the same place as standard output, and since standard output has already been redirected to /dev/null, standard error is discarded as well.

A. 1> /dev/null redirects the command's standard output to /dev/null, and 2> /dev/null redirects its error output to /dev/null (1 denotes stdout, 2 denotes stderr). /dev/null is a bit like the recycle bin in Windows, except that nothing that goes in can ever come out again. In short, >/dev/null 2>&1 stops both standard output and standard error from being displayed.
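A quick illustration of each piece (a minimal sketch; the file names such as /tmp/out.txt are just examples):

# echo "hello" > /tmp/out.txt          (stdout is written to the file, nothing on screen)
# ls /nonexistent 2> /tmp/err.txt      (the error message is written to the file instead)
# ls /nonexistent > /dev/null 2>&1     (both streams are discarded, nothing on screen)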
B. >/dev/null 2>&1 can also be written as 1>/dev/null 2>&1: stdout is redirected to /dev/null (no stdout), and stderr is then redirected to stdout (so stderr is gone as well). The end result is that both stdout and stderr are turned off.

C. A little practice may help to understand the above:

# ls /usr /nothing
# ls /usr /nothing 2>/dev/null
# ls /usr /nothing >/dev/null 2>&1

You will often see usages like "2>&1" in scripts on UNIX systems, such as "/path/to/prog 2>&1 > /dev/null &". What exactly does it mean? UNIX has several input and output streams, which correspond to numbers as follows: 0 - standard input (stdin), 1 - standard output (stdout), 2 - standard error (stderr). "2>&1" means redirect stderr to wherever stdout is going. If no number is given, a redirection applies to stdout (1) by default, so "ls -l > result" is equivalent to "ls -l 1> result". Keeping this in mind makes the redirection process easier to follow. Here is an example:

# cat std.sh
#!/bin/sh
echo "stdout"
echo "stderr" >&2

# /bin/sh std.sh 2>&1 > /dev/null
stderr
# /bin/sh std.sh > /dev/null 2>&1

The first command still prints "stderr". Redirections are processed from left to right, so 2>&1 first points stderr at the current target of stdout, which is still the terminal, and only afterwards is stdout moved to /dev/null; the error text therefore still reaches the screen. The second command prints nothing: stdout is redirected to /dev/null first, and 2>&1 then sends stderr to that same place, so stderr ends up in /dev/null too.

---------------------------------

When I was doing routine work today, I noticed an alarming number of sendmail processes on one machine, and its I/O seemed very slow. It turned out that ls nearly hung in the /var/spool/clientmqueue directory: there were at least 100,000 files in it. ps | grep sendmail showed that all those sendmail processes were tied to /var/spool/clientmqueue. I cd'd in and opened one of the files: it held the output of a program run from my crontab. Apparently every time the crontab fired, Linux tried to mail the output to the crontab's owner, but with no sendmail running, everything was dumped under /var/spool/clientmqueue. Then I understood why other people's crontab entries always end with > /dev/null 2>&1: it keeps cron from mailing out each run's results or errors. After deleting those 100,000 files, everything returned to normal.

Problem: the /var/spool/clientmqueue/ directory on a Linux system fills up with a huge number of files.
Cause: some users on the system have cron jobs, the programs run from cron produce output, and that output is sent to the cron user as email; since sendmail is not started, the messages accumulate as these files.
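Before cleaning up, it is easy to confirm the diagnosis by counting the queued files and checking which cron jobs produce output (a minimal sketch; run as root, and substitute the user name of interest for cvsroot):

# ls /var/spool/clientmqueue | wc -l
# crontab -u cvsroot -l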
Solution:

1. Add > /dev/null 2>&1 to the commands in crontab.

2. Knowledge points: 2> redirects standard error; 2>&1 redirects standard error to wherever standard output is being sent. So the command's execution result is redirected to /dev/null, i.e. discarded, and at the same time any error it produces is discarded too.

3. Concrete example:

(1) # crontab -u cvsroot -l
01 01 * * * /opt/bak/backup
01 02 * * * /opt/bak/backup2

(2) # vi /opt/bak/backup
#!/bin/sh
cd /
getfacl -R repository > /opt/bak/backup.acl

(3) # vi /opt/bak/backup2
#!/bin/sh
week=`date +%w`
tar zcvfp /opt/bak/cvs$week/cvs.tar.gz /repository >/dev/null 2>&1

4. Clear out the files under the /var/spool/clientmqueue/ directory:
# cd /var/spool/clientmqueue
# rm -rf *
If there are too many files, they take up too much space and the command above is slow, so execute the following instead:
# cd /var/spool/clientmqueue
# ls | xargs rm -f
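If the directory might contain file names with spaces or other unusual characters, a find-based variant is safer than ls | xargs rm -f (a sketch, assuming GNU find and xargs, whose -print0 and -0 options pass names separated by NUL bytes):

# cd /var/spool/clientmqueue
# find . -type f -print0 | xargs -0 rm -f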
---------------------------------

On a clear, pleasant night I was sitting at home watching TV when my phone rang: Mr. Yang had found a host behaving abnormally. The server's /var/spool/mqueue directory was stuffed with a pile of unidentified outgoing mail, and since /var/spool was not on its own partition at the time, it was eating into the system's root (/) filesystem, leaving only about 600 MB free. A couple of possibilities came to mind: the server relays mail for some of the school's PCs, so it might be pumping out advertising mail; or, being used as a mail-sending machine, it might be infected and sending mail on its own. Those were the only two causes I thought of at first, but either way we had to claw back the swallowed space, so the plan was to delete the whole mail queue, after stopping the mail service first, of course. While clearing those queued messages we discovered something: there were simply too many files in the directory. ls was painfully slow and never came back, and mailq could not be used to see which messages were queued. So we casually fired off rm -rf *, and something very strange happened: there were too many files to delete, and for the first time I heard rm complain (strictly speaking I only heard about it: Mr. Yang was the one at the machine, so he is the one who actually saw it ^^). The error was:

bash: /bin/rm: Argument list too long

Even though rm refused, Brother Yang did not give up. He went to the host, opened X Window, and used nautilus, the Linuxer favourite, to browse to /var/spool/mqueue. Oh~ deleting through X Window works! Since X Window seemed so capable, I figured he could use it to delete the rest of the queue files too, so I hung up the phone and left Brother Yang toiling away in the computer room...
Of course, I wasn't idle either. The TV series had just finished, so I fired up my trusty machine and went back to lurking around the Internet... While drifting along it suddenly hit me: why not delete them with find? Digging through my history file, I found this command:

find ./ | xargs rm -rf

Don't underestimate this little command: not long after I passed it on, Brother Yang called to say everything had been deleted, and it was only ten o'clock in the evening, so it comes highly recommended. It deleted the lot, and quickly too. Oh, I haven't yet said why I abandoned the soft (GUI) approach. It's because nautilus loads a directory in batches rather than reading everything at once, so it showed only a few thousand entries at a time; each time Brother Yang finished deleting a batch, a few thousand more would appear, which was genuinely frightening until we deduced it was the batching at work. Having sent over find ./ | xargs rm -rf, I was still marvelling at how fast it was when I noticed the school was about to close, so I said bye bye first, and Brother Yang, who had been hard at work on site, also went home to rest.

Analysis: when you run rm -rf *, the shell expands * into one huge argument list, and there is an upper limit on how long a command's argument list can be, so with too many files or directories in one directory the command fails; in Brother Yang's test the threshold was somewhere below 20,000 files. The point of find ./ | xargs rm -rf is to have find list the files first and pipe them to xargs, which then feeds them to rm in batches small enough to stay under the limit, so the files get deleted smoothly. Whether the precise threshold also depends on the rm version or the file system I did not chase any further. Anyway, for anyone who wants to try it, here is the small shell script my brother used for testing at the time: mk-file.sh (it generates 20,000 files in the current directory).
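The original script download is not reproduced here, but a minimal sketch of what mk-file.sh presumably does (an assumption reconstructed from the test output below, written for a plain POSIX shell) would be:

#!/bin/sh
# mk-file.sh - hypothetical reconstruction: create small test files
# test-file-1 ... test-file-19999 in the current directory, enough to
# make "rm -rf test-file-*" overflow the argument-list limit
i=1
while [ $i -lt 20000 ]; do
    touch test-file-$i
    i=`expr $i + 1`
done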
Now for a little test:

root # ./mk-file.sh

This generates 20,000 small files named test-file-{1~19999}. First try deleting them directly with rm:

root # rm -rf test-file-*
-bash: /bin/rm: Argument list too long

(rm aborts with the message that the argument list is too long.) Now delete them with find:

root # find ./ -iname 'test-file-*' | xargs rm -rf
root # ls
mk-file.sh
root #

This time the deletion succeeds.

---------------------------------

#tool_action
45 4 * * * /bin/sh /data/stat/crontab/exec_tool_action_analysis_db.sh >> /data/stat/logs/exec_tool_action_analysis_db.sh.log > /dev/null 2>&1
45 5 * * * /bin/sh /data/stat/crontab/exec_tool_action_analysis_user.sh >> /data/stat/logs/exec_tool_action_analysis_user.sh.log > /dev/null 2>&1

Without redirection like this, files such as the following pile up under /var/spool/clientmqueue:
-rw-rw---- 1 smmsp   smmsp  975 Jan 17 10:50 qfq0H2o4ei031197
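One quirk worth pointing out in those two crontab entries: for standard output, the later > /dev/null overrides the earlier >> append, so the .log files are created but receive nothing. If the intent is to keep the log while still suppressing cron's mail, the usual form is to append both streams to the log file (a sketch of the presumably intended line, reusing the first entry's paths):

45 4 * * * /bin/sh /data/stat/crontab/exec_tool_action_analysis_db.sh >> /data/stat/logs/exec_tool_action_analysis_db.sh.log 2>&1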
