Learn a little shell every day: asynchronous execution of shell scripts

Shell pipeline "|"

Pipe syntax: command1 | command2 | command3, and so on.
Example:

[root@hadoop-master shell-test]# ps -ef|grep java
root       5304   2878  0 05:58 pts/1    00:00:00 grep --color=auto java

The idea is simple: the output of one command becomes the input of the next. The pipe metaphor is apt — data flows from one pipe into the next like water.
Another example:

[root@hadoop-master shell-asy]# ls -s|sort -nr
4 test3.sh
4 test2.sh
4 test1.sh
4 start-syn.sh
4 start-asy.sh
total 20

Here ls -s prints each file's allocated size in blocks, sort -n sorts numerically, and -r reverses the order.
The command therefore lists the files from largest to smallest by size.
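The same pattern works with any commands that read from standard input. A minimal sketch (the sample lines here are made up for illustration):

```shell
# Sort three lines alphabetically, then keep only the first one:
printf 'banana\napple\ncherry\n' | sort | head -n 1
# prints: apple

# Count how many entries the current directory contains:
ls -1 | wc -l
```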

Shell parallel execution "&"

The shell uses "&" to run scripts in the background, so each script becomes a separate process and they execute in parallel.
Examples are as follows:
Script 1

[root@hadoop-master shell-asy]# cat test1.sh 
#!/bin/bash
echo "Script 1 started "`date +"%Y%m%d %H:%M:%S"`
sleep 5
echo "Script 1 finished "`date +"%Y%m%d %H:%M:%S"`

Script 2

[root@hadoop-master shell-asy]# cat test2.sh 
#!/bin/bash
echo "Script 2 started "`date +"%Y%m%d %H:%M:%S"`
sleep 3
echo "Script 2 finished "`date +"%Y%m%d %H:%M:%S"`

Script 3

[root@hadoop-master shell-asy]# cat test3.sh 
#!/bin/bash
echo "Script 3 started "`date +"%Y%m%d %H:%M:%S"`
sleep 5
echo "Script 3 finished "`date +"%Y%m%d %H:%M:%S"`

Run the scripts asynchronously with start-asy.sh:

[root@hadoop-master shell-asy]# cat start-asy.sh 
#!/bin/bash
echo "Parallel execution"
sh ./test1.sh &
sh ./test2.sh &
sh ./test3.sh &

wait
echo "Main script finished"

The execution results are as follows:

[root@hadoop-master shell-asy]# sh start-asy.sh 
Parallel execution
Script 1 started 20200917 07:21:00
Script 2 started 20200917 07:21:00
Script 3 started 20200917 07:21:00
Script 2 finished 20200917 07:21:03
Script 1 finished 20200917 07:21:05
Script 3 finished 20200917 07:21:05
Main script finished
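A plain wait, as in start-asy.sh, blocks until all background jobs finish. If you also need each job's exit status, you can record the PID of each background job via $! and wait on it individually. A sketch, using inline subshells in place of the test scripts:

```shell
#!/bin/bash
# Start two background jobs; $! holds the PID of the most recent one.
(sleep 1; exit 0) &
pid1=$!
(sleep 1; exit 3) &
pid2=$!

# "wait <pid>" returns that specific job's exit status.
wait "$pid1"; status1=$?
wait "$pid2"; status2=$?
echo "job1=$status1 job2=$status2"
# prints: job1=0 job2=3
```

This is useful when the parent script should fail if any of the parallel scripts failed.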

Shell serial execution "&&"

By default the shell runs commands serially, one after another. To chain two commands into a single invocation you can join them with &&, which runs the second command only if the first one succeeds (exits with status 0).
For example, run the scripts and then print the completion time:

[root@hadoop-master shell-asy]# cat start-syn.sh 
#!/bin/bash
echo "Serial execution"
sh ./test1.sh
sh ./test2.sh
sh ./test3.sh

Running the scripts sequentially and printing the time at which they finished:

[root@hadoop-master shell-asy]# sh start-syn.sh && date +"%Y-%m-%d %T" 
Serial execution
Script 1 started 20200917 07:26:52
Script 1 finished 20200917 07:26:57
Script 2 started 20200917 07:26:57
Script 2 finished 20200917 07:27:00
Script 3 started 20200917 07:27:00
Script 3 finished 20200917 07:27:05
2020-09-17 07:27:05
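Note that && is more than a connector: the right-hand command runs only when the left-hand one succeeds. Its counterpart || runs the right side only on failure. A quick sketch of the short-circuit behavior:

```shell
true && echo "ran after success"     # prints: ran after success
false && echo "skipped"              # prints nothing; false exits with status 1
false || echo "ran after failure"    # prints: ran after failure
```

So in the example above, the date command would not have run if start-syn.sh had exited with a non-zero status.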

Origin: blog.csdn.net/u011047968/article/details/108636325