A simple shell script for batch curl calls to an interface

Shell scripts are genuinely useful. On a server, scripting a task is far more convenient and faster than clicking through it by hand. The shell is only a text interface, but its processing power makes possible all sorts of operations beyond what you would expect, and the habits it builds carry over into daily work and improve efficiency.

  Shell syntax is actually simple: a script is basically a set of command-line commands strung together. If you don't know the syntax, you can always search for it on Baidu; what matters is the idea, not the syntax.

  Recently I received a request:

    The DBA exports data for certain business rules, and then someone manually curls an application interface once per record to complete the corresponding business operation.

  Here is the thing: the data the DBA exports has a fixed format, and the interface to curl has a fixed format too, so all that is needed is to substitute each record's values into the request. Note that not every call is guaranteed to succeed, so some requests may have to be re-run.

  Writing the curl commands by hand, running them one by one, and eyeballing each result is feasible for a handful of records, but with hundreds, thousands, or tens of thousands of records it cannot be done manually. This is where a shell script comes in. (Some will say another language would do, or that the logic could be written into the application itself, but throwaway code like this has no business living in the codebase long term.)

  The shell script only needs to do three things:

    1. Read the source data file and substitute each record into the interface's request format;

    2. Execute the commands to complete the business operation;

    3. Keep a complete log for later review and comparison.

  The requirements are simple, and it doesn't matter if some syntax is unfamiliar; just look it up. Reference code:

#!/bin/bash
log_file='result.log'
param_file=$1 # the source data file is given on the command line

i=1
for line in `cat $param_file`;
do
   echo "read line" $i ":" $line | tee -a $log_file
   let "i=$i+1"
   OLD_IFS=$IFS; IFS=","   # temporarily split on commas
   arr=($line)             # split the record into an array
   IFS=$OLD_IFS
   curl_cmd="curl -d 'uId=${arr[0]}&bid=${arr[1]}&bA=${arr[2]}&to=6&bP=30&fddays=5' http://localhost:8080/mi/api/ss/1.0.1/co/apply"
   echo `date "+%Y-%m-%d %H:%M:%S"` "start ===>> " $curl_cmd | tee -a $log_file
   eval "$curl_cmd 2>&1" | tee -a $log_file # eval merges curl's error output with the response, so both reach the console and the log
   echo `date "+%Y-%m-%d %H:%M:%S"` "end <<===" $curl_cmd | tee -a $log_file
done

echo `date "+%Y-%m-%d %H:%M:%S"` "over: end of shell" | tee -a $log_file

  The source data file is comma-separated, one record per line:

234,201708222394083443,5000
4211,201782937493274932,3000
23,201749379583475934,2000
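
  With the script saved and the data file prepared, a run might look like this (the file names here are made up for illustration):

# save the script as batch_apply.sh and the data as params.csv (names assumed)
bash batch_apply.sh params.csv
# progress and interface responses scroll by on the console and accumulate in result.log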
  When the source file is space-separated instead, this read goes wrong: with `for line in `cat file``, the shell's word splitting produces one loop iteration per whitespace-separated token, not per line, as the small demo below shows. The fix is to read the file line by line with while read; the revised script follows.
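
  A quick demonstration of the difference (file name and contents are illustrative):

# two lines, each containing a space
printf 'a b\nc d\n' > demo.txt
for line in `cat demo.txt`; do echo "[$line]"; done    # 4 iterations: [a] [b] [c] [d]
while read -r line; do echo "[$line]"; done < demo.txt # 2 iterations: [a b] [c d]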

#!/bin/bash
log_file='result.log'
param_file=$1

i=1
while read -r line
do
   echo "read line" $i ":" $line | tee -a $log_file
   let "i=$i+1"
   arr=($line) # the default IFS already splits on whitespace
   curl_cmd="curl -d 'uId=${arr[0]}&bid=${arr[1]}&bt=${arr[2]}&toBorrowType=6&borrowPeriod=30&fddays=5' http://localhost/mi/c/1.0.1/c/n"
   echo `date "+%Y-%m-%d %H:%M:%S"` "start ===>> " $curl_cmd | tee -a $log_file
   eval "$curl_cmd 2>&1" | tee -a $log_file
   echo `date "+%Y-%m-%d %H:%M:%S"` "end <<===" $curl_cmd | tee -a $log_file
done < $param_file

echo `date "+%Y-%m-%d %H:%M:%S"` "over: end of shell" | tee -a $log_file
  One trick here is the tee command: it shows the output on the console and appends it to the log file at the same time, so you can watch the run live and still have a record to review later.

  With this in place, nobody has to type the data in by hand, which removes the chance of fat-finger mistakes: the DBA exports the formatted data, the shell script reads it directly, and everything is logged. That is what programs are for.
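
  Since every request and its response land in result.log, failed calls can be fished back out for a re-run. A minimal sketch, assuming the interface marks failures with a recognizable string in its response ("success":false here is an assumption; adjust it to the real API):

# pull the curl commands whose responses contain the (assumed) failure marker
grep -B1 '"success":false' result.log \
  | grep 'start ===>>' \
  | sed 's/.*start ===>> //' > retry_cmds.txt
# retry_cmds.txt now holds full curl commands that can be run again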

  In a word: find ways to be lazy. That is what we should do.

  However, note that when a script drives an interface, you should think about concurrency and the pressure on the server, and not trust the code too much. Prepare for the worst.
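
  For example, a short pause between requests and a bounded retry keep the load down and absorb transient failures. A rough sketch of what could replace the single eval line in the loop (the interval and retry count are arbitrary illustrative choices):

max_retries=3
for attempt in `seq 1 $max_retries`; do
   response=$(eval "$curl_cmd 2>&1")
   status=$?          # curl exits non-zero on transport errors
   echo "$response" | tee -a $log_file
   [ $status -eq 0 ] && break
   echo "attempt $attempt failed, retrying" | tee -a $log_file
   sleep 1            # back off before retrying
done
sleep 0.2             # throttle between records to go easy on the server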

  For the curl command itself, see the manual: https://curl.haxx.se/docs/manpage.html (brief Chinese write-ups are easy to find as well).

 

  I used to find processing 1 or 2 GB log files a headache. Later I discovered that combining grep, awk, sed, less, sort and similar tools lets you pull the key lines straight out of files of dozens of gigabytes or more. That is the power of Linux.
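
  As a taste, here is a typical one-liner for mining a big access log (the file name and the field position of the request path are assumptions about the log layout):

# top 10 request paths among HTTP 500 responses in a large access log
# (assumes the path is the 7th whitespace-separated field, as in common
# access-log formats; adjust the field number to the real layout)
grep ' 500 ' access.log | awk '{print $7}' | sort | uniq -c | sort -rn | head -10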
