8 Linux Commands Every Developer Should Know: Log Analysis


Every programmer, at some point in their career, will need to know something about Linux. I'm not saying you have to be a Linux expert, but you should be comfortable with common Linux command-line tasks. In fact, once you've learned the following 8 commands, you can accomplish most tasks that come your way.

Note: Each of the commands below is well documented. This article is not intended to be an exhaustive presentation of each command's features; it covers only the most common uses of these most commonly used commands. If you don't know much about Linux commands and want a starting point, this article will give you a basic guide.

Let's start by processing some data. Suppose we have two files: a log of recorded orders and a log of order-processing results.

order.out.log
  8:22:19 111, 1, Patterns of Enterprise Architecture, Kindle edition, 39.99
  8:23:45 112, 1, Joy of Clojure, Hardcover, 29.99
  8:24:19 113, -1, Patterns of Enterprise Architecture, Kindle edition, 39.99

order.in.log
  8:22:20 111, Order Complete
  8:23:50 112, Order sent to fulfillment
  8:24:20 113, Refund sent to processing

cat

cat - concatenate files and print on the standard output

The cat command is very simple, as you can see from the example below.

jfields$ cat order.out.log
8:22:19 111, 1, Patterns of Enterprise Architecture, Kindle edition, 39.99
8:23:45 112, 1, Joy of Clojure, Hardcover, 29.99
8:24:19 113, -1, Patterns of Enterprise Architecture, Kindle edition, 39.99
As its description says, you can use it to concatenate multiple files.

jfields$ cat order.*
8:22:20 111, Order Complete
8:23:50 112, Order sent to fulfillment
8:24:20 113, Refund sent to processing
8:22:19 111, 1, Patterns of Enterprise Architecture, Kindle edition, 39.99
8:23:45 112, 1, Joy of Clojure, Hardcover, 29.99
8:24:19 113, -1, Patterns of Enterprise Architecture, Kindle edition, 39.99
If you want to see the contents of these log files, you can concatenate them and print them to standard output, as the example above shows. This is useful, but the output could be in a more logical order.

sort

sort - sort lines of text files

At this point the sort command is obviously your best choice.

jfields$ cat order.* | sort
8:22:19 111, 1, Patterns of Enterprise Architecture, Kindle edition, 39.99
8:22:20 111, Order Complete
8:23:45 112, 1, Joy of Clojure, Hardcover, 29.99
8:23:50 112, Order sent to fulfillment
8:24:19 113, -1, Patterns of Enterprise Architecture, Kindle edition, 39.99
8:24:20 113, Refund sent to processing
As the example above shows, the data in the files is now sorted. For small files you can read the whole thing and process it yourself; real log files, however, usually contain far too much content for that to work. At this point you should consider filtering out some of the content by piping the concatenated, sorted output to a filtering tool.
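The sort above is plain lexicographic comparison, which happens to work here because every line starts with a timestamp. sort can also compare a specific field numerically, which is handy for log data like ours. A small sketch, using a hypothetical file /tmp/sort_demo.log with the same line shape as order.out.log:

```shell
# Build a tiny sample in the same format as order.out.log
printf '%s\n' \
  '8:22:19 111, 1, Patterns of Enterprise Architecture, Kindle edition, 39.99' \
  '8:23:45 112, 1, Joy of Clojure, Hardcover, 29.99' > /tmp/sort_demo.log

# -t sets the field separator, -k picks the field, -n compares numerically:
# this sorts the orders by price (the 5th comma-separated field).
sort -t"," -k5 -n /tmp/sort_demo.log
```

With -n, the 29.99 Clojure order sorts before the 39.99 order regardless of where the lines fall alphabetically.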

grep

grep, egrep, fgrep - print lines matching a pattern

Suppose we are only interested in orders for the book Patterns of Enterprise Architecture. Using grep, we can limit the output to orders containing the string Patterns.

jfields$ cat order.* | sort | grep Patterns
8:22:19 111, 1, Patterns of Enterprise Architecture, Kindle edition, 39.99
8:24:19 113, -1, Patterns of Enterprise Architecture, Kindle edition, 39.99
Something went wrong with order 113 and you want to see all related entries - you need grep again.

jfields$ cat order.* | sort | grep ":\d\d 113, "
8:24:19 113, -1, Patterns of Enterprise Architecture, Kindle edition, 39.99
8:24:20 113, Refund sent to processing
You will notice that the grep pattern contains more than just "113". That's because "113" on its own could also match a book title or a price; with the extra characters anchoring it, we match exactly what we want.
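One caveat: the \d in the pattern above is a Perl-style shorthand that not every grep implementation accepts (GNU grep, for instance, only honors it with -P). The POSIX bracket expression [0-9] is a portable equivalent. A quick sketch with hypothetical sample data:

```shell
# Sample lines: two for order 113, plus one for order 111 as a control
printf '%s\n' \
  '8:24:19 113, -1, Patterns of Enterprise Architecture, Kindle edition, 39.99' \
  '8:24:20 113, Refund sent to processing' \
  '8:22:19 111, 1, Patterns of Enterprise Architecture, Kindle edition, 39.99' > /tmp/grep_demo.log

# [0-9] works in every POSIX grep, unlike the Perl-ism \d
grep ":[0-9][0-9] 113, " /tmp/grep_demo.log
```

Only the two order-113 lines come back; the anchoring still keeps "113" from matching inside titles or prices.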

Now that we know the details of the refund, we also want to know the daily totals of sales and refunds. But we only care about Patterns of Enterprise Architecture, and only about quantity and price. What we want to do now is cut away everything we don't care about.

cut

cut - remove sections from each line of files

We use grep again to filter for the lines we want. Once we have those lines, we can cut them into pieces and throw away the parts of the data we don't need.

jfields$ cat order.* | sort | grep Patterns
8:22:19 111, 1, Patterns of Enterprise Architecture, Kindle edition, 39.99
8:24:19 113, -1, Patterns of Enterprise Architecture, Kindle edition, 39.99

jfields$ cat order.* | sort | grep Patterns | cut -d"," -f2,5
1, 39.99
-1, 39.99
Now we've reduced the data to a form we can calculate with.

cut helps reduce information and simplify a task, but for the output we often want a more complex form. Suppose we also need the order ID, which can be used to correlate other related information. We can get the ID with cut, but we want to put it at the end of the line, wrapped in single quotes.

sed

sed - A stream editor. It is used to perform basic text transformations on the input stream.

The following example shows how to use the sed command to transform our lines; afterwards we use cut to remove the useless information.

jfields$ cat order.* | sort | grep Patterns \
> | sed s/"[0-9\:]* \([0-9]*\)\, \(.*\)"/"\2, '\1'"/
1, Patterns of Enterprise Architecture, Kindle edition, 39.99, '111'
-1, Patterns of Enterprise Architecture, Kindle edition, 39.99, '113'

jfields$ cat order.* | sort | grep Patterns \
> | sed s/"[0-9\:]* \([0-9]*\)\, \(.*\)"/"\2, '\1'"/ | cut -d"," -f1,4,5
1, 39.99, '111'
-1, 39.99, '113'
A few more words about the regular expression used in the example - nothing complicated. It does the following things:

Remove the timestamp
Capture the order number
Remove the comma and space after the order number
Capture the rest of the line

The quotes and backslashes are a bit messy, but that's the price of working on the command line.

Once we have captured the data we want, we can use \1 and \2 to replay the captures and output them in the format we want. We also add the required single quotes and, to keep the format consistent, a comma. Finally, the cut command deletes the unnecessary data.
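If capture groups are new to you, here is a minimal, self-contained illustration of the same mechanism, stripped of the log-file noise:

```shell
# \( ... \) captures a group; \1 and \2 replay the captures in the
# replacement text. This swaps the two words:
echo "hello world" | sed 's/\(.*\) \(.*\)/\2 \1/'
# prints: world hello
```

The sed in the order example works the same way, just with a larger pattern: group 1 grabs the order number, group 2 grabs the rest of the line, and the replacement rearranges them.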

Now we have a new problem. We have demonstrated how to reduce the log file to a more concise order form, but our finance department also needs to know which books are in the orders.

uniq

uniq - report or filter out repeated lines

The following example shows how to filter for the book-related transactions, remove the unnecessary information, and get a count of each unique title.

jfields$ cat order.out.log | grep "\(Kindle\|Hardcover\)" | cut -d"," -f3 | sort | uniq -c
   1 Joy of Clojure
   2 Patterns of Enterprise Architecture
It seems this is a very simple task.
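One detail worth knowing: uniq only collapses *adjacent* repeated lines, which is exactly why the pipeline above sorts before calling uniq. A tiny sketch with made-up data:

```shell
# Without sort, the two 'a' lines are not adjacent, so uniq keeps both:
printf 'a\nb\na\n' | uniq -c

# With sort, duplicates become adjacent and collapse into one counted group:
printf 'a\nb\na\n' | sort | uniq -c
```

The first pipeline reports three groups (a, b, a); the second reports two (2 a, 1 b).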

These are all great commands, but only if you can find the file you want. Sometimes files are hidden in deeply nested folders and you have no idea where they are. If you know the name of the file you're looking for, though, this isn't a problem.

find

find - search for files in a directory hierarchy

In the examples above we dealt with two files, order.in.log and order.out.log, which live in my home directory. The following example shows how to find such files in a deep directory structure.

jfields$ find /Users -name "order*"
/Users/jfields/order.in.log
/Users/jfields/order.out.log

The find command has many other parameters, but 99% of the time I only need this one.
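For the curious, here is a sketch of two of those other parameters that come up often. The directory and file names are hypothetical, created just for the demonstration:

```shell
# Build a small hypothetical tree to search
mkdir -p /tmp/find_demo/deep/nested
touch /tmp/find_demo/deep/nested/order.in.log /tmp/find_demo/notes.txt

# Match by name at any depth, as in the article's example
find /tmp/find_demo -name "order*"

# -type f restricts matches to regular files (use -type d for directories)
find /tmp/find_demo -type f -name "*.log"
```

Both commands print the single matching path, /tmp/find_demo/deep/nested/order.in.log.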

With one simple line you can find the file you want; then you can view it with cat and trim it with cut. When a file is small, piping it to the screen is fine; but when it's too large to fit on the screen, you should pipe it to the less command.

less

less - page through a file, moving forward or backward

Let's go back to our simple cat | sort example. The following command pipes the merged and sorted content to the less command. Within less, use "/" to search forward and "?" to search backward; the search term is a regular expression.

jfields$ cat order* | sort | less
If you type /113.* in less, all order-113 information will be highlighted. You can also try ?.*112; all timestamps related to order 112 will be highlighted. Finally, press 'q' to quit less.

There are many and varied commands in Linux, and some of them are difficult to use. But once you've learned the 8 commands above, you can already handle a large share of log-analysis tasks without having to write scripts to do the same work.

Reprinted from: http://www.vaikan.com/8-linux-commands-every-developer-should-know/
