Linux offers a rich and powerful set of text-processing tools. Any of the following methods can be used to remove duplicate lines:
cat log
www.haiyun.me 192.168.1.1
www.haiyun.me 192.168.1.1
www.haiyun.me 192.168.1.2
Use uniq or sort to remove duplicate lines. Both operate on whole lines only, and uniq removes only adjacent duplicates, so unsorted input should be sorted first (or deduplicated directly with sort -u).
uniq log
sort -u log
www.haiyun.me 192.168.1.1
www.haiyun.me 192.168.1.2
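The difference between the two commands matters once duplicates are not adjacent. A minimal sketch, recreating the article's `log` file with printf (the sample data is taken from the example above):

```shell
# Recreate the sample log file from the example above.
printf 'www.haiyun.me 192.168.1.1\nwww.haiyun.me 192.168.1.1\nwww.haiyun.me 192.168.1.2\n' > log

# uniq collapses adjacent duplicates only; here they happen to be adjacent.
uniq log

# For arbitrary input, sort first so duplicates become adjacent,
# or let sort deduplicate in one step with -u.
sort log | uniq
sort -u log

# uniq -c additionally counts how many times each line occurred.
sort log | uniq -c
```

Note that both sort-based variants reorder the file; only the awk approach below preserves the original line order for unsorted input.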
Use awk to remove duplicate lines keyed on a single column; the same idiom extends to multiple columns or the entire line.
awk '!i[$1]++' log
www.haiyun.me 192.168.1.1
Use sed to remove adjacent duplicate lines:
sed '$!N; /^\(.*\)\n\1$/!P; D' log
www.haiyun.me 192.168.1.1
www.haiyun.me 192.168.1.2
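The sed script works a line pair at a time: `N` appends the next line to the pattern space, the back-reference `\1` checks whether the two lines are identical, `P` prints the first line only when they differ, and `D` deletes it and restarts the cycle. Like uniq, it only catches adjacent duplicates, as this sketch shows (tested with GNU sed):

```shell
# Adjacent duplicates are removed:
printf 'a\na\nb\n' | sed '$!N; /^\(.*\)\n\1$/!P; D'
# prints:
# a
# b

# Non-adjacent duplicates survive, so sort first if order does not matter:
printf 'a\nb\na\n' | sed '$!N; /^\(.*\)\n\1$/!P; D'
# prints:
# a
# b
# a
```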