Linux deduplication

1. uniq parameter analysis

Parameter   Description
-c    Prefix each line with the number of times it occurs
-d    Print only the duplicated lines, one copy of each
-u    Print only the lines that are not duplicated
-i    Ignore case when comparing lines
-f N  Skip the first N fields when comparing (fields are separated by whitespace)
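
The -i and -f flags are not exercised by the examples below, so here is a minimal sketch of both; the input is fed inline through printf, so no extra files are assumed:

# -i treats lines that differ only in case as duplicates
[root@localhost ~]# printf 'Hello\nhello\nHELLO\n' | uniq -i
Hello

# -f 1 skips the first field, so lines are compared by their second field only
[root@localhost ~]# printf '1 apple\n2 apple\n3 pear\n' | uniq -f 1
1 apple
3 pear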

2. Use awk to remove duplicates

1️⃣ Deduplication of two files

Example:

  • aa file content

    123
    234
    345
    456
    123
    234

  • bb file content

    123
    234
    aaa
    345
    456
    ccc
    123
    234
    bbb

# Use awk to deduplicate the two files (keep only the lines that appear exactly once across both)
[root@localhost ~]# awk '{print $0}' aa bb | sort | uniq -u
aaa
bbb
ccc
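
Note that awk is only a pass-through here ('{print $0}' behaves like cat); sort and uniq -u do the real work of keeping the lines that occur exactly once across both files. The same result can be had from awk alone by counting occurrences in an array, a sketch of which follows (the for-in loop prints in no guaranteed order):

# Count every line across both files, then print the lines seen
# exactly once (output order is unspecified)
[root@localhost ~]# awk '{n[$0]++} END{for (l in n) if (n[l] == 1) print l}' aa bb
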
2️⃣ Deduplication of a single file

Example:

  • aa file content

    123
    234
    345
    456
    123
    234

# Use awk to deduplicate a single file (the original line order is preserved)
[root@localhost ~]# awk '!x[$0]++' aa
123
234
345
456
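
This works because x is an associative array keyed by the whole line: the first time a line appears, x[$0] is 0, so !x[$0]++ is true and the line is printed; the ++ then bumps the counter so every later copy is skipped. The same logic written out long-hand:

# Long-hand equivalent of '!x[$0]++'
[root@localhost ~]# awk '{ if (x[$0] == 0) print $0; x[$0]++ }' aa
123
234
345
456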

# Inspect individual lines in the file:
# keep only the duplicated lines
[root@localhost ~]# awk '{print $0}' aa | sort | uniq -d
123
234

# Keep only the non-duplicated lines
[root@localhost ~]# awk '{print $0}' aa | sort | uniq -u
345
456
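
Keep in mind that uniq only compares adjacent lines, which is why every pipeline here sorts first; running it on the unsorted file shows the difference:

# Without sort, the repeated 123 and 234 are not adjacent, so uniq
# removes nothing
[root@localhost ~]# uniq aa
123
234
345
456
123
234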

3. sort combined with uniq to remove duplicates

Example:

  • bb file content

    123
    234
    aaa
    345
    456
    ccc
    123
    234
    bbb

# Show the duplicated lines in the file
[root@localhost ~]# sort bb | uniq -d
123
234

# Show the lines that are not duplicated in the file
[root@localhost ~]# sort bb | uniq -u
345
456
aaa
bbb
ccc

# Count how many times each line occurs
[root@localhost ~]# sort bb | uniq -c
      2 123
      2 234
      1 345
      1 456
      1 aaa
      1 bbb
      1 ccc
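
If one copy of every line is all that is needed, sort can deduplicate by itself:

# sort -u keeps one copy of each line (equivalent to: sort bb | uniq)
[root@localhost ~]# sort -u bb
123
234
345
456
aaa
bbb
ccc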