Quickly counting high-frequency words in a file with Python

Approach:

1. Obtain a list of all words in the text via the jieba library;
2. Count how often each word occurs and store the word-frequency pairs in a dictionary;
3. Sort the dictionary entries by frequency, from highest to lowest;
4. Output the top ten words and their frequencies.

Installing the jieba library

Open a command prompt window (cmd) and run pip install jieba to install it.

The source code is as follows:

import jieba                # the jieba Chinese word-segmentation library
f_name = '斗破苍穹.txt'      # path to the text file
with open(f_name, encoding='utf-8') as a:   # open the file as a
    b = a.read()            # read the whole file into one string
words = jieba.lcut(b)       # lcut directly returns a list of segmented words
count = {}                  # dictionary mapping word -> frequency
for word in words:          # iterate over every word in the text
    if len(word) < 2:       # skip words shorter than 2 characters
        continue
    else:                   # count the word's occurrences
        count[word] = count.get(word, 0) + 1
list1 = list(count.items())     # turn the dict's key-value pairs into a list
list1.sort(key=lambda x: x[1], reverse=True)    # sort by frequency, descending
for i in range(10):
    word, number = list1[i]     # unpack the word and its count
    print("关键字:{:-<10}频次:{:+>8}".format(word, number))      # print word and count
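As a side note, the counting and sorting steps (2 and 3 above) can also be expressed with the standard library's collections.Counter, whose most_common method replaces the manual dict-building and sort. A minimal sketch on a hand-made word list (jieba and the text file are omitted here so the snippet runs on its own; the sample words are purely illustrative):

```python
from collections import Counter

# A pre-segmented word list stands in for jieba.lcut(b),
# so this sketch does not need the novel's text file.
words = ['萧炎', '斗气', '萧炎', '长老', '斗气', '萧炎']

# Counter builds the word -> frequency mapping in one call;
# the generator applies the same minimum-length-2 filter as above.
count = Counter(w for w in words if len(w) >= 2)

# most_common(n) returns the n most frequent (word, count) pairs,
# already sorted by frequency in descending order.
for word, number in count.most_common(10):
    print("关键字:{:-<10}频次:{:+>8}".format(word, number))
```

Counter is a dict subclass, so count.get and count.items still work exactly as in the original code.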


Origin blog.csdn.net/weixin_52031478/article/details/109357103