Extracting character names from a novel with jieba word segmentation

Taking The Return of the Condor Heroes (《神雕侠侣》) as an example:

Use jieba.posseg to get part-of-speech tags for each token; person names carry the tag nr.

1. Read the text

import jieba.posseg as psg
with open('shendiaoxialv.txt', encoding='utf-8') as f:
    text = f.readlines()
print(text[:10])

Output:

['\ufeff 第 一 回\u3000风月无情\n', '\n', '    “越女采莲秋水畔,窄袖轻罗,暗露双金钏。\n', '\n', '    照影摘花花似面,芳心只共丝争乱。\n', '\n', '    鸡尺溪头风浪晚,雾重烟轻,不见来时伴。\n', '\n', '    隐隐歌声归掉远,离愁引看江南岸。”\n', '\n']
len(text)

Output: 16741, so the text has more than 16,000 lines.

2. Segment the text and tag parts of speech

for t in text:
    res = psg.cut(t)
    print([(item.word, item.flag) for item in res])

Output:

Building prefix dict from the default dictionary ...
Loading model from cache C:\Users\computer~1\AppData\Local\Temp\jieba.cache
Loading model cost 1.023 seconds.
Prefix dict has been built succesfully.
[('\ufeff', 'x'), (' ', 'x'), ('第', 'm'), (' ', 'x'), ('一', 'm'), (' ', 'x'), ('回', 'v'), ('\u3000', 'x'), ('风月', 'n'), ('无情', 'n'), ('\n', 'x')]
[('\n', 'x')]
[(' ', 'x'), (' ', 'x'), (' ', 'x'), (' ', 'x'), ('“', 'x'), ('越女', 'nr'), ('采莲', 'nr'), ('秋水', 'nr'), ('畔', 'ng'), (',', 'x'), ('窄', 'a'), ('袖轻罗', 'i'), (',', 'x'), ('暗露', 'v'), ('双金钏', 'nr'), ('。', 'x'), ('\n', 'x')]
[('\n', 'x')]
[(' ', 'x'), (' ', 'x'), (' ', 'x'), (' ', 'x'), ('照影', 'n'), ('摘花', 'n'), ('花', 'v'), ('似面', 'd'), (',', 'x'), ('芳心', 'n'), ('只', 'm'), ('共丝', 'n'), ('争乱', 'v'), ('。', 'x'), ('\n', 'x')]
[('\n', 'x')]
[(' ', 'x'), (' ', 'x'), (' ', 'x'), (' ', 'x'), ('鸡尺', 'n'), ('溪头', 'n'), ('风浪', 'n'), ('晚', 'tg'), (',', 'x'), ('雾', 'n'), ('重烟', 'n'), ('轻', 'd'), (',', 'x'), ('不见', 'v'), ('来时', 't'), ('伴', 'v'), ('。', 'x'), ('\n', 'x')]

3. Count

dict = {}  # note: this name shadows the built-in dict, kept as in the original post
for t in text:
    res = psg.cut(t)
    for item in res:
        # keep only tokens tagged 'nr' (person names)
        if item.flag == 'nr':
            dict[item.word] = dict.get(item.word, 0) + 1
print(dict)

Output:

{'越女': 1, '采莲': 3, '秋水': 3, '双金钏': 1, '水蒙蒙': 1, '欧阳修': 2, ... (omitted) ... '杜': 1, '须髯戟': 1, '掌力直': 1, '后平飞': 1, '古语云': 1, '秦失其鹿': 1, '冷森森': 1, '子双掌': 1, '掌力击': 1, '齐口': 1, '苍猿': 2, '叶': 1, '秋风': 1, '秋月明': 1, '屠龙记': 1}
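The counting step above can also be sketched with `collections.Counter`. Here the `(word, flag)` pairs are a small hand-made sample standing in for real jieba.posseg output, not data from the novel:

```python
from collections import Counter

# Hypothetical sample of (word, flag) pairs, mimicking jieba.posseg output
tagged = [('越女', 'nr'), ('采莲', 'nr'), ('秋水', 'nr'),
          ('畔', 'ng'), ('采莲', 'nr'), ('风月', 'n'), ('秋水', 'nr')]

# Keep only person-name tokens (flag 'nr') and count them
names = Counter(word for word, flag in tagged if flag == 'nr')
print(names)  # Counter({'采莲': 2, '秋水': 2, '越女': 1})
```

Counter handles the "already seen / not yet seen" branching automatically, which is why the if/elif pair in the manual version is unnecessary.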

4. Sort by frequency

name_count = sorted(dict.items(), key=lambda x : x[1], reverse=True)
print(name_count[:30])

Output: the 30 names with the highest frequency

[('杨', 4749), ('小龙女', 2003), ('郭靖', 972), ('李莫愁', 938), ('武功', 932), 
('黄蓉', 871), ('陆无双', 574), ('周伯通', 554), ('赵志敬', 482), ('郭襄', 386), 
('郭芙', 366), ('裘千尺', 325), ('郭', 283), ('耶律齐', 272), ('尹志平', 259), 
('欧阳锋', 251), ('武三通', 240), ('黄药师', 239), ('杨过心', 239), ('公孙止', 234), 
('尼摩星', 229), ('程英', 226), ('武修文', 226), ('武氏兄弟', 206), ('朱子柳', 203), 
('尹克西', 201), ('杨过见', 188), ('洪七公', 186), ('孙婆婆', 185), ('明白', 173)]
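The `sorted(..., key=..., reverse=True)` call above is equivalent to `Counter.most_common`. A minimal sketch on a small hypothetical count dict (not the real novel data):

```python
from collections import Counter

# Hypothetical frequency dict in the same shape as the one built above
counts = {'杨': 4749, '小龙女': 2003, '郭靖': 972, '李莫愁': 938}

# Both approaches produce the same descending-by-count ranking
by_sorted = sorted(counts.items(), key=lambda x: x[1], reverse=True)
by_counter = Counter(counts).most_common(3)

print(by_sorted[:3])  # [('杨', 4749), ('小龙女', 2003), ('郭靖', 972)]
print(by_counter)     # [('杨', 4749), ('小龙女', 2003), ('郭靖', 972)]
```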

Notice that the top "name" is just 杨, not the full name 杨过: jieba split the protagonist's name 杨过 (Yang Guo) apart, and fragments such as 杨过心 and 杨过见 also show up in the list. A user dictionary fixes this.

5. Add a user dictionary

import jieba
jieba.load_userdict('mydict.txt')
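The user dictionary is a plain-text file with one entry per line in jieba's `word freq tag` format, where frequency and tag are optional. The actual contents of mydict.txt are not shown in the original post; a plausible version for this novel might look like:

```text
杨过 nr
小龙女 nr
```

Tagging the entries `nr` ensures the counting loop, which filters on `item.flag == 'nr'`, still picks them up.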


Run the counting and sorting steps again.

The final output:

[('杨过', 4586), ('小龙女', 2010), ('郭靖', 982), ('李莫愁', 938), ('武功', 932), 
('黄蓉', 932), ('陆无双', 574), ('周伯通', 554), ('赵志敬', 482), ('郭襄', 386), 
('郭芙', 366), ('裘千尺', 325), ('郭', 282), ('耶律齐', 272), ('尹志平', 259), 
('欧阳锋', 251), ('武三通', 240), ('黄药师', 239), ('杨过心', 239), ('公孙止', 234), 
('尼摩星', 229), ('程英', 226), ('武修文', 226), ('武氏兄弟', 206), ('朱子柳', 203), 
('尹克西', 201), ('杨过见', 188), ('洪七公', 186), ('孙婆婆', 185), ('明白', 173)]


Source: blog.csdn.net/qq_21201267/article/details/109307988