Python Learning 102: Text Data Preprocessing - Word Segmentation

Foreword:

For natural language processing you sometimes need to build your own corpus and train a model on it. This article takes the collected data, removes messy characters, segments it with the jieba word-segmentation tool, and filters the result against a custom stop-word list (the Chinese Academy of Sciences stop-word list plus my own additions).
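The code later in this post covers segmentation and stop-word removal but does not itself show the "messy character" cleanup. A minimal sketch of that step, assuming we simply keep Chinese characters, ASCII letters and digits (the function name clean_line and the regex are my own additions, not from the original post):

import re

def clean_line(line):
    # Replace any run of characters that is not a CJK character, an ASCII
    # letter or a digit (punctuation, emoji, control codes, ...) with a space.
    return re.sub(r'[^\u4e00-\u9fa5A-Za-z0-9]+', ' ', line).strip()

Each raw line could be passed through clean_line before being handed to seg_sentence below.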

If you don't like it, please don't flame me ^-^

The raw data is saved in a TXT file, one piece of text per line (the original post shows a screenshot of it here).

After word segmentation, each output line contains the remaining tokens separated by spaces (again shown as a screenshot in the original post).
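As a rough stand-in for those screenshots, here is what jieba does to a single line; the example sentence is the one from jieba's own documentation, not from the original post's data:

>>> import jieba
>>> ' '.join(jieba.cut('我来到北京清华大学'))
'我 来到 北京 清华大学'

After the stop-word filter in seg_sentence, tokens such as 我 (and 来到, if it appears in stopWord.txt) would be dropped, leaving something like 北京 清华大学 in the output file.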

Code:

# coding:utf8
import jieba

# 1. Read the input file, segment each line, and write the result to a new file
def readCutRemovewrite(readfile_path, writefile_path):
    inputs = open(readfile_path, 'r', encoding='utf-8')
    outputs = open(writefile_path, 'w', encoding='utf8')
    for line in inputs:
        line_seg = seg_sentence(line)  # returns the segmented line as a space-separated string
        outputs.write(line_seg + '\n')
    outputs.close()
    inputs.close()

# 2. Segment a sentence and remove stop words
def seg_sentence(sentence):
    # Build the stop-word list (Chinese Academy of Sciences list + custom additions);
    # the file is re-read for every sentence, which is acceptable for small corpora
    stopWords = [line.strip() for line in open('data/stopWord.txt', 'r', encoding='utf-8').readlines()]
    sentence_seged = jieba.cut(sentence.strip())
    outstr = ''
    for word in sentence_seged:
        if word not in stopWords:
            if word != '\t':
                outstr += word
                outstr += " "
    return outstr

if __name__ == '__main__':

    readfile_path = r'F:\data\test1.txt'
    writefile_path = r'F:\data\test1_seg.txt'  # output path (example; not given in the original post)
    # Utility call: read the file, segment it, remove stop words, and write it out
    readCutRemovewrite(readfile_path, writefile_path)
    print('Data preprocessing finished')
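Since the foreword mentions training a model on this corpus, a natural next step is to feed the segmented file into something like gensim's Word2Vec. A minimal sketch, assuming gensim 4.x and the example output path from the main block above (neither the path nor the parameters come from the original post):

from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

# Each line of the segmented file is already a space-separated token list,
# so LineSentence can stream it directly into Word2Vec.
sentences = LineSentence(r'F:\data\test1_seg.txt')   # example path from the main block above
model = Word2Vec(sentences, vector_size=100, window=5, min_count=2, workers=4)
model.save(r'F:\data\test1.word2vec')                # example model path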

 

Source: blog.csdn.net/u013521274/article/details/84994835