Training word2vec on a Chinese Corpus with gensim

Copyright notice: This is an original article by the author (pan_jinquan); please do not repost without permission. https://blog.csdn.net/guyuealian/article/details/73718773


Table of contents

Training word2vec on a Chinese Corpus with gensim

1. Project directory structure

1.1 File descriptions

1.2 Project download

2. Word segmentation with jieba

2.1 Adding a custom dictionary

2.2 Adding stop words

2.3 Chinese word segmentation with jieba

2.4 Complete code and test


3. Training the model with gensim


1. Project directory structure

1.1 File descriptions:

data: holds the data files; the source folder contains the corpus files (there can be more than one), and the segment folder holds the word-segmented output corresponding to each file in source

models: stores the model files produced by gensim training

segment.py: Python script for segmenting Chinese text into words

word2vec.py: Python script for training with gensim

1.2 Project download

Project download: GitHub https://github.com/PanJinquan/nlp-learning-tutorials/tree/master/word2vec (if you find it useful, please give it a Star).


2. Word segmentation with jieba

We use the original text of the novel In the Name of the People (《人民的名义》) as the corpus (the corpus can be downloaded here). With the corpus in hand, the first step is word segmentation, which is done with jieba.
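jieba splits a Chinese string into words with jieba.cut, which returns a generator of tokens. A minimal sketch (the sample sentence is jieba's standard demo sentence, not taken from the corpus):

import jieba

# default (accurate) mode
print("/".join(jieba.cut("我来到北京清华大学")))
# expected output: 我/来到/北京/清华大学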

2.1 Adding a custom dictionary

  • Developers can supply their own custom dictionary to cover words that are not in the jieba vocabulary. Although jieba has the ability to recognize new words, adding them yourself guarantees higher accuracy.
  • Usage: jieba.load_userdict(file_name) # file_name is a file-like object or the path to the custom dictionary (see the sketch after the example entries below)
  • The dictionary format is the same as dict.txt: one word per line, where each line has three parts separated by spaces, in this fixed order: the word, its frequency (optional), and its part-of-speech tag (optional). If file_name is a path or a file opened in binary mode, the file must be UTF-8 encoded.
  • When the frequency is omitted, jieba uses an automatically computed frequency that ensures the word can be segmented out.

For example:

沙瑞金 5
田国富 5
高育良 5
侯亮平 5
钟小艾 5
陈岩石 5
欧阳菁 5
易学习 5
王大路 5
蔡成功 5
孙连城 5
季昌明 5
丁义珍 5
郑西坡 5
赵东来 5
高小琴 5
赵瑞龙 5
林华华 5
陆亦可 5
刘新建 5
刘庆祝 5
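To illustrate the effect of the custom dictionary, here is a minimal sketch (the sample sentence and the data/user_dict.txt path are assumptions for illustration):

import jieba

sentence = "沙瑞金赞叹易学习的胸怀"

# before loading the user dictionary, the character names may be split into single characters
print("/".join(jieba.cut(sentence)))

# after loading the dictionary, the names are kept as whole words
jieba.load_userdict("data/user_dict.txt")
print("/".join(jieba.cut(sentence)))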

2.2 Adding stop words

def getStopwords(path):
    '''
    Load stop words from a file, one word per line.
    :param path: path to the stop-word file
    :return: list of stop words
    '''
    stopwords = []
    with open(path, "r", encoding='utf8') as f:
        lines = f.readlines()
        for line in lines:
            stopwords.append(line.strip())
    return stopwords
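The stop-word file is expected to contain one stop word per line. A quick usage sketch (the data/stop_words.txt path matches the one used in the test code below and is otherwise an assumption):

stopwords = getStopwords('data/stop_words.txt')
print(len(stopwords), stopwords[:10])  # number of stop words and a small sample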

2.3 Chinese word segmentation with jieba

def segment_line(file_list,segment_out_dir,stopwords=[]):
    '''
    Word segmentation, applied line by line to each input file.
    :param file_list: list of input corpus files
    :param segment_out_dir: output directory for the segmented files
    :param stopwords: list of stop words to drop
    :return:
    '''
    for i,file in enumerate(file_list):
        segment_out_name=os.path.join(segment_out_dir,'segment_{}.txt'.format(i))
        segment_file = open(segment_out_name, 'a', encoding='utf8')
        with open(file, encoding='utf8') as f:
            text = f.readlines()
            for sentence in text:
                # jieba.cut(): the sentence argument must be a str (unicode)
                sentence = list(jieba.cut(sentence))
                sentence_segment = []
                for word in sentence:
                    if word not in stopwords:
                        sentence_segment.append(word)
                segment_file.write(" ".join(sentence_segment))
            del text  # the with statement closes f, so no explicit f.close() is needed
        segment_file.close()

def segment_lines(file_list,segment_out_dir,stopwords=[]):
    '''
    Word segmentation, applied to the whole content of each input file.
    :param file_list: list of input corpus files
    :param segment_out_dir: output directory for the segmented files
    :param stopwords: list of stop words to drop
    :return:
    '''
    for i,file in enumerate(file_list):
        segment_out_name=os.path.join(segment_out_dir,'segment_{}.txt'.format(i))
        with open(file, 'rb') as f:
            document = f.read()
            # document_decode = document.decode('GBK')
            document_cut = jieba.cut(document)
            sentence_segment=[]
            for word in document_cut:
                if word not in stopwords:
                    sentence_segment.append(word)
            result = ' '.join(sentence_segment)
            result = result.encode('utf-8')
            with open(segment_out_name, 'wb') as f2:
                f2.write(result)

2.4 Complete code and test

# -*-coding: utf-8 -*-
"""
    @Project: nlp-learning-tutorials
    @File   : segment.py
    @Author : panjq
    @E-mail : [email protected]
    @Date   : 2017-05-11 17:51:53
"""

##
import jieba
import os
from utils import files_processing

'''
read() reads the whole file at once and returns its content as a single string (str).
readline() reads one line at a time and returns that line as a str.
readlines() reads the whole file line by line and returns the lines as a list (list of str).
'''
def getStopwords(path):
    '''
    Load stop words from a file, one word per line.
    :param path: path to the stop-word file
    :return: list of stop words
    '''
    stopwords = []
    with open(path, "r", encoding='utf8') as f:
        lines = f.readlines()
        for line in lines:
            stopwords.append(line.strip())
    return stopwords

def segment_line(file_list,segment_out_dir,stopwords=[]):
    '''
    Word segmentation, applied line by line to each input file.
    :param file_list: list of input corpus files
    :param segment_out_dir: output directory for the segmented files
    :param stopwords: list of stop words to drop
    :return:
    '''
    for i,file in enumerate(file_list):
        segment_out_name=os.path.join(segment_out_dir,'segment_{}.txt'.format(i))
        segment_file = open(segment_out_name, 'a', encoding='utf8')
        with open(file, encoding='utf8') as f:
            text = f.readlines()
            for sentence in text:
                # jieba.cut(): the sentence argument must be a str (unicode)
                sentence = list(jieba.cut(sentence))
                sentence_segment = []
                for word in sentence:
                    if word not in stopwords:
                        sentence_segment.append(word)
                segment_file.write(" ".join(sentence_segment))
            del text  # the with statement closes f, so no explicit f.close() is needed
        segment_file.close()

def segment_lines(file_list,segment_out_dir,stopwords=[]):
    '''
    Word segmentation, applied to the whole content of each input file.
    :param file_list: list of input corpus files
    :param segment_out_dir: output directory for the segmented files
    :param stopwords: list of stop words to drop
    :return:
    '''
    for i,file in enumerate(file_list):
        segment_out_name=os.path.join(segment_out_dir,'segment_{}.txt'.format(i))
        with open(file, 'rb') as f:
            document = f.read()
            # document_decode = document.decode('GBK')
            document_cut = jieba.cut(document)
            sentence_segment=[]
            for word in document_cut:
                if word not in stopwords:
                    sentence_segment.append(word)
            result = ' '.join(sentence_segment)
            result = result.encode('utf-8')
            with open(segment_out_name, 'wb') as f2:
                f2.write(result)


if __name__=='__main__':


    # Parallel segmentation (multi-process); uncomment to enable
    # jieba.enable_parallel()
    # Load the custom dictionary
    user_path = 'data/user_dict.txt'
    jieba.load_userdict(user_path)

    stopwords_path='data/stop_words.txt'
    stopwords=getStopwords(stopwords_path)

    file_dir='data/source'
    segment_out_dir='data/segment'
    file_list=files_processing.get_files_list(file_dir,postfix='*.txt')
    segment_lines(file_list, segment_out_dir,stopwords)
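Note that files_processing is imported from the project's utils package and its source is not listed here. A minimal stand-in with the same call signature, which collects all files under a directory matching a pattern, could look like the following (an assumed sketch, not the author's implementation):

import glob
import os

def get_files_list(file_dir, postfix='*.txt'):
    '''Return the sorted paths of all files under file_dir matching the given pattern.'''
    return sorted(glob.glob(os.path.join(file_dir, postfix)))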

3. Training the model with gensim

# -*-coding: utf-8 -*-
"""
    @Project: nlp-learning-tutorials
    @File   : word2vec_gensim.py
    @Author : panjq
    @E-mail : [email protected]
    @Date   : 2017-05-11 17:04:35
"""

from gensim.models import word2vec
import multiprocessing

def train_wordVectors(sentences, embedding_size = 128, window = 5, min_count = 5):
    '''
    :param sentences: an iterable of tokenized sentences; it can be the object returned by
                    LineSentence or PathLineSentences, or simply a list of token lists,
                    e.g. [['我','是','中国','人'],['我','的','家乡','在','广东']]
    :param embedding_size: dimensionality of the word vectors
    :param window: context window size
    :param min_count: ignores all words with a total frequency lower than this
    :return: w2vModel
    '''
    w2vModel = word2vec.Word2Vec(sentences, size=embedding_size, window=window, min_count=min_count,workers=multiprocessing.cpu_count())
    return w2vModel

def save_wordVectors(w2vModel,word2vec_path):
    w2vModel.save(word2vec_path)

def load_wordVectors(word2vec_path):
    w2vModel = word2vec.Word2Vec.load(word2vec_path)
    return w2vModel

if __name__=='__main__':

    # [1] If there is only one file, read it with LineSentence
    # segment_path='./data/segment/segment_0.txt'
    # sentences = word2vec.LineSentence(segment_path)

    # [2] If there are multiple files, read them as a list with PathLineSentences

    segment_dir='./data/segment'
    sentences = word2vec.PathLineSentences(segment_dir)

    # Quick training run
    model = word2vec.Word2Vec(sentences, hs=1,min_count=1,window=3,size=100)
    print(model.wv.similarity('沙瑞金', '高育良'))
    # print(model.wv.similarity('李达康'.encode('utf-8'), '王大路'.encode('utf-8')))

    # Typical training: setting the following few parameters is enough:
    word2vec_path='./models/word2Vec.model'
    model2=train_wordVectors(sentences, embedding_size=128, window=5, min_count=5)
    save_wordVectors(model2,word2vec_path)
    model2=load_wordVectors(word2vec_path)
    print(model2.wv.similarity('沙瑞金', '高育良'))

Output (the two similarity values printed above):

0.968616
0.994922
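Beyond similarity scores, the trained model can also be queried for nearest neighbours and raw vectors. A short sketch (the query words come from the custom dictionary above); note that this tutorial targets gensim 3.x, where the Word2Vec constructor takes size, while gensim 4.x renamed that parameter to vector_size:

model = load_wordVectors('./models/word2Vec.model')

# the five words closest to 沙瑞金 in the embedding space
print(model.wv.most_similar('沙瑞金', topn=5))

# the raw 128-dimensional vector of a word
vec = model.wv['高育良']
print(vec.shape)  # (128,)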
