Hands-On with Tencent's Open-Source Text Classification Tool NeuralNLP-NeuralClassifier

Background: I wanted to use Tencent's open-source text classification tool to build a binary classifier for XX news, i.e. to decide whether an article is XX news or not.

NeuralNLP-NeuralClassifier is on GitHub here: https://github.com/Tencent/NeuralNLP-NeuralClassifier

The source data is a CSV file with two fields, label and title. Sample rows:

1,美女模特走秀,这纤细的小蛮腰,真想拥入怀中
0,老厂房变健康产业园!大亚湾科创园去年又有20家孵化企业毕业

The steps are as follows:

1. Convert the source data into the format the open-source project expects.

The project's standard data format looks like this:

{"doc_label": ["C31", "C312", "CCAT"], "doc_token": ["The", "Australian", "government", "said", "Monday", "Frances", "approval", "last", "week", "sale", "kangaroo", "meat", "human", "consumption", "would", "open", "new", "export", "market", "worth", "A", "million", "year", "Australian", "exporters", "The", "move", "followed", "months", "negotiations", "Australian", "French", "officials", "Australia", "sought", "expand", "A", "millionayear", "European", "market", "Primary", "Industries", "Minister", "John", "Anderson", "Trade", "Minister", "Tim", "Fischer", "said", "statement", "Australia", "able", "present", "France", "extensive", "scientific", "technical", "information", "kangaroo", "meat", "high", "standards", "game", "processing", "plants", "country", "enabling", "arrive", "decision", "Anderson", "said", "Fischer", "said", "French", "decision", "brought", "France", "line", "several", "European", "countries", "including", "Germany", "Britain", "long", "allowed", "imports", "kangaroo", "meat", "human", "consumption", "Europe", "Australias", "major", "market", "game", "meat", "exports", "Anderson", "said", "French", "move", "would", "boost", "Australias", "kangaroo", "meat", "industry", "Canberra", "Bureau"], "doc_keyword": [], "doc_topic": []}

There are four fields: doc_label (the label names), doc_token (the tokenized text), doc_keyword (optional), and doc_topic (optional). So the first step is to convert the source data into this JSON format. The code is as follows:

import pandas as pd
import jieba

import json
import re

# Target: convert each CSV row into one JSON object per line, e.g.
"""
JSON example:
{
    "doc_label": ["Computer--MachineLearning--DeepLearning", "Neuro--ComputationalNeuro"],
    "doc_token": ["I", "love", "deep", "learning"],
    "doc_keyword": ["deep learning"],
    "doc_topic": ["AI", "Machine learning"]
}
"""

# Read in the raw data
input_path = "../data/"
file_name1 = input_path + "train_set.csv"
df1 = pd.read_csv(file_name1, header=None)
df1.columns = ['doc_label', 'doc_token']
print(df1.shape)

# Shuffle the rows
df2 = df1.sample(frac=1)

# Split into a training set and a validation set
train_df = df2[:20000]
valid_df = df2[20000:]

print(train_df.shape)
print(valid_df.shape)

# Load the stop-word list
def stop_words(path):
    with open(path, encoding='utf-8') as f:
        return [l.strip() for l in f]

# Write the training set in the JSON-lines format shown above
output_path = '../data/train_set.json'
# Load the stop words once instead of re-reading the file for every row
stopwords = set(stop_words('../data/stop_words.txt'))

with open(output_path, "w", encoding='utf-8') as f:
    for indexs in train_df.index:
        dict1 = {}
        dict1['doc_label'] = [str(train_df.loc[indexs].values[0])]
        doc_token = train_df.loc[indexs].values[1]
        # Keep only Chinese characters, Latin letters and digits
        reg = "[^0-9A-Za-z\u4e00-\u9fa5]"
        doc_token = re.sub(reg, '', doc_token)
        # Chinese word segmentation
        seg_list = jieba.cut(doc_token, cut_all=False)
        # Remove stop words
        content = [x for x in seg_list if x not in stopwords]
        dict1['doc_token'] = content
        dict1['doc_keyword'] = []
        dict1['doc_topic'] = []
        # Serialize the dict to a JSON string, one record per line
        json_str = json.dumps(dict1, ensure_ascii=False)
        f.write('%s\n' % json_str)

# Write the validation set in the same format
output_path = '../data/valida_set.json'

with open(output_path, "w", encoding='utf-8') as f:
    for indexs in valid_df.index:
        dict1 = {}
        # doc_label must be a list, just as for the training set
        dict1['doc_label'] = [str(valid_df.loc[indexs].values[0])]
        doc_token = valid_df.loc[indexs].values[1]
        # Keep only Chinese characters, Latin letters and digits
        reg = "[^0-9A-Za-z\u4e00-\u9fa5]"
        doc_token = re.sub(reg, '', doc_token)
        # Chinese word segmentation
        seg_list = jieba.cut(doc_token, cut_all=False)
        # Remove stop words
        content = [x for x in seg_list if x not in stopwords]
        dict1['doc_token'] = content
        dict1['doc_keyword'] = []
        dict1['doc_topic'] = []
        # Serialize the dict to a JSON string, one record per line
        json_str = json.dumps(dict1, ensure_ascii=False)
        f.write('%s\n' % json_str)
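
Before touching any configuration, it is worth a quick sanity check that every line of the generated files parses as a single JSON object with the four expected fields. A minimal sketch for the training file (the path matches the output_path used above):

import json

# Verify that each line is valid JSON with exactly the four expected fields
with open('../data/train_set.json', encoding='utf-8') as f:
    for i, line in enumerate(f):
        record = json.loads(line)
        assert set(record) == {'doc_label', 'doc_token', 'doc_keyword', 'doc_topic'}
        # Print a few records to eyeball the labels and tokens
        if i < 3:
            print(record['doc_label'], record['doc_token'][:5])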

2. At this point we have the training data, so the next step is to adjust the configuration. Since this is a binary, single-label classification task, set label_type under task_info in conf/train.json to single_label. One more thing to watch out for: with the project's original sample dataset the default num_worker of 4 runs fine, but with this Chinese dataset it has to be changed to 0, otherwise training errors out. Finally, edit the rcv1.taxonomy file and set its content to "Root 0 1", since there are only the two labels 0 and 1.
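
For reference, the changes look roughly like this. This is only a sketch: the excerpt shows just the keys touched here, and in the repo's sample config num_worker sits inside the data section, which may differ between versions.

conf/train.json (excerpt, unrelated keys omitted):

"task_info": {
    "label_type": "single_label"
},
"data": {
    "num_worker": 0
}

rcv1.taxonomy (the entire file, the parent label followed by its child labels):

Root 0 1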

3. Finally, run python train.py conf/train.json to train the model.
