Learning transformers, competition practice: English NER with roberta_large

Building an English NER model with roberta_large in practice

Workflow overview: process the training data (txt), using English text as the example; build the dataloader, build the model, set up the training and evaluation pipeline, and save the model.
Main difficulty: for English NER, roberta_large's tokenizer.tokenize splits English words into subword pieces, so the labels no longer line up with the tokens one-to-one; a custom function is needed to realign them.
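To make the difficulty concrete, here is a minimal sketch (assuming a roberta-large checkpoint is available; substitute your own path) of how the tokenizer breaks words into subword pieces, so one word-level label has to be spread across several tokens:

from transformers import RobertaTokenizer

# illustrative only; point this at your own roberta-large checkpoint
tokenizer = RobertaTokenizer.from_pretrained('roberta-large')
tokens = tokenizer.tokenize('intravenous bimagrumab in inclusion body myositis')
print(tokens)
# the output is a list of BPE pieces; rare words such as "bimagrumab" are split into several
# pieces, and a leading 'Ġ' marks pieces that start a new word (i.e. follow a space),
# so entity labels have to be expanded from whole words to subword tokens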

1. Dependencies
from transformers import RobertaConfig,RobertaTokenizer,RobertaForTokenClassification
from sklearn.model_selection import KFold
import pandas as pd
import os
import json
from torch.utils.data import TensorDataset, RandomSampler, DataLoader, SequentialSampler
import torch
from transformers import AdamW
from datetime import datetime
import time
from tqdm import tqdm
from sklearn.metrics import classification_report
import numpy as np
from torch import nn
from itertools import chain

Note: the core model code relies on the transformers library.

2. Basic configuration class for the model

class Config:
    train_data_file = 'data/task1_public/task1_public/new_train.json'  # path to the training set
    test_data_file = 'data/task1_public/task1_public/new_val.json'  # path to the test set
    output_dir = 'output'  # output directory
    max_len = 256  # maximum sequence length for truncation/padding
    verbose_step = 1089  # log training progress every this many steps
    MODEL_PATH = r'../../en_Roberta_large'  # path to the pretrained model
    num_works = 0  # number of worker processes (>= 0); for datasets under ~20k samples the default of 0 is recommended
    batch_size = 4  # samples per training step; powers of 2 are usually faster
    backward_step = 1  # gradient accumulation steps before each optimizer update, usually 1
    ckpt_step = 1089  # run validation every this many steps, here roughly the midpoint of an epoch
    k_fold = 10  # number of folds for cross-validation
    n_epochs = 5  # one pass over all samples is one epoch; typically 3-5 epochs
    lr = 5e-6  # learning rate
    num_labels = 33  # total number of labels

3. Data description
{"text": "Safety and efficacy of intravenous bimagrumab in inclusion body myositis (RESILIENT): a randomised, double-blind, placebo-controlled phase 2b trial\tBimagrumab showed a good safety profile, relative to placebo, in individuals with inclusion body myositis but did not improve 6MWD. The strengths of our study are that, to the best of our knowledge, it is the largest randomised controlled trial done in people with inclusion body myositis, and it provides important natural history data over 12 months.", "entities": [{"entity": "bimagrumab", "type": "Drug", "start": 35, "end": 45}, {"entity": "inclusion body myositis", "type": "Disease", "start": 49, "end": 72}, {"entity": "inclusion body myositis", "type": "Disease", "start": 230, "end": 253}, {"entity": "inclusion body myositis", "type": "Disease", "start": 413, "end": 436}, {"entity": "Bimagrumab", "type": "Drug", "start": 148, "end": 158}, {"entity": "6MWD", "type": "Gene", "start": 274, "end": 278}, {"entity": "pyrimidine nucleoside derivatives", "type": "ChemicalCompound", "start": 273, "end": 306}]}

Each record takes the form above: a dict whose first key, text, holds the English text, and whose second key, entities, holds the entity list; within each entity, entity is the surface form, type is the entity type, and start and end are the start and end offsets of the entity within text.
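Before building features it is worth checking whether the start/end offsets actually reproduce the annotated entity string; a minimal sketch (assuming one JSON record per line, as above) that counts mismatches, which are the entities dropped later during preprocessing:

import json

def check_offsets(path):
    # count entities whose start/end slice does not reproduce the entity string
    bad = 0
    with open(path, 'r', encoding='latin1') as f:
        for line in f:
            record = json.loads(line)
            text = record['text']
            for ent in record['entities']:
                if text[ent['start']:ent['end']] != ent['entity']:
                    bad += 1
    return bad

# check_offsets('data/task1_public/task1_public/new_train.json')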

4. Preparation: convert the raw text data into DataFrames, used both for tensor construction and for cross-validation, handling the train and test files separately. Pretreatment_data processes the training file, caches it as an xlsx file, and builds the training DataFrame. Code below:

def Pretreatment_data(path):
    if not os.path.exists(config.output_dir):
        os.makedirs(config.output_dir)
    if not os.path.isfile(os.path.join(config.output_dir, 'goal.xlsx')):
        with open(path, 'r', encoding='latin1') as f:
            load_dict = f.readlines()
        f.close()
        goal = pd.DataFrame(columns=['text', 'entity'])
        for i in range(len(load_dict)):
            dict_line = json.loads(load_dict[i])
            text = dict_line['text']
            entity_list = dict_line['entities']
            goal = goal.append({'text': text, 'entity': entity_list}, ignore_index=True)
        goal.to_excel(os.path.join(config.output_dir, 'goal.xlsx'), index=False)
    else:
        goal = pd.read_excel(os.path.join(config.output_dir, 'goal.xlsx'))
    return goal

For the test file, since it contains no entities field, a separate Pretreatment_val_data function parses the txt and builds the DataFrame. Code below:

def Pretreatment_val_data(path):
    if not os.path.isfile(os.path.join(config.output_dir, 'val_goal.xlsx')):
        with open(path, 'r', encoding='latin1') as f:
            load_dict = f.readlines()
        f.close()
        goal = pd.DataFrame(columns=['text', 'entity'])
        for i in range(len(load_dict)):
            dict_line = json.loads(load_dict[i])
            text = dict_line['text']
            goal = goal.append({'text': text, 'entity': 'None'}, ignore_index=True)
        goal.to_excel(os.path.join(config.output_dir, 'val_goal.xlsx'), index=False)
    else:
        goal = pd.read_excel(os.path.join(config.output_dir, 'val_goal.xlsx'))
    return goal

5. Labels use a BMEO-style scheme (W/B/M/E plus O). First build the label dictionary (sent_id) that maps each label to a number (label -> num). Code below:

def Pretreatment_sent_id(path):
    if not os.path.isfile(os.path.join(config.output_dir, 'sent_id.txt')):
        with open(path, 'r', encoding='latin1') as f:
            load_dict = f.readlines()
        f.close()
        temp_label = []
        for dic in load_dict:
            dict_line = json.loads(dic)
            # text = dict_line['text']
            entity_list = dict_line['entities']
            for index in range(len(entity_list)):
                temp_label.append(entity_list[index]['type'])
        sent = list(set(temp_label))
        sent_id = []
        for ind in sent:
            sent_id.append('W_' + ind)
            sent_id.append('B_' + ind)
            sent_id.append('M_' + ind)
            sent_id.append('E_' + ind)
        sent_id.append('O')
        tag2idx = {t: i for i, t in enumerate(sent_id)}
        with open(os.path.join(config.output_dir, 'sent_id.txt'), 'w', encoding='utf-8') as f:
            f.write(str(tag2idx))
        f.close()
    else:
        with open(os.path.join(config.output_dir, 'sent_id.txt'), 'r', encoding='utf-8') as f:
            sent_id = eval(f.read())
    return sent_id
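For reference, with entity types such as Drug, Disease, Gene and ChemicalCompound (as in the sample record above), the resulting dictionary looks roughly like the sketch below; the exact indices depend on the order produced by set(), and with 8 entity types there are 8 * 4 + 1 = 33 tags, matching config.num_labels:

# illustrative only -- the real ordering depends on set() and on the training data
{'W_Drug': 0, 'B_Drug': 1, 'M_Drug': 2, 'E_Drug': 3,
 'W_Disease': 4, 'B_Disease': 5, 'M_Disease': 6, 'E_Disease': 7,
 ...
 'O': 32}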

6. The method for aligning labels one-to-one with tokens is as follows:

def label_precess(ent_list, tokens, tokenizer):  # label processing
    tok = tokens.copy()
    lab = ['O'] * len(tok)
    for ent in ent_list:
        pos_list = []
        entity = ent['entity']
        ent_type = ent['type']
        ent_l = len(entity.split(' '))
        for ind, value in enumerate(tok):
            value = value.strip('Ġ')
            if value not in entity:
                pass
            else:
                pos = ind
                par = value
                pos_list.append(pos)
                if par == entity:
                    if ent_l == 1:
                        lab_t = 'W_' + ent_type
                        for po in pos_list:
                            tok[po] = '**'
                            lab[po] = lab_t
                        break
                    else:
                        list_b = []
                        list_e = []
                        list_m = []
                        lab_b = 'B_' + ent_type
                        lab_m = 'M_' + ent_type
                        lab_e = 'E_' + ent_type
                        entity = 'a '+ entity
                        ent_cut = entity.split(' ')
                        ent_b = ent_cut[0:2]  # the string for the B part
                        ent_b = ' '.join(ent_b)
                        ent_e = 'a ' + ent_cut[-1]  # the string for the E part
                        ent_b_par = tokenizer.tokenize(ent_b)
                        ent_b_par_l = len(ent_b_par) -1
                        list_b.extend(pos_list[:ent_b_par_l])
                        ent_e_par = tokenizer.tokenize(ent_e)
                        ent_e_par_l = len(ent_e_par) - 1
                        list_e.extend(pos_list[-1 * ent_e_par_l:])
                        if ent_b_par_l + ent_e_par_l < len(pos_list):
                            list_m.extend(pos_list[ent_b_par_l:-1 * ent_e_par_l])
                            for pos_m in list_m:
                                tok[pos_m] = '**'
                                lab[pos_m] = lab_m
                            for pos_b in list_b:
                                tok[pos_b] = '**'
                                lab[pos_b] = lab_b
                            for pos_e in list_e:
                                tok[pos_e] = '**'
                                lab[pos_e] = lab_e
                            break
                        else:
                            for pos_b in list_b:
                                tok[pos_b] = '**'
                                lab[pos_b] = lab_b
                            for pos_e in list_e:
                                tok[pos_e] = '**'
                                lab[pos_e] = lab_e
                            break
                    # transition handling: overwrite the matched elements in tok to avoid matching them twice, replace the labels in lab, and reset pos_list
                else:
                    while par != entity and par in entity:
                        pos = pos + 1
                        if pos >= len(tok):
                            break  # search ran past the end of the tokens
                        else:
                            value_temp = tok[pos]
                            if value_temp.strip('Ġ') in entity:
                                value = value_temp
                                pos_list.append(pos)
                                if 'Ġ' in value:
                                    value = value.strip('Ġ')
                                    par = ' '.join([par, value])
                                else:
                                    par = ''.join([par, value])
                            else:
                                break
                    if par != entity:
                        pos_list = []
                        continue
                    else:  # transition handling: overwrite the matched elements in tok to avoid matching them twice, replace the labels in lab, and reset pos_list
                        if ent_l == 1:
                            lab_t = 'W_' + ent_type
                            for po in pos_list:
                                tok[po] = '**'
                                lab[po] = lab_t
                            break
                        else:
                            list_b = []
                            list_e = []
                            list_m = []
                            lab_b = 'B_' + ent_type
                            lab_m = 'M_' + ent_type
                            lab_e = 'E_' + ent_type
                            entity = 'a ' + entity
                            ent_cut = entity.split(' ')
                            ent_b = ent_cut[0:2]  # the string for the B part
                            ent_b = ' '.join(ent_b)
                            ent_e = 'a ' + ent_cut[-1]  # the string for the E part
                            ent_b_par = tokenizer.tokenize(ent_b)
                            ent_b_par_l = len(ent_b_par) -1
                            list_b.extend(pos_list[:ent_b_par_l])
                            ent_e_par = tokenizer.tokenize(ent_e)
                            ent_e_par_l = len(ent_e_par) -1
                            list_e.extend(pos_list[-1 * ent_e_par_l:])
                            if ent_b_par_l + ent_e_par_l < len(pos_list):
                                list_m.extend(pos_list[ent_b_par_l:-1 * ent_e_par_l])
                                for pos_m in list_m:
                                    tok[pos_m] = '**'
                                    lab[pos_m] = lab_m
                                for pos_b in list_b:
                                    tok[pos_b] = '**'
                                    lab[pos_b] = lab_b
                                for pos_e in list_e:
                                    tok[pos_e] = '**'
                                    lab[pos_e] = lab_e
                                break
                            else:
                                for pos_b in list_b:
                                    tok[pos_b] = '**'
                                    lab[pos_b] = lab_b
                                for pos_e in list_e:
                                    tok[pos_e] = '**'
                                    lab[pos_e] = lab_e
                                break
    return lab

The code above tackles the main difficulty: making the labels line up one-to-one with the tokens and their ids.
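A minimal usage sketch (assuming the tokenizer from config.MODEL_PATH is already loaded) showing the alignment on a short sentence; the entities here are made up for illustration:

text = 'bimagrumab in inclusion body myositis'
ent_list = [
    {'entity': 'bimagrumab', 'type': 'Drug', 'start': 0, 'end': 10},
    {'entity': 'inclusion body myositis', 'type': 'Disease', 'start': 14, 'end': 37},
]
tokens = tokenizer.tokenize(text)
labels = label_precess(ent_list, tokens, tokenizer)
for tok, lab in zip(tokens, labels):
    print(tok, lab)
# the subword pieces of "bimagrumab" all get W_Drug (single-word entity), the pieces of
# "inclusion body myositis" get B_Disease / M_Disease / E_Disease, and the rest stay 'O'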

7. Feature extraction is illustrated with the training data only, i.e. building the training loader (get_data_loader in the code below); the val and test loaders are built the same way and are not repeated here. First define two custom data structures to hold the data.
The classes are:

class Input_Example(object):
    def __init__(self, text, entity):
        self.text = text
        self.entity_list = entity


class Input_Feature(object):
    def __init__(self, input_ids, segment_ids, input_mask, label):
        self.input_ids = input_ids
        self.segment_ids = segment_ids
        self.input_mask = input_mask
        self.label = label

The Input_Example class stores the two keys read from the text, while Input_Feature holds the converted data, i.e. the ids derived from those keys.
First read the data from the DataFrame:

def read_examples(data):
    # with open(config.train_data_file, 'r', encoding='latin1') as f:
    #     load_dict = f.readlines()
    # f.close()
    examples = []
    list_index = list(data['text'].index)
    for mar in list_index:
        text = data['text'][mar]
        entity_list = eval(str(data['entity'][mar]))
        examples.append(Input_Example(text, entity_list))
    return examples

Then convert the examples into ids:

def convert_examples_to_features(examples, tokenizer, max_length):
    features = []
    for example_index, example in enumerate(examples):
        text_tokens = tokenizer.tokenize(example.text)
        text_tokens = text_tokens[:(max_length - 2)]
        tokens = [tokenizer.cls_token] + text_tokens + [tokenizer.sep_token]  # RoBERTa uses <s>/</s> rather than BERT's [CLS]/[SEP]
        segment_ids = [0] * (len(text_tokens) + 2)
        input_ids = tokenizer.convert_tokens_to_ids(tokens)
        input_mask = [1] * len(input_ids)
        padding_length = max_length - len(input_ids)
        input_ids += ([0] * padding_length)
        input_mask += ([0] * padding_length)
        segment_ids += ([0] * padding_length)
        if example.entity_list is not None:
            entity_list = clear_entity(example)
            label = label_precess(entity_list, text_tokens, tokenizer)
            label = label[:(max_length - 2)]
            label.append('O')
            label.insert(0, 'O')
            label += (['O'] * padding_length)
            label = [sent_id.get(mar) for mar in label]
        else:
            label = None
        features.append(Input_Feature(input_ids, segment_ids, input_mask, label))
    return features

Then convert everything into tensors, as shown below:

def get_data_loader(data, tokenizer, maxlen):
    train_examples = read_examples(data)
    train_features = convert_examples_to_features(train_examples, tokenizer, maxlen)
    all_input_ids = torch.tensor([f.input_ids for f in train_features], dtype=torch.long)  # input_ids
    input_mask = torch.tensor([f.input_mask for f in train_features], dtype=torch.long)  # mask
    segment_ids = torch.tensor([f.segment_ids for f in train_features], dtype=torch.long)  # seg
    all_label = torch.tensor([f.label for f in train_features], dtype=torch.long)  # label
    train_data = TensorDataset(all_input_ids, input_mask, segment_ids, all_label)  # pack the 4 tensors into one dataset (data source)
    train_sampler = RandomSampler(train_data)  # randomly sample from the dataset (sampler)
    train_dataloader = DataLoader(train_data, sampler=train_sampler,
                                  batch_size=config.batch_size)
    return train_dataloader

This completes the data construction stage. Some preprocessing is only sketched here, for example entities whose recorded positions do not match the original text are dropped.
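The clear_entity function called in convert_examples_to_features is not reproduced in this post; a minimal sketch of what it is assumed to do (keep only entities whose recorded offsets reproduce the surface form in the text) could look like this:

def clear_entity(example):
    # hypothetical reconstruction: drop entities whose start/end slice
    # does not match the annotated entity string
    cleaned = []
    for ent in example.entity_list:
        if example.text[ent['start']:ent['end']] == ent['entity']:
            cleaned.append(ent)
    return cleaned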

8. Model training and the evaluation setup
First split the data for cross-validation:

if __name__ == '__main__':
    config = Config()
    tokenizer = RobertaTokenizer.from_pretrained(config.MODEL_PATH)
    config_bert = RobertaConfig.from_pretrained(config.MODEL_PATH, num_labels=config.num_labels, output_hidden_states=True)
    train_df = Pretreatment_data(config.train_data_file)
    sent_id = Pretreatment_sent_id(config.train_data_file)
    kfold = KFold(n_splits=config.k_fold, shuffle=True)
    index = kfold.split(X=train_df)
    for k, (train_index, test_index) in enumerate(index):
        model = RobertaForTokenClassification.from_pretrained(config.MODEL_PATH, config=config_bert)
        train_data = train_df.iloc[train_index, :]
        test_data = train_df.iloc[test_index, :]
        train_data_loader = get_data_loader(train_data, tokenizer, config.max_len)
        test_data_loader = get_data_loader(test_data, tokenizer, config.max_len)
        val_data_loader = get_val_loader(config.test_data_file, tokenizer, config.max_len)
        bert_filter = BERT_filter(config, model, num=k)
        bert_filter.fit(train_data_loader, test_data_loader)
        bert_filter.run_inference(val_data_loader)  # the actual test set
        bert_filter.run_valid(test_data_loader)  # the actual validation set
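The get_val_loader function is not shown here; a rough sketch under the assumption that it mirrors get_data_loader but reads the unlabeled test file, builds no label tensor, and keeps the original order:

def get_val_loader(path, tokenizer, maxlen):
    # hypothetical reconstruction of the loader for the unlabeled test file
    val_df = Pretreatment_val_data(path)
    examples = read_examples(val_df)  # the 'entity' column holds the string 'None', so eval() gives None
    features = convert_examples_to_features(examples, tokenizer, maxlen)
    all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long)
    input_mask = torch.tensor([f.input_mask for f in features], dtype=torch.long)
    segment_ids = torch.tensor([f.segment_ids for f in features], dtype=torch.long)
    val_data = TensorDataset(all_input_ids, input_mask, segment_ids)  # no labels for the test file
    val_sampler = SequentialSampler(val_data)  # keep the original order so predictions can be written back out
    return DataLoader(val_data, sampler=val_sampler, batch_size=config.batch_size)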

The structure of the BERT_filter wrapper class is:

class BERT_filter(object):
    def __init__(self, config, model, num):
        if not os.path.exists(config.output_dir):
            os.makedirs(config.output_dir)
        self.config = config
        self.best_score = 0
        self.epoch = 0
        self.log_path = f'{config.output_dir}/log.txt'
        self.log(get_config_str(config))  # the log helper is filled in later (see the sketch below)
        self.model = model
        self.device = 'cpu'  # switch to cuda later
        self.model.to('cpu')  # likewise switch to cuda
        # self.device = 'cuda'  # switch to cuda later
        # self.model.to('cuda')  # likewise switch to cuda
        self.k = num

        param_optimizer = list(self.model.named_parameters())  # collect all named model parameters into a list
        no_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight']
        optimizer_grouped_parameters = [
            {'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)], 'weight_decay': 0.001},
            {'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], 'weight_decay': 0.0}  # no weight decay for bias/LayerNorm parameters
        ]

        self.optimizer = AdamW(optimizer_grouped_parameters, lr=config.lr)  # use AdamW as the optimizer
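The log, save and get_config_str helpers used above are not included in the post; minimal sketches, assuming log appends to the log file, save stores only the model weights, and get_config_str dumps the Config attributes, might be:

def get_config_str(config):
    # hypothetical helper: dump the non-private Config attributes as one string
    return '\n'.join(f'{k}: {v}' for k, v in vars(type(config)).items() if not k.startswith('_'))

# the following two are methods of BERT_filter (sketch)
def log(self, message):
    # append a message to the run log
    with open(self.log_path, 'a', encoding='utf-8') as f:
        f.write(message)

def save(self, path):
    # persist only the model weights
    torch.save(self.model.state_dict(), path)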

The training and validation methods are then built inside BERT_filter. Only the training code is shown; the validation code is similar (a sketch is given further below).

def fit(self, train_loader, test_loader):
    for e in range(self.config.n_epochs):
        print(f'Epoch:{e + 1}/{self.config.n_epochs}')
        lr = self.optimizer.param_groups[0]['lr']
        timestamp = datetime.utcnow().isoformat()
        self.log(f'\n{timestamp}\nLR:{lr}\n')

        t = time.time()
        final_scores = self.train_one_epoch(train_loader,test_loader)
        self.log(
            f'[RESULT]: Train. Epoch: {self.epoch}, final_score: {final_scores.avg:.5f}, time: {(time.time() - t):.5f}\n')

        t = time.time()
        final_scores = self.validation(test_loader)  # returns the F1 score on the validation fold
        self.log(
            f'[RESULT]: Validation. Epoch: {self.epoch}, final_score: {final_scores.avg:.5f}, time: {(time.time() - t):.5f}\n')
        self.epoch += 1
        if final_scores.avg > self.best_score:  # if the validation score beats the best so far, log the improvement and save the model
            self.log(f'final_score improved from {self.best_score} to {final_scores.avg}, save model')
            print(f'final_score improved from {self.best_score} to {final_scores.avg}, save model')
            self.best_score = final_scores.avg
            self.save(f'{self.config.output_dir}/best_model{self.k}.bin')
        else:
            self.log(f'final_score did not improve from {self.best_score}')
            print(f'final_score did not improve from {self.best_score}')

def train_one_epoch(self,train_loader,test_loader):
    self.model.train()  # switch to training mode
    t = time.time()
    final_scores = F1_score()
    losses = AverageLoss()
    bar = tqdm(range(len(train_loader)))
    for step,(input_ids,input_mask,segment_ids,label) in zip(bar,train_loader):
        input_ids = input_ids.to(self.device)
        input_mask = input_mask.to(self.device)
        segment_ids = segment_ids.to(self.device)
        label = label.to(self.device)  # move the tensors to the computation device
        outputs = self.model(input_ids=input_ids, token_type_ids=segment_ids, attention_mask=input_mask,
                             labels=label)
        loss = outputs[0]
        logits = outputs[1]
        batch_size = input_ids.size(0)
        losses.update(loss.detach().item(),batch_size)
        final_scores.update(label,logits)
        bar.set_description(
            f'loss:{round(losses.avg, 4)};macro_F1:{round(final_scores.avg, 4)}')
        loss.backward()                                 # backpropagate to compute the current gradients
        if (step+1) % self.config.backward_step == 0:
            self.optimizer.step()                       # update the network from the gradients
            self.optimizer.zero_grad()                  # clear the accumulated gradients

        if (step + 1) % self.config.ckpt_step == 0:  # run validation every ckpt_step training steps
            self.log(f'run validation\n')
            val_final_scores = self.validation(test_loader)
            self.log(
                f'[RESULT]: Validation. step: {step}, final_score: {val_final_scores.avg:.5f}, time: {(time.time() - t):.5f}\n')
            if val_final_scores.avg > self.best_score:  # save the model and record the score if the validation score improves
                self.log(
                    f'Step:{step} final_score improved from {self.best_score} to {val_final_scores.avg}, save model\n')
                print(
                    f'Step:{step} final_score improved from {self.best_score} to {val_final_scores.avg}, save model\n')
                self.best_score = val_final_scores.avg
                self.save(f'{self.config.output_dir}/best_model{self.k}.bin')
            else:
                self.log(f'final_score did not improve from {self.best_score}')
                print(f'final_score did not improve from {self.best_score}')
        if step % self.config.verbose_step == 0:  # log the training loss and current F1 every verbose_step steps
            self.log(
                f'Train Step {step}, loss: {loss.item()}' + \
                f'final_score: {final_scores.avg:.5f}, ' + \
                f'time: {(time.time() - t):.5f}\n'
            )
    self.model.eval()
    return final_scores

The training loop itself is in train_one_epoch, while the top-level method fit calls it and interleaves the validation step.
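The validation method itself is not listed; a rough sketch under the assumption that it mirrors train_one_epoch with forward passes only (no backward step, no optimizer update), accumulating the same F1 meter:

def validation(self, test_loader):
    # hypothetical sketch: same forward pass as training, but under no_grad
    self.model.eval()
    final_scores = F1_score()
    with torch.no_grad():
        for input_ids, input_mask, segment_ids, label in test_loader:
            input_ids = input_ids.to(self.device)
            input_mask = input_mask.to(self.device)
            segment_ids = segment_ids.to(self.device)
            label = label.to(self.device)
            outputs = self.model(input_ids=input_ids, token_type_ids=segment_ids,
                                 attention_mask=input_mask, labels=label)
            final_scores.update(label, outputs[1])  # outputs[1] holds the logits
    self.model.train()
    return final_scores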

9. The remaining pieces are specific to this project and are not expanded here. The evaluation metric is a running (smoothed) macro F1:

class F1_score(object):
    def __init__(self):
        self.reset()

    def reset(self):
        self.y_true = []
        self.y_pred = []
        self.score = 0

    def update(self, y_true, y_pred):
        logits = y_pred.detach().cpu().numpy()
        y_pred = np.argmax(logits, axis=2).tolist()
        y_pred = list(chain(*y_pred))
        y_true = y_true.cpu().numpy().tolist()
        y_true = list(chain(*y_true))
        self.y_true.extend(y_true)
        self.y_pred.extend(y_pred)
        self.score = classification_report(self.y_true, self.y_pred, output_dict = True)['macro avg']['f1-score']


    @property
    def avg(self):
        return self.score

class AverageLoss(object):
    """Computes and stores the average and current value"""

    def __init__(self):
        self.reset()

    def reset(self):
        self.val = 0
        self.avg = 0
        self.sum = 0
        self.count = 0

    def update(self, val, n=1):
        self.val = val
        self.sum += val * n
        self.count += n
        self.avg = self.sum / self.count
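A tiny usage sketch of the two meters (shapes and values are made up for illustration):

meter = F1_score()
logits = torch.randn(2, 5, config.num_labels)          # (batch, seq_len, num_labels)
labels = torch.randint(0, config.num_labels, (2, 5))   # token-level gold labels
meter.update(labels, logits)
print(meter.avg)        # running macro F1 over everything seen so far

loss_meter = AverageLoss()
loss_meter.update(0.7, n=4)   # batch loss 0.7 averaged over 4 samples
loss_meter.update(0.5, n=4)
print(loss_meter.avg)         # 0.6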

10. Summary. For English NER, the biggest difficulty overall is aligning the labels with the ids produced by tokenization, which this post has addressed. Extracting entities from predictions can be built by mirroring the labeling logic. The data comes from the 2020 CONV competition, which is still running; the current F1 score is 76% (not final). There are still modeling tricks left to try, such as a CRF layer, adjusting sample weights, or weighting the model's layers; the full code will be shared once the competition ends. The aim here is to share the experience and record the leaderboard run.
