Pitfalls when switching from pytorch_pretrained_bert to transformers

This post walks through the migration using text classification as the running example.

Steps

1. In the forward pass, the pytorch_pretrained_bert code was the following:

        _, pooled = self.bert(context, token_type_ids=types,
                              attention_mask=mask,
                              output_all_encoded_layers=False)

This raises an error:

    result = self.forward(*input, **kwargs)
TypeError: forward() got an unexpected keyword argument 'output_all_encoded_layers'
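
The cause: transformers removed the output_all_encoded_layers argument; per-layer hidden states are now requested with output_hidden_states instead. A minimal sketch of the replacement, assuming transformers 4.x and the bert-base-chinese checkpoint (both are illustrative assumptions, not from the original post):

    import torch
    from transformers import BertModel

    bert = BertModel.from_pretrained("bert-base-chinese")
    # Dummy ids ([CLS] [UNK] [SEP]) just to make the sketch runnable.
    input_ids = torch.tensor([[101, 100, 102]])

    outputs = bert(input_ids, output_hidden_states=True)
    print(outputs.last_hidden_state.shape)  # torch.Size([1, 3, 768])
    print(len(outputs.hidden_states))       # 13: embeddings + 12 encoder layers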

2. Remove output_all_encoded_layers=False

Like so:

        _, pooled = self.bert(context, token_type_ids=types,
                              attention_mask=mask)

This raises another error:

    if input.dim() == 2 and bias is not None:
AttributeError: 'str' object has no attribute 'dim'
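
Why this happens: since transformers v4.0 the model returns a ModelOutput by default (return_dict=True), and tuple-unpacking a ModelOutput iterates over its keys. So pooled ends up holding the string 'pooler_output', which then crashes inside the classifier's nn.Linear. A small sketch demonstrating the trap, under the same assumptions as above:

    import torch
    from transformers import BertModel

    bert = BertModel.from_pretrained("bert-base-chinese")
    input_ids = torch.tensor([[101, 100, 102]])

    first, second = bert(input_ids)  # unpacks the dict-like ModelOutput
    print(first, second)             # last_hidden_state pooler_output -- strings!
    # Passing the string 'pooler_output' into nn.Linear then raises
    # AttributeError: 'str' object has no attribute 'dim'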

3. Add return_dict=False

Like so:

        _, pooled = self.bert(context, token_type_ids=types,
                              attention_mask=mask,
                              return_dict=False)

Problem solved.
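
For reference, here is a complete minimal sketch of the fixed module; the class name, checkpoint, and number of classes are illustrative assumptions, not from the original post:

    import torch.nn as nn
    from transformers import BertModel

    class BertClassifier(nn.Module):
        def __init__(self, num_classes=10):
            super().__init__()
            self.bert = BertModel.from_pretrained("bert-base-chinese")
            self.fc = nn.Linear(self.bert.config.hidden_size, num_classes)

        def forward(self, context, types, mask):
            # return_dict=False restores the old tuple-style output, so the
            # two-value unpacking works as it did in pytorch_pretrained_bert.
            _, pooled = self.bert(context, token_type_ids=types,
                                  attention_mask=mask, return_dict=False)
            return self.fc(pooled)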

Note: if the problem still isn't solved, try the following three variants (see the version notes after the list):

_, cls_hs = self.bert(sent_id, attention_mask=mask)

_, cls_hs = self.bert(sent_id, attention_mask=mask)[:2]

_, cls_hs = self.bert(sent_id, attention_mask=mask, return_dict=False)
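
Version notes: the first variant only behaves as expected on transformers < 4.0, where return_dict still defaulted to False; the [:2] slice and the return_dict=False call also work on 4.x. An alternative not in the original post is to keep the dict-style output and read the fields by name, which is the idiomatic transformers style (this fragment reuses the same self.bert, sent_id, and mask names as the snippets above):

    outputs = self.bert(sent_id, attention_mask=mask)
    cls_hs = outputs.pooler_output      # pooled [CLS] vector
    # outputs.last_hidden_state gives the per-token hidden states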

Reposted from blog.csdn.net/yjh_SE007/article/details/117878617