How to use a pre-trained BERT model as an embedding layer in PyTorch

1. Install the pytorch_pretrained_bert package (the -i flag points pip at the Tsinghua PyPI mirror and can be omitted):

pip install pytorch_pretrained_bert==0.6.2 -i https://pypi.tuna.tsinghua.edu.cn/simple
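A quick sanity check that the install worked (my own check, not part of the original post):

python -c "from pytorch_pretrained_bert import BertModel, BertTokenizer; print('import ok')"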

2. Download the pre-trained bert-base-chinese model into a local directory (it typically needs vocab.txt, bert_config.json and pytorch_model.bin).
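If you would rather not fetch the files by hand, from_pretrained also accepts the model name itself, in which case the library downloads and caches everything on first use (network access required). A minimal sketch:

from pytorch_pretrained_bert import BertTokenizer, BertModel

# First call downloads vocab and weights to a local cache; later calls reuse it.
tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
bert = BertModel.from_pretrained('bert-base-chinese')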

3. Code example:

import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('./bert-base-chinese', do_lower_case=True)
bert = BertModel.from_pretrained('./bert-base-chinese')
bert.eval()                      # inference mode: disables dropout

text = '我爱北京天安门。'
text = tokenizer.tokenize(text)  # character-level tokens; note no [CLS]/[SEP] are added
print(text)                      # ['我', '爱', '北', '京', '天', '安', '门', '。']
text_id = tokenizer.convert_tokens_to_ids(text)
print(text_id)                   # [2769, 4263, 1266, 776, 1921, 2128, 7305, 511]
text_id = torch.tensor(text_id, dtype=torch.long)
text_id = text_id.unsqueeze(dim=0)          # add a batch dimension: [8] -> [1, 8]
print(text_id)                   # tensor([[2769, 4263, 1266,  776, 1921, 2128, 7305,  511]])
output = bert(text_id)[0]        # list of hidden states, one tensor per encoder layer
print(len(output))               # 12 layers
text_embedding = output[0]                  # take the first layer; any of the 12 works
text_embedding = text_embedding.detach()    # detach so no gradients flow back into BERT
print(text_embedding.shape)                 # torch.Size([1, 8, 768])
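The result above is one 768-dimensional vector per token. If you need a single sentence-level vector, one common option is to mean-pool over the token axis; the pooling choice here is an illustration, not something the original post prescribes:

# text_embedding has shape [1, 8, 768]: batch, tokens, hidden size.
sentence_vec = text_embedding.mean(dim=1)   # average over the 8 tokens
print(sentence_vec.shape)                   # torch.Size([1, 768])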
