When using PyTorch/transformers to load a local RoBERTa model, the following OSError is always raised:
OSError: Model name './chinese_roberta_wwm_ext_pytorch' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed './chinese_roberta_wwm_ext_pytorch' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.
Solution: load the model with BertTokenizer and BertModel instead of RobertaTokenizer/RobertaModel. The chinese_roberta_wwm_ext checkpoint is a BERT-architecture model that ships a vocab.txt, not the vocab.json/merges.txt files the RoBERTa tokenizer expects, which is why the error above mentions those file names.
If you need RobertaForQuestionAnswering, the following also works, but it emits a warning about uninitialized weights.
import torch
from transformers import BertTokenizer, RobertaForQuestionAnswering
from transformers import logging

# Optionally raise the logging threshold to silence the weight-initialization warning:
# logging.set_verbosity_error()

# The tokenizer must be BertTokenizer, because the checkpoint only provides vocab.txt
tokenizer = BertTokenizer.from_pretrained("./chinese_roberta_wwm_ext_pytorch")
roberta = RobertaForQuestionAnswering.from_pretrained("./chinese_roberta_wwm_ext_pytorch", return_dict=True)
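The warning-free path recommended above can be sketched as follows. This is a minimal sketch assuming the local directory ./chinese_roberta_wwm_ext_pytorch from the error message exists and contains the checkpoint; the sample sentence is only illustrative:

```python
import torch
from transformers import BertTokenizer, BertModel

# chinese_roberta_wwm_ext is a BERT-architecture checkpoint,
# so the BERT classes load it without any warnings
tokenizer = BertTokenizer.from_pretrained("./chinese_roberta_wwm_ext_pytorch")
model = BertModel.from_pretrained("./chinese_roberta_wwm_ext_pytorch")

# Encode a sample sentence and run a forward pass
inputs = tokenizer("测试文本", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state has shape (batch_size, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)
```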