How the model is loaded onto the CPU and GPU

  1. Use the from_pretrained method. Under normal circumstances, BertModel.from_pretrained() loads the model on the CPU, because the internal map_location defaults to the CPU. If you want to deploy it on the GPU, execute the following three lines (see the fuller sketch after the code).

model = BertModel.from_pretrained(model_name_or_path)
device = torch.device('cuda')
model.to(device)
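
As a fuller, self-contained sketch of item 1 (the model identifier 'bert-base-uncased' is only an assumed example; substitute your own checkpoint):

import torch
from transformers import BertModel

# from_pretrained loads the weights onto the CPU by default
model = BertModel.from_pretrained('bert-base-uncased')

# Move the model to the GPU if one is available
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)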
  2. Use the load_state_dict method to load the model. Here you can specify where the model is deployed. To deploy to the GPU, you do not need to modify the first line; just add the last two lines (the device and model.to(device) lines), as in the sketch after the code.

state_dict = torch.load(model_path, map_location='cpu')
# To deploy to the GPU, change map_location above to 'cuda'
model.load_state_dict(state_dict)
# Already loaded on the CPU; the following two lines can also move the model to the GPU
device = torch.device('cuda')
model.to(device)
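
A fuller sketch of item 2, loading a checkpoint straight onto the GPU (the BertModel/BertConfig classes and the 'pytorch_model.bin' path are assumptions for illustration; use your own model and checkpoint):

import torch
from transformers import BertConfig, BertModel

# Build a model skeleton whose architecture matches the checkpoint
model = BertModel(BertConfig())

# Map the checkpoint tensors directly onto the GPU while loading
device = torch.device('cuda')
state_dict = torch.load('pytorch_model.bin', map_location=device)  # assumed checkpoint path
model.load_state_dict(state_dict)
model.to(device)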

Original post: blog.csdn.net/M_TDM/article/details/129436122