[Deep Learning] How to load the model to cpu and gpu

1. Use from_pretrained method

In this case, BertModel.from_pretrained() loads the model on the CPU by default (internally, map_location defaults to the CPU). If you want to run the model on the GPU, execute the following three lines.

model = BertModel.from_pretrained(model_path)
device = torch.device('cuda')
model.to(device)
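The three lines above can be sketched end to end with a small torch.nn module standing in for BERT (an assumption, to keep the example self-contained and avoid downloading a checkpoint), with a guard so it also runs on CPU-only machines:

```python
import torch
import torch.nn as nn

# Stand-in for BertModel.from_pretrained(): any nn.Module moves the same way.
model = nn.Linear(4, 2)

# Pick the GPU when one is present, otherwise stay on the CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)  # moves parameters and buffers in place

# All parameters now live on the chosen device.
print(next(model.parameters()).device)
```

Note that Module.to(device) modifies the module in place, so reassigning the result is optional for modules (unlike for plain tensors, where .to() returns a new tensor).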

2. Use the load_state_dict method

  • If you do not pass map_location, torch.load restores each tensor to the device it was saved from. That is,

    • if the model was saved on the GPU, it is also loaded onto the GPU;
    • if the model was saved on the CPU, it is also loaded onto the CPU.
  • If you pass map_location

    • you can specify the target device. To deploy on the GPU, you do not need to modify the first line below; just add the last two lines.
state_dict = torch.load(model_path, map_location='cpu')
# to deploy on the GPU, change the above to map_location='cuda'

model.load_state_dict(state_dict)
# loaded on the CPU; the two lines below then move the model to the GPU

device = torch.device('cuda')
model.to(device)
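A self-contained round trip of the snippet above, with a toy model and checkpoint path standing in for the real ones (both assumptions). map_location='cpu' is what makes a GPU-saved checkpoint loadable on a CPU-only machine:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)            # toy stand-in for the real model
model_path = 'model.pt'            # hypothetical checkpoint path
torch.save(model.state_dict(), model_path)

# Force every tensor onto the CPU regardless of where it was saved.
state_dict = torch.load(model_path, map_location='cpu')
model.load_state_dict(state_dict)

# Optionally move to the GPU afterwards, if one is available.
if torch.cuda.is_available():
    model.to(torch.device('cuda'))
```

Saving only the state_dict (rather than the whole model object) and reloading it with an explicit map_location is the portable pattern: the checkpoint then works on any machine, with or without a GPU.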



Reprinted from: blog.csdn.net/qq_51392112/article/details/130495092