Determining and specifying whether PyTorch models and data are on the GPU or the CPU

Sometimes you need to check whether a model or its data is on the GPU or the CPU, or you need to move a model or data onto a specific device. How do you do that?

  • 1. Determine whether the model is on the GPU or the CPU
import torch.nn as nn

model = nn.LSTM(input_size=10, hidden_size=4, num_layers=1, batch_first=True)
# A module has no .device attribute, so inspect one of its parameters
print(next(model.parameters()).device)
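Checking a single parameter only tells you where that one tensor lives. As a minimal sketch, you can also confirm that every parameter of the model sits on the same device:

```python
import torch
import torch.nn as nn

model = nn.LSTM(input_size=10, hidden_size=4, num_layers=1, batch_first=True)

# next(model.parameters()).device reports only the first parameter's device;
# for a thorough check, verify all parameters share that device.
first_device = next(model.parameters()).device
on_one_device = all(p.device == first_device for p in model.parameters())
print(first_device, on_one_device)
```

For a freshly constructed model, all parameters start on the CPU, so this prints `cpu True`.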
  • 2. Determine whether the data is on the GPU or the CPU
import torch

data = torch.ones([2, 3])
print(data.device)
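Besides `.device`, a tensor also carries a boolean `is_cuda` flag, which is handy when you only need a yes/no answer. A short sketch:

```python
import torch

data = torch.ones([2, 3])

# Tensors expose both a .device attribute and an is_cuda flag.
print(data.device)   # device the tensor lives on, e.g. cpu
print(data.is_cuda)  # False for a freshly created tensor
```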
  • 3. Place the model on the GPU or the CPU
model = nn.LSTM(input_size=10, hidden_size=4, num_layers=1, batch_first=True)
# Move the model to the GPU
model = model.cuda()
# Move the model back to the CPU
model = model.cpu()
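Note that calling `.cuda()` on a machine without a GPU raises a `RuntimeError`. A minimal sketch that guards the call so the same script runs anywhere:

```python
import torch
import torch.nn as nn

model = nn.LSTM(input_size=10, hidden_size=4, num_layers=1, batch_first=True)

# .cuda() fails if no GPU is present, so guard it with torch.cuda.is_available().
if torch.cuda.is_available():
    model = model.cuda()

# .cpu() is always safe; it is a no-op if the model is already on the CPU.
model = model.cpu()
print(next(model.parameters()).device)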
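Note that calling `.cuda()` on a machine without a GPU raises a `RuntimeError`. A minimal sketch that guards the call so the same script runs anywhere:

```python
import torch
import torch.nn as nn

model = nn.LSTM(input_size=10, hidden_size=4, num_layers=1, batch_first=True)

# .cuda() fails if no GPU is present, so guard it with torch.cuda.is_available().
if torch.cuda.is_available():
    model = model.cuda()

# .cpu() is always safe; it is a no-op if the model is already on the CPU.
model = model.cpu()
print(next(model.parameters()).device)
```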
  • 4. Place the data on the GPU or the CPU
data = torch.ones([2, 3])
# Move the data to the GPU
data = data.cuda()
# Move the data back to the CPU
data = data.cpu()
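A common device-agnostic alternative to calling `.cuda()`/`.cpu()` directly is to pick the device once and then use `.to(device)`, which works for both tensors and models. A short sketch:

```python
import torch

# Select the device once; the same script then runs on GPU or CPU machines.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

data = torch.ones([2, 3]).to(device)
print(data.device)

# Moving back to the CPU uses the same call:
data = data.to("cpu")
print(data.device)
```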

Reference: https://www.cnblogs.com/picassooo/p/13736843.html

Origin blog.csdn.net/m0_46483236/article/details/123942225