Note | PyTorch

Load Model

Automatically find the GPU with the most free memory, then load the model onto that GPU:

import os
import numpy as np
import torch

# Pick the GPU with the most free memory (parsed from nvidia-smi output)
os.system('nvidia-smi -q -d Memory | grep -A4 GPU | grep Free > tmp')
memory_gpu = [int(x.split()[2]) for x in open('tmp', 'r').readlines()]
dev = torch.device("cuda:" + str(np.argmax(memory_gpu)))
print(dev)

# Load the checkpoint weights directly onto the selected device
model.load_state_dict(torch.load(os.path.join(dir_model, "model_" + str(index_model) + ".pt"), map_location=dev))
model.to(dev)
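
If a recent PyTorch release is available, the same selection can be done without shelling out to nvidia-smi, using torch.cuda.mem_get_info. This is a minimal alternative sketch; the variable names are illustrative:

import torch

# Query (free, total) memory in bytes for each visible GPU and pick the freest one
free_memory = [torch.cuda.mem_get_info(i)[0] for i in range(torch.cuda.device_count())]
dev = torch.device("cuda:" + str(max(range(len(free_memory)), key=lambda i: free_memory[i])))
print(dev)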

Pitfalls

Abnormal loss

  • The final CNN layer used a nonlinear activation function (ReLU), which caused the output to hover near 0 and produced an abnormal loss; see the sketch below.
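
A minimal sketch of the fix, with illustrative channel counts (not from the original post): keep the final layer linear and let the loss function work on the raw outputs.

import torch.nn as nn

# Problematic head: ReLU after the last conv clips all negative outputs to 0,
# so the network output (and the loss) can get stuck hovering around 0
head_bad = nn.Sequential(nn.Conv2d(64, 10, kernel_size=1), nn.ReLU())

# Better: no activation on the final layer; losses such as nn.CrossEntropyLoss
# expect raw, unbounded logits
head_ok = nn.Sequential(nn.Conv2d(64, 10, kernel_size=1))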


Origin www.cnblogs.com/RyanXing/p/11600382.html