Fixing training that uses only a single GPU

Problem: training runs on only one GPU while the other sits idle, and it is not obvious what to adjust. Four changes make PyTorch use every available GPU (a combined sketch follows the list):

(1) batch_size must be at least the number of available GPUs, because DataParallel splits each batch across them.

(2) device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    # "cuda:0" means the starting device id is 0; plain "cuda" also defaults to device 0.
    # Change the index to start from a different GPU, e.g. "cuda:1".

(3) if torch.cuda.device_count() > 1:  # number of GPUs available on this machine
        model = torch.nn.DataParallel(model)  # if more than one, replicate the model and split each batch across them

(4) model.to(device)  # put the model on the chosen GPU (or the CPU)
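
Putting the four steps together, here is a minimal runnable sketch; the linear model, random data, and hyperparameters are placeholders for illustration, not from the original post:

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    # (2) pick the starting device; "cuda:0" is GPU 0
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

    model = nn.Linear(10, 2)  # placeholder model

    # (3) with more than one GPU, wrap the model so each batch is split across them
    if torch.cuda.device_count() > 1:
        model = nn.DataParallel(model)

    # (4) move the model (or its DataParallel wrapper) to the chosen device
    model = model.to(device)

    # (1) batch_size should be >= the number of GPUs so every replica gets data
    dataset = TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,)))
    loader = DataLoader(dataset, batch_size=8)

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    for inputs, labels in loader:
        # input batches also go to the device; DataParallel scatters them
        # across the GPUs and gathers the outputs back on cuda:0
        inputs, labels = inputs.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(inputs), labels)
        loss.backward()
        optimizer.step()

Note that DataParallel gathers outputs on the first GPU, so memory use is uneven across devices; the PyTorch documentation recommends DistributedDataParallel for serious multi-GPU training, but DataParallel is the smallest change to existing single-GPU code.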

Source: blog.csdn.net/gz153016/article/details/108524253