1. View the device where a tensor is located:
data = data.cuda()  # move the tensor to the GPU
print(data.device)  # output: cuda:0
data = data.cpu()   # move the tensor to the CPU
print(data.device)  # output: cpu
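Put together, a minimal runnable sketch of the tensor round trip above, guarded with torch.cuda.is_available() so it also runs on a CPU-only machine:

```python
import torch

data = torch.randn(2, 3)          # a fresh tensor starts on the CPU
print(data.device)                # cpu

if torch.cuda.is_available():
    data = data.cuda()            # move the tensor to the default GPU
    print(data.device)            # cuda:0
    data = data.cpu()             # move it back to the CPU
    print(data.device)            # cpu
```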
2. View the device where a model is located:
model = model.cuda()  # move the model to the GPU
print(next(model.parameters()).device)  # output: cuda:0
model = model.cpu()   # move the model to the CPU
print(next(model.parameters()).device)  # output: cpu
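The same check works for any nn.Module; a self-contained sketch using a small nn.Linear as a stand-in model (the layer sizes are arbitrary):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)                      # a toy model; its parameters start on the CPU
print(next(model.parameters()).device)       # cpu

if torch.cuda.is_available():
    model.cuda()                             # moves all parameters to the GPU
    print(next(model.parameters()).device)   # cuda:0
    model.cpu()                              # moves them back to the CPU
    print(next(model.parameters()).device)   # cpu
```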
3. There are two common ways to load models and tensors onto the GPU in PyTorch.
Method 1:
# if a GPU is available, move the model and tensors onto it
if torch.cuda.is_available():
    model = model.cuda()
    x = x.cuda()
    y = y.cuda()
Method 2:
# pick the GPU if available, otherwise fall back to the CPU
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# move the model to the device
model = model.to(device)
# move the tensors to the device
x = x.to(device)
y = y.to(device)
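One difference between the two calls worth knowing: for a module, .to(device) moves the parameters in place (the reassignment is optional), while for a tensor it returns a new tensor and the reassignment is required. A small sketch, using a toy nn.Linear:

```python
import torch
import torch.nn as nn

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

model = nn.Linear(4, 2)
model.to(device)    # in place for modules; reassignment not strictly needed
print(next(model.parameters()).device)

x = torch.randn(1, 4)
x.to(device)        # returns a new tensor; x itself is unchanged
print(x.device)     # still cpu
x = x.to(device)    # the assignment is what actually moves x
print(x.device)
```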
4. Specifying which GPU to use
# option 1:
torch.cuda.set_device(1)
# option 2:
device = torch.device("cuda:1")
# option 3 (officially recommended):
os.environ["CUDA_VISIBLE_DEVICES"] = '1'
# to use two GPUs at the same time:
os.environ["CUDA_VISIBLE_DEVICES"] = '1,2'
Reference link: Select the specified GPU in PyTorch
Note that the GPU-selection code must be placed at the very beginning of the program, before any CUDA call is made.
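Concretely, CUDA_VISIBLE_DEVICES only takes effect if it is set before CUDA is first initialized, so set it at the top of the script (or in the shell before launching). A sketch of the intended ordering; the GPU indices here are examples:

```python
import os

# must run before the first CUDA call (ideally before importing torch)
os.environ["CUDA_VISIBLE_DEVICES"] = "1,2"

import torch

# inside the process the visible GPUs are renumbered from 0,
# so physical GPU 1 becomes cuda:0 and physical GPU 2 becomes cuda:1
if torch.cuda.is_available():
    print(torch.cuda.device_count())
```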
5. View the number of GPUs
torch.cuda.device_count()
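A quick sketch that reports the GPU count and, when GPUs are present, their names via torch.cuda.get_device_name:

```python
import torch

n = torch.cuda.device_count()   # 0 on a CPU-only machine
print(f"GPUs available: {n}")
for i in range(n):
    print(f"cuda:{i} -> {torch.cuda.get_device_name(i)}")
```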