View local or remote server GPUs and how to use them with PyTorch

1. View detailed GPU information

nvidia-smi  # show GPU model, driver version, memory usage and other details

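If only a few fields are needed rather than the full table, nvidia-smi also has a machine-readable query mode. Below is a minimal sketch that calls it from Python via subprocess; the chosen fields (name, memory.total, driver_version) are just examples, not part of the original post.

import subprocess

# Ask nvidia-smi for selected fields in CSV form, one line per GPU
result = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,memory.total,driver_version",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
for line in result.stdout.strip().splitlines():
    print(line)  # each line looks like "<GPU name>, <total memory>, <driver version>"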

2. Check the local or remote server's GPUs with PyTorch

import torch
print(torch.cuda.is_available())     # whether CUDA is available
print(torch.cuda.device_count())     # number of GPUs
print(torch.cuda.current_device())   # index of the current device, starting from 0
print(torch.cuda.get_device_name(0)) # name of GPU 0

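Beyond these basic checks, torch.cuda also exposes per-device properties and memory statistics. A short sketch, assuming at least one GPU is present:

import torch

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, "
              f"{props.total_memory / 1024**3:.1f} GiB, "
              f"compute capability {props.major}.{props.minor}")
        # memory currently held by PyTorch's allocator on this device
        print(f"  allocated: {torch.cuda.memory_allocated(i) / 1024**2:.1f} MiB")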

3. How to use the GPU (typical patterns)

import os, torch

device = torch.device("cuda") if args.cuda else torch.device("cpu")      # select device from a command-line flag
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")  # pick GPU 0 if available, otherwise CPU
os.environ['CUDA_VISIBLE_DEVICES'] = '0'                                 # restrict visible GPUs; set before CUDA initializes
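Once a device has been selected, the usual pattern is to move the model and its input tensors onto it before running the forward pass. A minimal sketch; the model and tensor shapes here are made up purely for illustration:

import torch
import torch.nn as nn

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

model = nn.Linear(10, 2).to(device)    # move the model's parameters to the GPU (or CPU)
x = torch.randn(4, 10, device=device)  # create the input directly on the same device

with torch.no_grad():
    y = model(x)
print(y.device)  # prints "cuda:0" when a GPU is available, otherwise "cpu"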

Origin blog.csdn.net/rothschild666/article/details/127135636