PyTorch: viewing GPU information

Alternatively, on Windows you can use nvidia-smi to view GPU information.
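If you prefer to stay in Python, the same report can be obtained by shelling out to nvidia-smi. This is a minimal sketch; it assumes the NVIDIA driver is installed and nvidia-smi is on the PATH.

import subprocess

# Run nvidia-smi and print its report; assumes nvidia-smi is on the PATH.
result = subprocess.run(["nvidia-smi"], capture_output=True, text=True)
print(result.stdout)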

Why is the method that moves data to the GPU called .cuda rather than .gpu, when the method that moves data to the CPU is called .cpu? The reason is that the GPU programming interface used is CUDA, and not all GPUs support CUDA; only some of Nvidia's GPUs do. PyTorch may support AMD GPUs in the future, and AMD GPUs are programmed through OpenCL, so PyTorch also reserves a .cl method for supporting AMD and similar GPUs later on.
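For example, a minimal sketch of moving a tensor back and forth between devices (the variable names are illustrative):

import torch

x = torch.randn(3, 3)                    # tensors are created on the CPU by default
if torch.cuda.is_available():
    x_gpu = x.cuda()                     # copy the tensor to the default CUDA device
    x_cpu = x_gpu.cpu()                  # copy it back to the CPU
    print(x_gpu.device, x_cpu.device)    # e.g. cuda:0 cpu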

torch.cuda.is_available()
returns whether CUDA is available;

torch.cuda.device_count()
returns the number of GPUs;

torch.cuda.get_device_name(0)
returns the name of a GPU; device indices start from 0 by default;

torch.cuda.current_device()
returns the index of the current device;
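Putting these calls together, a minimal sketch (the printed values are illustrative and depend on your machine):

import torch

print(torch.cuda.is_available())            # True if a usable CUDA device is present
if torch.cuda.is_available():
    print(torch.cuda.device_count())        # e.g. 1
    print(torch.cuda.get_device_name(0))    # e.g. 'GeForce GTX 1080 Ti'
    print(torch.cuda.current_device())      # e.g. 0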

import torch

# 'params' is a configuration object and 'args' an argparse namespace defined
# elsewhere in the script.
# params.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
params.device = torch.device('cpu')        # force CPU here; uncomment the line above to prefer the GPU when available
params.n_gpu = torch.cuda.device_count()   # number of visible CUDA devices
params.multi_gpu = args.multi_gpu          # user-supplied flag for multi-GPU training
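The stored device is then normally used to place the model and each batch. Here is a minimal self-contained sketch; the linear model and random batch are hypothetical and not part of the original snippet.

import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = nn.Linear(10, 2).to(device)      # move the model parameters to the chosen device
batch = torch.randn(4, 10).to(device)    # move the input batch to the same device
output = model(batch)
print(output.device)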


For more information, see https://pytorch.org/docs/stable/cuda.html
---------------------
Author: lsh Oh
Source: CSDN
Original: https://blog.csdn.net/nima1994/article/details/83001910
Disclaimer: This is an original article by the blogger; please include a link to the original post when reposting.
