[pytorch, learning]-4.6 GPU computing

4.6 GPU computing

So far, we have used the CPU for all computation. For complex neural networks and large-scale data, the CPU may not be efficient enough.
In this section, I will introduce how to use a single NVIDIA GPU for computation.

4.6.1 Computing equipment

PyTorch lets you specify the device used for storage and computation: the CPU with main memory, or a GPU with its own graphics memory. By default, PyTorch creates data in main memory and computes on the CPU.
Use torch.cuda.is_available() to check whether a GPU is available:

import torch
from torch import nn

torch.cuda.is_available()


# View the number of GPUs
torch.cuda.device_count()

# View the index of the GPU currently in use
torch.cuda.current_device()


Note: GPU indices start from 0.

# View the GPU name by index
torch.cuda.get_device_name(0)

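Putting these queries together, here is a small sketch (my own helper, not from the original post) that lists every visible GPU and also runs safely on a CPU-only machine:

```python
import torch

def list_gpus():
    """Return (index, name) pairs for every visible CUDA device."""
    if not torch.cuda.is_available():
        return []
    return [(i, torch.cuda.get_device_name(i))
            for i in range(torch.cuda.device_count())]

gpus = list_gpus()
print(gpus if gpus else "No CUDA device available")
```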

4.6.2 Tensor computation on the GPU

By default, a Tensor is stored in main memory and used by the CPU. That is why, so far, printing a Tensor has shown no GPU-related marker:

x = torch.tensor([1, 2, 3])
x


Use .cuda() to convert (copy) a Tensor from the CPU to the GPU. If you have multiple GPUs, .cuda(i) places it on the i-th GPU, where indices start at 0:

x = x.cuda()

We can check which device a Tensor is stored on via its device attribute:

x.device

We can also specify the device directly when creating a Tensor:
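A minimal sketch of moving a Tensor back and forth (guarded so it also runs without a GPU); .cpu() is the counterpart of .cuda():

```python
import torch

x = torch.tensor([1, 2, 3])
print(x.device)        # cpu

if torch.cuda.is_available():
    x = x.cuda()       # copy to GPU 0
    print(x.device)    # cuda:0
    x = x.cpu()        # copy back to main memory
print(x.device)        # cpu again (or still cpu without a GPU)
```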

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

x = torch.tensor([1, 2, 3], device=device)
print(x)

x= torch.tensor([1,2,3]).to(device)

print(x)

If you perform operations on data that lives on the GPU, the result is also stored on the GPU:

y = x**2
y

Note that data stored in different locations cannot be combined in a single operation: a Tensor on the CPU cannot be computed directly with a Tensor on the GPU, and Tensors on different GPUs cannot be computed together directly either.
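A quick sketch of what this restriction looks like in practice (the mismatch error only actually triggers when a GPU is present):

```python
import torch

a = torch.tensor([1.0, 2.0])                 # lives on the CPU
mixed_op_failed = False
if torch.cuda.is_available():
    b = torch.tensor([3.0, 4.0]).cuda()      # lives on GPU 0
    try:
        a + b                                # devices differ -> RuntimeError
    except RuntimeError:
        mixed_op_failed = True
    c = a + b.cpu()                          # fine once both are on the CPU
else:
    c = a + torch.tensor([3.0, 4.0])
print(c)
```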

4.6.3 Model computation on the GPU

Similar to Tensor, a PyTorch model can also be moved to the GPU with .cuda(). We can check which device a model is stored on via the device attribute of its parameters:

net = nn.Linear(3, 1)
list(net.parameters())[0].device

Transfer the model to the GPU:

net.cuda()
list(net.parameters())[0].device

The model's parameters and its input data must be on the same GPU for the computation to work:

x = torch.rand(2, 3).cuda()
net(x)
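Putting the section together, a device-agnostic sketch using the common .to(device) idiom (the layer sizes here are just placeholders, and it falls back to the CPU when no GPU is available):

```python
import torch
from torch import nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

net = nn.Linear(3, 1).to(device)        # move the parameters to the chosen device
x = torch.rand(2, 3, device=device)     # create the input on the same device

out = net(x)                            # runs on `device`; the output stays there
print(out.device)
```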


Origin blog.csdn.net/piano9425/article/details/107176403