Tensor
Matrix initialization - torch
from __future__ import print_function
import torch

x = torch.empty(5, 3)                       # uninitialized
y = torch.rand(5, 3)                        # uniform random in [0, 1)
z = torch.zeros(5, 3, dtype=torch.long)     # all zeros
m = torch.tensor([5.5, 3])                  # construct directly from data
x = x.new_ones(5, 3, dtype=torch.double)    # new tensor reusing x's properties
x = torch.randn_like(x, dtype=torch.float)  # same size as x, new dtype
x.size()

# view() reshapes a tensor; a dimension of -1 means "infer this size".
# If you are sure you want 4 columns but not how many rows, pass -1 for the
# rows: for a 16-element tensor x, x.view(-1, 4) is equivalent to
# x.view(4, 4), and x.view(-1, 2) is equivalent to x.view(8, 2).
x = torch.randn(4, 4)
y = x.view(-1)      # flatten to 1-D
z = x.view(-1, 8)   # infer rows, 8 columns

# in-place transpose
x.t_()
Automatic differentiation (autograd)
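A minimal sketch of how autograd works: set `requires_grad=True` on a tensor, build a computation from it, and call `backward()` on a scalar result to have the gradients populated in `.grad`.

```python
import torch

x = torch.ones(2, 2, requires_grad=True)  # track operations on x
y = x + 2
z = y * y * 3
out = z.mean()           # scalar output, so backward() needs no argument
out.backward()           # autograd computes d(out)/dx
print(x.grad)            # each entry is 6*(x+2)/4 = 4.5
```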
Neural Networks
Conv2d parameters
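A short sketch of the main `nn.Conv2d` parameters (the channel counts and sizes below are arbitrary examples): the output spatial size is `(H + 2*padding - kernel_size) // stride + 1`.

```python
import torch
import torch.nn as nn

# nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0)
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3,
                 stride=1, padding=1)
x = torch.randn(8, 3, 32, 32)   # (batch, channels, height, width)
out = conv(x)
print(out.shape)  # torch.Size([8, 16, 32, 32]): padding=1 preserves 32x32
```

With `kernel_size=3`, `stride=1`, `padding=1`, the spatial size is unchanged: `(32 + 2 - 3) // 1 + 1 = 32`.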
nn.Linear detailed
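A minimal example of `nn.Linear` (feature sizes here are arbitrary): it applies `y = x @ W.T + b`, with `weight` of shape `(out_features, in_features)`.

```python
import torch
import torch.nn as nn

linear = nn.Linear(in_features=20, out_features=5)
x = torch.randn(3, 20)          # batch of 3 samples, 20 features each
y = linear(x)
print(y.shape)                  # torch.Size([3, 5])
print(linear.weight.shape)      # torch.Size([5, 20])
print(linear.bias.shape)        # torch.Size([5])
```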
Activation function ReLU
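A quick illustration of ReLU: `ReLU(x) = max(0, x)`, so negative entries become 0 and non-negative entries pass through unchanged.

```python
import torch
import torch.nn as nn

relu = nn.ReLU()
x = torch.tensor([-2.0, -0.5, 0.0, 1.0, 3.0])
print(relu(x))  # tensor([0., 0., 0., 1., 3.])
```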
Pooling layer
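A minimal pooling sketch using `nn.MaxPool2d` (sizes chosen for illustration): each non-overlapping 2x2 window is replaced by its maximum, halving the spatial dimensions.

```python
import torch
import torch.nn as nn

pool = nn.MaxPool2d(kernel_size=2)  # stride defaults to kernel_size
x = torch.randn(1, 1, 4, 4)
out = pool(x)
print(out.shape)  # torch.Size([1, 1, 2, 2])
```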
Image classifier
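A sketch of a small CNN classifier in the style of the official CIFAR-10 tutorial (this is an illustrative network, not the exact tutorial code): two conv+ReLU+pool stages followed by fully connected layers, for 3x32x32 inputs and 10 classes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)    # 3x32x32 -> 6x28x28
        self.pool = nn.MaxPool2d(2, 2)     # halves H and W
        self.conv2 = nn.Conv2d(6, 16, 5)   # 6x14x14 -> 16x10x10
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 10)      # 10 class scores

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))   # -> 6x14x14
        x = self.pool(F.relu(self.conv2(x)))   # -> 16x5x5
        x = x.view(-1, 16 * 5 * 5)             # flatten for the linear layers
        x = F.relu(self.fc1(x))
        return self.fc2(x)

net = Net()
out = net(torch.randn(4, 3, 32, 32))
print(out.shape)  # torch.Size([4, 10])
```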
Note: if the code in the red box in the figure above fails to run because the device cannot get CUDA, or torch.cuda.is_available() returns False, the cause is almost always a CUDA version mismatch. The simplest and safest fix is to recreate a virtual environment with matching versions; see the tutorial on building a GPU version of the PyTorch environment.
Data parallel processing
The number of "In Model" lines printed per batch tells you how many GPUs the machine is using: the model's forward runs once on each device, while the "Outside" line prints once after the outputs are gathered.
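A minimal sketch of this behavior with `nn.DataParallel` (the model and sizes are illustrative): the batch is split across all visible GPUs, so the print inside `forward` fires once per device; on a CPU-only machine it fires once.

```python
import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 2)

    def forward(self, x):
        # runs once per GPU replica (or once on CPU)
        print("In Model: input size", x.size())
        return self.fc(x)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = Model()
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)  # replicate across all visible GPUs
model.to(device)

out = model(torch.randn(30, 10).to(device))
print("Outside: output size", out.size())  # printed once, after gathering
```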