1: A plain tensor cannot backpropagate; a Variable can.
2: Computing with Variables builds up a computation graph step by step. The graph links all of the compute nodes together, so that when the error is finally propagated backward, the gradients of every Variable in the graph are computed in one pass; a plain tensor does not have this capability (see the continuation sketch after the output below).
3: How to convert a Variable to a numpy array (shown at the end of the code below).
4: A Variable has a data field through which you can retrieve the raw Tensor it wraps. Likewise, the grad field gives its gradient, which is also a Variable (see the first sketch after this list).
5: Every Variable has a creator field indicating which Function created it (for Variables explicitly created by the user, creator is None). In newer PyTorch releases this field is exposed as grad_fn.
6: When computing gradients via backpropagation, if the Variable is a scalar (e.g. the final loss is a Euclidean distance or a cross-entropy), backward() takes no arguments. However, if the Variable has more than one element, you must pass the gradient for each of its elements (as propagated down from the layer above), i.e. a Tensor whose shape matches the Variable (see the second sketch after this list).
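A minimal sketch of points 4 and 5, assuming a PyTorch version where creator is exposed as grad_fn; the names x and y are illustrative, not from the original code:

import torch
from torch.autograd import Variable

x = Variable(torch.FloatTensor([[1, 2], [3, 4]]), requires_grad=True)
y = x * 2              # created by a Function, so its grad_fn is set

print(x.data)          # the raw Tensor wrapped by x
print(x.grad_fn)       # None: x was created explicitly by the user
print(y.grad_fn)       # a Function (MulBackward): an operation created y

y.sum().backward()     # backprop from a scalar (see point 6)
print(x.grad)          # gradient of sum(2*x) w.r.t. x: all elements are 2.0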
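A second sketch, for point 6, assuming PyTorch >= 0.4 semantics: backward() with a scalar versus a non-scalar output. The names are again illustrative, and torch.ones(y.size()) stands in for whatever gradient the layer above would actually pass down:

import torch
from torch.autograd import Variable

x = Variable(torch.FloatTensor([[1, 2], [3, 4]]), requires_grad=True)

# Scalar case: the loss is a single number, so backward() needs no argument.
loss = (x * x).mean()
loss.backward()
print(x.grad)          # d(mean(x^2))/dx = x/2 -> [[0.5, 1.], [1.5, 2.]]

# Non-scalar case: y has four elements, so an upstream gradient of the
# same shape must be passed explicitly.
x.grad = None          # discard the gradient accumulated above
y = x * 2
y.backward(torch.ones(y.size()))   # all-ones gradient, equivalent to y.sum().backward()
print(x.grad)          # all elements are 2.0

The original example below then exercises points 3 and 4 on a concrete tensor: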
import torch
from torch.autograd import Variable  # the Variable module in torch
tensor = torch.FloatTensor([[1, 2], [3, 4]])
# Wrap the tensor ("put the eggs in the basket"); requires_grad controls whether
# it takes part in error backpropagation, i.e. whether a gradient is computed for it
variable = Variable(tensor, requires_grad=True)
var = Variable(tensor)  # requires_grad defaults to False; declare it explicitly if gradients are needed
print("tensor:", tensor)
print("variable:", variable)
print("var:", var)
#### Retrieve the data of a Variable
print(variable.data.numpy())  # convert the Variable's data to a numpy array
Output:
PS F:\Graduation project\deeplearning> python .\c.py
tensor: tensor([[1., 2.],
[3., 4.]])
variable: tensor([[1., 2.],
[3., 4.]], requires_grad=True)
var: tensor([[1., 2.],
[3., 4.]])
[[1. 2.]
[3. 4.]]
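To see points 1 and 2 in action, the script can be continued as below (a sketch; v_out and t_out are illustrative names): the expression on variable records a computation graph that backward() can traverse, while the same expression on the plain tensor yields only a value.

v_out = torch.mean(variable * variable)   # builds a computation graph
v_out.backward()                          # backprop through the graph
print(variable.grad)                      # d(mean(x^2))/dx = x/2 -> [[0.5, 1.], [1.5, 2.]]

t_out = torch.mean(tensor * tensor)       # just a value: no graph is recorded,
                                          # so t_out.backward() would raise an error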