PyTorch Variable

1: A plain tensor cannot backpropagate; a Variable can.

2: When a Variable takes part in computation, it gradually builds a computational graph. The graph connects all the compute nodes together, so that when the error is finally propagated backwards, the gradients of every Variable in the graph are computed in one pass. A plain tensor does not have this capability.
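A minimal sketch of this behavior (the names x, y, z here are illustrative, not from the original post):

import torch
from torch.autograd import Variable

x = Variable(torch.FloatTensor([[1, 2], [3, 4]]), requires_grad=True)
y = x * x       # first node in the graph
z = y.mean()    # second node: z = mean(x**2), a scalar
z.backward()    # one reverse pass through the whole graph
print(x.grad)   # dz/dx = x / 2  ->  [[0.5, 1.0], [1.5, 2.0]]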

3: How to convert a Variable to NumPy (shown in the code further below).

4: A Variable has a field named data through which you can retrieve the original Tensor it wraps. Likewise, its grad field gives you the gradient (also a Variable).
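A small sketch of these two fields (v and loss are illustrative names; note that in PyTorch 0.4 and later, grad comes back as a Tensor):

import torch
from torch.autograd import Variable

v = Variable(torch.FloatTensor([1.0, 2.0, 3.0]), requires_grad=True)
loss = (v * v).sum()   # a scalar, so backward() needs no arguments
loss.backward()
print(v.data)   # the wrapped Tensor: tensor([1., 2., 3.])
print(v.grad)   # d(loss)/dv = 2 * v: tensor([2., 4., 6.])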

5: Every Variable has a creator field that indicates which Function created it (except for Variables explicitly created by the user, for which creator is None).
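Note that creator comes from the old autograd API; in recent PyTorch versions the same information is exposed as grad_fn. A sketch using grad_fn:

import torch
from torch.autograd import Variable

x = Variable(torch.ones(2, 2), requires_grad=True)  # created explicitly by the user
y = x + 2                                           # created by a Function

print(x.grad_fn)  # None: x is a user-created (leaf) Variable
print(y.grad_fn)  # an AddBackward object: the Function that created y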

6: When computing gradients during backpropagation, if the Variable is a scalar (e.g., the final loss, such as a Euclidean distance or a cross-entropy), then backward() takes no arguments. If, however, the Variable has more than one element, you must specify the gradient for each of its elements (as passed down from the layer above), i.e., a Tensor whose shape matches the Variable.
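A sketch of the non-scalar case (x, y and the weights in grad are illustrative):

import torch
from torch.autograd import Variable

x = Variable(torch.FloatTensor([1.0, 2.0, 3.0]), requires_grad=True)
y = x * 2                  # y has three elements, so y.backward() alone would raise an error

# upstream gradient, one value per element of y, same shape as y
grad = torch.FloatTensor([1.0, 0.1, 0.01])
y.backward(grad)
print(x.grad)              # dy/dx = 2, weighted by grad: tensor([2.0000, 0.2000, 0.0200])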

import torch
from torch.autograd import Variable  # the Variable module in torch

tensor = torch.FloatTensor([[1, 2], [3, 4]])
# Put the eggs into the basket: requires_grad decides whether this Variable
# takes part in error backpropagation, i.e. whether a gradient is computed for it
variable = Variable(tensor, requires_grad=True)
var = Variable(tensor)  ### requires_grad defaults to False; if you need gradients, you must declare it in the code
print("tensor:", tensor)
print("variable:", variable)
print("var:", var)

#### Getting the data out of a Variable
print(variable.data.numpy())  ### convert the Variable's data to a numpy array

Output:

PS F:\Graduation project\deeplearning> python .\c.py
tensor: tensor([[1., 2.],
        [3., 4.]])
variable: tensor([[1., 2.],
        [3., 4.]], requires_grad=True)
var: tensor([[1., 2.],
        [3., 4.]])
[[1. 2.]
 [3. 4.]]

Origin blog.csdn.net/weixin_42528089/article/details/103841534