PyTorch Learning Primer (2): Autograd and Automatic Differentiation

The autograd package is the core of neural-network support in PyTorch, and it is easy to pick up.

autograd provides automatic differentiation for all operations on tensors. Its flexibility comes from being define-by-run: the backward graph is built by the code as it executes, so every iteration can produce a different graph.
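A minimal sketch of this define-by-run behaviour, using the Variable class described in the next section (the helper name scale_until_large is an illustrative assumption, not part of the original post):

import torch
from torch.autograd import Variable

# The graph is rebuilt on every call, so ordinary Python control flow
# (here a data-dependent loop) changes its shape from run to run.
def scale_until_large(x):
    y = x * 2
    while y.data.norm() < 100:   # decision made at run time
        y = y * 2
    return y

x = Variable(torch.randn(3), requires_grad=True)
out = scale_until_large(x).sum()
out.backward()
print(x.grad)   # gradient depends on how many doublings the loop performed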

The Variable class

  • autograd.Variable is the core class of this package.
  • It wraps a Tensor and supports almost all operations defined on tensors.
  • Once your computation is finished, you can call .backward() and it will compute all the gradients for you.
  • Through the Variable's .data attribute you can access the wrapped tensor.
  • Through the Variable's .grad attribute you can obtain the gradient (a short sketch follows this list).
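A tiny self-contained sketch of these attributes (the variable names are illustrative, not from the original post):

import torch
from torch.autograd import Variable

v = Variable(torch.ones(3), requires_grad=True)
loss = (v * v).sum()   # a scalar built from v
loss.backward()        # gradients computed automatically
print(v.data)          # the wrapped tensor: [1, 1, 1]
print(v.grad)          # d(loss)/dv = 2*v = [2, 2, 2]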

The figure below (omitted here) shows the structure of a Variable: it wraps the underlying tensor (.data) together with its .grad and .grad_fn attributes.

The Function class

  • For automatic differentiation there is another very important class: autograd.Function.
  • Variable and Function are interconnected and together build an acyclic graph that records the forward computation.
  • Every variable produced by an operation has a .grad_fn attribute referring to the Function that created it.
  • Variables created directly by the user (not produced by an operation) have .grad_fn equal to None.
  • If you want to compute gradients for a variable, call its .backward() function:
    1. If the variable is a scalar, no arguments are needed.
    2. If the variable is not a scalar, you need to pass a tensor of the same shape, e.g. grad_output, which supplies the gradient weights for the backward pass (a sketch appears at the end of the Gradients section).

The sample code

  • Import the packages
import torch
from torch.autograd import Variable
  • Create a variable
x = Variable(torch.ones(2, 2), requires_grad=True)
print(x)
 
  • Do an operation on the variable
y=x+2
print(y)
 
  • The grad_fn attribute
    Here x is user-created while y was produced by an operation, so y has a grad_fn attribute and x does not (x.grad_fn is None).
print(x.grad_fn)
print(y.grad_fn)
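Here x.grad_fn prints None, while y.grad_fn prints a backward-function object (its exact name, such as AddBackward, varies between PyTorch versions).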
 
  • More operations on y
z = y * y * 3
out = z.mean()
print(z, out)
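Since x is filled with ones, y is a 2x2 tensor of 3s, z is a 2x2 tensor of 27s (3 * 3 * 3), and out is the scalar 27.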
 

Gradients

If you have followed the code above, you have now defined the variable x and the computations performed on it.
We can now call backward() to differentiate automatically.

out.backward()
print(x.grad) 

The backward pass computes the gradient as follows: out = (1/4) * sum(z_i) with z_i = 3 * (x_i + 2)^2, so d(out)/d(x_i) = (3/2) * (x_i + 2), which equals 4.5 at x_i = 1. Hence x.grad is a 2x2 tensor filled with 4.5.
  • out here is a scalar, so backward() can be called directly without arguments.
  • When the variable is not a scalar, we must first define a tensor of the same shape, e.g. grad_output, and then call .backward(grad_output); a sketch follows this list.
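A minimal sketch of the non-scalar case, assuming the same Variable-style API as above (the values in the comments follow from the arithmetic, not from the original post):

x = Variable(torch.ones(2, 2), requires_grad=True)
y = x * 3                        # y is not a scalar (it is a 2x2 tensor)
grad_output = torch.ones(2, 2)   # same shape as y: weights for the backward pass
y.backward(grad_output)
print(x.grad)                    # dy/dx weighted by grad_output: a 2x2 tensor of 3s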

This completes the basic workflow: the forward computation builds the graph, and backpropagation then walks it to produce the requested gradients.

Source: www.cnblogs.com/jfdwd/p/11434382.html