PyTorch's automatic differentiation: AUTOGRAD: AUTOMATIC DIFFERENTIATION

torch.Tensor is the central class of the package. If you set its attribute .requires_grad to True, it starts to track all operations on it. When you finish your computation you can call .backward() and have all the gradients computed automatically. The gradient for this tensor will be accumulated into the .grad attribute.

After creating a tensor, we set its .requires_grad attribute to True. When the computation is finished, calling .backward() runs backpropagation and computes all the gradients automatically. The .grad attribute of the input (independent) variable is where its gradient is stored.

Example:

import torch

# Create a 2x2 tensor of ones and tell autograd to track operations on it
x = torch.ones(2, 2, requires_grad=True)
print(x)

# Build a small computation graph: out = mean(3 * (x + 2)^2)
y = x + 2
z = y * y * 3
out = z.mean()
print(z, out)

# Backpropagate from the scalar 'out'; the gradient lands in x.grad
out.backward()
print(x.grad)

Output: 

tensor([[1., 1.],
        [1., 1.]], requires_grad=True)
tensor([[27., 27.],
        [27., 27.]], grad_fn=<MulBackward0>) tensor(27., grad_fn=<MeanBackward0>)
tensor([[4.5000, 4.5000],
        [4.5000, 4.5000]])

Explanation:

You should get a matrix filled with 4.5. Call the output tensor o. With the settings in the code, o = (1/4) * sum_i 3 * (x_i + 2)^2, so do/dx_i = (3/2) * (x_i + 2). Since every x_i = 1, each entry of x.grad is (3/2) * (1 + 2) = 4.5.
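
As noted above, gradients are accumulated into .grad rather than overwritten on each call to .backward(). A minimal sketch of this behavior (the repeated recomputation of out and the explicit reset are my own addition, not part of the original example):

import torch

x = torch.ones(2, 2, requires_grad=True)

# First backward pass: do/dx = 4.5 everywhere, as derived above
out = (3 * (x + 2) ** 2).mean()
out.backward()
print(x.grad)   # entries are 4.5000

# A second backward pass adds to the values already in .grad
out = (3 * (x + 2) ** 2).mean()
out.backward()
print(x.grad)   # entries are 9.0000 (4.5 + 4.5)

# Reset the accumulated gradient before the next pass if needed
x.grad.zero_()
print(x.grad)   # entries are 0.

This accumulation is why training loops typically zero the gradients (for example with an optimizer's zero_grad()) before each backward pass.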


Origin blog.csdn.net/weixin_40244676/article/details/106193902