Example usage of torch.Tensor.retain_grad()

Reference link: torch.Tensor.retain_grad()

Original documentation and translator's note:

retain_grad()
Method: retain_grad()
    Enables .grad attribute for non-leaf Tensors.
    (Translator's note: non-leaf tensors are the intermediate tensors of the
    computation graph. By default their .grad attribute is disabled: the
    gradients are freed to reclaim memory as soon as they have been computed
    during the backward pass, so the gradients of intermediate results are
    not saved.)
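
Before the full experiment below, here is a minimal, self-contained sketch of the same mechanism (the tensor names x, y, z are illustrative and not from the original post):

import torch

x = torch.randn(3, requires_grad=True)  # leaf tensor: .grad is populated by default
y = x.sum()                             # non-leaf (intermediate) tensor
y.retain_grad()                         # ask autograd to keep y's gradient after backward()
z = 2.0 * y
z.backward()

print(x.grad)  # tensor([2., 2., 2.])
print(y.grad)  # tensor(2.) -- retained only because of retain_grad()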

Full experiment (interactive session):

Microsoft Windows [Version 10.0.18363.1316]
(c) 2019 Microsoft Corporation. All rights reserved.

C:\Users\chenxuqi>conda activate ssd4pytorch1_2_0

(ssd4pytorch1_2_0) C:\Users\chenxuqi>python
Python 3.7.7 (default, May  6 2020, 11:45:54) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.manual_seed(seed=20200910)
<torch._C.Generator object at 0x000001CDB4A5D330>
>>> data_in = torch.randn(3,5,requires_grad=True)
>>> data_in
tensor([[ 0.2824, -0.3715,  0.9088, -1.7601, -0.1806],
        [ 2.0937,  1.0406, -1.7651,  1.1216,  0.8440],
        [ 0.1783,  0.6859, -1.5942, -0.2006, -0.4050]], requires_grad=True)
>>> data_mean = data_in.mean()
>>> data_mean
tensor(0.0585, grad_fn=<MeanBackward0>)
>>> data_in.requires_grad
True
>>> data_mean.requires_grad
True
>>> data_1 = data_mean * 20200910.0
>>> data_1
tensor(1182591., grad_fn=<MulBackward0>)
>>> data_2 = data_1 * 15.0
>>> data_2
tensor(17738864., grad_fn=<MulBackward0>)
>>> data_2.retain_grad()
>>> data_3 = 2 * (data_2 + 55.0)
>>> loss = data_3 / 2.0 + 89.2
>>> loss
tensor(17739010., grad_fn=<AddBackward0>)
>>>
>>> data_in.grad
>>> data_mean.grad
>>> data_1.grad
>>> data_2.grad
>>> data_3.grad
>>> loss.grad
>>> print(data_in.grad, data_mean.grad, data_1.grad, data_2.grad, data_3.grad, loss.grad)
None None None None None None
>>>
>>> loss.backward()
>>> data_in.grad
tensor([[20200910., 20200910., 20200910., 20200910., 20200910.],
        [20200910., 20200910., 20200910., 20200910., 20200910.],
        [20200910., 20200910., 20200910., 20200910., 20200910.]])
>>> data_mean.grad
>>> data_1.grad
>>> data_2.grad
tensor(1.)
>>> data_3.grad
>>> loss.grad
>>>
>>>
>>> print(data_in.grad, data_mean.grad, data_1.grad, data_2.grad, data_3.grad, loss.grad)
tensor([[20200910., 20200910., 20200910., 20200910., 20200910.],
        [20200910., 20200910., 20200910., 20200910., 20200910.],
        [20200910., 20200910., 20200910., 20200910., 20200910.]]) None None tensor(1.) None None
>>>
>>>
>>> print(data_in.is_leaf, data_mean.is_leaf, data_1.is_leaf, data_2.is_leaf, data_3.is_leaf, loss.is_leaf)
True False False False False False
>>>
>>>
>>>
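Checking the numbers: algebraically, loss = data_3 / 2.0 + 89.2 = (data_2 + 55.0) + 89.2, so d(loss)/d(data_2) = 1, which is exactly the retained data_2.grad == tensor(1.). Likewise loss = 15.0 * 20200910.0 * data_mean + const, and since data_mean averages 15 elements, d(data_mean)/d(data_in[i][j]) = 1/15, so d(loss)/d(data_in[i][j]) = 20200910 for every element. Only data_in (a leaf tensor) and data_2 (the one tensor on which retain_grad() was called) end up with a populated .grad; for every other intermediate tensor, .grad remains None. Note also that retain_grad() is a no-op on leaf tensors that already have requires_grad=True, since their .grad is populated by default.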

Reposted from blog.csdn.net/m0_46653437/article/details/112921448