Usage notes for the torch.Tensor.is_leaf attribute

Reference link: torch.Tensor.is_leaf


Original text (from the PyTorch documentation):

Attribute: is_leaf
    All Tensors that have requires_grad which is False will be leaf 
    Tensors by convention.

    For Tensors that have requires_grad which is True, they will be 
    leaf Tensors if they were created by the user. This means that 
    they are not the result of an operation and so grad_fn is None.

    Only leaf Tensors will have their grad populated during a call to 
    backward(). To get grad populated for non-leaf Tensors, you can 
    use retain_grad().
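
    A minimal sketch of that last point (variable names are my own, not
    from the docs; the longer experiment at the end of this post walks
    through the same idea step by step):

    import torch

    x = torch.rand(3, requires_grad=True)  # user-created leaf
    y = x * 2                              # non-leaf: result of an operation
    y.retain_grad()                        # ask autograd to populate y.grad anyway
    y.sum().backward()
    print(x.grad)  # populated: x is a leaf
    print(y.grad)  # tensor([1., 1., 1.]) -- only because of retain_grad()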

    Example:

    >>> a = torch.rand(10, requires_grad=True)
    >>> a.is_leaf
    True
    >>> b = torch.rand(10, requires_grad=True).cuda()
    >>> b.is_leaf
    False
    # b was created by the operation that cast a cpu Tensor into a cuda Tensor
    >>> c = torch.rand(10, requires_grad=True) + 2
    >>> c.is_leaf
    False
    # c was created by the addition operation
    >>> d = torch.rand(10).cuda()
    >>> d.is_leaf
    True
    # d does not require gradients and so has no operation creating it (that is tracked by the autograd engine)
    >>> e = torch.rand(10).cuda().requires_grad_()
    >>> e.is_leaf
    True
    # e requires gradients and has no operations creating it
    >>> f = torch.rand(10, requires_grad=True, device="cuda")
    >>> f.is_leaf
    True
    # f requires grad, has no operation creating it
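
The .cuda() examples above need a GPU, but the same rules can be verified on
CPU alone, since any autograd-tracked operation produces a non-leaf; here a
dtype cast plays the role of .cuda(). A sketch under that assumption, with
names of my own choosing:

import torch

a = torch.rand(10, requires_grad=True)           # user-created: leaf
b = torch.rand(10, requires_grad=True).double()  # cast is a tracked op: non-leaf
c = torch.rand(10).double()                      # requires_grad=False: leaf by convention
d = torch.rand(10).double().requires_grad_()     # grad enabled after the cast: leaf

for name, t in (("a", a), ("b", b), ("c", c), ("d", d)):
    # is_leaf coincides with grad_fn being None
    print(name, t.is_leaf, t.grad_fn is None)
# prints: a True True / b False False / c True True / d True True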

Code experiment:

Microsoft Windows [Version 10.0.18363.1316]
(c) 2019 Microsoft Corporation. All rights reserved.

C:\Users\chenxuqi>conda activate ssd4pytorch1_2_0

(ssd4pytorch1_2_0) C:\Users\chenxuqi>python
Python 3.7.7 (default, May  6 2020, 11:45:54) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>>
>>> torch.manual_seed(seed=20200910)
<torch._C.Generator object at 0x000002320838D330>
>>>
>>> a = torch.rand(10, requires_grad=True)
>>> a
tensor([0.9817, 0.9880, 0.8879, 0.3911, 0.8532, 0.2367, 0.6074, 0.6374, 0.7830,
        0.1322], requires_grad=True)
>>> a.is_leaf
True
>>>
>>> a = torch.rand(10, requires_grad=False)
>>> a
tensor([0.1113, 0.6394, 0.3543, 0.2189, 0.9847, 0.4838, 0.7628, 0.9987, 0.4533,
        0.1989])
>>> a.is_leaf
True
>>>
>>> b = torch.rand(10, requires_grad=False).cuda()
>>> b.is_leaf
True
>>> b = torch.rand(10, requires_grad=True).cuda()
>>> b.is_leaf
False
>>>
>>> c = torch.rand(10, requires_grad=True) + 2
>>> c.is_leaf
False
>>>
>>> c = torch.rand(10, requires_grad=False) + 2
>>> c.is_leaf
True
>>>
>>> d = torch.rand(10).cuda()
>>> d.is_leaf
True
>>>
>>>
>>> e = torch.rand(10).cuda().requires_grad_()
>>> e.is_leaf
True
>>>
>>> f = torch.rand(10, requires_grad=True, device="cuda")
>>> f.is_leaf
True
>>>
>>>
>>>
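
One practical consequence of the b case above: a tensor built as
torch.rand(..., requires_grad=True).cuda() is a non-leaf, so after backward()
its .grad stays None (the gradient flows to the hidden CPU leaf instead),
which can silently break code that meant to train it. A GPU-guarded sketch:

import torch

if torch.cuda.is_available():
    bad = torch.rand(10, requires_grad=True).cuda()           # non-leaf: .cuda() is tracked
    good = torch.rand(10, requires_grad=True, device="cuda")  # leaf

    bad.sum().backward()
    good.sum().backward()
    print(bad.grad)   # None (with a warning): gradient went to the hidden CPU leaf
    print(good.grad)  # tensor of ones on the cuda device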

Experiment code: retaining the gradient of intermediate nodes (non-leaf tensors) so that it is not freed:

Microsoft Windows [Version 10.0.18363.1316]
(c) 2019 Microsoft Corporation. All rights reserved.

C:\Users\chenxuqi>conda activate ssd4pytorch1_2_0

(ssd4pytorch1_2_0) C:\Users\chenxuqi>python
Python 3.7.7 (default, May  6 2020, 11:45:54) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.manual_seed(seed=20200910)
<torch._C.Generator object at 0x000001CDB4A5D330>
>>> data_in = torch.randn(3,5,requires_grad=True)
>>> data_in
tensor([[ 0.2824, -0.3715,  0.9088, -1.7601, -0.1806],
        [ 2.0937,  1.0406, -1.7651,  1.1216,  0.8440],
        [ 0.1783,  0.6859, -1.5942, -0.2006, -0.4050]], requires_grad=True)
>>> data_mean = data_in.mean()
>>> data_mean
tensor(0.0585, grad_fn=<MeanBackward0>)
>>> data_in.requires_grad
True
>>> data_mean.requires_grad
True
>>> data_1 = data_mean * 20200910.0
>>> data_1
tensor(1182591., grad_fn=<MulBackward0>)
>>> data_2 = data_1 * 15.0
>>> data_2
tensor(17738864., grad_fn=<MulBackward0>)
>>> data_2.retain_grad()
>>> data_3 = 2 * (data_2 + 55.0)
>>> loss = data_3 / 2.0 +89.2
>>> loss
tensor(17739010., grad_fn=<AddBackward0>)
>>>
>>> data_in.grad
>>> data_mean.grad
>>> data_1.grad
>>> data_2.grad
>>> data_3.grad
>>> loss.grad
>>> print(data_in.grad, data_mean.grad, data_1.grad, data_2.grad, data_3.grad, loss.grad)
None None None None None None
>>>
>>> loss.backward()
>>> data_in.grad
tensor([[20200910., 20200910., 20200910., 20200910., 20200910.],
        [20200910., 20200910., 20200910., 20200910., 20200910.],
        [20200910., 20200910., 20200910., 20200910., 20200910.]])
>>> data_mean.grad
>>> data_1.grad
>>> data_2.grad
tensor(1.)
>>> data_3.grad
>>> loss.grad
>>>
>>>
>>> print(data_in.grad, data_mean.grad, data_1.grad, data_2.grad, data_3.grad, loss.grad)
tensor([[20200910., 20200910., 20200910., 20200910., 20200910.],
        [20200910., 20200910., 20200910., 20200910., 20200910.],
        [20200910., 20200910., 20200910., 20200910., 20200910.]]) None None tensor(1.) None None
>>>
>>>
>>> print(data_in.is_leaf, data_mean.is_leaf, data_1.is_leaf, data_2.is_leaf, data_3.is_leaf, loss.is_leaf)
True False False False False False
>>>
>>>
>>>
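
The numbers above follow from the chain rule: loss = data_2 + 55 + 89.2, so
data_2.grad is exactly 1; and loss = 15 * 20200910 * mean(data_in) + const
with d(mean)/d(element) = 1/15, so every entry of data_in.grad is 20200910.
When the gradient of an intermediate tensor is needed only once,
torch.autograd.grad is an alternative to retain_grad() that does not populate
.grad at all. A sketch (constants folded, names mine):

import torch

torch.manual_seed(20200910)
data_in = torch.randn(3, 5, requires_grad=True)
data_mean = data_in.mean()                    # non-leaf intermediate
loss = data_mean * 20200910.0 * 15.0 + 144.2  # same dependence as the transcript

# Gradient w.r.t. a non-leaf, without retain_grad() and without touching .grad:
(g_mean,) = torch.autograd.grad(loss, [data_mean])
print(g_mean)  # tensor(3.0301e+08), i.e. 20200910 * 15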

Reposted from blog.csdn.net/m0_46653437/article/details/112912505