Examples of using the torch.Tensor.tolist() method

Reference: torch.Tensor.tolist()

Original documentation (with a translator's note):

tolist()
Method: tolist()
    tolist() -> list or number

    Returns the tensor as a (nested) list. For scalars, a standard Python number is returned,
    just like with item(). Tensors are automatically moved to the CPU first if necessary.
    
    This operation is not differentiable.
    (Translator's note: since the result is a plain Python list or number, backward() cannot be called on it to backpropagate and compute gradients.)

    Examples:
    >>> a = torch.randn(2, 2)
    >>> a.tolist()
    [[0.012766935862600803, 0.5415473580360413],
     [-0.08909505605697632, 0.7729271650314331]]
    >>> a[0,0].tolist()
    0.012766935862600803
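
Because tolist() returns plain Python numbers, the result carries no autograd history. The short sketch below is my own illustration rather than part of the original documentation (the variable names are made up); it shows that the converted values are ordinary floats with no graph attached, while gradients still flow through the tensor itself:

import torch

# A small tensor that participates in autograd.
x = torch.tensor([[1.0, 2.0], [3.0, 4.0]], requires_grad=True)

vals = x.tolist()           # nested list of ordinary Python floats
print(type(vals))           # <class 'list'>
print(type(vals[0][0]))     # <class 'float'> -- no grad_fn, no autograd graph

# Gradients can only be computed through tensor operations, not through the list:
y = (x * 2).sum()
y.backward()
print(x.grad)               # tensor([[2., 2.], [2., 2.]])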

Experiment code:

Microsoft Windows [Version 10.0.18363.1316]
(c) 2019 Microsoft Corporation. All rights reserved.

C:\Users\chenxuqi>conda activate ssd4pytorch1_2_0

(ssd4pytorch1_2_0) C:\Users\chenxuqi>python
Python 3.7.7 (default, May  6 2020, 11:45:54) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.manual_seed(seed=20200910)
<torch._C.Generator object at 0x0000021632BBD330>
>>>
>>> print(torch.__version__)
1.2.0+cu92
>>>
>>> print(torch.cuda.is_available())
True
>>>
>>> a = torch.randn(2, 2)
>>> a
tensor([[ 0.2824, -0.3715],
        [ 0.9088, -1.7601]])
>>> a.shape
torch.Size([2, 2])
>>> a.tolist()
[[0.2823888957500458, -0.37148377299308777], [0.908775269985199, -1.7601189613342285]]
>>> print(a.tolist())
[[0.2823888957500458, -0.37148377299308777], [0.908775269985199, -1.7601189613342285]]
>>> a[0,0].tolist()
0.2823888957500458
>>> print(a[0,0].tolist())
0.2823888957500458
>>> a[0,0].shape
torch.Size([])
>>> a = torch.randn(3,2).cuda()
>>> a
tensor([[-0.1806,  2.0937],
        [ 1.0406, -1.7651],
        [ 1.1216,  0.8440]], device='cuda:0')
>>> a.tolist()
[[-0.18060052394866943, 2.0936813354492188], [1.040623426437378, -1.7651376724243164], [1.121639609336853, 0.84396892786026]]
>>> a
tensor([[-0.1806,  2.0937],
        [ 1.0406, -1.7651],
        [ 1.1216,  0.8440]], device='cuda:0')
>>> a[0,1]
tensor(2.0937, device='cuda:0')
>>> a[0,1].shape
torch.Size([])
>>> (a[0,1]).tolist()
2.0936813354492188
>>>
>>> a = torch.randn(1)
>>> a.shape
torch.Size([1])
>>> a
tensor([0.1783])
>>> a.tolist()
[0.1783328503370285]
>>>
>>>
>>> a = torch.randn(1).cuda()
>>> a
tensor([0.6859], device='cuda:0')
>>> a.tolist()
[0.6858752369880676]
>>> a
tensor([0.6859], device='cuda:0')
>>> a = torch.randn(1)
>>> a
tensor([-1.5942])
>>> a.tolist()
[-1.5942193269729614]
>>>
>>> a = torch.randn(())
>>> a
tensor(-0.2006)
>>> a.tolist()
-0.2005603164434433
>>> print(a.tolist())
-0.2005603164434433
>>> t = a.tolist()
>>> t
-0.2005603164434433
>>> type(t)
<class 'float'>
>>> type(1.3)
<class 'float'>
>>>
>>> a = torch.randn(()).cuda()
>>> a
tensor(-0.4050, device='cuda:0')
>>> a.tolist()
-0.40504276752471924
>>> print(a.tolist())
-0.40504276752471924
>>> t = a.tolist()
>>> t
-0.40504276752471924
>>> type(t)
<class 'float'>
>>> type(3.14)
<class 'float'>
>>>
>>>
>>> a.shape
torch.Size([])
>>> a.item()
-0.40504276752471924
>>>
>>> d = a.item()
>>> a
tensor(-0.4050, device='cuda:0')
>>> d
-0.40504276752471924
>>> type(d)
<class 'float'>
>>> type(3.14)
<class 'float'>
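
As the session above shows, a CUDA tensor is copied to the CPU before conversion, and a 0-dimensional tensor comes back as a single Python float, exactly like item(). As a closing sketch (my own addition, not from the original session; the variable names are assumptions), rebuilding a tensor from the list with torch.tensor() reproduces the original float32 values exactly, because float32 -> Python float (float64) -> float32 is a lossless round trip:

import torch

a = torch.randn(2, 2)
lst = a.tolist()                 # nested Python list of floats

# Rebuild a tensor from the list; the default dtype (float32) matches the
# original tensor, so the values survive the round trip unchanged.
b = torch.tensor(lst)
print(torch.equal(a, b))         # True

# A 0-dimensional tensor converts to a plain Python float, same as item():
s = torch.randn(())
print(s.tolist() == s.item())    # True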


Reposted from blog.csdn.net/m0_46653437/article/details/112914731