PyTorch: torch.stack

torch.stack([tensor1, tensor2, tensor3, ...], dim=0)

Basic purpose: stacks several tensors along the dimension specified by dim; the example below shows exactly how the stacking works.

import torch

x = torch.rand((2, 3))
y = torch.rand((2, 3))
z = torch.stack([x, y], dim=0)    # new axis inserted at position 0 -> shape [2, 2, 3]
z2 = torch.stack([x, y], dim=1)   # new axis inserted at position 1 -> shape [2, 2, 3]
z3 = torch.stack([x, y], dim=2)   # new axis inserted at position 2 -> shape [2, 3, 2]
print("x:", x)
print("y:", y)
print("z:", z)
print("z2:", z2)
print("z3:", z3)

Output:

x: tensor([[0.2380, 0.3537, 0.8676],
           [0.6321, 0.8611, 0.1720]])

y: tensor([[0.0692, 0.4968, 0.2710],
           [0.2103, 0.6017, 0.3630]])

z: tensor([[[0.2380, 0.3537, 0.8676],
            [0.6321, 0.8611, 0.1720]],

           [[0.0692, 0.4968, 0.2710],
            [0.2103, 0.6017, 0.3630]]])

z2: tensor([[[0.2380, 0.3537, 0.8676],
             [0.0692, 0.4968, 0.2710]],

            [[0.6321, 0.8611, 0.1720],
             [0.2103, 0.6017, 0.3630]]])

z3: tensor([[[0.2380, 0.0692],
             [0.3537, 0.4968],
             [0.8676, 0.2710]],

            [[0.6321, 0.2103],
             [0.8611, 0.6017],
             [0.1720, 0.3630]]])
         

Summary
Suppose the list contains n tensors, each with shape [w, h]. Then (a verification sketch follows the list):

  • dim = 0 simply stacks the n tensors in the list in order; the result has shape [n, w, h].
  • dim = 1 takes the rows of the tensors and regroups them into w matrices, which are then stacked; the result has shape [w, n, h]. Concretely, the first row of every tensor is taken to form tensor1 (shape [n, h]), the second row of every tensor forms tensor2, and so on; finally tensor1, tensor2, ... are stacked.
  • dim = 2 takes, for each row index, that row from every tensor, forms a matrix, and transposes it to obtain tensor_i; finally tensor_1, tensor_2, ..., tensor_w are stacked, and the result has shape [w, h, n]. Concretely, the first row of each tensor is taken to form a matrix which is then transposed to give tensor1, the same is done with the second rows to give tensor2, and so on; finally tensor1, tensor2, ... are all stacked.
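
A minimal sketch (not from the original post) that verifies these shapes, reusing the names x, y, z, z2, z3 from the example above; it also checks that stacking along dim=1 or dim=2 gives the same result as stacking along dim=0 and then permuting the axes:

import torch

n, w, h = 2, 2, 3
x = torch.rand((w, h))
y = torch.rand((w, h))

z  = torch.stack([x, y], dim=0)   # shape [n, w, h]
z2 = torch.stack([x, y], dim=1)   # shape [w, n, h]
z3 = torch.stack([x, y], dim=2)   # shape [w, h, n]

print(z.shape, z2.shape, z3.shape)
# torch.Size([2, 2, 3]) torch.Size([2, 2, 3]) torch.Size([2, 3, 2])

# dim=1 / dim=2 results are just the dim=0 result with its axes permuted
print(torch.equal(z2, z.permute(1, 0, 2)))   # True
print(torch.equal(z3, z.permute(1, 2, 0)))   # True

This permute equivalence is a compact way to remember the dim argument: it only decides where the new axis is inserted, not which values go where.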


Reposted from blog.csdn.net/orangerfun/article/details/103931343