torch.cat and torch.max [notes]

Note: these are personal notes; for the full story, see the official PyTorch docs. With numpy-style interfaces like these, it is worth running a quick standalone test before relying on them, to avoid surprises.

torch.max

torch.max(input, dim, keepdim=False, out=None) -> (Tensor, LongTensor)
The most common use differs from a plain elementwise max: this form returns both the maximum values along the given dimension and their indices, as (values, indices). If keepdim=True, the returned values and indices keep the same number of dimensions as input, with the reduced dimension retained as size 1; if keepdim=False (the default), that dimension is squeezed away, as with torch.squeeze.
Examples below.
1. keepdim=True

import torch

a = torch.LongTensor([[2, 3, 5], [2, 5, 6], [5, 4, 8], [9, 4, 7]])
decoder_scores, decoder_input = torch.max(a, dim=1, keepdim=True)
print(decoder_input); print(decoder_input.size()); print(a.size())
Output:
tensor([[2],
        [2],
        [2],
        [0]])
torch.Size([4, 1])
torch.Size([4, 3])

2. keepdim=False (default)

import torch

a = torch.LongTensor([[2, 3, 5], [2, 5, 6], [5, 4, 8], [9, 4, 7]])
decoder_scores, decoder_input = torch.max(a, dim=1, keepdim=False)
print(decoder_input); print(decoder_input.size()); print(a.size())
Output:
tensor([2, 2, 2, 0])
torch.Size([4])
torch.Size([4, 3])

As the outputs show, this still differs from applying torch.squeeze yourself; it is worth experimenting with both to see the difference.
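To make the relationship concrete, here is a small sketch (using the same tensor `a` as above) showing that squeezing the keepdim=True result along the reduced dimension reproduces the keepdim=False result:

```python
import torch

a = torch.LongTensor([[2, 3, 5], [2, 5, 6], [5, 4, 8], [9, 4, 7]])

# keepdim=True keeps the reduced dim as size 1: shape (4, 1)
values_kept, idx_kept = torch.max(a, dim=1, keepdim=True)
# keepdim=False drops the reduced dim entirely: shape (4,)
values, idx = torch.max(a, dim=1, keepdim=False)

# squeezing dim 1 of the keepdim=True result matches keepdim=False
print(torch.equal(values_kept.squeeze(1), values))  # True
print(torch.equal(idx_kept.squeeze(1), idx))        # True
```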

torch.cat

torch.cat(tensors, dim=0, out=None) → Tensor
Concatenates the given sequence of tensors along the given dimension; all tensors must have the same shape except in the concatenating dimension.
Example:

>>> x = torch.randn(2, 3)
>>> x
tensor([[ 0.6580, -1.0969, -0.4614],
        [-0.1034, -0.5790,  0.1497]])
>>> torch.cat((x, x, x), 0)
tensor([[ 0.6580, -1.0969, -0.4614],
        [-0.1034, -0.5790,  0.1497],
        [ 0.6580, -1.0969, -0.4614],
        [-0.1034, -0.5790,  0.1497],
        [ 0.6580, -1.0969, -0.4614],
        [-0.1034, -0.5790,  0.1497]])
>>> torch.cat((x, x, x), 1)
tensor([[ 0.6580, -1.0969, -0.4614,  0.6580, -1.0969, -0.4614,  0.6580,
         -1.0969, -0.4614],
        [-0.1034, -0.5790,  0.1497, -0.1034, -0.5790,  0.1497, -0.1034,
         -0.5790,  0.1497]])
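The shape arithmetic in the example above can be checked directly: along the chosen dim the sizes add up, and every other dim must match. A minimal sketch:

```python
import torch

x = torch.randn(2, 3)

# cat along dim 0: row counts add up (2 + 2 + 2), columns must match
rows = torch.cat((x, x, x), 0)
print(rows.size())  # torch.Size([6, 3])

# cat along dim 1: column counts add up (3 + 3 + 3), rows must match
cols = torch.cat((x, x, x), 1)
print(cols.size())  # torch.Size([2, 9])
```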

2019-07-26: used in a chatbot's decoding algorithms (greedy-search, beam-search).
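For context, here is a minimal sketch of how the two functions fit together in one greedy-search decoding loop. The `decoder` below is a hypothetical stand-in (just random scores) for a real decoder network, and the token values are assumptions:

```python
import torch

torch.manual_seed(0)
vocab_size, max_len = 10, 5

def decoder(token):
    # stand-in for a real decoder step: one score per vocabulary word
    return torch.randn(1, vocab_size)

decoder_input = torch.LongTensor([[1]])  # hypothetical start-of-sentence token
all_tokens = []
for _ in range(max_len):
    scores = decoder(decoder_input)  # shape (1, vocab_size)
    # torch.max picks the highest-scoring word and feeds its index back in
    decoder_scores, decoder_input = torch.max(scores, dim=1, keepdim=True)
    all_tokens.append(decoder_input)

# torch.cat joins the per-step (1, 1) index tensors into the decoded sequence
decoded = torch.cat(all_tokens, dim=1)
print(decoded.size())  # torch.Size([1, 5])
```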

Reprinted from blog.csdn.net/NewDreamstyle/article/details/97418302