Differences between tf.pad and torch.nn.functional.pad

In both, the first argument is the input tensor and the second argument specifies where and how much to pad.

Example:

PyTorch:

import torch

t1 = torch.tensor([[ 1,  2,  3],
                   [ 4,  5,  6],
                   [ 7,  8,  9],
                   [10, 11, 12]])

# pad = [left, right, top, bottom]: 1 column left, 2 right, 3 rows on top, 4 below
a = torch.nn.functional.pad(t1, [1, 2, 3, 4])
print(a)



# tensor([[ 0,  0,  0,  0,  0,  0],
#         [ 0,  0,  0,  0,  0,  0],
#         [ 0,  0,  0,  0,  0,  0],
#         [ 0,  1,  2,  3,  0,  0],
#         [ 0,  4,  5,  6,  0,  0],
#         [ 0,  7,  8,  9,  0,  0],
#         [ 0, 10, 11, 12,  0,  0],
#         [ 0,  0,  0,  0,  0,  0],
#         [ 0,  0,  0,  0,  0,  0],
#         [ 0,  0,  0,  0,  0,  0],
#         [ 0,  0,  0,  0,  0,  0]])

As the output shows, PyTorch specifies padding from the innermost dimension outward: first dim=1, then dim=0 (columns first, then rows).
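This inner-first ordering means the flat pad list can be read as (before, after) pairs starting from the last dimension. A minimal pure-Python sketch of the conversion (the helper name `torch_pad_to_pairs` is mine, not part of either API):

```python
def torch_pad_to_pairs(pad, ndim):
    """Convert a PyTorch-style flat pad list into per-dimension
    (before, after) pairs ordered from dim 0 to dim ndim-1.

    PyTorch pads the *last* dimension first: pad[0:2] applies to
    dim ndim-1, pad[2:4] to dim ndim-2, and so on; dimensions not
    covered by the list get (0, 0).
    """
    pairs = [(0, 0)] * ndim
    for i in range(len(pad) // 2):
        pairs[ndim - 1 - i] = (pad[2 * i], pad[2 * i + 1])
    return pairs

# [1, 2, 3, 4] on a 2-D tensor: dim 1 gets (1, 2), dim 0 gets (3, 4)
print(torch_pad_to_pairs([1, 2, 3, 4], 2))  # [(3, 4), (1, 2)]
```

The result is exactly the outer-first spec that tf.pad expects for the same padding.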

TensorFlow:

import tensorflow as tf

input = tf.constant([[ 1,  2,  3],
                     [ 4,  5,  6],
                     [ 7,  8,  9],
                     [10, 11, 12]])

# paddings must have shape [rank, 2]; a flat list like [1, 2, 0, 0] is invalid here.
# (before, after) per axis: 1 row above, 2 below; 3 columns left, 4 right
c = tf.pad(input, [(1, 2), (3, 4)])

# TF 1.x session; in TF 2.x eager mode you can simply print(c)
with tf.Session() as sess:
    print(sess.run(c))
# [[ 0  0  0  0  0  0  0  0  0  0]
#  [ 0  0  0  1  2  3  0  0  0  0]
#  [ 0  0  0  4  5  6  0  0  0  0]
#  [ 0  0  0  7  8  9  0  0  0  0]
#  [ 0  0  0 10 11 12  0  0  0  0]
#  [ 0  0  0  0  0  0  0  0  0  0]
#  [ 0  0  0  0  0  0  0  0  0  0]]

As the output shows, TensorFlow specifies padding from the outermost dimension inward: first axis=0, then axis=1 (rows first, then columns).
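NumPy's np.pad follows the same outer-first, pairs-per-axis convention as tf.pad, so it is a convenient way to check the two examples against each other without a TF session (assuming NumPy is available):

```python
import numpy as np

t = np.array([[ 1,  2,  3],
              [ 4,  5,  6],
              [ 7,  8,  9],
              [10, 11, 12]])

# tf.pad-style spec: axis 0 gets (1, 2), axis 1 gets (3, 4)
tf_like = np.pad(t, [(1, 2), (3, 4)])     # 7 x 10, matches the tf.pad output

# The PyTorch call pad=[1, 2, 3, 4] reads inner-first, so the
# equivalent outer-first spec reverses the pairs: [(3, 4), (1, 2)]
torch_like = np.pad(t, [(3, 4), (1, 2)])  # 11 x 6, matches the F.pad output

print(tf_like.shape, torch_like.shape)  # (7, 10) (11, 6)
```

Reversing the per-dimension pairs is all it takes to translate a padding spec between the two APIs.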


Reposted from blog.csdn.net/djdjdhch/article/details/130612467