TensorFlow: What exactly is TensorFlow's padding? ('valid' vs. 'same', explained from the source code)


When I was designing networks, I would only roughly work out the input and output shapes of each feature map. Padding generally comes in two modes, 'valid' and 'same', and I could never remember exactly how each was computed. Then, while converting a model to Caffe, I discovered that the two frameworks pad differently, so I wanted to pin down once and for all how TensorFlow actually pads. The blog posts online all give different formulas, so I went and read the source code myself.

First, let's run an experiment and call the API directly; this is the most intuitive way to see the difference.

import numpy as np
import tensorflow as tf

test = np.random.random([1, 10, 10, 3])
valid_pad = tf.layers.average_pooling2d(test, pool_size=[3, 3], strides=3,
                                        padding='valid')
same_pad = tf.layers.average_pooling2d(test, pool_size=[3, 3], strides=3,
                                        padding='same')
print(valid_pad.get_shape())
print(same_pad.get_shape())
(1, 3, 3, 3)
(1, 4, 4, 3)

You can see that 'valid' and 'same' really do produce different shapes. So how exactly does each padding mode compute its output length? The following function from TensorFlow's source code has the answer:

def conv_output_length(input_length, filter_size, padding, stride, dilation=1):
  """Determines output length of a convolution given input length.

  Arguments:
      input_length: integer.
      filter_size: integer.
      padding: one of "same", "valid", "full", "causal"
      stride: integer.
      dilation: dilation rate, integer.

  Returns:
      The output length (integer).
  """
  if input_length is None:
    return None
  assert padding in {'same', 'valid', 'full', 'causal'}
  dilated_filter_size = filter_size + (filter_size - 1) * (dilation - 1)
  if padding in ['same', 'causal']:
    output_length = input_length
  elif padding == 'valid':
    output_length = input_length - dilated_filter_size + 1
  elif padding == 'full':
    output_length = input_length + dilated_filter_size - 1
  return (output_length + stride - 1) // stride

When we are not using dilated convolutions (dilation=1), we can treat

  dilated_filter_size = filter_size

For 'same' mode, the computation is:

    output_length = input_length
    res = (output_length + stride - 1) // stride

Plugging input_length=10, filter_size=3, stride=3 from the example above into this formula gives:

    (10 + 3 - 1) // 3 = 4

What about 'valid' mode?

    output_length = input_length - dilated_filter_size + 1
    res = (output_length + stride - 1) // stride

Plugging input_length=10, filter_size=3, stride=3 into this formula gives:

    (10 - 3 + 1 + 3 - 1) // 3 = 3
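The two computations above can be wrapped into a quick standalone check. This is a plain-Python re-statement of `conv_output_length` with dilation fixed at 1 (the helper name `output_length` is mine, not TensorFlow's):

```python
# Standalone re-statement of the output-length formulas (dilation = 1).
def output_length(input_length, filter_size, padding, stride):
    if padding == 'same':
        length = input_length
    elif padding == 'valid':
        length = input_length - filter_size + 1
    return (length + stride - 1) // stride  # ceiling division by the stride

print(output_length(10, 3, 'valid', 3))  # 3
print(output_length(10, 3, 'same', 3))   # 4
```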

Let's try one more example.

test = np.random.random([1, 127, 127,3])
valid_pad = tf.layers.average_pooling2d(test, pool_size=[5, 5], strides=3,
                                        padding='valid')
same_pad = tf.layers.average_pooling2d(test, pool_size=[5, 5], strides=3,
                                        padding='same')
print(valid_pad.get_shape())
print(same_pad.get_shape())
(1, 41, 41, 3)
(1, 43, 43, 3)

You can work through the formulas above yourself; the answers check out.
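Concretely, plugging input_length=127, pool_size=5, stride=3 into the two formulas:

```python
# valid: output_length = 127 - 5 + 1 = 123, then ceiling-divide by stride 3
print((127 - 5 + 1 + 3 - 1) // 3)  # 41
# same: output_length stays 127, then ceiling-divide by stride 3
print((127 + 3 - 1) // 3)          # 43
```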

I believe everything should be grounded in evidence. We can now compute the output dimensions, but how concretely does 'same' place its padding?

test = np.random.randint(1, 5, [1, 5, 5, 1]).astype(np.float32)
print(np.squeeze(test))
valid_pad = tf.layers.average_pooling2d(test, pool_size=[2, 2], strides=2,
                                        padding='valid')
same_pad_1 = tf.layers.average_pooling2d(test, pool_size=[2, 2], strides=2,
                                        padding='same')
same_pad_0 = tf.layers.average_pooling2d(test, pool_size=[2, 2], strides=3,
                                        padding='same')
same_pad_2 = tf.layers.average_pooling2d(test, pool_size=[3, 3], strides=4,
                                        padding='same')
same_pad_3 = tf.layers.average_pooling2d(test, pool_size=[4, 4], strides=4, 
                                        padding='same')
print(valid_pad.get_shape())
print(same_pad_0.get_shape())
print(same_pad_1.get_shape())
print(same_pad_2.get_shape())
print(same_pad_3.get_shape())


sess = tf.Session()
print("valid_pad: ", np.squeeze(sess.run(valid_pad)))
print("same_pad_0: ", np.squeeze(sess.run(same_pad_0)))
print("same_pad_1: ", np.squeeze(sess.run(same_pad_1)))
print("same_pad_2: ", np.squeeze(sess.run(same_pad_2)))
print("same_pad_3: ", np.squeeze(sess.run(same_pad_3)))
[[4. 2. 1. 1. 3.]
 [4. 3. 4. 1. 2.]
 [1. 1. 3. 1. 3.]
 [4. 4. 4. 4. 1.]
 [4. 4. 2. 4. 4.]]
(1, 2, 2, 1)
(1, 2, 2, 1)
(1, 3, 3, 1)
(1, 2, 2, 1)
(1, 2, 2, 1)
valid_pad:  [[3.25 1.75]
 [2.5  3.  ]]
same_pad_0:  [[3.25 1.75]
 [4.   3.25]]
same_pad_1:  [[3.25 1.75 2.5 ]
 [2.5  3.   2.  ]
 [4.   3.   4.  ]]
same_pad_2:  [[3.25 1.75]
 [4.   3.25]]
same_pad_3:  [[2.5555556 1.8333334]
 [3.6666667 3.25     ]]
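One detail worth noticing in `same_pad_3`: its top-left value 2.5555556 is not a sum divided by the full 4x4 window size. The numbers suggest that under 'same' padding, TensorFlow's average pooling divides by the count of real (non-padded) elements in each window. We can check this by hand with the printed input (a sketch reusing the example matrix above):

```python
import numpy as np

# The 5x5 input printed above.
test = np.array([[4., 2., 1., 1., 3.],
                 [4., 3., 4., 1., 2.],
                 [1., 1., 3., 1., 3.],
                 [4., 4., 4., 4., 1.],
                 [4., 4., 2., 4., 4.]])

# same_pad_3: a 4x4 pool with stride 4 on a 5x5 input pads 3 rows/cols in
# total: one on the top/left and two on the bottom/right. The first 4x4
# window therefore covers one padded row, one padded column, and rows/cols
# 0..2 of the real input.
window = test[0:3, 0:3]
print(window.sum() / window.size)  # 2.5555... = 23 / 9, matching same_pad_3[0][0]
```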

Now let me summarize what TensorFlow's padding actually does.

'VALID'

'valid' is simple: it discards. If the kernel no longer fits over the remaining unprocessed part of the feature map, that remainder is dropped. Since this loses information, 'same' is usually the better choice.
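Using the 5x5 example above, this discarding is easy to reproduce with plain NumPy: a manual 2x2, stride-2 average pool that simply slices off the unused last row and column.

```python
import numpy as np

# The 5x5 input printed above; 'valid' with a 2x2 pool and stride 2 only
# ever touches rows/cols 0..3 -- row 4 and column 4 are simply discarded.
test = np.array([[4., 2., 1., 1., 3.],
                 [4., 3., 4., 1., 2.],
                 [1., 1., 3., 1., 3.],
                 [4., 4., 4., 4., 1.],
                 [4., 4., 2., 4., 4.]])

# Split the used 4x4 region into 2x2 blocks and average each block.
out = test[:4, :4].reshape(2, 2, 2, 2).mean(axis=(1, 3))
print(out)  # [[3.25 1.75]
            #  [2.5  3.  ]]  -- identical to valid_pad above
```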

'SAME'

'same' uses every value in the feature map, padding the borders with zeros as needed.
If the pad amount is 1, one column of zeros is added on the right and one row on the bottom.
If the pad amount is 2, one row/column of zeros is added on each of the four sides.
If the pad amount is 3, one row and one column are added on the top and left, and two rows and two columns on the bottom and right.

In general, if the pad amount is even, the zeros are split equally between the two sides; if it is odd, the bottom and right get one more row/column than the top and left.
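This rule can be written down directly. The sketch below mirrors how the SAME pad amounts come out for the examples in this post (the helper name `same_padding_1d` is mine, not TensorFlow's):

```python
# How many zeros SAME padding adds before/after one spatial dimension.
def same_padding_1d(input_size, kernel_size, stride):
    output_size = (input_size + stride - 1) // stride  # ceil(input / stride)
    pad_total = max((output_size - 1) * stride + kernel_size - input_size, 0)
    pad_before = pad_total // 2           # top / left
    pad_after = pad_total - pad_before    # bottom / right gets the extra one
    return pad_before, pad_after

print(same_padding_1d(5, 2, 2))  # (0, 1): one row/column of zeros on bottom/right
print(same_padding_1d(5, 4, 4))  # (1, 2): one on top/left, two on bottom/right
```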

Online blog posts are all someone's personal understanding. If you really want to be sure, try it yourself!


Reposted from blog.csdn.net/Felaim/article/details/104776105