Implementing Depthwise Separable Convolution

The idea behind depthwise separable convolution, in summary: ordinary convolution → depthwise convolution → depthwise separable convolution.

tf.nn.depthwise_conv2d implements the depthwise convolution that depthwise separable convolution is built from, introduced in the following paper:

Xception: Deep Learning with Depthwise Separable Convolutions

The function is defined as follows:

tf.nn.depthwise_conv2d(input, filter, strides, padding, rate=None, name=None, data_format=None)

Setting aside the name parameter (which names the operation) and data_format (which specifies the data layout), five parameters are relevant to the method:

input:
the input to be convolved. It must be a 4-D Tensor with shape [batch, height, width, in_channels], i.e. [number of images in the batch, image height, image width, number of image channels].

filter:
the convolution kernel, as in a CNN. It must be a 4-D Tensor with shape [filter_height, filter_width, in_channels, channel_multiplier], i.e. [kernel height, kernel width, number of input channels, output channel multiplier]. The third dimension must equal the input's in_channels; the fourth dimension is the channel multiplier.

strides:
the sliding stride of the convolution.

padding:
a string, either "SAME" or "VALID"; the two values select different edge-padding schemes.

rate:
the dilation rate; for a detailed explanation of this parameter see [Tensorflow] How does tf.nn.atrous_conv2d implement atrous (dilated) convolution?

The function returns a Tensor of shape [batch, out_height, out_width, in_channels * channel_multiplier]. Note that the number of output channels becomes in_channels * channel_multiplier.
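Before the full walkthrough, a minimal shape check of that claim (a sketch of mine, assuming TensorFlow 1.x):

import tensorflow as tf

x = tf.ones([1, 5, 5, 2])   # [batch, height, width, in_channels]
w = tf.ones([3, 3, 2, 3])   # [filter_height, filter_width, in_channels, channel_multiplier]
y = tf.nn.depthwise_conv2d(x, w, strides=[1, 1, 1, 1], padding='VALID')
print(y.shape)              # (1, 3, 3, 6): 2 in_channels * 3 channel_multiplier = 6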
An example with custom convolution inputs and kernels:

import tensorflow as tf

# Build a [1, 4, 4, 2] input: channel 0 repeats the row [1, 2, 3, 4], channel 1 is all ones.
img1 = tf.constant(value=[[[[1],[2],[3],[4]],[[1],[2],[3],[4]],[[1],[2],[3],[4]],[[1],[2],[3],[4]]]],dtype=tf.float32)
img2 = tf.constant(value=[[[[1],[1],[1],[1]],[[1],[1],[1],[1]],[[1],[1],[1],[1]],[[1],[1],[1],[1]]]],dtype=tf.float32)
img = tf.concat(values=[img1,img2],axis=3)

# Build a [3, 3, 2, 2] kernel from constant 3x3 blocks:
# output channel 0 pairs an all-0 kernel (input ch0) with an all-1 kernel (input ch1);
# output channel 1 pairs an all-2 kernel (input ch0) with an all-3 kernel (input ch1).
filter1 = tf.constant(value=0, shape=[3,3,1,1],dtype=tf.float32)
filter2 = tf.constant(value=1, shape=[3,3,1,1],dtype=tf.float32)
filter3 = tf.constant(value=2, shape=[3,3,1,1],dtype=tf.float32)
filter4 = tf.constant(value=3, shape=[3,3,1,1],dtype=tf.float32)
filter_out1 = tf.concat(values=[filter1,filter2],axis=2)
filter_out2 = tf.concat(values=[filter3,filter4],axis=2)
filter = tf.concat(values=[filter_out1,filter_out2],axis=3)
Do an ordinary convolution:

out_img = tf.nn.conv2d(input=img, filter=filter, strides=[1,1,1,1], padding='VALID')

The original post illustrates the ordinary convolution step by step with a series of diagrams (not reproduced here).
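Since those diagrams are missing, here is a sketch that evaluates the graph in a TF1 session instead; the expected values are my own hand computation from the constants above:

with tf.Session() as sess:
    print(sess.run(out_img))   # expected shape [1, 2, 2, 2]
# Output channel 0 = img ch0 * all-0 kernel + img ch1 * all-1 kernel = 0 + 9 = 9 everywhere.
# Output channel 1 = img ch0 * all-2 kernel + img ch1 * all-3 kernel
#                  = 2*18 + 3*9 = 63 in the left column, 2*27 + 3*9 = 81 in the right column.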

Do a depthwise convolution:

out_img = tf.nn.depthwise_conv2d(input=img, filter=filter, strides=[1,1,1,1], rate=[1,1], padding='VALID')


An intuitive explanation of the depthwise_conv2d convolution: in the ordinary convolution above, for each out_channel of the kernel, its two channels are convolved with the two input channels respectively and summed, producing one channel of the feature map. In the depthwise_conv2d convolution, each in_channel is convolved on its own to produce two out_channels, so the number of channels in the resulting feature map is in_channel * channel_multiplier; this channel_multiplier can be understood as the fourth dimension of the kernel.
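Concretely, the TensorFlow documentation writes the depthwise output as output[b, i, j, k * channel_multiplier + q] = sum_{di, dj} filter[di, dj, k, q] * input[b, strides[1]*i + rate[0]*di, strides[2]*j + rate[1]*dj, k]. Applied to the constants above, this gives the following expectation (again my own hand computation, not official output):

with tf.Session() as sess:
    print(sess.run(out_img))   # expected shape [1, 2, 2, 4]
# Channel k*2+q pairs input channel k with its q-th kernel slice:
# ch 0: img ch0 * all-0 kernel -> 0
# ch 1: img ch0 * all-2 kernel -> 36 (left column) / 54 (right column)
# ch 2: img ch1 * all-1 kernel -> 9
# ch 3: img ch1 * all-3 kernel -> 27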
Do a depthwise separable convolution:

As shown below, a point_filter kernel is defined in addition.

import tensorflow as tf
img1 = tf.constant(value=[[[[1],[2],[3],[4]],[[1],[2],[3],[4]],[[1],[2],[3],[4]],[[1],[2],[3],[4]]]],dtype=tf.float32)
img2 = tf.constant(value=[[[[1],[1],[1],[1]],[[1],[1],[1],[1]],[[1],[1],[1],[1]],[[1],[1],[1],[1]]]],dtype=tf.float32)
img = tf.concat(values=[img1,img2],axis=3)
filter1 = tf.constant(value=0, shape=[3,3,1,1],dtype=tf.float32)
filter2 = tf.constant(value=1, shape=[3,3,1,1],dtype=tf.float32)
filter3 = tf.constant(value=2, shape=[3,3,1,1],dtype=tf.float32)
filter4 = tf.constant(value=3, shape=[3,3,1,1],dtype=tf.float32)
filter_out1 = tf.concat(values=[filter1,filter2],axis=2)
filter_out2 = tf.concat(values=[filter3,filter4],axis=2)
filter = tf.concat(values=[filter_out1,filter_out2],axis=3)

# 1x1 pointwise kernel of all ones: shape [1, 1, 4, 4] = [1, 1, in_channels * channel_multiplier, out_channels]
point_filter = tf.constant(value=1, shape=[1,1,4,4],dtype=tf.float32)
out_img = tf.nn.depthwise_conv2d(input=img, filter=filter, strides=[1,1,1,1], rate=[1,1], padding='VALID')
Doing a depthwise separable convolution = doing a depthwise convolution and then a pointwise convolution, so appending the pointwise-convolution code to the script above completes it, as follows:

out_img = tf.nn.conv2d(input=out_img, filter=point_filter, strides=[1,1,1,1], padding='VALID')

Output:
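The printed result appears only as an image in the original post. Working it out by hand instead: the all-ones 1x1 pointwise kernel simply sums the four depthwise channels, so each of the 4 output channels should be 0 + 36 + 9 + 27 = 72 in the left column and 0 + 54 + 9 + 27 = 90 in the right column, for an output of shape [1, 2, 2, 4].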

To check the result against the official built-in function:

out_img = tf.nn.separable_conv2d(input=img, depthwise_filter=filter, pointwise_filter=point_filter, strides=[1,1,1,1], rate=[1,1], padding='VALID')
Output:
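This output image is likewise omitted, but separable_conv2d performs exactly the depthwise pass followed by the pointwise pass, so it should print the same [1, 2, 2, 4] tensor of 72s and 90s as the two-step version above.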

OK, and we're done.


Origin: www.cnblogs.com/HYWZ36/p/11403230.html