Notes on some functions used in VGGNet-16

Use get_shape()[-1].value to get the length of a tensor's last dimension, and get_shape()[0].value to get the length of its first dimension. Negative indices count from the end, and an index outside the tensor's rank raises an IndexError. For example:

>>> input = tf.random_normal([224, 225, 3], dtype=tf.float32, stddev=1e-1)
>>> input.get_shape()[0].value
224
>>> input.get_shape()[1].value
225
>>> input.get_shape()[2].value
3
>>> input.get_shape()[3].value
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\PengFeihu\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\tensor_shape.py", line 612, in __getitem__
    return self._dims[key]
IndexError: list index out of range
>>> input.get_shape()[-1].value
3
>>> input.get_shape()[-2].value
225
>>> input.get_shape()[-3].value
224
>>> input.get_shape()[-4].value
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\PengFeihu\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\tensor_shape.py", line 612, in __getitem__
    return self._dims[key]
IndexError: list index out of range

The Xavier initializer for conv2d kernels:

tf.contrib.layers.xavier_initializer_conv2d()
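The idea behind Xavier (Glorot) uniform initialization can be sketched in plain NumPy. This is a hypothetical helper for illustration, not the contrib implementation: for a conv2d kernel, weights are drawn uniformly from ±sqrt(6 / (fan_in + fan_out)), where fan_in = kh * kw * in_channels and fan_out = kh * kw * out_channels.

```python
import numpy as np

def xavier_conv2d(kh, kw, in_ch, out_ch, seed=None):
    """Hypothetical NumPy sketch of Xavier uniform init for a conv2d kernel."""
    fan_in = kh * kw * in_ch      # connections feeding each output unit
    fan_out = kh * kw * out_ch    # connections fed by each input unit
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    rng = np.random.default_rng(seed)
    # Kernel layout matches TF conv2d: [height, width, in_channels, out_channels]
    return rng.uniform(-limit, limit, size=(kh, kw, in_ch, out_ch))

kernel = xavier_conv2d(3, 3, 3, 64, seed=0)
print(kernel.shape)  # (3, 3, 3, 64)
```

Keeping the variance balanced between fan_in and fan_out is what lets signals propagate through deep stacks like VGG without exploding or vanishing.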

tf.Variable turns a value into a trainable (mutable) parameter:

tf.Variable(bias, trainable=True, name='b')

tf.name_scope(name) prefixes the names of variables and ops created inside it with name, for example:

with tf.name_scope('V1'):
    a1 = tf.Variable(tf.constant(1.0, shape=[1]), name='a1')
    a2 = tf.Variable(tf.random_normal(shape=[2, 3], mean=0, stddev=1), name='a2')

Printing a1.name and a2.name gives:

V1/a1:0

V1/a2:0

tf.nn.relu_layer is used as follows:

tf.nn.relu_layer(input, kernel, biases, name=scope)

It multiplies the first two arguments as matrices, adds the third via bias_add, and then applies the ReLU non-linear activation.
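In other words, tf.nn.relu_layer(x, kernel, biases) computes relu(matmul(x, kernel) + biases). A NumPy sketch of that computation (an illustrative re-implementation, not the TF code):

```python
import numpy as np

def relu_layer(x, kernel, biases):
    """NumPy sketch of tf.nn.relu_layer: relu(x @ kernel + biases)."""
    return np.maximum(x @ kernel + biases, 0.0)

x = np.array([[1.0, -2.0]])    # one sample with 2 features
W = np.array([[1.0, 0.0],
              [0.0, 1.0]])     # 2x2 identity kernel
b = np.array([0.5, 0.5])
# x @ W + b = [[1.5, -1.5]]; ReLU zeroes the negative entry
print(relu_layer(x, W, b))     # [[1.5 0. ]]
```

Fusing matmul, bias_add, and relu into one op is purely a convenience; the result is identical to calling the three ops separately.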

The max-pooling function is tf.nn.max_pool(input, ksize=[1, kh, kw, 1], strides=[1, dh, dw, 1], padding, name), where input is the input tensor, ksize gives the kh×kw pooling window, strides gives the pooling stride, and padding selects the padding scheme, either 'SAME' or 'VALID'; with 'VALID' the window never extends past the borders.
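To make the window/stride semantics concrete, here is a NumPy sketch of 'VALID' max pooling on a single 2-D channel (a simplified hypothetical helper, not the TF kernel, which also handles batch and channel dimensions):

```python
import numpy as np

def max_pool_valid(x, kh, kw, dh, dw):
    """NumPy sketch of VALID max pooling on a 2-D map: a kh x kw window
    slides with stride (dh, dw) and never crosses the border."""
    h, w = x.shape
    out_h = (h - kh) // dh + 1   # VALID: only fully-contained windows
    out_w = (w - kw) // dw + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = x[i*dh:i*dh+kh, j*dw:j*dw+kw].max()
    return out

x = np.arange(16, dtype=float).reshape(4, 4)
print(max_pool_valid(x, 2, 2, 2, 2))
# [[ 5.  7.]
#  [13. 15.]]
```

VGGNet uses 2×2 windows with stride 2 throughout, which halves each spatial dimension at every pooling stage.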


Reposted from blog.csdn.net/SHNU_PFH/article/details/81535221