TensorFlow study notes - a preliminary understanding of TensorFlow through the LeNet-5 model

# LeNet-5 learning record

This was my first contact with TensorFlow; these are brief study notes based on a senior's lecture notes (https://blog.csdn.net/LLyj_/article/details/88933773) and another senior's explanation.

1. LeNet-5

1. INPUT layer
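
The notes never show how the input is fed in; here is a minimal sketch of what it presumably looks like (the names x, y_, batch_size, and input_tensor are assumptions; a fixed batch size is consistent with the later use of shape[0] in the S4 layer). MNIST images enter as 28 * 28 * 1 tensors:

    import tensorflow as tf

    batch_size = 100  # assumed; S4 below reads the static batch dimension
    # MNIST images as 28x28 grayscale tensors, labels as 10-way one-hot vectors
    x = tf.placeholder(tf.float32, [batch_size, 28, 28, 1], name="x-input")
    y_ = tf.placeholder(tf.float32, [batch_size, 10], name="y-input")
    input_tensor = x  # fed into the C1 convolution below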

2. C1 layer
# Create the first convolutional layer
# The number of parameters is 5 * 5 * 1 * 32: each 5 * 5 * 1 kernel produces one channel, so 32 such kernels produce 32 channels
# Because padding="SAME" keeps the spatial size unchanged, the resulting feature map is 28 * 28 * 32
# The activation function adds nonlinearity to the network model

    with tf.variable_scope("C1-conv",reuse=resuse):
    	# tf.get_variable共享变量
    	# [5, 5, 1, 32]卷积核大小为5×5×1,有32个
    	# stddev正太分布的标准差
        conv1_weights = tf.get_variable("weight", [5, 5, 1, 32],
                             initializer=tf.truncated_normal_initializer(stddev=0.1))	
        # tf.constant_initializer初始化为常数,这个非常有用,通常偏置项就是用它初始化的
        conv1_biases = tf.get_variable("bias", [32], initializer=tf.constant_initializer(0.0))
        # strides:卷积时在图像每一维的步长,这是一个一维的向量,长度4
        # padding=’SAME’,表示padding后卷积的图与原图尺寸一致,激活函数relu()
        conv1 = tf.nn.conv2d(input_tensor, conv1_weights, strides=[1, 1, 1, 1], padding="SAME")
        relu1 = tf.nn.relu(tf.nn.bias_add(conv1, conv1_biases))
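
As a quick sanity check of the shape arithmetic above (illustrative only, assuming the 28 * 28 * 1 input sketched in the INPUT layer):

    # 5*5*1*32 = 800 weights plus 32 biases; stride 1 with padding="SAME"
    # preserves the 28x28 spatial size while expanding to 32 channels.
    print(conv1_weights.get_shape().as_list())  # [5, 5, 1, 32]
    print(relu1.get_shape().as_list())          # [100, 28, 28, 32]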

3. S2 layer
# Create the first pooling layer
# 2 * 2 max pooling halves the spatial size of the image
# After pooling, the output is 14 * 14 * 32

    # tf.name_scope mainly makes it easier to manage parameter names.
    # It is used together with tf.Variable() and simplifies naming.
    with tf.name_scope("S2-max_pool"):
        # ksize: the pooling window size, a 4-D vector, usually [1, height, width, 1];
        # we do not want to pool over the batch or channel dimensions, so those are set to 1
        # strides: the window stride along each dimension, usually [1, stride, stride, 1]
        pool1 = tf.nn.max_pool(relu1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME")
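
With padding="SAME", the output spatial size of a layer is ceil(input_size / stride), which is why 2 * 2 pooling with stride 2 halves 28 down to 14 (a small illustration, assuming the shapes above):

    import math

    # SAME padding: output spatial size = ceil(input / stride)
    print(math.ceil(28 / 2))            # 14
    print(pool1.get_shape().as_list())  # [100, 14, 14, 32]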

4. C3 layer
# Create the second convolutional layer
# The operation is the same as in the first convolutional layer
# 64 bias terms are initialized here
# The second convolution uses 64 kernels
# The resulting feature map is 14 * 14 * 64

    with tf.variable_scope("C3-conv",reuse=resuse):
        conv2_weights = tf.get_variable("weight", [5, 5, 32, 64],
                                     initializer=tf.truncated_normal_initializer(stddev=0.1))
        conv2_biases = tf.get_variable("bias", [64], initializer=tf.constant_initializer(0.0))
        conv2 = tf.nn.conv2d(pool1, conv2_weights, strides=[1, 1, 1, 1], padding="SAME")
        relu2 = tf.nn.relu(tf.nn.bias_add(conv2, conv2_biases))
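
The same kind of check for C3 (illustrative only):

    # 5*5*32*64 = 51,200 weights plus 64 biases; the spatial size stays
    # 14x14 under padding="SAME", giving a 14x14x64 feature map.
    print(relu2.get_shape().as_list())  # [100, 14, 14, 64]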

5. S4 layer
# Create the second pooling layer
# Halving the size again gives 7 * 7 * 64
# nodes = 7 * 7 * 64 = 3136
# shape[0] is the batch size; it can also be set to -1 so that it is inferred automatically
# Flatten the feature map into a vector

    with tf.name_scope("S4-max_pool",):
        pool2 = tf.nn.max_pool(relu2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME")
        # get_shape()函数可以得到这一层维度信息,由于每一层网络的输入输出都是一个batch的矩阵,
        # 所以通过get_shape()函数得到的维度信息会包含这个batch中数据的个数信息
        # shape[1]是长度方向,shape[2]是宽度方向,shape[3]是深度方向
        # shape[0]是一个batch中数据的个数,reshape()函数原型reshape(tensor,shape,name)
        shape = pool2.get_shape().as_list()
        nodes = shape[1] * shape[2] * shape[3]    # nodes=3136
        reshaped = tf.reshape(pool2, [shape[0], nodes])
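
Because shape[0] is read as a static value here, the placeholder must have a fixed batch size; a common alternative (an assumption, not part of the original code) lets TensorFlow infer the batch dimension instead:

    # Equivalent flattening that also works when the batch size is None:
    # -1 tells reshape to infer that dimension from the tensor itself.
    reshaped = tf.reshape(pool2, [-1, nodes])  # nodes = 7 * 7 * 64 = 3136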

6. C5 layer
# Create the first fully connected layer
# The weight matrix here is nodes * 512, that is, 3136 * 512
# The matrix multiplication gives B * 512, where B is the number of examples in the batch

    with tf.variable_scope("layer5-full1",reuse=resuse):
        Full_connection1_weights = tf.get_variable("weight", [nodes, 512],
                                      initializer=tf.truncated_normal_initializer(stddev=0.1))
        # if regularizer != None:
        tf.add_to_collection("losses", regularizer(Full_connection1_weights))
        Full_connection1_biases = tf.get_variable("bias", [512],
                                                     initializer=tf.constant_initializer(0.1))     
        if avg_class ==None:
            Full_1 = tf.nn.relu(tf.matmul(reshaped, Full_connection1_weights) + \
                                                                   Full_connection1_biases)
        else:
            Full_1 = tf.nn.relu(tf.matmul(reshaped, avg_class.average(Full_connection1_weights))
                                                   + avg_class.average(Full_connection1_biases))
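
The avg_class branch evaluates the layer with moving-average (shadow) copies of the variables, typically used for the inference-time forward pass. The notes never show how avg_class is constructed; a typical setup (an assumption, following the standard TensorFlow 1.x MNIST examples) would be:

    # Assumed construction of avg_class: an exponential moving average
    # maintained over all trainable variables during training.
    global_step = tf.Variable(0, trainable=False)
    avg_class = tf.train.ExponentialMovingAverage(0.99, global_step)
    averages_op = avg_class.apply(tf.trainable_variables())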

7. F6 layer
# Create the second fully connected layer

 with tf.variable_scope("layer6-full2",reuse=resuse):
         Full_connection2_weights = tf.get_variable("weight", [512, 10],
                                       initializer=tf.truncated_normal_initializer(stddev=0.1))                                             
         # if regularizer != None:
         tf.add_to_collection("losses", regularizer(Full_connection2_weights))
         Full_connection2_biases = tf.get_variable("bias", [10],
                                                    initializer=tf.constant_initializer(0.1))
         if avg_class == None:
             result = tf.matmul(Full_1, Full_connection2_weights) + Full_connection2_biases
        else:
            result = tf.matmul(Full_1, avg_class.average(Full_connection2_weights)) + \
                                                  avg_class.average(Full_connection2_biases)
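
Likewise, the regularizer passed into these layers is not shown; a plausible definition (an assumption) that matches the "losses" collection usage above:

    # Assumed regularizer: an L2 weight-decay penalty; each layer adds its
    # term to the "losses" collection via the add_to_collection calls above.
    regularizer = tf.contrib.layers.l2_regularizer(0.0001)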

8. OUTPUT layer
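
The 10-dimensional result tensor holds the raw logits of the OUTPUT layer; softmax is normally folded into the loss rather than applied here. A sketch (an assumption, not part of the original notes) of how the logits and the "losses" collection combine into a training loss:

    # Softmax cross-entropy on the logits plus the collected L2 penalties.
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
        logits=result, labels=tf.argmax(y_, 1))
    loss = tf.reduce_mean(cross_entropy) + tf.add_n(tf.get_collection("losses"))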

2. Summary

This is only a preliminary understanding, and more study is needed to fill out the knowledge system. My grasp from this first pass is not deep and many concepts are still vague, so I am recording it here to build on later.

3. Source of the code and the comments in it

https://blog.csdn.net/LLyj_/article/details/88933773
