1 Introduction
In TensorFlow, defining a helper function that adds a neural-network layer saves a lot of time when more layers are added later. The usual parameters of a neural layer are its weights, its biases, and its activation function.
2 def add_layer()
First, we need to import the tensorflow module.
Then we define the function add_layer(), which adds a neural layer. It takes four parameters: the input value, the input size, the output size, and the activation function; the activation function defaults to None.
import tensorflow as tf
def add_layer(inputs, in_size, out_size, activation_function=None):
Next, we define the weights and biases.
When generating the initial parameters, a random (normal) distribution works much better than all zeros, so the weights here are a random matrix with in_size rows and out_size columns; the biases start from a small positive value, 0.1.
    Weights = tf.Variable(tf.random_normal([in_size, out_size]))
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
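As a quick shape check, the same initialization can be sketched in plain NumPy (a stand-in for the TensorFlow calls above, not the tf API itself; the sizes 3 and 5 are arbitrary example values):

```python
import numpy as np

in_size, out_size = 3, 5  # example sizes, chosen arbitrarily

# Weights: random normal matrix with in_size rows and out_size columns
Weights = np.random.normal(size=(in_size, out_size))
# biases: a single row of out_size values, all starting at 0.1
biases = np.zeros((1, out_size)) + 0.1

print(Weights.shape)  # (3, 5)
print(biases.shape)   # (1, 5)
```

The bias row broadcasts across every sample when it is later added to the matrix product, which is why a single (1, out_size) row suffices.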
Next, we define Wx_plus_b, the pre-activation value of the layer. Here tf.matmul() performs the matrix multiplication.
    Wx_plus_b = tf.matmul(inputs, Weights) + biases
When activation_function is None, the output is simply the current predicted value, Wx_plus_b; when it is not None, Wx_plus_b is passed to activation_function() to produce the output.
    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b)
    return outputs
Finally, we return the output, and the layer-adding function add_layer() is complete.
Complete function:
import tensorflow as tf

def add_layer(inputs, in_size, out_size, activation_function=None):
    # Weights: random normal matrix, in_size rows by out_size columns
    Weights = tf.Variable(tf.random_normal([in_size, out_size]))
    # biases: one row of out_size values, initialized to 0.1
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    # pre-activation value of the layer
    Wx_plus_b = tf.matmul(inputs, Weights) + biases
    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b)
    return outputs
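To see how the function is typically used to stack layers, here is a NumPy stand-in with the same signature and logic (a sketch only; in TensorFlow you would pass something like tf.nn.relu instead of np.tanh, and the sizes 1, 10, 1 are an arbitrary example network):

```python
import numpy as np

def add_layer(inputs, in_size, out_size, activation_function=None):
    # NumPy analogue of the TensorFlow function above, same structure
    Weights = np.random.normal(size=(in_size, out_size))
    biases = np.zeros((1, out_size)) + 0.1
    Wx_plus_b = inputs @ Weights + biases
    if activation_function is None:
        return Wx_plus_b                      # linear output
    return activation_function(Wx_plus_b)     # nonlinear output

x = np.random.normal(size=(8, 1))             # 8 samples, 1 feature
hidden = add_layer(x, 1, 10, np.tanh)         # hidden layer with tanh
prediction = add_layer(hidden, 10, 1)         # linear output layer
print(prediction.shape)  # (8, 1)
```

Each call adds one layer, and the output of one call becomes the input of the next, which is exactly how the TensorFlow version is meant to be chained when building a network.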