Review of TensorFlow functions used in handwriting recognition

tf.truncated_normal(shape, stddev=0.1)

Outputs random values from a truncated normal distribution.
The generated values follow a normal distribution with the specified mean and standard deviation, except that any value more than two standard deviations from the mean is discarded and redrawn.

Parameters:
shape: a 1-D integer tensor giving the shape of the output tensor.
mean: the mean of the normal distribution.
stddev: the standard deviation of the normal distribution.
dtype: the type of the output.
seed: an integer; when set, the same random values are generated on every run.
name: a name for the operation (optional).
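TensorFlow itself is not required to see what the truncation rule does; a minimal NumPy sketch of the same "discard and redraw beyond two standard deviations" behavior (the `truncated_normal` helper is illustrative, not the TensorFlow implementation):

```python
import numpy as np

def truncated_normal(shape, mean=0.0, stddev=0.1, seed=None):
    """Sketch of tf.truncated_normal's rule: redraw any sample
    that lands more than 2 standard deviations from the mean."""
    rng = np.random.default_rng(seed)
    out = rng.normal(mean, stddev, size=shape)
    bad = np.abs(out - mean) > 2 * stddev
    while bad.any():
        out[bad] = rng.normal(mean, stddev, size=int(bad.sum()))
        bad = np.abs(out - mean) > 2 * stddev
    return out

w = truncated_normal([5, 5], stddev=0.1, seed=0)
print(np.abs(w).max() <= 0.2)  # every value within 2 stddevs of 0
```

Initializing weights this way avoids the occasional extreme value that a plain normal draw can produce.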

tf.random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)

Outputs random values from a normal distribution.

Parameters:
shape: a 1-D integer tensor giving the shape of the output tensor.
mean: the mean of the normal distribution.
stddev: the standard deviation of the normal distribution.
dtype: the type of the output.
seed: an integer; when set, the same random values are generated on every run.
name: a name for the operation (optional).
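The effect of mean, stddev, and seed can be checked with an ordinary NumPy normal draw standing in for tf.random_normal (an illustration of the parameters, not TensorFlow's own generator):

```python
import numpy as np

# Stand-in for tf.random_normal: draw from N(mean, stddev**2).
rng = np.random.default_rng(seed=42)
samples = rng.normal(loc=0.0, scale=1.0, size=(10000,))

# With the same seed, the same values come out on every run.
rng2 = np.random.default_rng(seed=42)
same = np.array_equal(samples, rng2.normal(0.0, 1.0, size=(10000,)))

# Empirical mean and stddev are close to the requested 0.0 and 1.0.
print(samples.mean(), samples.std(), same)
```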

tf.constant(0.1, shape=shape)

constant is TensorFlow's constant node. It is created with the constant method, is a starting node in the computational graph, and carries the incoming data.

Parameters:
value: the initial value; required; must be a tensor-like value (e.g. 1, [1,2,3], or [[1,2,3],[2,2,3]], ...).
dtype: data type; optional; defaults to the type of value, passed as one of TensorFlow's type enums (float32, float64, ...).
shape: data shape; optional; defaults to the shape of value. When set, it must not hold fewer elements than value; it may be of higher rank or dimension than value, and the extra entries are filled with value's last element.
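The fill rule described above can be imitated in plain NumPy (the `constant_fill` helper is hypothetical, shown only to illustrate the padding-with-the-last-element behavior):

```python
import numpy as np

def constant_fill(value, shape):
    """Sketch of tf.constant's rule when shape holds more elements
    than value: the extra entries repeat value's last element."""
    flat = np.array(value).ravel()
    n = int(np.prod(shape))
    padded = np.concatenate([flat, np.full(n - len(flat), flat[-1])])
    return padded.reshape(shape)

print(constant_fill([1, 2, 3], (2, 3)))
# first row is 1 2 3; the remaining entries are filled with 3
```

So tf.constant(0.1, shape=shape) simply produces a tensor of the given shape filled entirely with 0.1, a common bias initialization.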

tf.nn.relu(features, name=None)

Computes the ReLU activation function, max(features, 0): every negative element is replaced with 0, and a tensor with the same shape as features is returned.

Parameters:
features: a Tensor of one of these types: float32, float64, int32, int64, uint8, int16, int8, uint16, half.
name: a name for the operation (optional).
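The element-wise behavior is easy to verify with a NumPy equivalent (np.maximum plays the role of tf.nn.relu here):

```python
import numpy as np

def relu(features):
    """Element-wise max(features, 0): negatives become 0,
    shape is preserved."""
    return np.maximum(features, 0)

x = np.array([[-1.5, 0.0, 2.0],
              [3.0, -0.5, 1.0]])
print(relu(x))
# [[0.  0.  2. ]
#  [3.  0.  1. ]]
```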

tf.nn.conv2d(x, W_conv1, strides=[1, 1, 1, 1], padding='SAME')

Computes a 2-D convolution.

Parameters:
input: the input tensor, with shape [batch, in_height, in_width, in_channels]
filter / convolution kernel: [filter_height, filter_width, in_channels, out_channels]

strides: [1, stride, stride, 1]
Description: the step size of the filter's movement. The first and fourth entries are normally fixed at 1; the second entry is the horizontal step size and the third entry is the vertical step size.

padding: a string, either "SAME" or "VALID"
Description: VALID: as the filter slides across the input matrix by the given stride, any trailing rows and columns that cannot fill a complete window are discarded. SAME: rows of zeros are first added above and below the input matrix, and columns of zeros to its left and right (i.e. the original matrix is wrapped in zeros); then, as the window slides, any shortfall in rows or columns is made up with zeros. For details, see another article: Padding: detailed explanation of SAME and VALID.

Return:
A Tensor, i.e. the feature map, with shape [batch, height, width, channels]

https://www.cnblogs.com/qggg/p/6832342.html
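The shapes and padding rules above can be made concrete with a naive NumPy convolution (a slow reference sketch, not TensorFlow's optimized kernel; the `conv2d` helper here is illustrative):

```python
import numpy as np

def conv2d(x, w, stride=1, padding="SAME"):
    """Naive sketch of tf.nn.conv2d.
    x: [batch, in_h, in_w, in_ch]; w: [f_h, f_w, in_ch, out_ch]."""
    batch, in_h, in_w, in_ch = x.shape
    f_h, f_w, _, out_ch = w.shape
    if padding == "SAME":
        out_h = -(-in_h // stride)  # ceil division
        out_w = -(-in_w // stride)
        pad_h = max((out_h - 1) * stride + f_h - in_h, 0)
        pad_w = max((out_w - 1) * stride + f_w - in_w, 0)
        # wrap the input in zeros so output size is ceil(in / stride)
        x = np.pad(x, ((0, 0),
                       (pad_h // 2, pad_h - pad_h // 2),
                       (pad_w // 2, pad_w - pad_w // 2),
                       (0, 0)))
    else:  # VALID: leftover rows/columns are dropped
        out_h = (in_h - f_h) // stride + 1
        out_w = (in_w - f_w) // stride + 1
    out = np.zeros((batch, out_h, out_w, out_ch))
    for i in range(out_h):
        for j in range(out_w):
            patch = x[:, i*stride:i*stride+f_h, j*stride:j*stride+f_w, :]
            # sum over the window and input channels per output channel
            out[:, i, j, :] = np.tensordot(patch, w,
                                           axes=([1, 2, 3], [0, 1, 2]))
    return out

out = conv2d(np.ones((1, 4, 4, 1)), np.ones((3, 3, 1, 2)), padding="SAME")
print(out.shape)  # (1, 4, 4, 2) -- SAME keeps the spatial size at stride 1
```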

tf.nn.max_pool(value, ksize, strides, padding, name=None)

max pooling

Parameters:
value: the input to be pooled. The pooling layer usually follows a convolutional layer, so the input is typically a feature map, still with a shape like [batch, height, width, channels]
ksize: the size of the pooling window, a four-element vector, usually [1, height, width, 1]; since we do not want to pool over the batch and channels dimensions, those two entries are set to 1
strides: as with convolution, the step size of the window in each dimension, generally [1, stride, stride, 1]
padding: as with convolution, either 'VALID' or 'SAME'

Return:
Tensor, shape is [batch, height, width, channels]

https://www.2cto.com/kf/201612/572952.html
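A small NumPy sketch of 2x2 max pooling with stride 2 (VALID padding assumed; the `max_pool` helper is illustrative, not TensorFlow's kernel):

```python
import numpy as np

def max_pool(value, k=2, stride=2):
    """Naive sketch of tf.nn.max_pool with VALID padding.
    value: [batch, height, width, channels]; k-by-k window."""
    batch, h, w, c = value.shape
    out_h, out_w = (h - k) // stride + 1, (w - k) // stride + 1
    out = np.zeros((batch, out_h, out_w, c))
    for i in range(out_h):
        for j in range(out_w):
            window = value[:, i*stride:i*stride+k, j*stride:j*stride+k, :]
            out[:, i, j, :] = window.max(axis=(1, 2))  # max per channel
    return out

fmap = np.arange(16, dtype=float).reshape(1, 4, 4, 1)
print(max_pool(fmap).squeeze())
# [[ 5.  7.]
#  [13. 15.]]
```

Each 2x2 block of the 4x4 feature map collapses to its largest value, halving the spatial resolution.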

tf.nn.dropout(x, keep_prob, noise_shape=None, seed=None, name=None)

According to the given keep_prob, randomly zeroes elements of the input tensor x and scales the kept elements up by 1/keep_prob, so the expected sum of the output matches the input.

Parameters:
x: the input tensor
keep_prob: float, the probability that each element is kept
noise_shape: a 1-D int32 tensor describing the shape of the randomly generated keep/drop flags
seed: an integer, the random number seed
name: a name for the operation (optional)

https://www.cnblogs.com/qggg/p/6849881.html
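The keep-and-rescale behavior can be sketched in NumPy (the `dropout` helper is illustrative; noise_shape is omitted for simplicity):

```python
import numpy as np

def dropout(x, keep_prob, seed=None):
    """Sketch of tf.nn.dropout: keep each element with probability
    keep_prob and scale kept elements by 1/keep_prob, so the
    expected value of each element is unchanged."""
    rng = np.random.default_rng(seed)
    mask = rng.random(x.shape) < keep_prob
    return np.where(mask, x / keep_prob, 0.0)

x = np.ones((1000,))
y = dropout(x, keep_prob=0.5, seed=0)
# roughly half the elements become 0.0; the rest become 2.0,
# so the mean stays close to 1.0
print(y.mean())
```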

tf.argmax(vector, 1)

Returns the index of the largest value along the given axis (the second argument).

If the input is a 1-D vector, a single index is returned. If it is a matrix, tf.argmax(matrix, 1) returns a vector whose entries are, for each row, the index of that row's maximum element.

https://blog.csdn.net/uestc_c2_403/article/details/72232807
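NumPy's argmax has the same axis semantics and makes the row-wise case concrete (this is how predicted digit labels are read off a batch of softmax outputs):

```python
import numpy as np

# For a matrix, axis=1 returns, per row, the column index of
# that row's largest element -- e.g. the predicted class.
m = np.array([[0.1, 0.7, 0.2],
              [0.9, 0.05, 0.05]])
print(np.argmax(m, axis=1))  # [1 0]

# For a 1-D vector, a single index is returned.
v = np.array([3, 1, 4, 1, 5])
print(np.argmax(v))  # 4
```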
