TensorFlow in Action, Lesson 5 (the MNIST handwritten digit recognition dataset)

Implementing softmax regression for handwritten digit recognition with TensorFlow

Recognizing the MNIST handwritten digits can be described as the "hello world" of machine learning on images.

MNIST is a very simple machine vision dataset. It consists of tens of thousands of 28 * 28 pixel handwritten digit images containing only grayscale values. Our task is to classify these handwritten digits into the 10 classes 0-9.

First, run the following code to load the MNIST handwritten digit dataset:

from tensorflow.examples.tutorials.mnist import input_data
# digit data: 10 classes (0 to 9)
# creates a MNIST_data folder to store the data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

The training set contains 55,000 samples, the test set contains 10,000 samples, and the validation set contains 5,000 samples. Each sample comes with its corresponding label.
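To see these split sizes for yourself, a quick check is shown below (a minimal sketch, assuming mnist was loaded with the code above):

# sanity check on the dataset splits (assumes mnist was loaded as shown above)
print(mnist.train.num_examples)       # 55000 training samples
print(mnist.validation.num_examples)  # 5000 validation samples
print(mnist.test.num_examples)        # 10000 test samples
print(mnist.train.images.shape)       # (55000, 784): each 28*28 image is flattened
print(mnist.train.labels.shape)       # (55000, 10): one-hot labels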

We train the model on the training set, check its results on the validation set to decide when training is finished, and finally evaluate the model's performance on the test set.

Now we are ready to design the algorithm. We use the softmax regression algorithm to train a classification model for handwritten digit recognition. The digits range from 0 to 9, so there are ten categories in total. When predicting a picture, softmax regression estimates a probability for each category and then outputs the digit with the largest estimated probability as the model's prediction.
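As a rough illustration (a hand-worked sketch, not part of the original code), softmax exponentiates each class score and normalizes the results so they sum to 1; the predicted digit is the class with the largest probability:

import numpy as np

# hypothetical class scores (logits) for one image, one score per digit 0-9
scores = np.array([1.0, 2.0, 0.5, 3.0, 0.1, 0.0, 1.5, 0.2, 0.3, 0.4])
probs = np.exp(scores) / np.sum(np.exp(scores))  # softmax: probabilities sum to 1
print(probs.sum())       # ~1.0
print(np.argmax(probs))  # 3 -> the digit with the largest estimated probability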

Note: when dealing with a multi-class classification model, softmax regression is generally used. For example, in a convolutional neural network or a recurrent neural network used for classification, the final layer is also a softmax layer.

 

The loss function we select is the cross entropy. Cross entropy measures how similar the predicted values are to the true values; if they are exactly the same, the cross entropy is zero.

cross_entropy = tf.reduce_mean(-tf.reduce_sum(ys * tf.log(prediction),
reduction_indices=[1])) # loss
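For intuition (a hand-worked example, not from the original post): with a one-hot label, the cross entropy reduces to minus the log of the probability the model assigns to the true class.

import numpy as np

# cross entropy for a single one-hot labelled sample (illustrative values only)
y_true = np.array([0., 0., 1., 0., 0., 0., 0., 0., 0., 0.])  # the true digit is 2
y_pred = np.array([0.05, 0.05, 0.7, 0.02, 0.03, 0.05, 0.02, 0.03, 0.03, 0.02])
loss = -np.sum(y_true * np.log(y_pred))  # = -log(0.7), about 0.357
print(loss)
# a perfect prediction (probability 1.0 on the true class) would give a loss of 0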

 

The training (optimization) method is gradient descent.

train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
sess = tf.Session()
# tf.initialize_all_variables() is deprecated
# use the following instead:
sess.run(tf.global_variables_initializer())
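With the optimizer defined and the variables initialized, each training step feeds one mini-batch into the input placeholders (xs and ys, defined in the complete code below) via feed_dict; this is the same pattern used in the training loop further down:

# one training step on a mini-batch of 100 images (mirrors the loop in the full code)
batch_xs, batch_ys = mnist.train.next_batch(100)
sess.run(train_step, feed_dict={xs: batch_xs, ys: batch_ys})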

 

Complete code:

# classification learning

import tensorflow as tf

from tensorflow.examples.tutorials.mnist import input_data
# digit data: 10 classes (0 to 9)
# creates a MNIST_data folder to store the data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

def add_layer(inputs, in_size, out_size, activation_function=None):
    # adds one fully connected layer and returns its output

    Weights = tf.Variable(tf.random_normal([in_size, out_size]))
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    Wx_plus_b = tf.matmul(inputs, Weights) + biases

    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b)
    return outputs

def compute_accuracy(v_xs,v_ys):
    global prediction
    y_pre = sess.run(prediction,feed_dict={xs:v_xs})
    correct_prediction = tf.equal(tf.argmax(y_pre,1),tf.argmax(v_ys,1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    result = sess.run(accuracy, feed_dict={xs: v_xs, ys: v_ys})
    return result


# define placeholders for the inputs to the network
xs = tf.placeholder(tf.float32, [None, 784])  # None: any number of samples; each image is 28*28 = 784 pixels
ys = tf.placeholder(tf.float32, [None, 10])

# add the output layer
# the activation function is softmax
prediction = add_layer(xs, 784, 10, activation_function=tf.nn.softmax)

# the error between the real data and the prediction
'''the loss function (the optimization objective) is the cross entropy
cross entropy measures how similar the predicted values are to the true values
if they are identical, the cross entropy is zero
'''
cross_entropy = tf.reduce_mean(-tf.reduce_sum(ys * tf.log(prediction),
                                              reduction_indices=[1]))  # loss
# gradient descent
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)

for i in range(2000):
    # take only 100 images per batch
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={xs: batch_xs, ys: batch_ys})
    if i % 50 == 0:
        print(compute_accuracy(mnist.test.images, mnist.test.labels))

 

Output:

 

Origin www.cnblogs.com/baobaotql/p/11281537.html