TensorFlow basic models: the KNN (nearest neighbor) algorithm

Following the KNN algorithm theory, this article uses TensorFlow to apply the KNN algorithm to the MNIST training data set.
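The idea in a nutshell: for each test image, compute the L1 (Manhattan) distance to every training image and take the label of the closest one. Below is a minimal NumPy sketch of that logic; the function and variable names are illustrative only, not part of this article's code.

import numpy as np

def knn_predict(x_train, y_train, x):
    # L1 (Manhattan) distance from x to every training sample
    distances = np.sum(np.abs(x_train - x), axis=1)
    # Return the label of the single nearest neighbor (1-NN)
    return y_train[np.argmin(distances)]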

Code:

from __future__ import print_function, division
import numpy as np
import tensorflow as tf

# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("./tmp/data/", one_hot=True)

# Take 5000 training samples and 200 test samples
Xtr, Ytr = mnist.train.next_batch(5000)  # 5000 for training (NN candidates)
Xte, Yte = mnist.test.next_batch(200)    # 200 for testing

# Input images
xtr = tf.placeholder("float", [None, 784])
xte = tf.placeholder("float", [784])

# Nearest neighbor using L1 distance
# Calculate the L1 distance
distance = tf.reduce_sum(tf.abs(tf.add(xtr, tf.negative(xte))), reduction_indices=1)
# Prediction: get the index of the minimum distance (the nearest neighbor)
pred = tf.arg_min(distance, 0)

accuracy = 0.

# Initialize the variables
init = tf.global_variables_initializer()

# Start training
with tf.Session() as sess:
    sess.run(init)

    # Loop over the test set
    for i in range(len(Xte)):
        # Get the nearest neighbor
        nn_index = sess.run(pred, feed_dict={xtr: Xtr, xte: Xte[i, :]})
        # Compare the predicted class with the true class
        print("Test", i, "Prediction:", np.argmax(Ytr[nn_index]),
              "True Class:", np.argmax(Yte[i]))
        # Calculate accuracy
        if np.argmax(Ytr[nn_index]) == np.argmax(Yte[i]):
            accuracy += 1. / len(Xte)
    print("Done!")
    print("Accuracy:", accuracy)
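Note that the code above relies on TensorFlow 1.x APIs (tf.placeholder, tf.Session, tf.arg_min) and on tensorflow.examples.tutorials.mnist, which is no longer shipped with TensorFlow 2.x. A rough sketch of the same 1-NN classifier under TensorFlow 2.x eager execution, assuming the MNIST data is loaded through tf.keras.datasets instead:

import tensorflow as tf

# Load MNIST via Keras and flatten the 28x28 images to 784-dim vectors
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

Xtr, Ytr = x_train[:5000], y_train[:5000]  # 5000 NN candidates
Xte, Yte = x_test[:200], y_test[:200]      # 200 test samples

correct = 0
for i in range(len(Xte)):
    # L1 distance from the test image to every training image
    distance = tf.reduce_sum(tf.abs(Xtr - Xte[i]), axis=1)
    nn_index = int(tf.argmin(distance))    # index of the nearest neighbor
    if Ytr[nn_index] == Yte[i]:
        correct += 1
print("Accuracy:", correct / len(Xte))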








   













