TensorFlow: MNIST classification with softmax regression

Today I document how to perform softmax regression on the MNIST dataset with TensorFlow. MNIST is a collection of images of handwritten Arabic digits, as shown below:

[Figure: sample handwritten digit images from the MNIST dataset]

First, download and read the dataset. Here it is stored in the mnist folder under /data, off the Linux root directory:

from tensorflow.examples.tutorials import mnist
mnist_data = mnist.input_data.read_data_sets('/data/mnist', one_hot=True)
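After loading, you can sanity-check the splits by printing their shapes (a quick check; the attribute names follow the object returned by read_data_sets in TF 1.x):

print(mnist_data.train.images.shape)   # (55000, 784): each 28*28 image flattened to 784 pixels
print(mnist_data.train.labels.shape)   # (55000, 10): one-hot labels
print(mnist_data.test.images.shape)    # (10000, 784)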

Next, create the linear model y = x*w + b, where w is the weight matrix and b is the bias. Note that w, b, x, and y here are all multi-dimensional arrays and can be treated as matrices.
y_ holds the true labels; y holds the predicted values (logits).


# x holds the input features: each image is 28*28 = 784 pixels
x = tf.placeholder(tf.float32, [None, 784])

# w is the weight matrix, b the bias vector
w = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.matmul(x, w) + b          # y = x*w + b

# y_ holds the true (one-hot) labels
y_ = tf.placeholder(tf.float32, [None, 10])
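Note that softmax is not applied explicitly here: y holds raw logits, and the softmax_cross_entropy_with_logits op used below applies softmax internally. For intuition, here is a minimal NumPy sketch of what softmax does to a logit vector (an illustrative helper, not part of the model):

import numpy as np

def softmax(z):
    # subtract the max for numerical stability, then normalize the exponentials
    e = np.exp(z - np.max(z))
    return e / e.sum()

print(softmax(np.array([2.0, 1.0, 0.1])))   # -> roughly [0.659, 0.242, 0.099]

The largest logit gets the largest probability, and the outputs sum to 1, so they can be read as class probabilities.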

Next, create the softmax cross-entropy loss and the optimizer that minimizes it. Two loss reductions and two optimizers are shown below; try each variant and compare the final accuracy:

# mean loss over the batch
#cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
# summed loss over the batch
cross_entropy = tf.reduce_sum(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
learn_rate = 0.01
# gradient-descent optimizer
#train_step = tf.train.GradientDescentOptimizer(learn_rate).minimize(cross_entropy)
# adaptive (Adam) optimizer
train_step = tf.train.AdamOptimizer(learn_rate).minimize(cross_entropy)
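For reference, the fused op above is mathematically equivalent to applying softmax and then the cross-entropy formula by hand; a sketch of the manual form (illustration only; the fused op is numerically more stable):

y_softmax = tf.nn.softmax(y)                                 # logits -> probabilities
manual_ce = tf.reduce_sum(-tf.reduce_sum(y_ * tf.log(y_softmax), axis=1))

Also note that reduce_sum scales the loss (and hence the gradients) by the batch size compared with reduce_mean, so the same learning rate behaves differently under the two reductions.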

The learning rate controls the size of each step taken along the gradient when searching for the optimum; minimize() adjusts the variables so that the loss function approaches its minimum.
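For intuition, here is a schematic gradient-descent loop on a one-dimensional toy loss f(w) = (w - 3)^2 (plain Python, not part of the MNIST code):

w = 0.0
learn_rate = 0.1
for _ in range(100):
    grad = 2 * (w - 3)          # derivative of f at the current w
    w = w - learn_rate * grad   # step against the gradient
print(w)                        # converges close to the minimum at w = 3

Optimizers such as AdamOptimizer follow the same idea but adapt the step size per parameter.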

The complete code is as follows:

#!/usr/bin/env python
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt       # used to display images

from tensorflow.examples.tutorials import mnist      # MNIST dataset helpers
# read the MNIST dataset (downloads it on the first run)
mnist_data = mnist.input_data.read_data_sets('/data/mnist', one_hot=True)

# x holds the input features: each image is 28*28 = 784 pixels
x = tf.placeholder(tf.float32, [None, 784])

# w is the weight matrix, b the bias vector
w = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.matmul(x, w) + b

# y_ holds the true (one-hot) labels
y_ = tf.placeholder(tf.float32, [None, 10])

# mean loss over the batch
#cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
# summed loss over the batch
cross_entropy = tf.reduce_sum(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))

learn_rate = 0.01
# gradient-descent optimizer
#train_step = tf.train.GradientDescentOptimizer(learn_rate).minimize(cross_entropy)
# adaptive (Adam) optimizer with learning rate 0.01
train_step = tf.train.AdamOptimizer(learn_rate).minimize(cross_entropy)

sess = tf.InteractiveSession()
tf.global_variables_initializer().run()

# mnist_data.train provides the training data

for _ in range(10000):
    # draw a mini-batch of 100 examples from the training set
    batch_xs, batch_ys = mnist_data.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

# compare predictions with the true labels to compute the accuracy
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# print the accuracy on the test set
print(sess.run(accuracy, feed_dict={x: mnist_data.test.images, y_: mnist_data.test.labels}))

# find the first misclassified test image, print its scores, and display it
for i in range(0, len(mnist_data.test.images)):
    result = sess.run(correct_prediction, feed_dict={x: np.array([mnist_data.test.images[i]]), y_: np.array([mnist_data.test.labels[i]])})
    if not result:
        print('the pre value:', sess.run(y, feed_dict={x: np.array([mnist_data.test.images[i]])}))
        print('the true value:', mnist_data.test.labels[i])
        one_pic_arr = np.reshape(mnist_data.test.images[i], (28, 28))
        pic_matrix = np.matrix(one_pic_arr, dtype="float")
        plt.imshow(pic_matrix)
        plt.show()
        break
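To see the predicted digit rather than the raw logit vector, take the argmax of the prediction (a small addition, assuming the session, tensors, and loop index i defined above):

pred = sess.run(tf.argmax(y, 1), feed_dict={x: np.array([mnist_data.test.images[i]])})
print('predicted digit:', pred[0], 'actual digit:', np.argmax(mnist_data.test.labels[i]))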

Reference:
https://www.cnblogs.com/lizheng114/p/7439556.html

Reprinted from blog.csdn.net/shiluohuashengmi/article/details/81776858