TensorFlow Study Notes (3) - MNIST (Softmax Regression)

Copyright notice: This blog is for reference only; if you have comments, please leave them below, and include a link when reposting. Thanks! https://blog.csdn.net/u010105243/article/details/76559740
# -*- coding: utf-8 -*-
# @Time    : 17-8-1 下午8:22
# @Author  : 未来战士biubiu!!
# @FileName: 3-nearest neighbor.py
# @Software: PyCharm Community Edition
# @Blog    :http://blog.csdn.net/u010105243/article/

import tensorflow as tf

sess = tf.InteractiveSession()

# import MNIST data
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data", one_hot=True)

# Build the graph. x and y_ here are not concrete values but placeholders, fed with data when the graph is run
x = tf.placeholder(tf.float32, [None, 784])  # [None, 28*28]: the first dimension is the batch size, the second the flattened image size
y_ = tf.placeholder(tf.float32, [None, 10])  # 10 output classes

W = tf.Variable(tf.zeros([784, 10]))  # 784x10 matrix: 784 input features, 10 outputs
b = tf.Variable(tf.zeros([10]))  # 10 classes

# Initialize all variables
sess.run(tf.global_variables_initializer())
# Prediction (logits)
y = tf.matmul(x, W) + b
# Define the loss function
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
# Train with gradient descent
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
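tf.nn.softmax_cross_entropy_with_logits applies softmax to the logits and then computes cross entropy against the one-hot labels. A minimal NumPy sketch of that computation (illustration only, with made-up logits; TensorFlow's actual implementation is a fused, more numerically careful op):

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    # Shift logits by their row max for numerical stability before exponentiating
    shifted = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    # Cross entropy: -sum(labels * log(probs)) per example, averaged over the batch
    return -(labels * np.log(probs)).sum(axis=1).mean()

logits = np.array([[2.0, 1.0, 0.1]])      # hypothetical logits for one example
labels = np.array([[1.0, 0.0, 0.0]])      # one-hot label: true class is 0
loss = softmax_cross_entropy(logits, labels)
print(loss)  # about 0.417
```

Because softmax is applied inside the loss op, y itself stays as raw logits; applying tf.nn.softmax to y before this loss would be a bug.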

for _ in range(1000):
    batch = mnist.train.next_batch(100)
    # Run one training step on this batch
    train_step.run(feed_dict={x: batch[0], y_: batch[1]})

# Check which predictions match the true labels
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
# Compute the accuracy on the test set (build these ops once, outside the loop)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
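The accuracy computation is just an argmax comparison followed by an average. The same logic in plain NumPy, with toy values (hypothetical predictions standing in for y and y_, not real model output):

```python
import numpy as np

# Toy softmax outputs and one-hot labels (made-up values)
predictions = np.array([[0.1, 0.8, 0.1],
                        [0.7, 0.2, 0.1],
                        [0.5, 0.3, 0.2]])
labels = np.array([[0.0, 1.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0]])

# tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)): compare predicted class vs. true class per row
correct = np.argmax(predictions, axis=1) == np.argmax(labels, axis=1)
# tf.reduce_mean(tf.cast(..., tf.float32)): booleans become 0/1, then average
accuracy = correct.astype(np.float32).mean()
print(accuracy)  # two of the three rows match, so accuracy is 2/3
```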

References:
https://www.tensorflow.org/get_started/mnist/pros
