TensorFlow MNIST Handwritten Digit Recognition (Minimal Neural Network Version)

The MNIST handwritten digit dataset is a subset of the NIST dataset and is commonly used as an introductory example for deep learning.

The dataset contains 60,000 training images (to monitor model performance, a portion of the training data is usually split off as validation data, typically 5,000 images) and 10,000 test images. Each MNIST image represents a single digit from 0 to 9, has size $28 \times 28$, and the digit is centered in the image.

(1) Loading the data

TensorFlow provides a helper class for MNIST that automatically downloads the data and parses the raw files into a format that can be fed directly to training and testing. The code is as follows:

from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("datasets/MNIST_data/", one_hot=True)
print("Training data size: ", mnist.train.num_examples)        # 55000
print("Validating data size: ", mnist.validation.num_examples) # 5000
print("Testing data size: ", mnist.test.num_examples)          # 10000

(2) Setting hyperparameters

learning_rate = 0.0001  # learning rate for the optimizer
num_epochs = 10000      # number of training iterations (mini-batch steps)
BATCH_SIZE = 100        # number of training examples per iteration

(3) Forward propagation

Define the neural network: the input layer has $784$ units (one per pixel of a $28 \times 28$ image), the hidden layer has $500$ units, and the output layer has $10$ units (one per digit class). The code is as follows:

(m, n_x) = mnist.train.images.shape  # m = 55000 examples, n_x = 784 features
n_y = mnist.train.labels.shape[1]    # 10 classes
n_1 = 500                            # hidden layer size

X = tf.placeholder(tf.float32, shape=(None, n_x), name="X")  # (batch_size, 784)
Y = tf.placeholder(tf.float32, shape=(None, n_y), name="Y")  # (batch_size, 10)

W1 = tf.get_variable("w1", [n_x, n_1], initializer=tf.contrib.layers.xavier_initializer(seed=1))  # (784, 500)
b1 = tf.get_variable("b1", [1, n_1],   initializer=tf.zeros_initializer())                        # (1, 500)
W2 = tf.get_variable("w2", [n_1, n_y], initializer=tf.contrib.layers.xavier_initializer(seed=1))  # (500, 10)
b2 = tf.get_variable("b2", [1, n_y],   initializer=tf.zeros_initializer())                        # (1, 10)

Z1 = tf.nn.relu(tf.matmul(X, W1) + b1)  # hidden activations, (batch_size, 500)
Z2 = tf.matmul(Z1, W2) + b2             # raw logits, (batch_size, 10)
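
The two lines above compute $Z_1 = \mathrm{ReLU}(XW_1 + b_1)$ and $Z_2 = Z_1 W_2 + b_2$. For intuition about the shapes, here is a minimal NumPy sketch of the same computation on one batch (random weights, purely for illustration):

import numpy as np

batch = np.random.rand(100, 784)                        # one batch of 100 flattened images
w1, b1 = np.random.randn(784, 500), np.zeros((1, 500))
w2, b2 = np.random.randn(500, 10),  np.zeros((1, 10))

z1 = np.maximum(batch @ w1 + b1, 0)                     # ReLU hidden layer
z2 = z1 @ w2 + b2                                       # raw logits
print(z1.shape, z2.shape)                               # (100, 500) (100, 10)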

(4) Defining the loss function

Use the softmax cross-entropy loss. Note that `tf.nn.softmax_cross_entropy_with_logits` applies softmax internally, so the raw logits `Z2` are passed in directly. The code is as follows:

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = Z2, labels = Y))
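
For reference (an illustrative sketch, not in the original post), this op is a numerically stable shorthand for applying softmax to the logits and then averaging the cross-entropy against the one-hot labels:

probs = tf.nn.softmax(Z2)                                                 # (batch_size, 10)
manual_cost = tf.reduce_mean(-tf.reduce_sum(Y * tf.log(probs), axis=1))   # same value as cost above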

(5) Optimizer

Use the Adam optimizer. The code is as follows:

optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
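
`minimize(cost)` bundles two steps: computing gradients of `cost` with respect to every trainable variable, and applying the Adam update rule to them. An equivalent two-step form in the same TF 1.x API (shown for illustration; `train_step` is interchangeable with `optimizer` above):

opt = tf.train.AdamOptimizer(learning_rate)
grads_and_vars = opt.compute_gradients(cost)     # list of (gradient, variable) pairs
train_step = opt.apply_gradients(grads_and_vars) # equivalent to the minimize call above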

(6) Training the model

costs = []  # record the cost every 500 iterations

with tf.Session() as sess:
    tf.global_variables_initializer().run()
    for i in range(num_epochs):
        x, y = mnist.train.next_batch(BATCH_SIZE)
        sess.run(optimizer, feed_dict={X: x, Y: y})

        if i % 500 == 0:
            cost_v = sess.run(cost, feed_dict={X: x, Y: y})
            costs.append(cost_v)
            print(i, cost_v)

    # Calculate accuracy on the full training and test sets
    correct_prediction = tf.equal(tf.argmax(Z2, 1), tf.argmax(Y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    print("Train Accuracy:", accuracy.eval({X: mnist.train.images, Y: mnist.train.labels}))  # Train Accuracy: 0.98807275
    print("Test Accuracy:", accuracy.eval({X: mnist.test.images, Y: mnist.test.labels}))     # Test Accuracy: 0.9756

(7) Model evaluation

Plot the cost values recorded during training. The code is as follows:

plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per 500)')
plt.title("Learning rate = " + str(learning_rate))
plt.show()

The resulting plot is shown below:

[Figure: training cost vs. iterations]

As the plot shows, the loss decreases as the number of iterations grows.

The complete code is as follows:

import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.python.framework import ops
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("datasets/MNIST_data/", one_hot=True)


learning_rate = 0.0001
num_epochs = 10000
BATCH_SIZE = 100

(m, n_x) = mnist.train.images.shape  # m = 55000 examples, n_x = 784 features
n_y = mnist.train.labels.shape[1]    # 10 classes
n_1 = 500                            # hidden layer size
costs = []

ops.reset_default_graph()    # to be able to rerun the model without overwriting tf variables
tf.set_random_seed(1)        # set the graph-level seed after the reset, to keep consistent results

X = tf.placeholder(tf.float32, shape=(None, n_x), name="X")  # (batch_size, 784)
Y = tf.placeholder(tf.float32, shape=(None, n_y), name="Y")  # (batch_size, 10)

W1 = tf.get_variable("w1", [n_x, n_1], initializer=tf.contrib.layers.xavier_initializer(seed=1))  # (784, 500)
b1 = tf.get_variable("b1", [1, n_1],   initializer=tf.zeros_initializer())                        # (1, 500)
W2 = tf.get_variable("w2", [n_1, n_y], initializer=tf.contrib.layers.xavier_initializer(seed=1))  # (500, 10)
b2 = tf.get_variable("b2", [1, n_y],   initializer=tf.zeros_initializer())                        # (1, 10)

Z1 = tf.nn.relu(tf.matmul(X, W1) + b1)  # hidden activations, (batch_size, 500)
Z2 = tf.matmul(Z1, W2) + b2             # raw logits, (batch_size, 10)

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = Z2, labels = Y))
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)

with tf.Session() as sess:
    tf.global_variables_initializer().run()
    for i in range(num_epochs):
        x, y = mnist.train.next_batch(BATCH_SIZE)
        sess.run(optimizer, feed_dict={X: x, Y: y})

        if i % 500 == 0:
            cost_v = sess.run(cost, feed_dict={X: x, Y: y})
            costs.append(cost_v)
            print(i, cost_v)

    # Calculate accuracy on the full training and test sets
    correct_prediction = tf.equal(tf.argmax(Z2, 1), tf.argmax(Y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    print("Train Accuracy:", accuracy.eval({X: mnist.train.images, Y: mnist.train.labels}))  # Train Accuracy: 0.98807275
    print("Test Accuracy:", accuracy.eval({X: mnist.test.images, Y: mnist.test.labels}))     # Test Accuracy: 0.9756
    
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per 500)')
plt.title("Learning rate = " + str(learning_rate))
plt.show()

Reposted from blog.csdn.net/apr15/article/details/106297110