TensorFlow Implementation of MNIST Handwritten Digit Recognition

I recently started working with TensorFlow, and it took some getting used to, since TensorFlow is built around a computation-graph style of programming. The best way to learn a new framework, though, is constant practice, so I picked the classic entry-level deep learning task, MNIST handwritten digit recognition, as an exercise. Debugging the program was quite painful; as of now I still have not gotten training with the moving average model to run, and will have to work that out as I accumulate experience. With that said, on to the main content. The program is split into three independent parts: the forward propagation process together with the definition of the network structure; the training process, including saving the model parameters; and the evaluation process, which loads the trained parameters and validates them on the test set.

Each MNIST image is 28*28 pixels, so I define a three-layer fully connected network with 500 nodes in the hidden layer. During training, the weights are regularized to prevent overfitting, and the optimizer uses an exponentially decaying learning rate that adjusts itself automatically. The source code is attached below; for anything unclear, it is best to search Baidu and work it out yourself.
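
For reference, here is what the exponentially decaying learning rate actually computes. This is a minimal plain-Python sketch of tf.train.exponential_decay in its default (non-staircase) mode, using the same constants as the training script below; it is only for illustration and is not part of the three program files:

# Plain-Python sketch of tf.train.exponential_decay (staircase=False):
# decayed_lr = LEARNING_RATE_BASE * LEARNING_RATE_DECAY ** (global_step / DECAY_STEPS)
LEARNING_RATE_BASE = 0.8    # initial learning rate
LEARNING_RATE_DECAY = 0.99  # decay factor applied once per DECAY_STEPS steps
DECAY_STEPS = 550           # one pass over the 55000 training images at batch size 100


def decayed_learning_rate(global_step):
	return LEARNING_RATE_BASE * LEARNING_RATE_DECAY ** (global_step / DECAY_STEPS)


# e.g. after 5500 steps (10 epochs): 0.8 * 0.99 ** 10 ≈ 0.72
print(decayed_learning_rate(5500))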

During training, the model parameters are saved every 500 steps. (The screenshot of the most recently saved checkpoint files is omitted here.)


The training and evaluation scripts run at the same time. (Console screenshots of the two processes are omitted here.)



Since training was configured for 50,000 steps, which takes too long, I terminated the run at around the 30,000-step mark; by then the accuracy on the validation set had already reached 97%, and the final accuracy on the test set was 97.6%.

Forward propagation code:

#!/usr/bin/env python
# -*- coding:utf-8 -*-
# Author: wsw
# Defines the network structure parameters and the forward propagation process
import tensorflow as tf

# Network structure parameters
INPUT_NODES = 784
LAYER1_NODES = 500
OUTPUT_NODES = 10


# Create a weight variable and add its regularization term
# to the custom 'losses' collection
def get_weights_variable(shape, regularizer):
	"""

	:param shape: shape of the weight variable
	:param regularizer: regularization function
	:return: weight variable
	"""
	weights = tf.get_variable('weights', shape, initializer=tf.truncated_normal_initializer(stddev=1))
	if regularizer is not None:
		# Add the regularization loss of the weights to the custom collection
		tf.add_to_collection('losses', regularizer(weights))
	return weights


# Forward propagation
def forwardpropogation(input_tensor, regularizer):
	"""

	:param input_tensor: input data tensor
	:param regularizer: regularization function
	:return: result of the forward pass
	"""
	with tf.variable_scope('layer1'):
		weights = get_weights_variable(shape=[INPUT_NODES, LAYER1_NODES], regularizer=regularizer)
		biases = tf.get_variable('biases', [LAYER1_NODES], initializer=tf.constant_initializer(0.1))
		layer1_out = tf.nn.relu(tf.matmul(input_tensor, weights) + biases)
	with tf.variable_scope('layer2'):
		weights = get_weights_variable(shape=[LAYER1_NODES, OUTPUT_NODES], regularizer=regularizer)
		biases = tf.get_variable('biases', [OUTPUT_NODES], initializer=tf.constant_initializer(0.2))
		layer2_out = tf.matmul(layer1_out, weights) + biases

	return layer2_out
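
A note on the design above: because every variable is created through tf.variable_scope and tf.get_variable, the evaluation script can rebuild the exact same network simply by calling forwardpropogation again, with variables matched by name instead of being passed around. A minimal sketch of the reuse mechanism (for illustration only, not part of the three program files):

import tensorflow as tf
import best_mnist_recognize.mnist_inference as inference

x = tf.placeholder(tf.float32, [None, inference.INPUT_NODES])
# The first call creates 'layer1/weights', 'layer1/biases', 'layer2/weights', ...
y1 = inference.forwardpropogation(x, regularizer=None)
# With reuse=True, a second call fetches the SAME variables by name;
# without it, tf.get_variable would complain that the variables already exist.
with tf.variable_scope(tf.get_variable_scope(), reuse=True):
	y2 = inference.forwardpropogation(x, regularizer=None)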

Training code:

#!/usr/bin/env python
# -*- coding:utf-8 -*-
# Author: wsw
# Defines the training process
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
import os
# Load the network structure parameters and the forward propagation function
import best_mnist_recognize.mnist_inference as inference

# Training configuration parameters
LEARNING_RATE_BASE = 0.8
LEARNING_RATE_DECAY = 0.99
# Steps per epoch: 55000 training images / batch size 100
DECAY_STEPS = 550
BATCH_SIZE = 100
MOVING_AVEARGE_DECAY = 0.999
# Regularization coefficient
LAMBDA = 0.001
EPOCHS = 100
# Directory for saving the model parameters
MODEL_SAVE_PATH = './model_parameter/'
MODEL_NAME = 'best_mnist_recognize.ckpt'
# Data directory
data_path = 'E:/MNIST_data'


# Training function
def train(mnist):
	# Input and output placeholders
	x = tf.placeholder(tf.float32, shape=[None, inference.INPUT_NODES], name='x-input')
	y_ = tf.placeholder(tf.float32, shape=[None, inference.OUTPUT_NODES], name='y-input')
	# Variable recording the current training step (not trainable)
	gloabl_step = tf.Variable(0, trainable=False)
	# Moving average class, applied to all trainable variables
	var_average = tf.train.ExponentialMovingAverage(MOVING_AVEARGE_DECAY, gloabl_step)
	var_average_op = var_average.apply(tf.trainable_variables())
	# Exponentially decaying learning rate
	learning_rate = tf.train.exponential_decay(LEARNING_RATE_BASE, gloabl_step,
	                                           DECAY_STEPS, LEARNING_RATE_DECAY)
	# L2 regularizer
	regularizer = tf.contrib.layers.l2_regularizer(LAMBDA)
	# Forward pass
	output = inference.forwardpropogation(x, regularizer)
	# Cross-entropy loss; sparse_softmax expects class indices, hence tf.argmax
	sum_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=output, labels=tf.argmax(y_, 1))
	cross_entropy = tf.reduce_mean(sum_loss)
	# Total loss = cross entropy + regularization terms; tf.add_n sums a list of tensors
	loss = cross_entropy + tf.add_n(tf.get_collection('losses'))
	# Optimizer; passing gloabl_step makes minimize() increment it on every step
	optimiset = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, gloabl_step)
	# Group the operations that must run together on each step
	train_op = tf.group(optimiset, var_average_op)
	# Saver for persisting the model
	saver = tf.train.Saver()
	with tf.Session() as sess:
		# Initialize all variables
		tf.global_variables_initializer().run()
		# Train for EPOCHS epochs
		for i in range(EPOCHS):
			# DECAY_STEPS batches per epoch
			for j in range(DECAY_STEPS):
				xtrain, ytrain = mnist.train.next_batch(BATCH_SIZE)
				_, loss_value, step = sess.run([train_op, loss, gloabl_step],
				                               feed_dict={x: xtrain, y_: ytrain})
				# Every 500 steps, report the loss on the current batch
				# and save a snapshot of the model parameters
				if step % 500 == 0:
					print('After training {0:d} step loss value '
					      'on training batch is {1:g}'.format(step, loss_value))
					print('!!! Starting Save Model Parameters !!!')
					# With global_step set, the saved files are suffixed with
					# the step number, e.g. ckpt-500
					saver.save(sess, os.path.join(MODEL_SAVE_PATH, MODEL_NAME), global_step=gloabl_step)
					print('!!! Model Parameters Save Finished !!!')


# Main function
def main():
	mnist = input_data.read_data_sets(data_path, one_hot=True)
	train(mnist)


if __name__ == "__main__":
	main()
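
A side note on train_op = tf.group(optimiset, var_average_op): it simply bundles the two operations so that a single sess.run executes both the gradient update and the moving-average update. An equivalent formulation often seen in TensorFlow tutorials uses tf.control_dependencies; a minimal sketch, reusing the optimiset and var_average_op ops defined above:

# Equivalent to train_op = tf.group(optimiset, var_average_op):
# the no-op can only run after both dependencies have executed
with tf.control_dependencies([optimiset, var_average_op]):
	train_op = tf.no_op(name='train')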

Validation and test code:

#!/usr/bin/env python
# -*- coding:utf-8 -*-
# Author: wsw
# Defines the evaluation process of the network
import tensorflow as tf
import time
from tensorflow.examples.tutorials.mnist import input_data
import re
# Load the forward propagation function and the training configuration
from best_mnist_recognize import mnist_inference as inference
from best_mnist_recognize import mnist_train as train


# Evaluate performance on the validation set
def evaluate(mnist):
	# Extracts the step number from a checkpoint filename, e.g. '...ckpt-500'
	pattern = re.compile(r'-(\d+)')
	with tf.Graph().as_default():
		# Input and output placeholders
		x = tf.placeholder(tf.float32, shape=[None, inference.INPUT_NODES], name='x-input')
		y_ = tf.placeholder(tf.float32, shape=[None, inference.OUTPUT_NODES], name='y-input')
		# Validation data
		validation_feed = {x: mnist.validation.images, y_: mnist.validation.labels}
		# Test data
		test_feed = {x: mnist.test.images, y_: mnist.test.labels}
		# Forward pass; no regularization is needed at test time
		output = inference.forwardpropogation(x, regularizer=None)
		# Compute accuracy from the forward pass result
		correct_prediction = tf.equal(tf.argmax(output, 1), tf.argmax(y_, 1))
		accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

		# # Moving average class
		# var_average = tf.train.ExponentialMovingAverage(train.MOVING_AVEARGE_DECAY)
		# # Map shadow variable names to the variables to restore
		# var_shadow_map = var_average.variables_to_restore()
		# print(var_shadow_map)
		# print('---------------------------------------------------------------------')
		# Saver for restoring the model
		saver = tf.train.Saver()
		# Repeatedly check accuracy as new checkpoints appear
		while True:
			with tf.Session() as sess:
				# tf.train.get_checkpoint_state finds the newest checkpoint
				newest = tf.train.get_checkpoint_state(train.MODEL_SAVE_PATH)
				if newest and newest.model_checkpoint_path:
					# Path of the newest checkpoint
					path = newest.model_checkpoint_path
					# Restore the model
					saver.restore(sess, path)
					# Extract the training step the checkpoint corresponds to
					step = re.findall(pattern, path)
					accuracy_score = sess.run(accuracy, feed_dict=validation_feed)
					print('After {0:d} training steps '
					      'validation accuracy is {1:g}'.format(int(step[0]), accuracy_score))
				else:
					print('No checkpoint file is found')
					return
				# Once training has finished, report accuracy on the test set
				# and stop (>= rather than >, since the final checkpoint is
				# saved exactly at step EPOCHS * DECAY_STEPS)
				if int(step[0]) >= train.EPOCHS * train.DECAY_STEPS:
					test_acc = sess.run(accuracy, feed_dict=test_feed)
					print('Accuracy on test set is {0:g}'.format(test_acc))
					break
				# Reload the newest checkpoint every 10 seconds
				time.sleep(10)


def main():
	mnist = input_data.read_data_sets(train.data_path, one_hot=True)
	evaluate(mnist)


if __name__ == '__main__':
	main()
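
About the moving-average restore that I have not gotten to run (the block commented out in evaluate above): the usual pattern is to hand the dictionary returned by variables_to_restore() straight to the Saver, so that each variable is restored from its shadow (averaged) value rather than its raw value. A minimal sketch of that pattern; I have not verified it against this training script, so treat it as a direction to try rather than a confirmed fix:

# Restore each variable from its moving-average shadow value
var_average = tf.train.ExponentialMovingAverage(train.MOVING_AVEARGE_DECAY)
# Maps shadow names, e.g. 'layer1/weights/ExponentialMovingAverage',
# to the corresponding variables in the current graph
var_shadow_map = var_average.variables_to_restore()
saver = tf.train.Saver(var_shadow_map)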