The ten-class logistic-regression task from the previous post left me wanting more, so here I design two-layer and three-layer neural networks and compare them on the same 10-class MNIST problem.
Below is a sketch of the three-layer network's computation graph (the original hand-drawn figure is omitted here; its topology, recoverable from the code, is input x (784) → hidden layer 1 (256) → hidden layer 2 (128) → hidden layer 3 (64) → output scores (10)):
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import input_data  # MNIST loader helper from the TensorFlow tutorials

mnist = input_data.read_data_sets('data/', one_hot=True)

# NETWORK TOPOLOGIES
n_hidden_1 = 256  # 256 neurons in the first hidden layer
n_hidden_2 = 128  # 128 neurons in the second hidden layer
n_hidden_3 = 64   # 64 neurons in the third hidden layer
n_input    = 784  # each input is a flattened 28*28 grayscale image
n_classes  = 10   # output: a score for each of the 10 digits

# INPUTS AND OUTPUTS
x = tf.placeholder("float", [None, n_input])    # placeholder, fed at session time, same as before
y = tf.placeholder("float", [None, n_classes])  # placeholder, fed at session time, same as before

# NETWORK PARAMETERS
stddev = 0.1
weights = {
    'w1': tf.Variable(tf.random_normal([n_input, n_hidden_1], stddev=stddev)),
    'w2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2], stddev=stddev)),
    'w3': tf.Variable(tf.random_normal([n_hidden_2, n_hidden_3], stddev=stddev)),
    'out': tf.Variable(tf.random_normal([n_hidden_3, n_classes], stddev=stddev))
}  # weight matrices connecting consecutive layers; shapes must match the layer sizes
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'b3': tf.Variable(tf.random_normal([n_hidden_3])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}  # a bias vector for each layer
print("NETWORK READY")

def multilayer_perceptron(_X, _weights, _biases):
    # forward pass: matrix multiply plus bias, then sigmoid, layer by layer
    layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(_X, _weights['w1']), _biases['b1']))
    layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, _weights['w2']), _biases['b2']))
    layer_3 = tf.nn.sigmoid(tf.add(tf.matmul(layer_2, _weights['w3']), _biases['b3']))
    return tf.matmul(layer_3, _weights['out']) + _biases['out']  # raw logits, no activation

# PREDICTION
pred = multilayer_perceptron(x, weights, biases)

# LOSS AND OPTIMIZER
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
optm = tf.train.GradientDescentOptimizer(learning_rate=0.001).minimize(cost)
corr = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accr = tf.reduce_mean(tf.cast(corr, "float"))  # loss function and optimization/evaluation ops

# INITIALIZER
init = tf.global_variables_initializer()  # variable initializer
print("FUNCTIONS READY")

training_epochs = 20  # the training skeleton below mirrors the earlier logistic-regression example
batch_size = 100
display_step = 4

# LAUNCH THE GRAPH
sess = tf.Session()
sess.run(init)

# OPTIMIZE
for epoch in range(training_epochs):
    avg_cost = 0.
    total_batch = int(mnist.train.num_examples / batch_size)
    # ITERATION
    for i in range(total_batch):
        batch_xs, batch_ys = mnist.train.next_batch(batch_size)
        feeds = {x: batch_xs, y: batch_ys}
        sess.run(optm, feed_dict=feeds)
        avg_cost += sess.run(cost, feed_dict=feeds)
    avg_cost = avg_cost / total_batch
    # DISPLAY
    if (epoch + 1) % display_step == 0:
        print("Epoch: %03d/%03d cost: %.9f" % (epoch, training_epochs, avg_cost))
        feeds = {x: batch_xs, y: batch_ys}
        train_acc = sess.run(accr, feed_dict=feeds)
        print("TRAIN ACCURACY: %.3f" % (train_acc))
        feeds = {x: mnist.test.images, y: mnist.test.labels}
        test_acc = sess.run(accr, feed_dict=feeds)
        print("TEST ACCURACY: %.3f" % (test_acc))
print("OPTIMIZATION FINISHED")

Output of the three-layer network:

Epoch: 003/020 cost: 2.301472056
TRAIN ACCURACY: 0.110
TEST ACCURACY: 0.113
Epoch: 007/020 cost: 2.300291767
TRAIN ACCURACY: 0.060
TEST ACCURACY: 0.113
Epoch: 011/020 cost: 2.299280995
TRAIN ACCURACY: 0.090
TEST ACCURACY: 0.113
Epoch: 015/020 cost: 2.298274851
TRAIN ACCURACY: 0.120
TEST ACCURACY: 0.113
Epoch: 019/020 cost: 2.297222061
TRAIN ACCURACY: 0.030
TEST ACCURACY: 0.113
OPTIMIZATION FINISHED

Now modify the three-layer network into a two-layer one (a minimal sketch of the change follows after the results below).

Output of the two-layer network:

Epoch: 003/020 cost: 2.269114979
TRAIN ACCURACY: 0.190
TEST ACCURACY: 0.250
Epoch: 007/020 cost: 2.233587230
TRAIN ACCURACY: 0.380
TEST ACCURACY: 0.368
Epoch: 011/020 cost: 2.194513177
TRAIN ACCURACY: 0.360
TEST ACCURACY: 0.458
Epoch: 015/020 cost: 2.149544148
TRAIN ACCURACY: 0.460
TEST ACCURACY: 0.528
Epoch: 019/020 cost: 2.096397300
TRAIN ACCURACY: 0.530
TEST ACCURACY: 0.593
OPTIMIZATION FINISHED
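For completeness, here is a minimal sketch of the two-layer modification. The original post does not show this code, so the hidden sizes 256 and 128 are my assumption, carried over from the three-layer topology; n_input, n_classes, stddev, x, and y are reused from the code above. Only the parameter dictionaries and the forward pass change:

# Two-layer variant: drop n_hidden_3, w3, b3, and layer_3.
# NOTE: hidden sizes 256/128 are assumed, carried over from the 3-layer setup.
n_hidden_1 = 256
n_hidden_2 = 128

weights = {
    'w1': tf.Variable(tf.random_normal([n_input, n_hidden_1], stddev=stddev)),
    'w2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2], stddev=stddev)),
    'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes], stddev=stddev))
}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}

def multilayer_perceptron(_X, _weights, _biases):
    layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(_X, _weights['w1']), _biases['b1']))
    layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, _weights['w2']), _biases['b2']))
    return tf.matmul(layer_2, _weights['out']) + _biases['out']

Everything else (loss, optimizer, training loop) stays exactly as before.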
Comparing the two runs, the two-layer network clearly outperforms the three-layer one here. The deeper network's cost barely moves from ln(10) ≈ 2.30 and its test accuracy stays pinned at the 11.3% chance level, a classic symptom of vanishing gradients when several sigmoid layers are trained with plain gradient descent at a small learning rate. Of course there is still plenty of room to optimize: tune the hyperparameters and the number of neurons per layer, then re-examine the classification results; a sketch of one such tweak follows below. Looking back over linear regression, logistic regression, and now neural networks on the same kind of task, it is clear that much of the work with neural networks lies in tuning parameters and choosing the architecture, so when solving a real problem you need not insist on a neural network; combining it with traditional machine-learning algorithms is a sound option.
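As one concrete example of such tuning (a hypothetical sketch, not something run in the original post): replacing the sigmoid activations with ReLU and swapping plain gradient descent for Adam typically un-sticks the deeper network, because ReLU does not saturate the way stacked sigmoids do. This reuses the three-layer weights, biases, x, and y defined earlier:

# Hypothetical tweak (not from the original run): ReLU avoids the saturation
# that stalls stacked sigmoids, and Adam adapts the effective step size.
def multilayer_perceptron_relu(_X, _weights, _biases):
    layer_1 = tf.nn.relu(tf.add(tf.matmul(_X, _weights['w1']), _biases['b1']))
    layer_2 = tf.nn.relu(tf.add(tf.matmul(layer_1, _weights['w2']), _biases['b2']))
    layer_3 = tf.nn.relu(tf.add(tf.matmul(layer_2, _weights['w3']), _biases['b3']))
    return tf.matmul(layer_3, _weights['out']) + _biases['out']

pred = multilayer_perceptron_relu(x, weights, biases)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
optm = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)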
Another thing these two networks of different depths make clear is that TensorFlow encapsulates backpropagation and all the related heavy computation for us, which is wonderful: building and training a network reduces to a few straightforward calls. Simple as that is, I still recommend reading the relevant source code to see how backpropagation is actually implemented in code, since that leads to a much deeper understanding of a network's forward and backward passes. A minimal illustration follows.
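To make that concrete, here is a small NumPy sketch (my own illustration, not TensorFlow's actual source) of what the framework automates: a forward pass, a backward pass via the chain rule, and a gradient-descent update, for a toy one-hidden-layer sigmoid network with squared-error loss:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.RandomState(0)
n, d, h, k = 4, 3, 5, 2            # toy sizes: batch, input, hidden, output
x = rng.randn(n, d)
y = rng.randn(n, k)
W1, b1 = rng.randn(d, h) * 0.1, np.zeros(h)
W2, b2 = rng.randn(h, k) * 0.1, np.zeros(k)

lr = 0.5
for step in range(100):
    # ---- forward pass ----
    a1 = sigmoid(x.dot(W1) + b1)   # hidden activations
    out = a1.dot(W2) + b2          # linear output layer
    loss = 0.5 * np.mean(np.sum((out - y) ** 2, axis=1))

    # ---- backward pass (chain rule, layer by layer) ----
    d_out = (out - y) / n          # dLoss/d_out
    dW2 = a1.T.dot(d_out)          # dLoss/dW2
    db2 = d_out.sum(axis=0)
    d_a1 = d_out.dot(W2.T)         # propagate the error into the hidden layer
    d_z1 = d_a1 * a1 * (1 - a1)    # sigmoid'(z) = a * (1 - a)
    dW1 = x.T.dot(d_z1)
    db1 = d_z1.sum(axis=0)

    # ---- gradient-descent update (what optm.minimize does for us) ----
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final loss: %.6f" % loss)

Every tf.Variable in the earlier graph gets gradients computed and applied in exactly this spirit; TensorFlow just derives the backward pass automatically from the graph instead of requiring it to be written by hand.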
Whenever I study this late into the night, my brain is buzzing with excitement; the world of deep learning holds endless fascination. Keep exploring, and perhaps one day I can contribute something of my own to it.