TensorFlow实战 Chp4: Implementing a Multi-Layer Perceptron (MLP)

# -*- coding: utf-8 -*-
"""
Created on Thu Jul 26 11:10:45 2018

@author: muli
"""
# A multi-layer perceptron (MLP) -- also known as a fully connected network (FCN)

# Machine learning is commonly divided into three broad categories:
#    1. Supervised learning
#    2. Unsupervised learning
#    3. Reinforcement learning
#    There is also a middle ground between the first two, called semi-supervised
#    learning, which trains on both labeled and unlabeled data at the same time.

# Classified by how many samples feed each weight update during training,
# we can distinguish (see the sketch below):
#   1. Offline learning / batch gradient descent
#   2. Online learning / stochastic gradient descent (SGD)
#   3. Mini-batch gradient descent
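
# --- Aside: a minimal toy sketch (hypothetical data, not from the book) showing
# that the three schemes differ only in how many rows feed each gradient step:
# using all 200 rows would be batch GD, a single row would be SGD, and the
# slice of 20 used here is a mini-batch. ---
import numpy as np
rng = np.random.RandomState(0)
X_toy = rng.randn(200, 3)                            # 200 samples, 3 features
y_toy = X_toy @ np.array([1.0, -2.0, 0.5])           # linear targets
w_toy, lr = np.zeros(3), 0.1
for step in range(100):
    idx = rng.choice(200, size=20, replace=False)    # mini-batch of 20 rows
    Xb, yb = X_toy[idx], y_toy[idx]
    grad = 2.0 / len(idx) * Xb.T @ (Xb @ w_toy - yb)  # gradient of the mean squared error
    w_toy -= lr * grad                               # one mini-batch update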


# Three techniques used below:
#   - Dropout to reduce overfitting
#   - Adagrad for an adaptive learning rate
#   - The ReLU activation to alleviate vanishing gradients
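# For reference (standard definitions, not the book's text): ReLU(x) = max(x, 0),
# whose gradient is 1 for any positive input, so stacked ReLU layers do not
# shrink gradients the way saturating sigmoid/tanh units do. For example,
# tf.nn.relu applied to [-1.0, 2.0] yields [0.0, 2.0].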

from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf


# Load MNIST; one_hot=True encodes each digit label as a length-10 one-hot vector
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
# An InteractiveSession registers itself as the default session for .run()/.eval()
sess = tf.InteractiveSession()

in_units = 784   # input size: 28x28 MNIST images flattened into a vector
h1_units = 300   # width of the hidden layer
# W1 gets small truncated-normal noise (stddev 0.1) to break symmetry and give
# the ReLU units varied starting points; the output layer can start at zero.
W1 = tf.Variable(tf.truncated_normal([in_units, h1_units], stddev=0.1))
b1 = tf.Variable(tf.zeros([h1_units]))
W2 = tf.Variable(tf.zeros([h1_units, 10]))
b2 = tf.Variable(tf.zeros([10]))

# Create the model
x = tf.placeholder(tf.float32, [None, in_units])
# keep_prob: the probability of *keeping* a unit (fed as 0.75 in training, 1.0 at test)
keep_prob = tf.placeholder(tf.float32)
# Hidden layer with ReLU activation
hidden1 = tf.nn.relu(tf.matmul(x, W1) + b1)
# Apply dropout to the hidden-layer activations
hidden1_drop = tf.nn.dropout(hidden1, keep_prob)
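# Note: tf.nn.dropout zeroes each element with probability 1 - keep_prob and
# scales the survivors by 1 / keep_prob, so the expected activation is unchanged
# and no rescaling is needed at test time (just feed keep_prob = 1.0).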
# Output layer: softmax over the 10 digit classes
y = tf.nn.softmax(tf.matmul(hidden1_drop, W2) + b2)

# Define loss and optimizer
y_ = tf.placeholder(tf.float32, [None, 10])
# Loss: cross-entropy, summed over classes and averaged over the batch
# (reduction_indices is the legacy TF name for the axis argument)
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
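# A numerically safer variant (standard TF 1.x API, not the book's listing) keeps
# the raw logits and lets TensorFlow fuse the softmax with the log, avoiding
# log(0) when the softmax saturates:
#   logits = tf.matmul(hidden1_drop, W2) + b2
#   cross_entropy = tf.reduce_mean(
#       tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits))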
# Optimizer: Adagrad, which adapts the learning rate per parameter
train_step = tf.train.AdagradOptimizer(0.3).minimize(cross_entropy)
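# Adagrad's per-parameter update (standard formulation): accumulate squared
# gradients G_t = G_{t-1} + g_t**2, then step by
#   theta <- theta - lr * g_t / (sqrt(G_t) + eps)
# so frequently-updated parameters get progressively smaller effective steps.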

# Train
tf.global_variables_initializer().run()
for i in range(3000):
  batch_xs, batch_ys = mnist.train.next_batch(100)
  # train on a mini-batch of 100 images, keeping 75% of hidden units
  train_step.run({x: batch_xs, y_: batch_ys, keep_prob: 0.75})
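  # Optional progress check (a sketch, not in the book's listing): uncomment to
  # print the current batch's loss every 500 steps and confirm convergence.
  # if i % 500 == 0:
  #     print(i, cross_entropy.eval({x: batch_xs, y_: batch_ys, keep_prob: 1.0}))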

# Test trained model: the predicted class is the argmax of y, the true class the
# argmax of the one-hot label; averaging the 0/1 matches gives the accuracy
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(accuracy.eval({x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
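# With 3000 steps of mini-batches of 100, this MLP typically lands around 98%
# test accuracy, a clear improvement over the roughly 92% of a single-layer
# softmax regression on the same data.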
