lesson15 Backpropagation

https://www.bilibili.com/video/av22530538/?p=15

#Backpropagation ==> trains the model parameters: gradient descent is applied to
#all parameters so that the loss of the NN model on the training data is minimized.
#Loss function (loss): the gap between the prediction (y) and the known answer (y_)
#Mean squared error MSE: MSE(y_, y) = Σ(y - y_)² / n
#loss = tf.reduce_mean(tf.square(y_-y))
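#A quick NumPy check of the formula (hypothetical values; assumes numpy is
#imported as np, as in the script below), matching the tf.reduce_mean line above:
#y_ = np.array([1.0, 0.0, 1.0])   # known answers
#y  = np.array([0.8, 0.1, 0.9])   # predictions
#np.mean(np.square(y - y_))       # (0.04 + 0.01 + 0.01) / 3 = 0.02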

#Backpropagation training methods: the optimization objective is to reduce the loss value
#train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
#train_step = tf.train.MomentumOptimizer(0.001,0.9).minimize(loss)
#train_step = tf.train.AdadeltaOptimizer(0.001).minimize(loss)
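#minimize(loss) bundles two steps of the TF1 optimizer API; a minimal sketch of
#the equivalent two-step form (assuming loss is already defined):
#opt = tf.train.GradientDescentOptimizer(learning_rate)
#grads_and_vars = opt.compute_gradients(loss)      # backprop: d(loss)/d(w) for each trainable w
#train_step = opt.apply_gradients(grads_and_vars)  # update each w using its gradient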

#Learning rate: determines the magnitude of each parameter update
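#A minimal 1-D sketch (not part of the lesson's script) showing how the learning
#rate scales each update step:
#w = tf.Variable(5.0)
#loss = tf.square(w + 1)                   # minimum at w = -1
#train_step = tf.train.GradientDescentOptimizer(0.2).minimize(loss)
#with tf.Session() as sess:
#    sess.run(tf.global_variables_initializer())
#    for i in range(5):
#        sess.run(train_step)              # w <- w - 0.2 * 2*(w+1): a larger rate takes larger steps
#        print(sess.run(w))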

#Neural network implementation process:
#1. Prepare the data set, extract features, and feed them to the neural network (NN) as input
#2. Build the NN structure from input to output (first build the computation graph, then execute it in a session)
#   (NN forward propagation ==> compute the output)
#3. Feed large amounts of feature data to the NN and iteratively optimize the NN parameters
#   (NN backpropagation ==> optimize the parameters and train the model)
#4. Use the trained model for prediction and classification

#The standard recipe for building a neural network: prepare, forward propagation, backpropagation, iterate

#0 Prepare: import
#           define constants
#           generate the data set
#1 Forward propagation: define the inputs, parameters, and outputs
#x=
#y_=
#w1=
#w2=
#a=
#y=

#2 Backpropagation: define the loss function and the backpropagation method
#loss=
#train_step=

#3 Create a session and train for STEPS rounds
#with tf.Session() as sess:
#    init_op = tf.global_variables_initializer()
#    sess.run(init_op)
#    STEPS = 3000
#    for i in range(STEPS):
#        start =
#        end =
#        sess.run(train_step, feed_dict={...})


import numpy as np
import tensorflow as tf
BATCH_SIZE = 8
seed = 23455

#Generate random numbers based on seed
rng = np.random.RandomState(seed)
#The RNG returns a 32x2 matrix: 32 samples of (volume, weight), used as the input data set
X = rng.rand(32,2)
#For each row of X, assign the label 1 if the sum of its two elements is less than 1,
#and 0 otherwise; these labels are the ground truth for the input data set
Y = [[int(x0 + x1 < 1)] for (x0, x1) in X]

print("X:\n",X)
print("Y:\n",Y)
#1 Define the NN's inputs, parameters, and outputs, and define the forward propagation process
x = tf.placeholder(tf.float32, shape=(None, 2))
y_ = tf.placeholder(tf.float32, shape=(None, 1))

w1 = tf.Variable(tf.random_normal([2,3], stddev=1, seed=1))
w2 = tf.Variable(tf.random_normal([3,1], stddev=1, seed=1))

a = tf.matmul(x, w1)
y = tf.matmul(a, w2)
#2 Define the loss function and the backpropagation method
loss = tf.reduce_mean(tf.square(y-y_))

train_step = tf.train.GradientDescentOptimizer(0.001).minimize(loss)
#train_step = tf.train.MomentumOptimizer(0.001,0.9).minimize(loss)
#train_step = tf.train.AdadeltaOptimizer(0.001).minimize(loss)

#3 Create a session and train for STEPS rounds
with tf.Session() as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    #Print the current (untrained) parameter values
    print("w1:\n",sess.run(w1))
    print("w2:\n",sess.run(w2))
    print("\n")
    #Train the model
    STEPS = 3000
    for i in range(STEPS):
        start = (i*BATCH_SIZE) % 32
        end = start + BATCH_SIZE
        sess.run(train_step, feed_dict={x: X[start:end], y_: Y[start:end]})
        if i%500 == 0:
            total_loss = sess.run(loss, feed_dict={x: X, y_: Y})
            print("After %d trainig steps,loss on all adata is %g" % (i, total_loss))
    #Print the parameter values after training
    print("\n")
    print("w1:\n",sess.run(w1))
    print("w2:\n",sess.run(w2))
('X:\n', array([[0.83494319, 0.11482951],
       [0.66899751, 0.46594987],
       [0.60181666, 0.58838408],
       [0.31836656, 0.20502072],
       [0.87043944, 0.02679395],
       [0.41539811, 0.43938369],
       [0.68635684, 0.24833404],
       [0.97315228, 0.68541849],
       [0.03081617, 0.89479913],
       [0.24665715, 0.28584862],
       [0.31375667, 0.47718349],
       [0.56689254, 0.77079148],
       [0.7321604 , 0.35828963],
       [0.15724842, 0.94294584],
       [0.34933722, 0.84634483],
       [0.50304053, 0.81299619],
       [0.23869886, 0.9895604 ],
       [0.4636501 , 0.32531094],
       [0.36510487, 0.97365522],
       [0.73350238, 0.83833013],
       [0.61810158, 0.12580353],
       [0.59274817, 0.18779828],
       [0.87150299, 0.34679501],
       [0.25883219, 0.50002932],
       [0.75690948, 0.83429824],
       [0.29316649, 0.05646578],
       [0.10409134, 0.88235166],
       [0.06727785, 0.57784761],
       [0.38492705, 0.48384792],
       [0.69234428, 0.19687348],
       [0.42783492, 0.73416985],
       [0.09696069, 0.04883936]]))
('Y:\n', [[1], [0], [0], [1], [1], [1], [1], [0], [1], [1], [1], [0], [0], [0], [0], [0], [0], [1], [0], [0], [1], [1], [0], [1], [0], [1], [1], [1], [1], [1], [0], [1]])
('w1:\n', array([[-0.8113182 ,  1.4845988 ,  0.06532937],
       [-2.4427042 ,  0.0992484 ,  0.5912243 ]], dtype=float32))
('w2:\n', array([[-0.8113182 ],
       [ 1.4845988 ],
       [ 0.06532937]], dtype=float32))


After 0 training steps, loss on all data is 5.13118
After 500 training steps, loss on all data is 0.429111
After 1000 training steps, loss on all data is 0.409789
After 1500 training steps, loss on all data is 0.399923
After 2000 training steps, loss on all data is 0.394146
After 2500 training steps, loss on all data is 0.390597


('w1:\n', array([[-0.7000663 ,  0.9136318 ,  0.08953571],
       [-2.3402493 , -0.14641267,  0.58823055]], dtype=float32))
('w2:\n', array([[-0.06024267],
       [ 0.91956186],
       [-0.0682071 ]], dtype=float32))


Reposted from blog.csdn.net/ldinvicible/article/details/82825704