《深入理解Tensorflow架构设计与实现原理》 Chapter 3: Univariate Linear Regression in Practice

Training this model with TensorFlow typically breaks down into the following 8 steps:

(1) Define the hyperparameters: hyperparameters are configuration parameters used during training, such as the learning rate, the number of hidden neurons, the batch size, and regularization terms. This example defines the following hyperparameters:

learning_rate=0.01   # learning rate
max_train_steps=1000 # maximum number of training steps

(2) Input data: this example takes 17 pairs of data points as input:

import numpy as np
train_X=np.array([[3.3],[4.4],[5.5],[6.71],[6.93],[4.186],[9.779],[6.182],[7.59],[2.168],[7.042],[10.791],[5.313],[9.27],[3.1],[2.2],[5.65]],dtype=np.float32)
train_Y=np.array([[1.7],[2.76],[2.09],[3.19],[1.689],[1.571],[2.266],[3.596],[2.53],[2.11],[1.87],[3.2456],[2.904],[3.42],[1.3],[4.22],[2.56]],dtype=np.float32)
total_samples=train_X.shape[0]

(3) Build the model: the model is the linear map Y = Xw + b, with the weight w initialized from a standard normal distribution and the bias b initialized to zero:

import tensorflow as tf

X=tf.placeholder(tf.float32,[None,1])
w=tf.Variable(tf.random_normal([1,1]),name='weight')
b=tf.Variable(tf.zeros([1]),name='bias')
Y=tf.matmul(X,w)+b

(4) Define the loss function: in this example we use the mean squared error (MSE) over the training set,

loss = (1/m) * Σ_{i=1}^{m} (ŷ_i − y_i)²

where ŷ_i is the model prediction (Y in the code), y_i is the ground-truth label (fed in through the placeholder Y_), and m = total_samples = 17.
The corresponding code is as follows:

Y_=tf.placeholder(tf.float32,[None,1])
loss=tf.reduce_sum(tf.pow(Y-Y_,2))/(total_samples)
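
For reference, the same mean squared error can be computed with TensorFlow's built-in mean reduction; this one-liner (an equivalent sketch, reusing Y and Y_ from above) yields the identical value:

loss=tf.reduce_mean(tf.square(Y-Y_)) # mean over all 17 squared errors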

(5) Create the optimizer:

optimizer=tf.train.GradientDescentOptimizer(learning_rate)

(6) Define the single-step training operation:

train_op=optimizer.minimize(loss)
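
Under the hood, minimize() simply combines gradient computation with a parameter update. A minimal equivalent sketch using the same optimizer and loss:

grads_and_vars=optimizer.compute_gradients(loss)   # list of (gradient, variable) pairs for w and b
train_op=optimizer.apply_gradients(grads_and_vars) # applies e.g. w <- w - learning_rate * dloss/dw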

(7) Create a session and initialize the model variables:

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
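
A side note on why initialization comes first: in TensorFlow 1.x, evaluating a variable before its initializer has run raises an error. A minimal standalone sketch:

with tf.Session() as sess:
    # sess.run(w)  # would fail with FailedPreconditionError: uninitialized value
    sess.run(tf.global_variables_initializer())
    print(sess.run(w))  # succeeds: prints the random initial value of w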

(8) Run the training loop (note that the following code continues inside the with block opened in step (7)):

print("start training")
    for step in range(max_train_steps):
        sess.run(train_op,feed_dict={X:train_X,Y_:train_Y})
        if step%100==0:
            c=sess.run(loss,feed_dict={X:train_X,Y_:train_Y})
            print("step:%d,loss %.4f,w:%.4f,b:%.4f"%(step,c,sess.run(w),sess.run(b)))
    final_loss=sess.run(loss,feed_dict={X:train_X,Y_:train_Y})
    print("final_loss : %.4f"%(final_loss))

The complete code is as follows:

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

# Hyperparameters
learning_rate=0.01   # learning rate
max_train_steps=1000 # maximum number of training steps

# Training data: 17 (x, y) pairs
train_X=np.array([[3.3],[4.4],[5.5],[6.71],[6.93],[4.186],[9.779],[6.182],[7.59],[2.168],[7.042],[10.791],[5.313],[9.27],[3.1],[2.2],[5.65]],dtype=np.float32)
train_Y=np.array([[1.7],[2.76],[2.09],[3.19],[1.689],[1.571],[2.266],[3.596],[2.53],[2.11],[1.87],[3.2456],[2.904],[3.42],[1.3],[4.22],[2.56]],dtype=np.float32)
total_samples=train_X.shape[0]

# Model: Y = X * w + b
X=tf.placeholder(tf.float32,[None,1])
w=tf.Variable(tf.random_normal([1,1]),name='weight')
b=tf.Variable(tf.zeros([1]),name='bias')
Y=tf.matmul(X,w)+b

# Loss: mean squared error between predictions Y and ground truth Y_
Y_=tf.placeholder(tf.float32,[None,1])
loss=tf.reduce_sum(tf.pow(Y-Y_,2))/(total_samples)

# Gradient-descent optimizer and single-step training operation
optimizer=tf.train.GradientDescentOptimizer(learning_rate)
train_op=optimizer.minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print("start training")
    for step in range(max_train_steps):
        sess.run(train_op,feed_dict={X:train_X,Y_:train_Y})
        if step%100==0:
            c=sess.run(loss,feed_dict={X:train_X,Y_:train_Y})
            print("step:%d,loss %.4f,w:%.4f,b:%.4f"%(step,c,sess.run(w),sess.run(b)))
    final_loss=sess.run(loss,feed_dict={X:train_X,Y_:train_Y})
    print("final_loss : %.4f"%(final_loss))
    # Plot the training data and the fitted line
    weight,bias=sess.run([w,b])
    plt.plot(train_X,train_Y,'ro',label='Training data')
    plt.plot(train_X,weight*train_X+bias,label='Fitted line')
    plt.legend()
    plt.show()
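
Once trained, the model can also make predictions for new inputs by feeding them through the same graph. A minimal sketch (the input value 6.0 is an arbitrary example; these lines are assumed to run inside the with block above, after training):

    new_x=np.array([[6.0]],dtype=np.float32)       # arbitrary example input
    pred=sess.run(Y,feed_dict={X:new_x})           # forward pass: new_x * w + b
    print("prediction for x=6.0: %.4f"%pred[0][0])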
 

The output is as follows:

start training
step:0,loss 1.5447,w:0.2798,b:0.0675
step:100,loss 0.9537,w:0.2883,b:0.6067
step:200,loss 0.7956,w:0.2315,b:0.9991
step:300,loss 0.7079,w:0.1892,b:1.2913
step:400,loss 0.6593,w:0.1577,b:1.5089
step:500,loss 0.6323,w:0.1342,b:1.6709
step:600,loss 0.6174,w:0.1167,b:1.7916
step:700,loss 0.6091,w:0.1037,b:1.8814
step:800,loss 0.6045,w:0.0940,b:1.9483
step:900,loss 0.6020,w:0.0868,b:1.9982
final_loss : 0.6005
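
As a sanity check, univariate linear regression also has a closed-form least-squares solution, which NumPy can compute directly; the slope and intercept below should roughly agree with the values toward which w and b are trending above (a minimal sketch, reusing train_X and train_Y from the script):

slope,intercept=np.polyfit(train_X.flatten(),train_Y.flatten(),1) # closed-form least-squares fit
print("closed-form: w=%.4f,b=%.4f"%(slope,intercept))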

The generated plot is shown below:

[Figure: the 17 training points ("Training data", red dots) together with the fitted line ("Fitted line")]
Pay close attention to the indentation in the code: the statements of steps (7) and (8) must all sit inside the with tf.Session() block.

Reposted from blog.csdn.net/erihanami/article/details/80216130