TensorFlow Neural Network Framework (Lesson 2, Part 2-4: A Simple TensorFlow Example: Linear Regression with Gradient Descent)

import tensorflow as tf
import numpy as np
# Use numpy to generate 100 random points
x_data = np.random.rand(100)
y_data = x_data*0.1 + 0.2 # construct a straight line: slope 0.1, intercept 0.2
# Construct a linear model
b = tf.Variable(0.)
k = tf.Variable(0.)
y = k*x_data + b
# Quadratic (mean squared error) cost function
loss = tf.reduce_mean(tf.square(y_data-y))
# Define a gradient-descent optimizer for training
optimizer = tf.train.GradientDescentOptimizer(0.2) # learning rate: 0.2
# Minimize the cost function
train = optimizer.minimize(loss)
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    for step in range(201): # run 201 training iterations
        sess.run(train)
        if step%20 == 0: # print every 20 iterations
            print(step, sess.run([k,b]))
0 [0.048383515, 0.09845313]
20 [0.09913179, 0.20042612]
40 [0.099467784, 0.20026132]
60 [0.09967373, 0.20016019]
80 [0.09979998, 0.20009822]
100 [0.09987738, 0.2000602]
120 [0.09992483, 0.20003691]
140 [0.09995392, 0.20002262]
160 [0.099971764, 0.20001386]
180 [0.09998268, 0.20000851]
200 [0.09998938, 0.20000522]
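The fitted parameters converge toward the true values k = 0.1 and b = 0.2. To make the mechanics concrete, here is a minimal sketch of the same mean-squared-error gradient descent in plain NumPy, with the gradients derived by hand instead of by TensorFlow's autodiff (the seed and variable names are illustrative choices, not part of the original lesson):

```python
import numpy as np

np.random.seed(0)
x_data = np.random.rand(100)
y_data = x_data * 0.1 + 0.2  # same target line: slope 0.1, intercept 0.2

k, b = 0.0, 0.0   # start from zero, like the tf.Variable(0.) initializers
lr = 0.2          # same learning rate as GradientDescentOptimizer(0.2)

for step in range(201):
    pred = k * x_data + b
    err = pred - y_data
    # gradients of mean((pred - y)^2) with respect to k and b
    grad_k = 2 * np.mean(err * x_data)
    grad_b = 2 * np.mean(err)
    k -= lr * grad_k
    b -= lr * grad_b

print(k, b)  # both should be close to 0.1 and 0.2
```

This is exactly the update `GradientDescentOptimizer` applies each time `sess.run(train)` is called; the only difference is that TensorFlow computes the two gradients automatically from the computation graph.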


Reposted from blog.csdn.net/u011473714/article/details/80804472