Application of Tensorflow linear regression

y_data is a linear function of x_data with Gaussian noise added. Training adjusts the values of k and d so that the error between the predicted value y and y_data is minimized. The loss is the quadratic cost function (mean squared error), and the optimizer is gradient descent: the TensorFlow Variables k and d are updated on every training step. The printout over 200 training steps shows the parameters converging toward the true values used to generate y_data.
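To make the update rule concrete, here is a minimal NumPy sketch (not from the original post) of the same procedure: gradient descent on the mean-squared-error loss for a line y = k*x + d. The seed, initial values, and variable names are illustrative assumptions; the learning rate and step count match the TensorFlow version below.

```python
import numpy as np

np.random.seed(0)  # for reproducibility; the original code does not seed
x = np.random.rand(100)
y_true = 0.1 * x + 0.2 + np.random.normal(0, 0.01, x.shape)

k, d = 0.0, 0.0  # initial parameters (assumed; the original uses random init)
lr = 0.3         # same learning rate as the TensorFlow version
for _ in range(201):
    err = k * x + d - y_true
    # gradients of MSE = mean(err**2) with respect to k and d
    grad_k = 2 * np.mean(err * x)
    grad_d = 2 * np.mean(err)
    k -= lr * grad_k
    d -= lr * grad_d

print(k, d)  # both approach the true values 0.1 and 0.2
```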

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

# This script uses the TF1-style graph API; under TF2, eager execution
# must be disabled for Session-based code to run
tf.compat.v1.disable_eager_execution()

# 100 samples drawn from a uniform distribution on [0, 1)
x_data = np.random.rand(100)
# Gaussian noise with standard deviation 0.01 (the default is 1)
noise = np.random.normal(0, 0.01, x_data.shape)
y_data = 0.1*x_data + 0.2 + noise

# Build a linear model y = k*x + d
d = tf.Variable(np.random.rand(1))
k = tf.Variable(np.random.rand(1))
y = k*x_data + d

# Quadratic cost function (mean squared error)
loss = tf.compat.v1.losses.mean_squared_error(y_data, y)
# Gradient descent optimizer with learning rate 0.3
optimizer = tf.compat.v1.train.GradientDescentOptimizer(0.3)

# Training op: minimize the cost function
train = optimizer.minimize(loss)

# Initialize the variables
init = tf.compat.v1.global_variables_initializer()

with tf.compat.v1.Session() as sess:
    sess.run(init)
    for i in range(201):
        sess.run(train)
        if i % 20 == 0:
            print(i, sess.run([k, d]))
    y_pred = sess.run(y)
    plt.scatter(x_data, y_data)
    plt.plot(x_data, y_pred, 'r', lw=3)
    plt.show()

result:

0 [array([0.33404332]), array([0.08479198])]
20 [array([0.20811465]), array([0.14033641])]
40 [array([0.15140166]), array([0.17172419])]
60 [array([0.12458373]), array([0.18656656])]
80 [array([0.11190231]), array([0.19358509])]
100 [array([0.10590564]), array([0.19690395])]
120 [array([0.10306999]), array([0.19847334])]
140 [array([0.10172909]), array([0.19921546])]
160 [array([0.10109502]), array([0.19956638])]
180 [array([0.10079518]), array([0.19973233])]
200 [array([0.1006534]), array([0.19981079])]
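As a sanity check (not part of the original post), the closed-form ordinary-least-squares fit computed with np.polyfit lands on essentially the same parameters that gradient descent converges to. The seed here is an assumption for reproducibility.

```python
import numpy as np

np.random.seed(0)
x_data = np.random.rand(100)
noise = np.random.normal(0, 0.01, x_data.shape)
y_data = 0.1 * x_data + 0.2 + noise

# Degree-1 polynomial least-squares fit: returns [slope, intercept]
slope, intercept = np.polyfit(x_data, y_data, 1)
print(slope, intercept)  # close to 0.1 and 0.2
```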

[Figure: scatter plot of (x_data, y_data) with the fitted line in red]

Origin blog.csdn.net/weixin_44823313/article/details/112463662