Linear Regression with TensorFlow

Generating data points

First, generate 1000 test points. Taking the line y = 0.1x + 0.3 as an example, we randomly scatter 1000 points around it.
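The complete code below generates the points one at a time in a loop; an equivalent vectorized sketch with NumPy (the names `x_data` and `y_data` mirror the ones used later) could look like this:

```python
import numpy as np

num_points = 1000
# Draw x from N(0, 0.55), then place y on the line y = 0.1x + 0.3
# with N(0, 0.03) noise added
x_data = np.random.normal(0.0, 0.55, num_points)
y_data = 0.1 * x_data + 0.3 + np.random.normal(0.0, 0.03, num_points)
```

Because the noise standard deviation (0.03) is small relative to the spread of x (0.55), the points hug the line closely.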

Training


Complete code

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time:    2020/1/3 8:02
# @Author:  Martin
# @File:    Linear_Regression.py
# @Software: PyCharm
import numpy as np
import tensorflow.compat.v1 as tf
import matplotlib.pyplot as plt
tf.disable_v2_behavior()
# Randomly generate 1000 points scattered around the line y = 0.1x + 0.3
num_points = 1000
vectors_set = []
for i in range(num_points):
    x1 = np.random.normal(0.0, 0.55)
    y1 = x1 * 0.1 + 0.3 + np.random.normal(0.0, 0.03)
    vectors_set.append([x1, y1])
# Split into sample coordinates
x_data = [v[0] for v in vectors_set]
y_data = [v[1] for v in vectors_set]

plt.scatter(x_data, y_data, c='r')
plt.show()

# Note: name= belongs on tf.Variable, not on the initializer
W = tf.Variable(tf.random_uniform([1], -1.0, 1.0), name='W')
b = tf.Variable(tf.zeros([1]), name='b')
y = W * x_data + b
# Use the mean squared error between the estimate y and the actual y_data as the loss
loss = tf.reduce_mean(tf.square(y-y_data), name='loss')
# Optimize the parameters with gradient descent (learning rate 0.5)
optimizer = tf.train.GradientDescentOptimizer(0.5)
# Training is simply the process of minimizing this loss
train = optimizer.minimize(loss, name='train')

sess = tf.Session()

init = tf.global_variables_initializer()
sess.run(init)

print("W=", sess.run(W), "b=", sess.run(b), "loss=", sess.run(loss))
# Run 20 training steps
for step in range(20):
    sess.run(train)
    print("W=", sess.run(W), "b=", sess.run(b), "loss=", sess.run(loss))

plt.scatter(x_data, y_data, c='r')
plt.plot(x_data, sess.run(W)*x_data+sess.run(b))
plt.show()

Results

W= [-0.7334206] b= [0.] loss= 0.29737055
W= [-0.49245846] b= [0.316152] loss= 0.10034303
W= [-0.32505727] b= [0.31196198] loss= 0.052170333
W= [-0.20486829] b= [0.3090511] loss= 0.027339231
W= [-0.11857794] b= [0.30696118] loss= 0.014539786
W= [-0.05662528] b= [0.3054607] loss= 0.007942177
W= [-0.012146] b= [0.30438343] loss= 0.0045413696
W= [0.01978815] b= [0.30361] loss= 0.0027883884
W= [0.04271545] b= [0.3030547] loss= 0.0018847961
W= [0.05917624] b= [0.30265602] loss= 0.0014190299
W= [0.07099435] b= [0.3023698] loss= 0.0011789459
W= [0.07947923] b= [0.3021643] loss= 0.001055192
W= [0.08557101] b= [0.30201676] loss= 0.0009914021
W= [0.08994463] b= [0.30191082] loss= 0.0009585208
W= [0.0930847] b= [0.30183476] loss= 0.0009415715
W= [0.09533913] b= [0.30178016] loss= 0.00093283516
W= [0.09695771] b= [0.30174097] loss= 0.0009283317
W= [0.09811977] b= [0.30171284] loss= 0.0009260105
W= [0.09895409] b= [0.3016926] loss= 0.00092481385
W= [0.09955309] b= [0.30167812] loss= 0.0009241972
W= [0.09998314] b= [0.3016677] loss= 0.00092387915

As the output shows, the loss decreases as the number of training steps increases.
After 20 training steps, the final result is:
W = 0.09998314, b = 0.3016677, very close to the target line y = 0.1x + 0.3.
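Since simple linear regression also has a closed-form solution, the gradient-descent result can be sanity-checked against an ordinary least-squares fit, for example with `np.polyfit`. This is a verification sketch, not part of the original post:

```python
import numpy as np

# Regenerate data the same way as in the post
x = np.random.normal(0.0, 0.55, 1000)
y = 0.1 * x + 0.3 + np.random.normal(0.0, 0.03, 1000)

# A degree-1 polynomial fit returns [slope, intercept]
slope, intercept = np.polyfit(x, y, 1)
```

The fitted slope and intercept should land near 0.1 and 0.3, matching the values that gradient descent converged to above.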


Reprinted from blog.csdn.net/Deep___Learning/article/details/103814413