Deep Learning Study and Practice 5: Linear Regression by Generating an Artificial Dataset

1 Assignment Description

[Figure: assignment description]

2 Problems

2.1 The loss suddenly increases

%matplotlib inline
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
# Set the random seed so the generated data is reproducible (optional)
np.random.seed(5)
# Use np.linspace to generate 500 evenly spaced points between 0 and 100
x_data=np.linspace(0,100,500)
# Generate y = 3.1234x + 2.98 plus noise with the same shape as x_data and standard deviation 0.4
y_data=3.1234*x_data+2.98+np.random.randn(*x_data.shape)*0.4
# 画出随机散点图
plt.scatter(x_data,y_data)
# Plot the target linear function y = 3.1234x + 2.98
plt.plot(x_data,3.1234*x_data+2.98,color='red',linewidth=3)
#mean_value=np.mean(x_data,axis=0)
#sigma=np.std(x_data,axis=0)
#x_data=(x_data-mean_value)/sigma
# Define placeholders: x is the feature, y is the label
x=tf.placeholder("float",name = "x")
y=tf.placeholder("float",name= "y")
def model(x,w,b):
    return tf.multiply(x,w)+b
w=tf.Variable(2.0,name="w0")
b=tf.Variable(0.0,name="b0")
pred=model(x,w,b)
train_epochs=20
learning_rate=0.1
# How often to print the loss
display_step=20
loss_function=tf.reduce_mean(tf.square(y-pred))
optimizer=tf.train.GradientDescentOptimizer(learning_rate).minimize(loss_function)
sess=tf.Session()
init=tf.global_variables_initializer()
sess.run(init)
step=0 # count training steps
loss_list=[] # list for saving loss values
for epoch in range(train_epochs):
    for xs, ys in zip(x_data, y_data):
        _,loss=sess.run([optimizer,loss_function],feed_dict={x:xs,y:ys})
        loss_list.append(loss)
        step=step+1
        if step%display_step==0:
            print("Train Epoch:","%02d"%(epoch+1),"step:%03d"%(step),"loss=",\
                 "{:.9f}".format(loss))
        
    b0temp=b.eval(session=sess)
    w0temp=w.eval(session=sess)
    # w_origin0=w0temp/sigma
    # b_origin0=b0temp-w0temp*mean_value/sigma
    print(w0temp,b0temp)
    # print(w_origin0,b_origin0)
    # plt.plot(x_data*sigma+mean_value,w0temp*x_data+b0temp)
    plt.plot(x_data,w0temp*x_data+b0temp)

[Figure: training log and per-epoch fitted lines; the loss suddenly increases]
A sudden increase in the loss shows up here. The main cause is that the inputs are taken from 0 to 100: for a squared loss the gradient with respect to w is proportional to the input x, so large values such as x = 50 produce updates too big for the learning rate to control, and training diverges.
Option 1: rescale the input values into the range -1 to 1 (a minimal sketch follows below).
Option 2: Option 1 is not a fundamental fix, because in general we cannot control the range of the inputs we are given. Standardizing the inputs also solves the problem; the concrete steps are given below.
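
For completeness, a minimal sketch of Option 1, rescaling the inputs into [-1, 1] with a min-max transform. This is not in the original code, which only implements Option 2; x_scaled is an illustrative name.

# Min-max rescale into [-1, 1]: the minimum maps to -1, the maximum to 1
x_min, x_max = x_data.min(), x_data.max()
x_scaled = 2 * (x_data - x_min) / (x_max - x_min) - 1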

3 Solution Steps

3.1 Generating the dataset

%matplotlib inline
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
# Set the random seed so the generated data is reproducible (optional)
np.random.seed(5)
# Use np.linspace to generate 500 evenly spaced points between 0 and 100
x_data=np.linspace(0,100,500)
# Generate y = 3.1234x + 2.98 plus noise with the same shape as x_data and standard deviation 0.4
y_data=3.1234*x_data+2.98+np.random.randn(*x_data.shape)*0.4
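
A quick sanity check on the generated arrays (illustrative, not in the original):

print(x_data.shape, y_data.shape)  # both (500,)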

Plot the result with matplotlib:

# Scatter plot of the noisy data
plt.scatter(x_data,y_data)
# Plot the target linear function y = 3.1234x + 2.98
plt.plot(x_data,3.1234*x_data+2.98,color='red',linewidth=3)

[Figure: scatter plot of the generated data with the target line in red]
Reset the default graph (so the TensorBoard graph stays clean) and specify where the graph file will be written:

tf.reset_default_graph()
logdir="D:\\111"

3.2 Standardization

# z-score standardization: subtract the mean, divide by the standard deviation
mean_value=np.mean(x_data,axis=0)
sigma=np.std(x_data,axis=0)
x_data=(x_data-mean_value)/sigma
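
Why this lets us recover the original-scale parameters later: the model is trained on x' = (x - mean_value)/sigma, so it learns y ≈ w'·x' + b'. Substituting back gives y ≈ (w'/sigma)·x + (b' - w'·mean_value/sigma), which is exactly the w_origin0 and b_origin0 computed in the training loop below.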

3.3 Building the model

# Define placeholders: x is the feature, y is the label
x=tf.placeholder("float",name = "x")
y=tf.placeholder("float",name= "y")
# Define the regression model
def model(x,w,b):
    return tf.multiply(x,w)+b
w=tf.Variable(4.0,name="w0")
b=tf.Variable(0.0,name="b0")
pred=model(x,w,b)
train_epochs=20
learning_rate=0.05
# How often to print the loss
display_step=20
loss_function=tf.reduce_mean(tf.square(y-pred))
optimizer=tf.train.GradientDescentOptimizer(learning_rate).minimize(loss_function)
sess=tf.Session()
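
For reference, for a single sample (x, y) the loss L = (y - (wx + b))² has gradients ∂L/∂w = -2x·(y - wx - b) and ∂L/∂b = -2·(y - wx - b), and GradientDescentOptimizer applies the plain updates w ← w - learning_rate·∂L/∂w and b ← b - learning_rate·∂L/∂b. The factor x in ∂L/∂w is why unstandardized inputs up to 100 made the updates overshoot in Section 2, and why standardized inputs keep them stable.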

3.4 Training

init=tf.global_variables_initializer()
sess.run(init)
step=0 # count training steps
loss_list=[] # list for saving loss values
for epoch in range(train_epochs):
    for xs, ys in zip(x_data, y_data):
        _,loss=sess.run([optimizer,loss_function],feed_dict={x:xs,y:ys})    
        loss_list.append(loss)
        step=step+1        
        if step%display_step==0:            
            print("Train Epoch:","%02d"%(epoch+1),"step:%03d"%(step),"loss=",\
                 "{:.9f}".format(loss))                 
    b0temp=b.eval(session=sess)
    w0temp=w.eval(session=sess)
    # Recover the parameters in the original (unstandardized) x scale
    w_origin0=w0temp/sigma
    b_origin0=b0temp-w0temp*mean_value/sigma
    print(w0temp,b0temp)
    print(w_origin0,b_origin0)
    # Plot against the original-scale x by undoing the standardization
    plt.plot(x_data*sigma+mean_value,w0temp*x_data+b0temp)

[Figure: per-epoch fitted lines over the data after standardization]
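
As a side note (not in the original code), the same graph also supports feeding the whole dataset at once, which turns the inner loop into one batch-gradient-descent update per epoch; a minimal sketch:

# Batch variant (illustrative): one update per epoch on the full dataset
for epoch in range(train_epochs):
    _, loss = sess.run([optimizer, loss_function],
                       feed_dict={x: x_data, y: y_data})
    print("Epoch %02d batch loss = %.9f" % (epoch + 1, loss))

This works because the placeholders were created without a fixed shape, so tf.multiply broadcasts over the whole array and tf.reduce_mean averages the squared errors across all 500 points.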

3.5 Prediction

x_test=5.79
# Standardize the test input the same way as the training data
predict=sess.run(pred,feed_dict={x:(x_test-mean_value)/sigma})
print("Predicted value: %f"%predict)
target=3.1234*x_test+2.98
print("Target value: %f"%target)

[Figure: printed predicted and target values]
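
The prediction can also be cross-checked in the original x scale using the parameters recovered in the last epoch (predict_check is an illustrative name):

# Cross-check: w_origin0*x + b_origin0 equals the standardized-model prediction
predict_check = w_origin0 * x_test + b_origin0
print("Cross-check prediction: %f" % predict_check)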
Save the computation graph for TensorBoard:

writer=tf.summary.FileWriter(logdir,tf.get_default_graph())
writer.close()

3.6 TensorBoard

Launch TensorBoard as follows and open the graph view.
[Figure: launching TensorBoard]
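
The usual invocation, assuming TensorBoard is installed and on the PATH, pointing --logdir at the directory set earlier:

tensorboard --logdir=D:\111

Then open http://localhost:6006 in a browser and switch to the GRAPHS tab.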
Zooming in on the graph, you can see that the gradient computation follows the chain rule.
[Figure: enlarged view of the gradient nodes in the graph]


Source: blog.csdn.net/weixin_39289876/article/details/104571279