Deep Learning for Beginners (1): Basic TensorFlow Development Steps: Preparing the Data and Building the Model (Forward and Reverse)

This article demonstrates the basic TensorFlow development workflow using linear regression to fit two-dimensional data.

Example: recover the underlying relationship y ≈ 2x from a seemingly chaotic set of data points.

Example description:

Suppose we have a dataset in which the relationship between x and y is y ≈ 2x.

There are roughly four steps in deep learning:

(1) Prepare data
(2) Build model
(3) Iterative training
(4) Use model

1. Prepare the data

Here the formula y = 2x serves as the underlying relationship; adding some random noise turns the "=" into "≈".

Specific code implementation:

# Import the required packages
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

# Generate simulated data
train_X = np.linspace(-1, 1, 100) # 100 evenly spaced x values in [-1, 1]
train_Y = 2 * train_X + np.random.randn(*train_X.shape) * 0.3 # multiply x by 2, then add standard-normal noise scaled by 0.3

# Show the generated data points in a scatter plot
plt.plot(train_X, train_Y, 'ro', label='Original data') # plot the points
plt.legend() # display the legend label defined above
plt.show()

(Figure: scatter plot of the simulated data points)
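As a quick sanity check (my own addition, not in the original article), an ordinary least-squares fit with `np.polyfit` should recover a slope close to 2 despite the noise. The random seed below is an assumption, used only for reproducibility:

```python
import numpy as np

np.random.seed(0)  # assumed seed, only for reproducibility
train_X = np.linspace(-1, 1, 100)
train_Y = 2 * train_X + np.random.randn(*train_X.shape) * 0.3

# Least-squares fit of a degree-1 polynomial: y = slope * x + intercept
slope, intercept = np.polyfit(train_X, train_Y, 1)
print(slope)  # close to 2, within the noise level
```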

2. Build a model

The model is divided into two directions: forward and reverse.

(1) Forward model creation

The following code creates a single-neuron model:

# X and Y are placeholders
X = tf.placeholder("float") # represents the input x
Y = tf.placeholder("float") # represents the corresponding ground-truth y

# Model parameters
W = tf.Variable(tf.random_normal([1]), name="weight") # W is initialized from a standard normal distribution, shape [1]
b = tf.Variable(tf.zeros([1]), name="bias") # b is initialized to 0, shape [1]

# Forward structure
z = tf.multiply(X, W) + b # x * w + b

(2) Reverse model creation

Training a neural network involves data flowing in two directions: forward and reverse. The forward pass produces a predicted value, which is compared against the observed value; the reverse pass then adjusts the parameters to reduce the gap. This cycle of forward prediction and reverse adjustment repeats until the parameters converge to appropriate values.
Backpropagation usually relies on an optimization algorithm to adjust the parameters correctly.

# Reverse optimization
cost = tf.reduce_mean(tf.square(Y - z)) # cost is the mean squared error between predictions and ground truth
learning_rate = 0.01 # set the learning rate to 0.01
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost) # GradientDescentOptimizer() is a ready-made gradient descent optimizer
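To make the forward/reverse cycle concrete, here is a hand-written NumPy equivalent of the same computation (a sketch of the idea, not the article's TensorFlow code): a forward pass z = w·x + b, the mean-squared-error cost, and manual gradient-descent updates. The seed and epoch count are my own assumptions:

```python
import numpy as np

np.random.seed(0)  # assumed seed for reproducibility
train_X = np.linspace(-1, 1, 100)
train_Y = 2 * train_X + np.random.randn(*train_X.shape) * 0.3

w, b = np.random.randn(), 0.0  # parameter initialization
lr = 0.01                      # learning rate, as in the article

for epoch in range(2000):
    z = w * train_X + b                   # forward pass: prediction
    err = train_Y - z                     # gap between observed and predicted
    cost = np.mean(err ** 2)              # mean squared error
    grad_w = np.mean(-2 * err * train_X)  # d(cost)/dw
    grad_b = np.mean(-2 * err)            # d(cost)/db
    w -= lr * grad_w                      # reverse pass: adjust parameters
    b -= lr * grad_b

print(w, b)  # w close to 2, b close to 0
```

Each iteration performs exactly what the text describes: predict, compare, then step the parameters against the gradient of the cost.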

PS:

  • The learning rate is generally less than 1. A larger value adjusts the parameters faster but less precisely; a smaller value adjusts them more precisely but more slowly. The learning rate therefore has to be chosen according to the actual situation.
  • A simple way to understand gradient descent: at each step it updates every parameter in the direction opposite to the gradient of the cost, scaled by the learning rate (w ← w − lr · ∂cost/∂w).
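The effect of the learning rate can be seen on a toy cost f(w) = (w - 2)², whose minimum is at w = 2. This toy function is my own illustration, not from the article:

```python
def descend(lr, steps=50, w=0.0):
    """Plain gradient descent on f(w) = (w - 2)**2."""
    for _ in range(steps):
        grad = 2 * (w - 2)  # derivative of (w - 2)**2
        w -= lr * grad      # step against the gradient, scaled by lr
    return w

print(descend(0.01))  # small rate: moves toward 2, but slowly
print(descend(0.4))   # larger rate: essentially reaches 2 in the same number of steps
print(descend(1.1))   # too large: each step overshoots and the iterates diverge
```

This matches the bullet above: a larger rate is faster, up to the point where the steps overshoot the minimum entirely.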

Origin blog.csdn.net/qq_45154565/article/details/109635933