Understand the basic steps of TensorFlow development

1. Define the TensorFlow input node

There are three ways to define the input node:

  1. Defined by placeholders (the usual approach)
  2. Defined by a dictionary (used when there are many inputs)
  3. Direct definition (rarely used)

1. Defined by placeholders (the usual approach)

For example, when approximately fitting y = 2x, placeholders are used to define the input nodes:

X = tf.placeholder("float")  # input
Y = tf.placeholder("float")  # label

2. Defined by a dictionary (used when there are many inputs)

Similar to the first approach:

# Placeholders
inputdict = {
    'x': tf.placeholder("float"),
    'y': tf.placeholder("float")
}

3. Direct definition (rarely used)

Put already-defined Python variables directly into the OP nodes to take part in the input operation, i.e. feed the simulated data variables straight into the model for training.

# Generate simulated data
train_X = np.float32(np.linspace(-1, 1, 100))
train_Y = 2 * train_X + np.random.randn(*train_X.shape) * 0.3  # y = 2x, with noise added
# Plot the data
plt.plot(train_X, train_Y, 'ro', label='Original data')
plt.legend()
plt.show()

# Model parameters
W = tf.Variable(tf.random_normal([1]), name="weight")
b = tf.Variable(tf.zeros([1]), name="bias")
# Forward structure
z = tf.multiply(W, train_X) + b

2. Define the "learning parameter" variables

Direct definition

W = tf.Variable(tf.random_normal([1]), name="weight")
b = tf.Variable(tf.zeros([1]), name="bias")

Dictionary definition

# Model parameters
paradict = {
    'w': tf.Variable(tf.random_normal([1]), name="weight"),
    'b': tf.Variable(tf.zeros([1]), name="bias")
}
# Forward structure
z = tf.multiply(X, paradict['w']) + paradict['b']

3. Define "Operation"

Forward propagation model (a minimal single-layer sketch follows this list):

  • Single-layer neural network
  • Multilayer neural network
  • Convolutional neural network
  • Recurrent neural network
  • GoogLeNet
  • ResNet

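As a minimal sketch of the simplest case above, a single-layer network's forward pass is just a matrix multiply plus a bias, followed by an activation. The input size 784 and output size 10 here are assumed example values, not part of the original text.

import tensorflow as tf

# Hypothetical sizes: 784 input features, 10 output classes
x = tf.placeholder("float", [None, 784])
W = tf.Variable(tf.random_normal([784, 10]), name="weight")
b = tf.Variable(tf.zeros([10]), name="bias")

# Forward structure: one fully connected layer with softmax activation
pred = tf.nn.softmax(tf.matmul(x, W) + b)
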
Define loss function

It is mainly used to compute the error between the "output value" and the "target value". It works together with back propagation, so it must be differentiable.

The TensorFlow framework already provides the building blocks for common loss functions.
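
For the y = 2x fitting example, a mean squared error loss is a natural choice. A minimal sketch, assuming the placeholder Y and the forward output z defined earlier:

# Mean squared error between the prediction z and the label Y
cost = tf.reduce_mean(tf.square(Y - z))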

4. Define the optimization function and optimization objective
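
A minimal sketch using plain gradient descent to minimize the cost from the previous step; the learning rate of 0.01 is an assumed value:

learning_rate = 0.01  # assumed value
# Gradient descent adjusts W and b to minimize the loss
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)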

5. Initialize all variables

# Initialize variables
init = tf.global_variables_initializer()
# This step must come after all variables and OPs have been defined; otherwise the
# definitions are not guaranteed to be valid, and session.run cannot evaluate them.

# Start a session
with tf.Session() as sess:
    sess.run(init)

6. Iteratively update the parameters to the optimal solution

Iterative training must be done inside a session. The common practice is the with syntax, which closes the session automatically when the block ends.

training_epochs = 20  # number of epochs (example value)

with tf.Session() as sess:
    sess.run(init)

    for epoch in range(training_epochs):
        for (x, y) in zip(train_X, train_Y):
            sess.run(optimizer, feed_dict={X: x, Y: y})

Training can also use the MINIBATCH concept: each iteration takes a batch of samples and feeds it into the network for training, as sketched below.
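
A minimal sketch of minibatch feeding, assuming the train_X / train_Y arrays from above and a hypothetical batch_size:

batch_size = 10  # assumed batch size

for epoch in range(training_epochs):
    # Each step feeds batch_size samples into the network
    for start in range(0, len(train_X), batch_size):
        batch_x = train_X[start:start + batch_size]
        batch_y = train_Y[start:start + batch_size]
        sess.run(optimizer, feed_dict={X: batch_x, Y: batch_y})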

7. Test the model

print("cost=", sess.run(cost, feed_dict={X: train_X, Y: train_Y}), "W=", sess.run(W), "b=", sess.run(b))

# print ("cost:",cost.eval({X: train_X, Y: train_Y}))

8. Use the model

print("x=0.2, z=", sess.run(z, feed_dict={X: 0.2}))

Source: blog.csdn.net/qq_44082148/article/details/102991528