TensorFlow forward propagation and loss function

# [4] Forward propagation via placeholder
import tensorflow as tf
w1 = tf.Variable(tf.random_normal([2, 3], stddev=1, seed=1))
w2 = tf.Variable(tf.random_normal([3, 1], stddev=1, seed=1))

# Define a placeholder to hold the input data. The dimensions do not have to be
# specified here, but if they are known, giving them reduces the chance of errors.
x = tf.placeholder(tf.float32, shape=(3, 2), name="input")
a = tf.matmul(x, w1)
y = tf.matmul(a, w2)

sess = tf.Session()
init_op = tf.global_variables_initializer()
sess.run(init_op)
# print(sess.run(y))  # wrong here: x has not been given a value yet
# Output: [[3.95757794], ...] -- a 3x1 matrix, one row per input sample
print(sess.run(y,feed_dict={x:[[0.7,0.9], [0.1,0.4], [0.5,0.8]]}))

# Use the sigmoid function to map y into the range 0~1. After the transformation,
# y is the predicted probability that a sample is positive, and 1-y the
# probability that it is negative.
y = tf.sigmoid(y)
# Placeholder for the ground-truth labels (needed by the loss below)
y_ = tf.placeholder(tf.float32, shape=(3, 1), name="y-input")
# Define the loss function: cross entropy
cross_entropy = -tf.reduce_mean(
    y_ * tf.log(tf.clip_by_value(y, 1e-10, 1.0))
    + (1 - y_) * tf.log(tf.clip_by_value(1 - y, 1e-10, 1.0)))
# Define the learning rate
learning_rate = 0.001
# Define the backpropagation algorithm to optimize the network parameters
train_step = tf.train.AdamOptimizer(learning_rate).minimize(cross_entropy)
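As the comment on the placeholder above notes, its dimensions do not have to be fully specified. A minimal sketch (same two-layer network, illustrative inputs) that uses None for the batch dimension, so one graph can be fed batches of any size:

import tensorflow as tf

w1 = tf.Variable(tf.random_normal([2, 3], stddev=1, seed=1))
w2 = tf.Variable(tf.random_normal([3, 1], stddev=1, seed=1))

# None leaves the batch size open; only the feature dimension is fixed
x = tf.placeholder(tf.float32, shape=(None, 2), name="input")
y = tf.matmul(tf.matmul(x, w1), w2)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, feed_dict={x: [[0.7, 0.9]]}))               # batch of 1
    print(sess.run(y, feed_dict={x: [[0.7, 0.9], [0.1, 0.4]]}))  # batch of 2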


In the code above, cross_entropy defines the cross entropy between the ground-truth values and the predicted values, a loss function commonly used in classification problems.
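The tf.clip_by_value call keeps the predictions away from exactly 0 or 1 so that the logarithm never produces -inf. A small NumPy sketch of the same formula (the labels and predictions are toy values, purely illustrative):

import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-10):
    # Clip predictions into [eps, 1.0], mirroring tf.clip_by_value above
    p = np.clip(y_pred, eps, 1.0)
    q = np.clip(1.0 - y_pred, eps, 1.0)
    return -np.mean(y_true * np.log(p) + (1.0 - y_true) * np.log(q))

y_true = np.array([1.0, 0.0, 1.0])
print(cross_entropy(y_true, np.array([0.9, 0.1, 0.8])))  # confident and correct: small loss
print(cross_entropy(y_true, np.array([0.1, 0.9, 0.2])))  # confident and wrong: large loss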

train_step defines the optimization method for backpropagation. TensorFlow currently supports 10 different optimizers, and the optimization algorithm can be chosen to suit the application. Three commonly used ones are tf.train.GradientDescentOptimizer, tf.train.AdamOptimizer, and tf.train.MomentumOptimizer.
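For illustration, any of the three can be swapped in on the train_step line above; only the parameter-update rule changes (the momentum value below is an arbitrary example, not from the original code):

# Pick one of the following; all minimize the same cross_entropy loss
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
train_step = tf.train.MomentumOptimizer(learning_rate, momentum=0.9).minimize(cross_entropy)
train_step = tf.train.AdamOptimizer(learning_rate).minimize(cross_entropy)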

After the backpropagation algorithm is defined, running sess.run(train_step) optimizes all the variables in the GraphKeys.TRAINABLE_VARIABLES collection, reducing the loss function on the current batch.
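A minimal training-loop sketch, under the assumption that x and y_ were declared with shape=(None, 2) and shape=(None, 1) so arbitrary batch sizes can be fed (the toy dataset, labeling rule, and batch size are illustrative):

import numpy as np

# Toy dataset: label is 1 when x1 + x2 < 1 (illustrative rule only)
X = np.random.rand(128, 2)
Y = [[int(x1 + x2 < 1)] for (x1, x2) in X]

batch_size = 8
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(1000):
        start = (i * batch_size) % 128
        end = start + batch_size
        # Each run of train_step updates every trainable variable once
        sess.run(train_step, feed_dict={x: X[start:end], y_: Y[start:end]})
        if i % 100 == 0:
            # Track the loss on the whole dataset as training progresses
            print(i, sess.run(cross_entropy, feed_dict={x: X, y_: Y}))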


Origin: blog.csdn.net/summer_xj1/article/details/89060485