Neural Network Optimization Algorithms, Part 1 (Gradient Descent and Learning Rate Setting)

1. Gradient Descent

Gradient descent is mainly used to optimize the value of a single parameter, while backpropagation provides an efficient way to apply gradient descent to all of the parameters, so that the loss of the neural network on the training data becomes as small as possible. Backpropagation is the core algorithm for training neural networks: given a defined loss function, it adjusts the parameter values so that the model's loss on the training set reaches a small value.

Let θ denote the parameters of the neural network and J(θ) the loss on the training set for a given parameter value. The whole optimization process can then be abstracted as finding a θ that minimizes J(θ). Since there is currently no general method that can directly solve for the optimal parameters of an arbitrary loss function, gradient descent is in practice the most commonly used optimization method for neural networks. The figure illustrates the principle of gradient descent.

The x axis represents the value of the parameter θ and the y axis the value of the loss function J(θ). Suppose the current parameter and loss correspond to the small dot in the figure; gradient descent will move the parameter toward the left along the x axis, so that the dot moves in the direction of the arrow. With learning rate η, the parameter update rule is

θ_{n+1} = θ_n − η · ∂J(θ_n)/∂θ_n

where the partial derivative is the gradient of the loss at the current point: the parameter always moves in the direction of the negative gradient.
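As a minimal sketch of this update rule (plain Python written for this note, not taken from the original post; the loss (θ+1)² and the starting value 5 are chosen only to match the example later in the text), one gradient-descent step simply subtracts the scaled gradient:

#Sketch of the update rule theta <- theta - eta * dJ(theta)/dtheta,
#using J(theta) = (theta + 1)^2 as an illustrative loss
def grad(theta):
    return 2.0 * (theta + 1.0)          # dJ/dtheta

theta = 5.0                             # current position (the small dot in the figure)
eta = 0.2                               # learning rate
for step in range(5):
    theta -= eta * grad(theta)          # move along the negative gradient
    print(step, theta, (theta + 1.0) ** 2)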

Note: gradient descent does not guarantee that the optimized function reaches the global optimum, as illustrated below:

 

When the parameter is at the small black dot, the partial derivative is zero (or close to zero) and the parameter stops updating. If the initial value of x falls in the shaded interval on the right, gradient descent will end up at the local optimum marked by the black dot.

This shows that when training a neural network, the initial parameter values can greatly affect the final result. Only when the loss function is convex is gradient descent guaranteed to reach the global optimum.
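To make the dependence on the initial value concrete, here is a hedged illustration in plain Python (the non-convex function f(x) = x^4 − 3x^2 + x is a hypothetical example chosen only because it has two local minima; it does not come from the original post):

#Sketch: gradient descent on a non-convex function with two local minima
def grad(x):
    return 4*x**3 - 6*x + 1             # derivative of f(x) = x^4 - 3x^2 + x

for x0 in (-2.0, 2.0):                  # two different initial values
    x = x0
    for _ in range(1000):
        x -= 0.01 * grad(x)
    print("start =", x0, "-> converged to", round(x, 3))

Starting from −2.0 the run converges to the global minimum near −1.30, while starting from 2.0 it gets stuck in the local minimum near 1.13.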

Besides not being guaranteed to reach the global optimum, another problem with gradient descent is that it takes a long time to compute: because the loss is minimized over the full training set, J(θ) is the sum of the losses on all of the training data.
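As a rough sketch of why this is expensive (illustrative Python; the data xs, ys and the squared-error loss are hypothetical placeholders, not part of the original example): every single update of θ has to iterate over every training example.

#Sketch: in batch gradient descent the loss J(theta) sums over ALL training samples,
#so one parameter update costs time proportional to the size of the training set.
def total_loss(theta, xs, ys):
    return sum((theta * x - y) ** 2 for x, y in zip(xs, ys))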

Example: take loss = (w + 1)^2 as the loss function and minimize it with gradient descent.

The code is as follows:

#coding:utf-8
#Loss function loss = (w+1)^2, with the initial value of w set to the constant 5. Backpropagation finds the optimal w, i.e. the w that minimizes loss.

import tensorflow as tf
#Define the parameter w to be optimized, with initial value 5
w=tf.Variable(tf.constant(5,dtype=tf.float32))

#Define the loss function loss
loss=tf.square(w+1)   #TensorFlow also supports writing the expression directly, e.g. loss=(w+1)**2

#Define the backpropagation method (learning rate 0.2, matching the output below)
train_step=tf.train.GradientDescentOptimizer(0.2).minimize(loss)

#Create a session and train for 40 steps
with tf.Session() as sess:
	init_op=tf.global_variables_initializer()
	sess.run(init_op)
	print("Before training: w is %f, loss is %f." % (sess.run(w), sess.run(loss)))
	for i in range(40):
		sess.run(train_step)
		w_val=sess.run(w)
		loss_val=sess.run(loss)
		print("After %d steps: w is %f, loss is %f." % (i, w_val, loss_val))

Output:

(tf1.5) zhangkf@Ubuntu2:~/tf/tf4$ python loss.py 
2018-12-11 16:16:52.387120: E tensorflow/stream_executor/cuda/cuda_driver.cc:406] failed call to cuInit: CUDA_ERROR_UNKNOWN
After 0 steps: w is 2.600000,   loss is 12.959999.
After 1 steps: w is 1.160000,   loss is 4.665599.
After 2 steps: w is 0.296000,   loss is 1.679616.
After 3 steps: w is -0.222400,   loss is 0.604662.
After 4 steps: w is -0.533440,   loss is 0.217678.
After 5 steps: w is -0.720064,   loss is 0.078364.
After 6 steps: w is -0.832038,   loss is 0.028211.
After 7 steps: w is -0.899223,   loss is 0.010156.
After 8 steps: w is -0.939534,   loss is 0.003656.
After 9 steps: w is -0.963720,   loss is 0.001316.
After 10 steps: w is -0.978232,   loss is 0.000474.
After 11 steps: w is -0.986939,   loss is 0.000171.
After 12 steps: w is -0.992164,   loss is 0.000061.
After 13 steps: w is -0.995298,   loss is 0.000022.
After 14 steps: w is -0.997179,   loss is 0.000008.
After 15 steps: w is -0.998307,   loss is 0.000003.
After 16 steps: w is -0.998984,   loss is 0.000001.
After 17 steps: w is -0.999391,   loss is 0.000000.
After 18 steps: w is -0.999634,   loss is 0.000000.
After 19 steps: w is -0.999781,   loss is 0.000000.
After 20 steps: w is -0.999868,   loss is 0.000000.
After 21 steps: w is -0.999921,   loss is 0.000000.
After 22 steps: w is -0.999953,   loss is 0.000000.
After 23 steps: w is -0.999972,   loss is 0.000000.
After 24 steps: w is -0.999983,   loss is 0.000000.
After 25 steps: w is -0.999990,   loss is 0.000000.
After 26 steps: w is -0.999994,   loss is 0.000000.
After 27 steps: w is -0.999996,   loss is 0.000000.
After 28 steps: w is -0.999998,   loss is 0.000000.
After 29 steps: w is -0.999999,   loss is 0.000000.
After 30 steps: w is -0.999999,   loss is 0.000000.
After 31 steps: w is -1.000000,   loss is 0.000000.
After 32 steps: w is -1.000000,   loss is 0.000000.
After 33 steps: w is -1.000000,   loss is 0.000000.
After 34 steps: w is -1.000000,   loss is 0.000000.
After 35 steps: w is -1.000000,   loss is 0.000000.
After 36 steps: w is -1.000000,   loss is 0.000000.
After 37 steps: w is -1.000000,   loss is 0.000000.
After 38 steps: w is -1.000000,   loss is 0.000000.
After 39 steps: w is -1.000000,   loss is 0.000000.

2. Further Optimization of Neural Networks: Setting the Learning Rate

The previous section showed how the learning rate is used in gradient descent: it determines how far the parameters move on each update. A learning rate that is too large makes the parameter oscillate around the optimum, while one that is too small makes training take far too long. To address this, TensorFlow provides a more flexible way of setting the learning rate: exponential decay.
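Before turning to the decay formula, both failure modes are easy to reproduce on the toy loss from the first example (a minimal sketch; the specific learning rates 1 and 0.0001 are chosen here only for illustration and are not part of the original code):

#Sketch: loss = (w+1)^2 with an over-large and an over-small learning rate
def step(w, lr):
    return w - lr * 2.0 * (w + 1.0)     # w <- w - lr * dloss/dw

w_big, w_small = 5.0, 5.0
for _ in range(40):
    w_big = step(w_big, 1.0)            # jumps between 5 and -7, never settles
    w_small = step(w_small, 0.0001)     # after 40 steps still close to 5
print(w_big, w_small)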

#The tf.train.exponential_decay function implements an exponentially decaying learning rate.

decayed_learning_rate = learning_rate * decay_rate ^ (global_step / decay_steps)

Here decayed_learning_rate is the learning rate used in each round of optimization, learning_rate is the initial learning rate, decay_rate is the decay factor, global_step is a non-trainable variable that counts how many training steps have been run, and decay_steps is the decay speed, i.e. how many steps pass before the learning rate is updated. The learning rate thus gradually decreases as the number of iterations grows; decay_steps is usually set to the number of iterations needed to go through the training data once, i.e. the total number of training samples divided by the number of samples in each batch.
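A quick way to check the formula (plain Python, not the TensorFlow API; the constants reuse LEARNING_RATE_BASE = 0.1, LEARNING_RATE_DECAY = 0.99 and LEARNING_RATE_STEP = 1 from the code below) is to compute a few decayed values by hand. The staircase flag, which the example below sets to True, truncates global_step / decay_steps to an integer so that the rate drops in discrete steps rather than decaying smoothly:

#Sketch: the exponential-decay formula from above, with optional staircase behavior
def decayed_lr(lr_base, decay_rate, global_step, decay_steps, staircase=True):
    exponent = global_step / decay_steps
    if staircase:
        exponent = global_step // decay_steps   # integer division: stepwise decay
    return lr_base * decay_rate ** exponent

for step in range(1, 5):
    print(step, decayed_lr(0.1, 0.99, step, 1))  # 0.099, 0.09801, 0.0970299, ...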

The experiment code is as follows:

#coding:utf-8
#Loss function loss = (w+1)^2, with the initial value of w set to the constant 5. Backpropagation finds the optimal w, i.e. the w that minimizes loss.

#Use an exponentially decaying learning rate to get a fast descent early in training, so that the model converges within fewer training steps.
import tensorflow as tf

LEARNING_RATE_BASE = 0.1	 #initial learning rate
LEARNING_RATE_DECAY = 0.99 	 #learning rate decay factor
LEARNING_RATE_STEP = 1  	 #number of BATCH_SIZE rounds fed in before the learning rate is updated; usually set to total samples / BATCH_SIZE

#Counter for how many rounds of BATCH_SIZE have been run; initial value 0, marked as not trainable
global_step = tf.Variable(0, trainable=False)   #trainable=False marks the step counter as not trainable

#Define the exponentially decaying learning rate
#The exponential_decay function generates the learning rate; the initial learning rate is 0.1
learning_rate = tf.train.exponential_decay(LEARNING_RATE_BASE, global_step, LEARNING_RATE_STEP, LEARNING_RATE_DECAY, staircase=True)

#Define the parameter to be optimized, with initial value 5
w = tf.Variable(tf.constant(5, dtype=tf.float32))

#Define the loss function loss
loss = tf.square(w+1)

#Define the backpropagation method
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)

#Create a session and train for 40 steps
with tf.Session() as sess:
    init_op=tf.global_variables_initializer()
    sess.run(init_op)
    for i in range(40):
        sess.run(train_step)
        learning_rate_val = sess.run(learning_rate)
        global_step_val = sess.run(global_step)
        w_val = sess.run(w)
        loss_val = sess.run(loss)
        print("After %d steps: global_step is %f, w is %f, learning rate is %f, loss is %f" % (i, global_step_val, w_val, learning_rate_val, loss_val))

Output:

(tf1.5) zhangkf@Ubuntu2:~/tf/tf4$ python opt4_5.py 
2018-12-11 16:02:25.500643: E tensorflow/stream_executor/cuda/cuda_driver.cc:406] failed call to cuInit: CUDA_ERROR_UNKNOWN
After 0 steps: global_step is 1.000000, w is 3.800000, learning rate is 0.099000, loss is 23.040001
After 1 steps: global_step is 2.000000, w is 2.849600, learning rate is 0.098010, loss is 14.819419
After 2 steps: global_step is 3.000000, w is 2.095001, learning rate is 0.097030, loss is 9.579033
After 3 steps: global_step is 4.000000, w is 1.494386, learning rate is 0.096060, loss is 6.221961
After 4 steps: global_step is 5.000000, w is 1.015167, learning rate is 0.095099, loss is 4.060896
After 5 steps: global_step is 6.000000, w is 0.631886, learning rate is 0.094148, loss is 2.663051
After 6 steps: global_step is 7.000000, w is 0.324608, learning rate is 0.093207, loss is 1.754587
After 7 steps: global_step is 8.000000, w is 0.077684, learning rate is 0.092274, loss is 1.161403
After 8 steps: global_step is 9.000000, w is -0.121202, learning rate is 0.091352, loss is 0.772287
After 9 steps: global_step is 10.000000, w is -0.281761, learning rate is 0.090438, loss is 0.515867
After 10 steps: global_step is 11.000000, w is -0.411674, learning rate is 0.089534, loss is 0.346128
After 11 steps: global_step is 12.000000, w is -0.517024, learning rate is 0.088638, loss is 0.233266
After 12 steps: global_step is 13.000000, w is -0.602644, learning rate is 0.087752, loss is 0.157891
After 13 steps: global_step is 14.000000, w is -0.672382, learning rate is 0.086875, loss is 0.107334
After 14 steps: global_step is 15.000000, w is -0.729305, learning rate is 0.086006, loss is 0.073276
After 15 steps: global_step is 16.000000, w is -0.775868, learning rate is 0.085146, loss is 0.050235
After 16 steps: global_step is 17.000000, w is -0.814036, learning rate is 0.084294, loss is 0.034583
After 17 steps: global_step is 18.000000, w is -0.845387, learning rate is 0.083451, loss is 0.023905
After 18 steps: global_step is 19.000000, w is -0.871193, learning rate is 0.082617, loss is 0.016591
After 19 steps: global_step is 20.000000, w is -0.892476, learning rate is 0.081791, loss is 0.011561
After 20 steps: global_step is 21.000000, w is -0.910065, learning rate is 0.080973, loss is 0.008088
After 21 steps: global_step is 22.000000, w is -0.924629, learning rate is 0.080163, loss is 0.005681
After 22 steps: global_step is 23.000000, w is -0.936713, learning rate is 0.079361, loss is 0.004005
After 23 steps: global_step is 24.000000, w is -0.946758, learning rate is 0.078568, loss is 0.002835
After 24 steps: global_step is 25.000000, w is -0.955125, learning rate is 0.077782, loss is 0.002014
After 25 steps: global_step is 26.000000, w is -0.962106, learning rate is 0.077004, loss is 0.001436
After 26 steps: global_step is 27.000000, w is -0.967942, learning rate is 0.076234, loss is 0.001028
After 27 steps: global_step is 28.000000, w is -0.972830, learning rate is 0.075472, loss is 0.000738
After 28 steps: global_step is 29.000000, w is -0.976931, learning rate is 0.074717, loss is 0.000532
After 29 steps: global_step is 30.000000, w is -0.980378, learning rate is 0.073970, loss is 0.000385
After 30 steps: global_step is 31.000000, w is -0.983281, learning rate is 0.073230, loss is 0.000280
After 31 steps: global_step is 32.000000, w is -0.985730, learning rate is 0.072498, loss is 0.000204
After 32 steps: global_step is 33.000000, w is -0.987799, learning rate is 0.071773, loss is 0.000149
After 33 steps: global_step is 34.000000, w is -0.989550, learning rate is 0.071055, loss is 0.000109
After 34 steps: global_step is 35.000000, w is -0.991035, learning rate is 0.070345, loss is 0.000080
After 35 steps: global_step is 36.000000, w is -0.992297, learning rate is 0.069641, loss is 0.000059
After 36 steps: global_step is 37.000000, w is -0.993369, learning rate is 0.068945, loss is 0.000044
After 37 steps: global_step is 38.000000, w is -0.994284, learning rate is 0.068255, loss is 0.000033
After 38 steps: global_step is 39.000000, w is -0.995064, learning rate is 0.067573, loss is 0.000024
After 39 steps: global_step is 40.000000, w is -0.995731, learning rate is 0.066897, loss is 0.000018

 
