TensorFlow optimizers

The paper "An overview of gradient descent optimization algorithms" explains how these algorithms work; refer to it (or the accompanying blog post) for the underlying theory.

TensorFlow implementations

The relevant optimizer classes in the tf.train package:

Optimizer 
GradientDescentOptimizer 
AdadeltaOptimizer 
AdagradOptimizer 
AdagradDAOptimizer 
MomentumOptimizer 
AdamOptimizer 
FtrlOptimizer 
RMSPropOptimizer

1 GradientDescentOptimizer

tf.train.GradientDescentOptimizer(learning_rate, use_locking=False, name='GradientDescent')

learning_rate: the learning rate, which controls how fast the parameters are updated. Values that are too large or too small both hurt the run time and the result: too large and the optimization may diverge, too small and training takes far too long.
The remaining arguments can usually be left at their defaults.
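
A minimal sketch of how this optimizer is typically used; the toy loss and all numeric values below are made up for illustration:

import tensorflow as tf

# Toy quadratic loss over a single variable.
w = tf.Variable(5.0, name='w')
loss = tf.square(w - 3.0)

# Plain gradient descent with a fixed learning rate.
train_op = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(100):
        sess.run(train_op)
    print(sess.run(w))  # approaches 3.0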

2 AdadeltaOptimizer

tf.train.AdadeltaOptimizer(learning_rate=0.001, rho=0.95, epsilon=1e-08, use_locking=False, name='Adadelta')

learning_rate: a Tensor or a floating point value; the learning rate.
rho: a Tensor or a floating point value; the decay rate.
epsilon: a Tensor or a floating point value; a small constant used to better condition the gradient update.
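
Note that in TensorFlow's implementation learning_rate additionally scales the Adadelta update, and the documentation suggests learning_rate=1.0 to match the original paper; the small default of 0.001 is one plausible explanation for the convergence problem discussed later. A hedged construction sketch (toy loss, values for illustration only):

import tensorflow as tf

w = tf.Variable(5.0, name='w')
loss = tf.square(w - 3.0)

# learning_rate=1.0 means the Adadelta update is not scaled down further,
# which is closer to the formulation in the original paper.
train_op = tf.train.AdadeltaOptimizer(learning_rate=1.0, rho=0.95,
                                      epsilon=1e-08).minimize(loss)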

3 AdagradOptimizer

tf.train.AdagradOptimizer(learning_rate, initial_accumulator_value=0.1, use_locking=False, name='Adagrad')

learning_rate: the learning rate.
initial_accumulator_value: a floating point value; the starting value for the accumulators, must be positive.
use_locking: defaults to False, which allows concurrent reads and writes of the variables; if True, locks prevent concurrent updates.

4 MomentumOptimizer

tf.train.MomentumOptimizer(learning_rate, momentum, use_locking=False, name='Momentum', use_nesterov=False)

learning_rate: a Tensor or a floating point value; the learning rate.
momentum: a Tensor or a floating point value; the momentum.
use_locking: if True, use locks for the update operations.
use_nesterov: if True, use the Nesterov momentum update (see the sketch below).
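
A similar sketch with the Nesterov flag enabled; again the loss and all values are illustrative only:

import tensorflow as tf

w = tf.Variable(5.0, name='w')
loss = tf.square(w - 3.0)

# Momentum with the Nesterov correction; otherwise the usage pattern is
# the same as for the other optimizers.
train_op = tf.train.MomentumOptimizer(learning_rate=0.05, momentum=0.9,
                                      use_nesterov=True).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(200):
        sess.run(train_op)
    print(sess.run(w))  # close to 3.0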

5 AdamOptimizer

tf.train.AdamOptimizer(learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08, use_locking=False, name='Adam')

learning_rate: the learning rate.
beta1: a float value or a constant float tensor; the exponential decay rate for the 1st moment estimates.
beta2: a float value or a constant float tensor; the exponential decay rate for the 2nd moment estimates.
epsilon: a small constant for numerical stability.

exponential_decay

tf.train.exponential_decay(
    learning_rate,    # initial learning rate
    global_step,      # current training step
    decay_steps,      # decay period: after this many steps the rate has decayed to learning_rate * decay_rate
    decay_rate,       # decay factor, usually between 0 and 1
    staircase=False,  # default False; if True, global_step / decay_steps is truncated to an integer, giving a stepwise decay
    name=None
)

The learning rate then changes according to:
decayed_learning_rate = learning_rate * decay_rate ^ (global_step / decay_steps)
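
For the decay to take effect, global_step has to be incremented, which is normally done by passing it to minimize(). A small self-contained sketch (all values are illustrative):

import tensorflow as tf

global_step = tf.Variable(0, trainable=False)
# Start at 0.1 and multiply by 0.6 every 1000 steps.
learning_rate = tf.train.exponential_decay(0.1, global_step,
                                           decay_steps=1000, decay_rate=0.6)

w = tf.Variable(5.0)
loss = tf.square(w - 3.0)
# Passing global_step makes the optimizer increment it on every update;
# omit it and global_step stays at 0, so the rate never decays.
train_op = tf.train.AdamOptimizer(learning_rate).minimize(loss, global_step=global_step)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(2000):
        sess.run(train_op)
    print(sess.run([global_step, learning_rate]))  # 2000 and 0.1 * 0.6**2 = 0.036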

Usage

AdamOptimizer+exponential_decay

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

train_X = np.linspace(-1, 1, 100)
train_Y = 2 * train_X + np.random.randn(*train_X.shape) * 0.33 + 10

X = tf.placeholder("float",)
Y = tf.placeholder("float")
w = tf.Variable(0.0, trainable=True, name='weight')
b = tf.Variable(0.0, trainable=True, name='bias')
loss = tf.square(Y - X*w - b)

global_step = tf.Variable(0, trainable=False)
lr = 0.01
decay_step = 1000  # 100
decay_rate = 0.6
learning_rate = tf.train.exponential_decay(lr, global_step, decay_step, decay_rate,)

train_op = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss)
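# Note: global_step is not passed to minimize() here, so it is never
# incremented and exponential_decay never actually lowers the rate,
# which is why the output below always shows "global_step: 0".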

w_value, b_value = [], []
global_step_value = 0
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    epoch = 1
    for i in range(25):
        for(x, y) in zip(train_X, train_Y):
            _, w_value, b_value, global_step_value = sess.run([train_op, w, b, global_step],
                                           feed_dict={X: x, Y: y})
            print("global_step: {}, epoch: {}, w: {}, b:{}".format(global_step_value, epoch, w_value, b_value))
        epoch +=1

plt.plot(train_X, train_Y, "+")
plt.plot(train_X, train_X.dot(w_value)+b_value)
plt.show()

Result:

(Figure: the training data points and the fitted line.)
global_step: 0, epoch: 25, w: 1.8610148429870605, b:9.99776840209961
global_step: 0, epoch: 25, w: 1.8617169857025146, b:9.998320579528809
global_step: 0, epoch: 25, w: 1.8623504638671875, b:9.998818397521973
global_step: 0, epoch: 25, w: 1.8629928827285767, b:9.99931526184082
global_step: 0, epoch: 25, w: 1.863304615020752, b:9.999588012695312
global_step: 0, epoch: 25, w: 1.8639928102493286, b:10.000094413757324
global_step: 0, epoch: 25, w: 1.8646544218063354, b:10.000576972961426
global_step: 0, epoch: 25, w: 1.8651009798049927, b:10.000919342041016
global_step: 0, epoch: 25, w: 1.865858554840088, b:10.001441955566406

Problem:

If everything else is kept the same but the optimizer is switched to train_op = tf.train.AdadeltaOptimizer(learning_rate=learning_rate).minimize(loss), training fails to converge, whereas GradientDescentOptimizer still works. A plausible reason, noted in the AdadeltaOptimizer section above, is that TensorFlow scales the Adadelta update by learning_rate, so with a base rate of 0.01 the effective steps are tiny and w and b barely move.
The non-converging output looks like this:
global_step: 0, epoch: 25, w: 0.0019085286185145378, b:0.023146923631429672
global_step: 0, epoch: 25, w: 0.0019197502406314015, b:0.02316095307469368
global_step: 0, epoch: 25, w: 0.0019310542847961187, b:0.02317480556666851
global_step: 0, epoch: 25, w: 0.0019428601954132318, b:0.023188995197415352
global_step: 0, epoch: 25, w: 0.001954132691025734, b:0.023202279582619667
global_step: 0, epoch: 25, w: 0.0019664266146719456, b:0.023216500878334045
global_step: 0, epoch: 25, w: 0.0019785056356340647, b:0.023230215534567833

Raising the learning rate solves the problem; the example below (which also switches to AdagradOptimizer, with a base rate of 0.5) converges well:

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

train_X = np.linspace(-1, 1, 100)
train_Y = 2 * train_X + np.random.randn(*train_X.shape) * 0.33 + 10

X = tf.placeholder("float",)
Y = tf.placeholder("float")
w = tf.Variable(0.0, trainable=True, name='weight')
b = tf.Variable(0.0, trainable=True, name='bias')
loss = tf.square(Y - X*w - b)

global_step = tf.Variable(0, trainable=False)
lr = 0.5
decay_step = 1000  # 100
decay_rate = 0.6
learning_rate = tf.train.exponential_decay(lr, global_step, decay_step, decay_rate,)

train_op = tf.train.AdagradOptimizer(learning_rate=learning_rate).minimize(loss)

w_value, b_value = [], []
global_step_value = 0
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    epoch = 1
    for i in range(30):
        for(x, y) in zip(train_X, train_Y):
            _, w_value, b_value, global_step_value = sess.run([train_op, w, b, global_step],
                                           feed_dict={X: x, Y: y})
            print("global_step: {}, epoch: {}, w: {}, b:{}".format(global_step_value, epoch, w_value, b_value))
        epoch +=1

plt.plot(train_X, train_Y, "+")
plt.plot(train_X, train_X.dot(w_value)+b_value)
plt.show()

Result:

(Figure: the training data points and the fitted line.)

Effect of a fixed learning rate (without exponential_decay)

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

train_X = np.linspace(-1, 1, 100)
train_Y = 2 * train_X + np.random.randn(*train_X.shape) * 0.33 + 10

X = tf.placeholder("float",)
Y = tf.placeholder("float")
w = tf.Variable(0.0, trainable=True, name='weight')
b = tf.Variable(0.0, trainable=True, name='bias')
loss = tf.square(Y - X*w - b)

global_step = tf.Variable(0, trainable=False)
lr = 0.1
decay_step = 1000  # 100
decay_rate = 0.6
learning_rate = tf.train.exponential_decay(lr, global_step, decay_step, decay_rate,)
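# Note: the decayed learning_rate built above is not used below; Adadelta
# runs with a fixed learning rate of 10.0 instead.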

train_op = tf.train.AdadeltaOptimizer(learning_rate=10.0).minimize(loss)

w_value, b_value = [], []
global_step_value = 0
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    epoch = 1
    for i in range(30):
        for(x, y) in zip(train_X, train_Y):
            _, w_value, b_value, global_step_value = sess.run([train_op, w, b, global_step],
                                           feed_dict={X: x, Y: y})
            print("global_step: {}, epoch: {}, w: {}, b:{}".format(global_step_value, epoch, w_value, b_value))
        epoch +=1

plt.plot(train_X, train_Y, "+")
plt.plot(train_X, train_X.dot(w_value)+b_value)
plt.show()

output:
global_step: 0, epoch: 30, w: 2.080955982208252, b:10.012103080749512
global_step: 0, epoch: 30, w: 2.083653211593628, b:10.013960838317871
global_step: 0, epoch: 30, w: 2.086886167526245, b:10.016136169433594
global_step: 0, epoch: 30, w: 2.0803656578063965, b:10.01177978515625
global_step: 0, epoch: 30, w: 2.089266538619995, b:10.017807006835938
global_step: 0, epoch: 30, w: 2.091552257537842, b:10.01932144165039
global_step: 0, epoch: 30, w: 2.0865447521209717, b:10.01606559753418
global_step: 0, epoch: 30, w: 2.0876119136810303, b:10.016745567321777

With exponential_decay and a large base rate (lr = 10.0), Adadelta also converges:

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

train_X = np.linspace(-1, 1, 100)
train_Y = 2 * train_X + np.random.randn(*train_X.shape) * 0.33 + 10

X = tf.placeholder("float",)
Y = tf.placeholder("float")
w = tf.Variable(0.0, trainable=True, name='weight')
b = tf.Variable(0.0, trainable=True, name='bias')
loss = tf.square(Y - X*w - b)

global_step = tf.Variable(0, trainable=False)
lr = 10.0
decay_step = 1000  # 100
decay_rate = 0.6
learning_rate = tf.train.exponential_decay(lr, global_step, decay_step, decay_rate,)

train_op = tf.train.AdadeltaOptimizer(learning_rate=learning_rate).minimize(loss)

w_value, b_value = [], []
global_step_value = 0
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    epoch = 1
    for i in range(30):
        for(x, y) in zip(train_X, train_Y):
            _, w_value, b_value, global_step_value = sess.run([train_op, w, b, global_step],
                                           feed_dict={X: x, Y: y})
            print("global_step: {}, epoch: {}, w: {}, b:{}".format(global_step_value, epoch, w_value, b_value))
        epoch +=1

plt.plot(train_X, train_Y, "+")
plt.plot(train_X, train_X.dot(w_value)+b_value)
plt.show()

output:
global_step: 0, epoch: 30, w: 2.021937608718872, b:10.024102210998535
global_step: 0, epoch: 30, w: 2.020988702774048, b:10.023301124572754
global_step: 0, epoch: 30, w: 2.009711503982544, b:10.0131254196167
global_step: 0, epoch: 30, w: 2.016847610473633, b:10.019465446472168
global_step: 0, epoch: 30, w: 2.016056776046753, b:10.018778800964355
global_step: 0, epoch: 30, w: 2.0100505352020264, b:10.01366138458252
global_step: 0, epoch: 30, w: 2.0078115463256836, b:10.011795043945312
global_step: 0, epoch: 30, w: 2.0068187713623047, b:10.010985374450684
global_step: 0, epoch: 30, w: 2.0039358139038086, b:10.008682250976562

Reposted from blog.csdn.net/qq_27009517/article/details/86525967