[Reading 1][2017] MATLAB and Deep Learning: Momentum (1)

Momentum

This section explores variations of the weight adjustment.

So far, the weight adjustment has relied on the simplest forms of Equations 2.7 and 3.7.

However, various weight adjustment forms are available.

The benefits of using the advanced weight adjustment formulas include higher stability and faster speeds in the training process of the neural network.

These characteristics are especially favorable for deep learning, as it is hard to train.

This section only covers the formulas that contain momentum, which have been used for a long time.

If necessary, you may want to study this further via the link shown in the footnote (sebastianruder.com/optimizing-gradient-descent).
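The momentum equations appear only as embedded images in the original post and are not reproduced here. As a reference, a standard momentum-augmented weight update, with notation assumed for this note ($\alpha$ the learning rate, $\delta$ the node's delta, $x$ the input, $\beta$ a positive constant less than 1, and $m^{-}$ the previous momentum), is:

$$\Delta w = \alpha\,\delta\,x, \qquad m = \Delta w + \beta\,m^{-}, \qquad w \leftarrow w + m$$

Here the weight is adjusted by the momentum $m$ rather than by $\Delta w$ alone, so each new update carries a decaying trace of all earlier ones.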

Although the influence diminishes over time, the old weight updates remain in the momentum.
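Unrolling the momentum recursion above (still under the assumed notation) makes this explicit: at step $k$,

$$m(k) = \Delta w(k) + \beta\,\Delta w(k-1) + \beta^{2}\,\Delta w(k-2) + \cdots$$

Because $\beta < 1$, older updates are scaled by ever-smaller powers of $\beta$: their influence fades over time but never vanishes entirely.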

Therefore, the weight is not solely affected by a particular weight update value.

As a result, the learning stability improves.

In addition, the momentum grows more and more as weight updates accumulate.

As a result, the weight update becomes greater and greater as well.

Therefore, the learning speed increases.
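As an illustration of the idea, here is a minimal MATLAB sketch of a momentum-based update loop. The variable names and the dummy input and delta values are assumptions made for this example; they are not taken from the book's code listings.

```matlab
% Minimal sketch of a momentum-based weight update (illustrative only).
alpha = 0.9;                 % learning rate
beta  = 0.9;                 % momentum constant, less than 1
w     = 2*rand(1, 3) - 1;    % example weight vector
m     = zeros(size(w));      % momentum, initialized to zero

for k = 1:100
    x     = rand(1, 3);          % dummy input, for illustration only
    delta = rand;                % dummy delta, for illustration only
    dW    = alpha * delta * x;   % basic weight update, Delta w
    m     = dW + beta * m;       % m = Delta w + beta * m^- (previous momentum)
    w     = w + m;               % adjust the weight by the momentum, not by dW alone
end
```

With beta set to zero this reduces to the plain update w = w + dW, which is one way to see what the momentum term adds.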

Translated from MATLAB Deep Learning by Phil Kim.



Reposted from blog.csdn.net/weixin_42825609/article/details/83068113