【读书1】【2017】MATLAB与深度学习——深度学习(1)

也许不容易理解为什么只加入额外的一层却花费了如此长的时间。

It may not be easy to understand why it took so long for just one additional layer.

这是因为没有找到多层神经网络的正确学习规则。

It was because the proper learning rule for the multi-layer neural network was not found.

由于训练是神经网络存储信息的唯一途径,不可训练的神经网络是无用的。

Since training is the only way for a neural network to store information, an untrainable neural network is useless.

多层神经网络的训练问题最终在1986年引入反向传播算法时得到解决。

The problem of training the multi-layer neural network was finally solved in 1986, when the back-propagation algorithm was introduced.

神经网络再次登上了历史舞台。

The neural network was on stage again.

然而,很快又遇到了另一个问题。


However, it was soon met with another problem.

它在实际问题上的表现并没有达到预期。

Its performance on practical problems did not meet expectations.

当然,也尝试过各种方法来克服这些限制,包括添加隐藏层和隐藏层中的节点。

Of course, there were various attempts to overcome the limitations, including the addition of hidden layers and the addition of nodes in the hidden layer.

但是都没有任何作用。

However, none of them worked.

其中很多方法的性能甚至很差。

Many of them yielded even poorer performance.

由于神经网络具有非常简单的结构和概念,所以没有什么方法可以进一步改善它了。

As the neural network has a very simple architecture and concept, there was nothing much to do that could improve it.

最后,神经网络被判处没有改进可能性的死刑,它被大家遗忘了。

Finally, the neural network was sentenced to having no possibility of improvement, and it was forgotten.

神经网络被遗忘了大约20年,直到2000年代中期引入深度学习,才打开了新的研究大门。

It remained forgotten for about 20 years, until the mid-2000s, when Deep Learning was introduced, opening a new door.

由于深度神经网络的训练难度较大,深度隐藏层需要较长时间才能获得足够的性能。

It took a while for the deep hidden layer to yield sufficient performance because of the difficulties in training the deep neural network.

无论如何,当前的深度学习技术产生了令人眼花缭乱的性能水平,已经超越其它机器学习技术和神经网络,并且盛行于人工智能的研究中。

Anyway, the current technologies in Deep Learning yield dazzling levels of performance, which outsmart the other Machine Learning techniques as well as other neural networks, and prevail in the studies of Artificial Intelligence.

综上所述,多层神经网络花了30年才解决单层神经网络存在的问题,其原因在于缺少学习规则,而这一问题最终由反向传播算法解决。

In summary, the reason the multi-layer neural network took 30 years to solve the problems of the single-layer neural network was the lack of a learning rule, which was eventually solved by the back-propagation algorithm.

相比之下,又过了20年才引入基于深度神经网络的深度学习,其原因在于性能欠佳。

In contrast, the reason another 20 years passed until the introduction of deep neural network-based Deep Learning was the poor performance.

具有附加隐藏层的反向传播训练常常导致较差的性能。

The backpropagation training with the additional hidden layers often resulted in poorer performance.

深度学习为这个问题提供了解决方案。

Deep Learning provided a solution to this problem.

深度神经网络的改进 Improvement of the Deep Neural Network

尽管深度学习具有显著的性能,但实际上并没有任何关键技术。

Despite its outstanding achievements, Deep Learning actually does not have any critical technologies to present.

深度学习的创新是许多小技术改进的结果。

The innovation of Deep Learning is a result of many small technical improvements.

本节简要介绍为什么深度神经网络产生较差的性能以及深度学习如何克服这个问题。

This section briefly introduces why the deep neural network yielded poor performance and how Deep Learning overcame this problem.

深度神经网络性能较差,是因为网络没有得到正确的训练。

The reason that the neural network with deeper layers yielded poorer performance was that the network was not properly trained.
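One widely cited reason the training goes wrong is the vanishing gradient: with sigmoid activations, the error signal shrinks as it is propagated backward through each layer, so the layers near the input barely learn. The sketch below is not from the book; it is a minimal NumPy illustration with hypothetical layer sizes, backpropagating a unit error signal through a stack of sigmoid layers and measuring how its norm decays.

```python
import numpy as np

# Minimal sketch (not from the book): propagate an error signal backward
# through stacked sigmoid layers and watch its norm shrink layer by layer.
# Layer count, width, and weight scale are arbitrary illustrative choices.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_layers, width = 10, 8
x = rng.standard_normal(width)

# Forward pass: record each layer's activation.
weights = [rng.standard_normal((width, width)) * 0.5 for _ in range(n_layers)]
activations = [x]
for W in weights:
    activations.append(sigmoid(W @ activations[-1]))

# Backward pass: start from a unit error signal at the output and
# multiply by the sigmoid derivative a * (1 - a) at every layer.
delta = np.ones(width)
norms = []
for W, a in zip(reversed(weights), reversed(activations[1:])):
    delta = (W.T @ delta) * a * (1.0 - a)
    norms.append(np.linalg.norm(delta))

# norms[0] is the gradient norm nearest the output;
# norms[-1] is the norm that reaches the earliest layer.
print(norms[0], norms[-1])
```

Because the sigmoid derivative never exceeds 0.25, each backward step tends to attenuate the signal, and the printed final norm is far smaller than the first — a simple picture of why stacking more hidden layers made backpropagation training worse rather than better.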

——本文译自Phil Kim所著的《Matlab Deep Learning》



转载自blog.csdn.net/weixin_42825609/article/details/83855291