[Reading Notes 1] [2017] MATLAB Deep Learning: Example of Multiclass Classification (6)

The softmax function is employed as the activation function of the output nodes.

The correct output of the training data is converted into a vector using the one-hot encoding method.

The cost function of the learning rule employs the cross entropy function.

Chapter 5. Deep Learning

It's time for Deep Learning.

You don't need to be nervous, though.

As Deep Learning is still an extension of the neural network, most of what you previously read is applicable.

Therefore, you don't have many additional concepts to learn.

Briefly, Deep Learning is a Machine Learning technique that employs the deep neural network.

As you know, the deep neural network is a multi-layer neural network that contains two or more hidden layers.

Although this may be disappointingly simple, this is the true essence of Deep Learning.
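To make the definition concrete, here is a minimal MATLAB sketch of a forward pass through such a network; the layer sizes and random weights are illustrative assumptions, not values from the book:

```matlab
% A multi-layer network with two hidden layers, i.e. a "deep" network.
x  = rand(3, 1);               % input (3 nodes)
W1 = randn(4, 3);              % input -> hidden layer 1 (4 nodes)
W2 = randn(4, 4);              % hidden layer 1 -> hidden layer 2 (4 nodes)
W3 = randn(2, 4);              % hidden layer 2 -> output (2 nodes)

sig = @(v) 1 ./ (1 + exp(-v)); % sigmoid activation function

y1 = sig(W1 * x);              % output of the first hidden layer
y2 = sig(W2 * y1);             % output of the second hidden layer
y  = W3 * y2;                  % output layer (activation omitted here)
```

Structurally, making the network "deeper" means nothing more than adding further weight matrices and activation lines of the same form.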

Figure 5-1 illustrates the concept of Deep Learning and its relationship to Machine Learning.


Figure 5-1. The concept of Deep Learning and its relationship to Machine Learning

The deep neural network takes the place of the final product of Machine Learning, and the learning rule becomes the algorithm that generates the model (the deep neural network) from the training data.

Now, knowing that Deep Learning is just the use of a deeper neural network (one with more hidden layers), you may ask, "What makes Deep Learning so attractive?

Has anyone ever thought of making the neural network's layers even deeper?"

In order to answer these questions, we need to look into the history of the neural network.

It did not take very long for the single-layer neural network, the first generation of the neural network, to reveal its fundamental limitations when solving the practical problems that Machine Learning faced.

As addressed in Chapter 2, the single-layer neural network can solve only linearly separable problems.

The researchers already knew that the multi-layer neural network would be the next breakthrough.

However, it took approximately 30 years until another layer was added to the single-layer neural network.

Translated from "MATLAB Deep Learning" by Phil Kim.


Reposted from blog.csdn.net/weixin_42825609/article/details/83817357