Andrew Ng DeepLearning.ai series notes
1. Matrix dimensions
The structure of a DNN is shown in the figure:

For the l-th layer of the network, the dimensions of each parameter matrix for a single sample are:
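The dimension formulas themselves appear to have been images that did not survive extraction. As a sketch in the course's notation (n^[l] units in layer l, m examples when vectorized): W^[l] has shape (n^[l], n^[l-1]), b^[l] has shape (n^[l], 1), the gradients dW^[l] and db^[l] share those shapes, and Z^[l], A^[l] have shape (n^[l], m). A quick numpy check with hypothetical layer sizes:

```python
import numpy as np

# Hypothetical sizes: n[0]=3 input features, n[1]=4 hidden units, n[2]=1 output.
layer_dims = [3, 4, 1]
m = 5  # number of training examples

rng = np.random.default_rng(0)
params = {}
for l in range(1, len(layer_dims)):
    # W[l] has shape (n[l], n[l-1]); b[l] has shape (n[l], 1).
    params[f"W{l}"] = rng.standard_normal((layer_dims[l], layer_dims[l - 1])) * 0.01
    params[f"b{l}"] = np.zeros((layer_dims[l], 1))

A = rng.standard_normal((layer_dims[0], m))  # A[0] = X, shape (n[0], m)
for l in range(1, len(layer_dims)):
    # Z[l] = W[l] A[l-1] + b[l]; adding b broadcasts over the m columns,
    # so Z[l] and A[l] both have shape (n[l], m).
    Z = params[f"W{l}"] @ A + params[f"b{l}"]
    A = np.tanh(Z)
    print(f"layer {l}: W{l} {params[f'W{l}'].shape}, Z{l} {Z.shape}")
```

Note that b^[l] is stored as a column vector and broadcast across the m examples, which is why its shape stays (n^[l], 1) even in the vectorized case.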
2. Why use deep representations
Face recognition and speech recognition:

For face recognition, the first layer of the neural network extracts contours and edges from the raw image, with each neuron learning different edge information. The second layer combines the edges learned by the first layer into local facial features, such as the eyes and mouth. Subsequent layers gradually combine the features of the previous layers until the full appearance of the face emerges.

As the number of layers increases, the learned features expand from simple edges to local parts to the whole face: from local to global, and from simple to complex. The deeper the network, the richer and more accurate the representations the model can learn.

For speech recognition, the first layers can learn low-level acoustic features such as tone; deeper layers can detect basic phonemes, then words, and deeper still, phrases and whole sentences.

These two examples show that as a neural network gets deeper, it can learn to solve more complex problems and becomes more powerful.
Circuit logic computation:

Suppose we want to compute the XOR (parity) of n inputs: y = x1 XOR x2 XOR ... XOR xn.

With a deep network arranged as a binary tree of two-input XOR units (left figure), this takes only n - 1 units and a depth on the order of log2(n). However, if a deep network is not used and only a single-hidden-layer network is used (right figure), the hidden layer must essentially enumerate the input patterns, requiring on the order of 2^(n-1) neurons. For the same problem, the deep network needs far fewer neurons than the shallow one.
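As a sketch of the counting argument (the tree construction here is my own illustration, not code from the course): pairing inputs level by level computes the parity with n - 1 two-input XOR units and log2(n) levels, while the shallow alternative needs roughly 2^(n-1) hidden units.

```python
from functools import reduce

def xor_tree(bits):
    """Compute x1 XOR ... XOR xn with a binary tree of 2-input XOR units."""
    level = list(bits)
    gates = 0   # number of 2-input XOR units used
    depth = 0   # number of tree levels (network depth)
    while len(level) > 1:
        nxt = [level[i] ^ level[i + 1] for i in range(0, len(level) - 1, 2)]
        gates += len(nxt)
        if len(level) % 2:          # an odd leftover passes through unchanged
            nxt.append(level[-1])
        level = nxt
        depth += 1
    return level[0], gates, depth

bits = [1, 0, 1, 1, 0, 1, 0, 0]     # n = 8 inputs
y, gates, depth = xor_tree(bits)
assert y == reduce(lambda a, b: a ^ b, bits)   # matches flat XOR of all inputs
print(gates, depth)                  # 7 gates (n - 1), depth 3 (log2 of 8)
print(2 ** (len(bits) - 1))          # ~2^(n-1) = 128 units for a shallow net
```

Doubling n adds only one level to the tree but doubles the size of the shallow network, which is the exponential gap the course's example is making.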
3. Forward and backward propagation
Forward propagation
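The equation images under this heading did not survive extraction; in the course's notation, the vectorized forward step for layer l (with activation g^[l] and input A^[0] = X) is:

```latex
Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}, \qquad A^{[l]} = g^{[l]}\!\left(Z^{[l]}\right), \qquad A^{[0]} = X
```

The output of the final layer, A^[L], is the prediction.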
Backward propagation
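The backward-propagation equation images are likewise missing. A minimal numpy sketch of one layer's forward and backward pass, using the course's formulas (tanh chosen here as an example activation; the function names are my own):

```python
import numpy as np

def layer_forward(A_prev, W, b):
    # Forward: Z[l] = W[l] A[l-1] + b[l],  A[l] = g(Z[l]); g = tanh here.
    Z = W @ A_prev + b
    A = np.tanh(Z)
    return A, (A_prev, W, Z)

def layer_backward(dA, cache):
    # Backward, in the course's notation:
    #   dZ[l]   = dA[l] * g'(Z[l])      (elementwise; tanh'(z) = 1 - tanh(z)^2)
    #   dW[l]   = (1/m) dZ[l] A[l-1]^T
    #   db[l]   = (1/m) sum of dZ[l] over the m examples
    #   dA[l-1] = W[l]^T dZ[l]
    A_prev, W, Z = cache
    m = A_prev.shape[1]
    dZ = dA * (1 - np.tanh(Z) ** 2)
    dW = (dZ @ A_prev.T) / m
    db = np.sum(dZ, axis=1, keepdims=True) / m
    dA_prev = W.T @ dZ
    return dA_prev, dW, db
```

As in the course, the 1/m factor from averaging the cost over examples is applied inside dW and db; the gradients can be verified against finite differences of the cost.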
4. Parameters and hyperparameters
Parameters:

Parameters are the values the model learns during training: the weights W and biases b.
Hyperparameters:

Hyperparameters are set before training and control how the parameters are learned, for example the learning rate, the number of iterations, the number of hidden layers L, the number of hidden units in each layer, and the choice of activation function. Tuning hyperparameters is covered in more detail in the next topic.
CSDN blog: http://blog.csdn.net/koala_tree/article/details/78087711
Zhihu column: https://zhuanlan.zhihu.com/p/29738823
Recommended reading:
Featured posts | Summary of the last half year's catalog
Andrew Ng DeepLearning.ai Course Notes (1-2) Neural Networks and Deep Learning --- Neural Network Basics
Andrew Ng DeepLearning.ai Course Notes (1-3) Neural Networks and Deep Learning --- Shallow Neural Networks
Welcome to follow the official WeChat account to learn and exchange ideas.
Welcome to join the discussion group to learn together.