Make Your Own Neural Network
Build your own neural network
Author: lz0499
Statement:
1) Make Your Own Neural Network is translated from the introductory neural network book of the same name by Tariq Rashid. The author's goal is to explain how neural networks work using pictures and text, with as little jargon and advanced mathematics as possible, so that anyone with high-school math can follow along. It is highly recommended as an introductory book for beginners.
2) This article is for academic exchange only, not for commercial use. My original intention in translating it was to deepen my own understanding of neural networks along the way.
3) Since I am new to neural networks, some mistakes in the translation are inevitable. If you find any, please point them out. Thanks!
4) Due to work constraints, I will update the translation chapter by chapter from time to time.
5) This is a first draft; errors will be corrected, and content added or removed, as needed.
Contents:
Part 1: How Neural Networks Work
A single classifier does not seem to be enough
Neurons, the computers of nature
Signals through the neural network
Using a matrix to calculate the output of a three-layer neural network
Update weights from multiple nodes
Backpropagating errors from multiple nodes
Backpropagating Errors Through Multiple Layers
Calculating the Backpropagated Error with Matrix Multiplication
How to actually update the weights (1)
How to actually update the weights (2)
Weight update example
Let's actually calculate how the weights are updated in a simple neural network.
The image below shows a neural network we encountered earlier, but this time with the output of each hidden-layer node added. These outputs are chosen only to demonstrate how the weights are updated and are not necessarily the values you would see in practice.
We want to update the weight w1,1 between the hidden layer and the output layer. Its current value is 2.0.
Let's write out the error-slope expression again and then work through the calculation step by step:
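In the book's notation, the slope of the error with respect to a hidden-to-output weight reads as follows (reconstructed here, since the expression itself does not survive in this text):

```latex
\frac{\partial E}{\partial w_{jk}}
  = -(t_k - o_k)\cdot
    \operatorname{sigmoid}\!\Big(\sum_j w_{jk}\,o_j\Big)
    \Big(1 - \operatorname{sigmoid}\!\Big(\sum_j w_{jk}\,o_j\Big)\Big)
    \cdot o_j
```

Here t_k is the target, o_k the actual output of output node k, and o_j the output of hidden node j. The three bullet points below evaluate its three parts in turn.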
- The first part, (tk − ok), is the error e1 = 0.8.
- The weighted sum inside the sigmoid function is 2.0 × 0.4 + 3.0 × 0.5 = 2.3.
- Feeding 2.3 into the sigmoid gives 0.909, so the middle expression is 0.909 × (1 − 0.909) = 0.083.
- The last part, oj, is the output of hidden-layer node j = 1, that is, oj = 0.4.
Multiply all these parts together, not forgetting the leading minus sign, and the result is −0.0265. With a learning rate of 0.1, the weight change is −(0.1 × −0.0265) = +0.00265, so the updated weight is w1,1 = 2.0 + 0.00265 = 2.00265.
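The whole calculation can be checked with a few lines of Python. This is a minimal sketch of the single update step above, with the example's numbers hard-coded; the variable names are my own, not from the book:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Values from the worked example above.
e1 = 0.8                      # error (tk - ok) at the first output node
hidden_outputs = [0.4, 0.5]   # outputs oj of the two hidden nodes
weights_into_o1 = [2.0, 3.0]  # hidden-to-output weights w1,1 and w2,1
learning_rate = 0.1

# Weighted sum feeding the output node's sigmoid: 2.0*0.4 + 3.0*0.5 = 2.3
z = sum(w * o for w, o in zip(weights_into_o1, hidden_outputs))
s = sigmoid(z)                                 # about 0.909
slope = -e1 * s * (1 - s) * hidden_outputs[0]  # about -0.0265
# Move the weight opposite to the slope, scaled by the learning rate.
new_w11 = weights_into_o1[0] - learning_rate * slope
print(round(new_w11, 5))
```

Running this reproduces the hand calculation, landing very close to 2.00265.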
This change is tiny, but after thousands or even tens of thousands of iterations the weights settle to stable values, indicating that the neural network has been trained.
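To see the weights settle, we can repeat the same update in a loop. This is a toy illustration, not an example from the book: the hidden-layer outputs are held fixed, only the two hidden-to-output weights are trained, and the target value 0.95 is an invented, reachable target for a sigmoid output:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

hidden = [0.4, 0.5]      # hidden-layer outputs, held fixed for the demo
weights = [2.0, 3.0]     # hidden-to-output weights, as in the example
target = 0.95            # invented target (the book's e1 = 0.8 is only illustrative)
learning_rate = 0.1

initial_output = sigmoid(sum(w * h for w, h in zip(weights, hidden)))

for _ in range(100_000):
    output = sigmoid(sum(w * h for w, h in zip(weights, hidden)))
    slope = -(target - output) * output * (1 - output)
    # Each weight moves opposite to its share of the error slope.
    weights = [w - learning_rate * slope * h for w, h in zip(weights, hidden)]

final_output = sigmoid(sum(w * h for w, h in zip(weights, hidden)))
```

Each individual step changes the weights only slightly, but after many iterations the network's output creeps from about 0.909 toward the target and the weights stop moving appreciably.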