- Neural networks store information in the form of weights, and the systematic method of modifying the weights based on given information is called a learning rule. Since training is the only way for a neural network to systematically store information, learning rules are an important part of neural network research.
Adjustment of weights
(x_j is the input to the node, y_i is the output of the node, w_ij is the weight between them, and e_i is the error between the correct value d_i and the output value y_i)
- If an input node contributes to the error of an output node, the weight between the two nodes is adjusted in proportion to the input value x_j and the output error e_i. This gives the delta rule:

w_ij ← w_ij + α·e_i·x_j

where α is the learning rate (0 < α < 1).
- The learning rate α determines how much the weights change on each update. If this value is too high, the output wanders around the solution and fails to converge; conversely, if it is too low, the calculation approaches the solution too slowly.
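The update above can be sketched in a few lines. This is a minimal illustration, not code from the article; the function name, the example numbers, and the linear (identity) activation are all assumptions made for the sketch.

```python
import numpy as np

def delta_rule_update(w, x, d, alpha=0.1):
    """One delta-rule update for a single output node (linear activation assumed)."""
    y = w @ x                 # node output y_i
    e = d - y                 # error e_i = d_i - y_i
    return w + alpha * e * x  # w_ij <- w_ij + alpha * e_i * x_j

# Illustrative numbers: two inputs feeding one output node.
w = np.array([0.5, -0.3])
x = np.array([1.0, 2.0])
w_new = delta_rule_update(w, x, d=1.0)
```

Note how each weight w_ij moves in proportion to its own input x_j: inputs that contributed more to the output receive a larger share of the correction.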
Training a single-layer neural network with the delta rule
1. Initialize the weights to adequate values.
2. Take the input from the training data of {input, correct output}, feed it into the neural network, and compute the error e_i between the correct output d_i and the network output y_i.
3. Calculate the weight update according to the delta rule:

Δw_ij = α·e_i·x_j
4. Adjust the weights as follows:

w_ij ← w_ij + Δw_ij
5. Perform steps 2-4 on all training data.
6. Repeat steps 2-5 until the error reaches an acceptable tolerance level.
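Steps 1-6 can be put together as a small training loop. This is a sketch under assumptions not stated in the article: a linear output node, a made-up three-sample dataset, mean squared error as the stopping criterion, and illustrative values for α and the epoch limit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative {input, correct output} training data: three samples, two inputs.
X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
D = np.array([1.0, 2.0, 3.0])

w = rng.normal(scale=0.1, size=2)   # step 1: initialize the weights
alpha = 0.1                         # learning rate (0 < alpha < 1)

for epoch in range(200):            # step 6: repeat until the error is acceptable
    for x, d in zip(X, D):          # step 5: go through all training data
        y = w @ x                   # step 2: compute the output
        e = d - y                   #         and the error e_i = d_i - y_i
        w += alpha * e * x          # steps 3-4: delta-rule weight update
    if np.mean((D - X @ w) ** 2) < 1e-6:
        break
```

Because the loop revisits the same data many times, the weights approach a solution gradually rather than in one pass, which is exactly why step 6 exists.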
- These steps are almost identical to the supervised learning process described in the section "Supervised learning of neural networks". The only difference is the addition of step 6, which simply says that the entire training process is repeated. Once step 5 is complete, the model has been trained on every data point, so why train it again with the same data? Because the delta rule searches for a solution iteratively rather than solving the problem in one pass; repeating the whole process with the same data can further improve the model.
Training process