"Principles of Artificial Neural Networks" Reading Notes (3)-Early Adaptive Neural Networks

Index of all notes in this series: "Principles of Artificial Neural Networks" - Summary of Reading Notes

1. Perceptron

Perceptron model structure

Based on the MP (McCulloch-Pitts) neuron model and the Hebb learning rule, Rosenblatt proposed the single-layer perceptron, a model with self-learning ability.

  • Layered feedforward structure: no interconnections within a layer and no feedback between layers
  • Input layer (perception layer): receives the input pattern, composed of 0/1 signals; each neuron has only limited information-processing capability
  • Hidden layer (connection layer)
  • Output layer (reaction layer)
  • Full interconnection between the neurons of adjacent layers
  • Single-layer perceptron model: no hidden layer; multi-layer perceptron model: one or more hidden layers


Perceptron Processing Unit Model

The processing unit is the MP neuron: each neuron computes a weighted sum of its inputs and passes the result through a threshold-type transfer function.

Perceptron learning algorithm

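The book gives the algorithm itself in figures not reproduced here, but the standard perceptron learning rule can be sketched as follows (a minimal illustration, assuming its common form: correct the weights by the learning rate times the error whenever the thresholded output disagrees with the target):

```python
import numpy as np

def step(v):
    """Threshold-type transfer function: outputs 1 if v >= 0, else 0."""
    return 1 if v >= 0 else 0

def train_perceptron(X, targets, lr=0.1, epochs=100):
    """Iterate over the samples, correcting weights and bias on each error."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for x, t in zip(X, targets):
            y = step(np.dot(w, x) + b)
            if y != t:                  # correct weights only on a mistake
                w += lr * (t - y) * x
                b += lr * (t - y)
                errors += 1
        if errors == 0:                 # converged: every sample classified
            break
    return w, b

# Logical AND is linearly separable, so the perceptron converges on it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
t_and = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, t_and)
print([step(np.dot(w, x) + b) for x in X])  # -> [0, 0, 0, 1]
```

The hyperparameters (learning rate 0.1, up to 100 epochs) are illustrative choices, not values from the book.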

The limitations of the perceptron

  • Linearly separable vs. linearly inseparable:
    if two classes of samples can be separated by a straight line, a plane, or a hyperplane, the problem is called linearly separable; otherwise it is linearly inseparable.


  • Limitations of the perceptron model
    The single-layer perceptron model can only classify linearly separable problems.
    The multi-layer perceptron model allows only one layer of connection weights to be adjusted; the perceptron learning algorithm cannot give the hidden-layer processing units the ability to learn.
    The perceptron model uses a threshold-type transfer function, so its output is discrete (0/1 or -1/+1), which limits its classification ability.

Perceptron convergence

If the training samples are linearly separable, the perceptron learning algorithm is guaranteed to converge to a correct set of connection weights in a finite number of iterations.

If the training samples are not linearly separable, the learning process may oscillate, and convergence to a correct result cannot be guaranteed.
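Both behaviors are easy to observe with the sketch below (an illustration, not code from the book): the same learning rule drives the error count to zero on the linearly separable AND problem, but never stops making mistakes on the inseparable XOR problem.

```python
import numpy as np

def step(v):
    """Threshold-type transfer function: outputs 1 if v >= 0, else 0."""
    return 1 if v >= 0 else 0

def errors_per_epoch(X, targets, lr=0.1, epochs=50):
    """Run the perceptron rule and record how many mistakes each epoch makes."""
    w, b = np.zeros(X.shape[1]), 0.0
    history = []
    for _ in range(epochs):
        errors = 0
        for x, t in zip(X, targets):
            y = step(np.dot(w, x) + b)
            if y != t:
                w += lr * (t - y) * x
                b += lr * (t - y)
                errors += 1
        history.append(errors)
    return history

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
and_errors = errors_per_epoch(X, np.array([0, 0, 0, 1]))
xor_errors = errors_per_epoch(X, np.array([0, 1, 1, 0]))
print(and_errors[-1])  # 0: converged on the separable problem
print(xor_errors[-1])  # still nonzero: oscillates on XOR
```

No line can put (0,1) and (1,0) on one side and (0,0) and (1,1) on the other, so at least one XOR sample is misclassified in every epoch, no matter how long training runs.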

2. Adaptive Linear Element (ADALINE)

ADALINE model structure

The adaptive linear element (ADALINE) takes continuous analog signals as input, rather than the binary inputs of the perceptron.

ADALINE learning algorithm

The learning algorithm is the least mean square (LMS) algorithm, also known as the Widrow-Hoff algorithm.

The LMS algorithm follows the principle of minimizing the sum of squared errors, iteratively correcting each connection weight.

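The book presents the LMS derivation in figures not reproduced here; the sketch below assumes the usual form of the Widrow-Hoff rule (not the book's exact notation): the error is measured on the *continuous* linear output, before any thresholding, and the weights move down the squared-error gradient.

```python
import numpy as np

def train_adaline(X, targets, lr=0.05, epochs=200):
    """Sample-by-sample LMS (Widrow-Hoff) updates on the linear output."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for x, t in zip(X, targets):
            y = np.dot(w, x) + b        # continuous linear output, no threshold
            e = t - y                   # error before any quantization
            w += lr * e * x             # Widrow-Hoff correction: gradient step
            b += lr * e                 # on the squared error for this sample
    return w, b

# Fit a noiseless linear target t = 2*x1 - x2 + 0.5; LMS should recover it.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 2))
t = 2 * X[:, 0] - X[:, 1] + 0.5
w, b = train_adaline(X, t)
print(np.round(w, 2), round(b, 2))      # w close to [2, -1], b close to 0.5
```

Because the error is continuous rather than 0/1, the correction is proportional to *how wrong* the output is, which is what distinguishes LMS from the perceptron rule above. The learning rate and epoch count are illustrative choices.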

Next chapter: "Principles of Artificial Neural Networks" Reading Notes (4): Error Back-Propagation Neural Network

Origin blog.csdn.net/qq_41485273/article/details/114002820