Explanation of terms used in statistical learning methods

For initial understanding only; the explanations here may not be entirely accurate.

Perceptron

Perceptron, n. [count]: a pattern recognition machine that simulates the human optic nerve control system.

The perceptron is an artificial neural network invented by Frank Rosenblatt in 1957.

A perceptron is a simple abstraction of a biological nerve cell. The structure of a nerve cell can be roughly divided into dendrites, synapses, a cell body, and an axon. A single nerve cell can be thought of as a machine with only two states: 'yes' when it is activated, and 'no' when it is not. The state of the cell depends on the strength of the signals received from other nerve cells and on the strength of the synapses (inhibitory or excitatory). When the sum of the incoming signals exceeds a certain threshold, the cell body is excited and produces an electrical impulse. The impulse travels along the axon and, through synapses, reaches other neurons. To simulate this behavior, the perceptron introduces the corresponding basic concepts: weights (synapses), a bias (the threshold), and an activation function (the cell body).
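As a rough sketch of this analogy (the function name and the numbers below are illustrative choices, not from the original text), a single artificial neuron computes a weighted sum of its inputs, adds a bias, and fires only when the result crosses the threshold:

```python
# A minimal sketch of the neuron analogy: weights play the role of synapses,
# the bias plays the role of the firing threshold, and the step activation
# plays the role of the cell body's all-or-nothing response.
def neuron(inputs, weights, bias):
    """Return 1 ('yes') if the weighted input sum plus bias is positive, else 0 ('no')."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

# Illustrative values: one reinforcing synapse (+0.8) and one inhibiting one (-0.5).
print(neuron([1.0, 1.0], [0.8, -0.5], bias=-0.2))  # 1: net input 0.1 > 0, the neuron fires
print(neuron([0.0, 1.0], [0.8, -0.5], bias=-0.2))  # 0: net input -0.7, the neuron stays silent
```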

It can be thought of as the simplest form of feedforward neural network: a binary linear classifier. In the field of artificial neural networks, the perceptron is also referred to as a single-layer artificial neural network, to distinguish it from the more complex multilayer perceptron (MLP). Despite its simple structure, the perceptron is capable of learning and of solving fairly complex problems (see the training sketch below). Its main intrinsic flaw is that it cannot handle problems that are not linearly separable; the classic example is XOR.
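The learning behavior mentioned above can be made concrete with the classical perceptron learning rule. The following is a minimal sketch, assuming +1/-1 labels, a fixed learning rate, and the AND problem as a toy dataset (all choices made here for illustration, not specifics from the original text):

```python
# A minimal sketch of the classical perceptron learning rule, shown on the
# linearly separable AND problem with illustrative hyperparameters.
def train_perceptron(samples, labels, lr=0.1, epochs=20):
    """Learn weights and a bias; labels are +1 / -1."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # Predict with the current linear decision function.
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:  # update only on mistakes
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# AND gate: linearly separable, so the rule converges.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [-1, -1, -1, +1]
w, b = train_perceptron(X, y)
print([1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1 for x in X])
# [-1, -1, -1, 1]; with XOR labels [-1, 1, 1, -1] no w, b can fit all four points.
```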

linear

If two variables are connected by a linear function, they are said to have a linear relationship: plotting one against the other gives a straight line.
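As a minimal worked example (the specific coefficients are made up here):

```latex
% Linear: the graph of y against x is a straight line.
y = 2x + 3
% Not linear: the graph is a parabola, not a line.
y = x^2
```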

two-class classification

Each sample is assigned to exactly one of two possible classes; for example, an email is classified as either spam or not spam.
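In the perceptron setting above, the two classes are conventionally coded as +1 and -1, so the classifier reduces to a sign function (a standard convention, written out here for concreteness):

```latex
y \in \{+1, -1\}, \qquad \hat{y} = \operatorname{sign}(w \cdot x + b)
```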

Feature vector (eigenvector)

[Figure: eigenvectors of a left-right flip transformation]

When the plane in the figure is flipped left and right (mirrored about the vertical axis), the direction of the vertical red vector is unchanged, while the horizontal black vector is exactly reversed. Both are eigenvectors of the flip transformation. The red vector keeps its length and direction, so its eigenvalue is 1; the black vector keeps its length but reverses direction, so its eigenvalue is -1. The green vector does not lie on the same line as its image after the flip, so it is not an eigenvector.
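The same example can be written out with a concrete matrix. Taking the vertical axis as the mirror line (a standard choice; the original figure itself is not available), the flip and its eigenvectors are:

```latex
T = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix},
\qquad
T \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}
\ \text{(eigenvalue } 1\text{)},
\qquad
T \begin{pmatrix} 1 \\ 0 \end{pmatrix} = -\begin{pmatrix} 1 \\ 0 \end{pmatrix}
\ \text{(eigenvalue } -1\text{)}.
```

A diagonal vector such as $(1, 1)^{\mathsf{T}}$ is mapped to $(-1, 1)^{\mathsf{T}}$, which is not a scalar multiple of it, so it is not an eigenvector; this matches the green vector in the description.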
