The concept and principles of convolutional neural networks, and their theoretical basis

An intuitive understanding of convolutional neural networks

A Convolutional Neural Network (CNN) is a class of feedforward neural networks that uses convolution operations and has a deep structure; it is one of the representative algorithms of deep learning.

A convolutional neural network is capable of representation learning and, thanks to its hierarchical structure, can classify input information in a shift-invariant way; for this reason it is also called a "Shift-Invariant Artificial Neural Network" (SIANN).
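
The translation behavior behind this name can be made concrete with a small sketch (not from the original article): convolving a circularly shifted input produces a feature map shifted by the same amount, and pooling then turns this equivariance into approximate invariance. The NumPy code below uses a toy 8x8 input and an arbitrary 3x3 kernel, both made up for illustration.

```python
import numpy as np

def circular_conv2d(image, kernel):
    """Cross-correlate `image` with `kernel` using circular (wrap-around) padding."""
    kH, kW = kernel.shape
    out = np.zeros_like(image)
    for di in range(kH):
        for dj in range(kW):
            # Each kernel tap weights a circularly shifted copy of the image.
            out += kernel[di, dj] * np.roll(
                image, shift=(kH // 2 - di, kW // 2 - dj), axis=(0, 1))
    return out

image = np.random.rand(8, 8)                       # toy single-channel input
kernel = np.array([[1., 0., -1.],
                   [2., 0., -2.],
                   [1., 0., -1.]])                 # a 3x3 edge-like filter

feature = circular_conv2d(image, kernel)
shifted_feature = circular_conv2d(np.roll(image, 2, axis=1), kernel)

# The feature map shifts by exactly the same amount (translation equivariance);
# pooling layers then turn this into approximate shift invariance.
assert np.allclose(np.roll(feature, 2, axis=1), shifted_feature)
```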


What is the convolutional neural network algorithm?

One-dimensional, two-dimensional, and fully convolutional constructions.

Connectivity of the convolutional neural network: the connections between convolutional layers are called sparse connections. That is, in contrast to the full connectivity of a feedforward network, each neuron in a convolutional layer is connected only to some, not all, of the neurons in its adjacent layers.

Specifically, any pixel (neuron) in the layer-l feature map of a convolutional neural network is a linear combination of only those layer-(l-1) pixels that lie inside the receptive field defined by the convolution kernel.
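
As a concrete sketch of this sentence (all values are made up), the NumPy snippet below computes a single output pixel of layer l as a weighted sum over a 3x3 receptive field in the layer l-1 feature map; pixels outside that window do not enter the sum at all.

```python
import numpy as np

# Feature map of layer l-1 (toy 6x6, single channel) and a 3x3 kernel for layer l.
prev_map = np.random.rand(6, 6)
kernel = np.random.rand(3, 3)
bias = 0.1

# Output pixel at position (i, j) of layer l ("valid" convolution, stride 1):
i, j = 2, 3
receptive_field = prev_map[i:i + 3, j:j + 3]        # the only inputs this neuron sees
pixel = np.sum(receptive_field * kernel) + bias     # a linear combination of those inputs

# Every other pixel of prev_map effectively has weight 0 for this neuron.
print(pixel)
```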

The sparse connectivity of a convolutional neural network has a regularizing effect: it improves the stability and generalization ability of the network and helps avoid overfitting. At the same time, sparse connections reduce the total number of weight parameters, which speeds up learning and reduces memory overhead during computation.

All pixels in the same channel of a feature map share one set of convolution kernel weights; this is called weight sharing.

Weight sharing distinguishes convolutional neural networks from other locally connected architectures, which also use sparse connections but assign different weights to different connections. Like sparse connectivity, weight sharing reduces the total number of parameters in the network and has a regularizing effect.

From the perspective of a fully connected network, the sparse connectivity and weight sharing of a convolutional neural network can be viewed as two infinitely strong priors: all weight coefficients of a hidden-layer neuron outside its receptive field are identically zero (though the receptive field itself can move in space), and within a channel all neurons share the same weight coefficients.
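
The effect of these two priors on the parameter count can be seen with simple arithmetic (illustrative sizes, single channel, biases ignored): mapping a 32x32 input to a 30x30 output with a 3x3 window under a fully connected layer, a locally connected layer with untied weights, and a convolutional layer.

```python
# Parameter counts for mapping a 32x32 single-channel input to a 30x30 output
# (3x3 "valid" window, stride 1). Sizes are illustrative; biases ignored.
in_h = in_w = 32
out_h = out_w = 30
k = 3

fully_connected = (in_h * in_w) * (out_h * out_w)     # every output sees every input
locally_connected = (out_h * out_w) * (k * k)         # sparse connections, untied weights
convolutional = k * k                                 # sparse connections + weight sharing

print(fully_connected)    # 921600
print(locally_connected)  # 8100
print(convolutional)      # 9
```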

What is a convolutional neural network, and why is it important?

A Convolutional Neural Network (CNN) is a feedforward neural network whose artificial neurons respond to surrounding units within a limited coverage area; it performs extremely well on large-scale image processing.

It consists of alternating convolutional layers and pooling layers [1]. The convolutional neural network is an efficient recognition method developed in recent years that has attracted widespread attention.
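
A minimal sketch of that alternating conv/pool structure in PyTorch is shown below; the choice of PyTorch, the 28x28 grayscale input size, the channel widths, and the 10-class head are all assumptions made for illustration, not details from the article.

```python
import torch
from torch import nn

# Two conv/pool stages followed by a small classifier head (assumed sizes).
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # convolutional layer
    nn.ReLU(),
    nn.MaxPool2d(2),                             # pooling layer -> 14x14
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  # convolutional layer
    nn.ReLU(),
    nn.MaxPool2d(2),                             # pooling layer -> 7x7
    nn.Flatten(),
    nn.Linear(16 * 7 * 7, 10),                   # classifier head
)

x = torch.randn(4, 1, 28, 28)                    # a batch of 4 dummy images
print(model(x).shape)                            # torch.Size([4, 10])
```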

In the 1960s, while studying the neurons responsible for local sensitivity and orientation selection in the cat cerebral cortex, Hubel and Wiesel found that their unique network structure could effectively reduce the complexity of feedback neural networks, which later led to the proposal of the Convolutional Neural Network (CNN).

Today, CNNs have become a research hotspot in many scientific fields, especially pattern classification: because the network avoids complex image preprocessing and can take raw images directly as input, it has found increasingly wide use.

The neocognitron proposed by K. Fukushima in 1980 was the first implemented network of this kind, and many researchers subsequently improved on it.

Among these, a representative result is the "improved neocognitron" proposed by Alexander and Taylor, which combines the advantages of various improvements and avoids time-consuming error backpropagation.

 

