Lecture 5 - Convolutional Neural Networks - Lesson 13 - Convolution and Pooling

fully connected


------------------------------------------------------------------------------------------------------------------------------

convolution

Convolution reduces the number of parameters very efficiently by sharing the parameters of the convolution kernel across all positions of the image.

The convolution kernel itself is the parameter, and it is learned by backpropagation.

The convolution kernel differs from the weight matrix of a fully connected layer: it only attends to local features, whereas the weight W of a fully connected NN attends to all features at once.
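To make the saving concrete, here is a minimal sketch (with assumed input and layer sizes) comparing the parameter count of a convolutional layer against a fully connected layer producing an output of the same size:

n_H, n_W, n_C = 32, 32, 3    # assumed input: a 32x32 RGB image
f, k = 3, 16                 # assumed layer: 16 kernels of size 3x3

conv_params = (f * f * n_C + 1) * k                    # shared weights + one bias per kernel
fc_params = (n_H * n_W * n_C + 1) * (n_H * n_W * k)    # one weight per input-output pair

print(conv_params)  # 448
print(fc_params)    # 50348032 -- over 100,000x more parameters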

-------------------------------------------------------------------------------------------------------------------------

Convolutional layers also need an activation function, typically ReLU, just like fully connected layers.
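For example, a minimal sketch of ReLU applied elementwise to a made-up pre-activation feature map z:

import numpy as np

z = np.array([[-1.2, 0.5],
              [ 2.0, -0.3]])   # hypothetical convolution output (pre-activation)
a = np.maximum(z, 0.0)         # ReLU: max(0, z), applied elementwise
print(a)                       # [[0.  0.5]
                               #  [2.  0. ]]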

---------------------------------------------------------------------------------------------------------------------------

To keep enough units in each layer, the original image gradually becomes spatially smaller and deeper in channels as it passes through multiple convolutional layers (n_H and n_W decrease while n_C increases).
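A sketch of this typical shape progression, assuming each stage halves the spatial size (e.g. via stride-2 convolution or pooling) while doubling the channel count:

n_H, n_W, n_C = 64, 64, 16   # assumed shape after the first conv layer
for stage in range(1, 4):
    n_H, n_W, n_C = n_H // 2, n_W // 2, n_C * 2
    print(f"stage {stage}: {n_H} x {n_W} x {n_C}")
# stage 1: 32 x 32 x 32
# stage 2: 16 x 16 x 64
# stage 3: 8 x 8 x 128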

---------------------------------------------------------------------------------------------------------------------------

! Note that whether the layers are fully connected or convolutional, it is through successive feature transformations that the image features become linearly separable.


---------------------------------------------------------------------------------------------------------------------------

size after convolution

n_new = (n + 2p - f)/s + 1

Note that the kernel size must be compatible with the image: positions where the kernel does not fully fit are simply not used, i.e. (n + 2p - f)/s needs to come out as an integer (in practice, the result is floored).
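A small helper implementing this formula (the function name is just illustrative); flooring drops the positions where the kernel no longer fits:

import math

def conv_output_size(n, f, p=0, s=1):
    # output size along one dimension: floor((n + 2p - f) / s) + 1
    return math.floor((n + 2 * p - f) / s) + 1

print(conv_output_size(6, 3))        # 4: a 3x3 kernel on a 6x6 image, no padding
print(conv_output_size(7, 3, s=2))   # 3: stride 2 skips every other position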

---------------------------------------------------------------------------------------------------------------------------

Padding compensates for the under-sampling of pixels at the corners and edges. At the same time, it can keep the image size (n_H, n_W) unchanged!

If s = 1 and the convolution kernel is f*f (with f odd), padding of (f-1)/2 on each side keeps the size unchanged.

If padding is not used, the size shrinks quickly with depth, which limits how many layers can be stacked and makes such a convolutional neural network perform poorly.
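A quick check of the "same" padding rule under these assumptions (s = 1, odd f, illustrative sizes):

n, f, s = 32, 5, 1               # assumed: 32-pixel side, 5x5 kernel, stride 1
p = (f - 1) // 2                 # = 2 for a 5x5 kernel
n_new = (n + 2 * p - f) // s + 1
print(n_new)                     # 32 -- the size is unchanged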


---------------------------------------------------------------------------------------------------------------------------

The number k of convolution kernels (i.e. the number of output channels) is generally a power of 2, e.g. 32, 64, 128.


