Interpretation of the paper "ImageNet Classification with Deep Convolutional Neural Networks"

This paper presents AlexNet, which established the place of deep learning in computer vision. Its key contributions include:
1. The ReLU activation function
2. Dropout
3. Data augmentation

Reducing Overfitting

Motivation: the network has 60 million parameters. Although the 1000 classes of ILSVRC impose roughly 10 bits of constraint on the mapping from each training image to its label (log2(1000) ≈ 10), this is not enough to learn so many parameters without considerable overfitting, so the problem has to be addressed explicitly.

Data Augmentation

Artificially enlarging the dataset with image transformations is the simplest and most commonly used way to reduce overfitting. The authors employ two distinct forms of data augmentation:

- The first form consists of generating image translations and horizontal reflections: random 224×224 patches (and their horizontal reflections) are extracted from the 256×256 training images and used for training. This increases the size of the training set by a factor of 2048, though the resulting training examples are of course highly interdependent (see the sketch below).
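As a rough illustration of this first form of augmentation, here is a minimal NumPy sketch. It is not the authors' original implementation; the helper name random_crop_and_flip and the toy input image are invented purely for illustration.

```python
import numpy as np

def random_crop_and_flip(image, crop_size=224):
    """Extract a random crop_size x crop_size patch from a larger image
    and flip it horizontally with probability 0.5 (hypothetical helper)."""
    h, w = image.shape[:2]
    top = np.random.randint(0, h - crop_size + 1)   # random vertical offset
    left = np.random.randint(0, w - crop_size + 1)  # random horizontal offset
    patch = image[top:top + crop_size, left:left + crop_size]
    if np.random.rand() < 0.5:
        patch = patch[:, ::-1]  # horizontal reflection
    return patch

# Toy 256x256 RGB image standing in for a training example
img = np.random.randint(0, 256, size=(256, 256, 3), dtype=np.uint8)
augmented = random_crop_and_flip(img)
print(augmented.shape)  # (224, 224, 3)
```

The factor of 2048 quoted in the paper corresponds to counting 32 × 32 crop positions times 2 horizontal reflections per image.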

 
