Table of Contents
Example
Application in the Inception module
References
A 1×1 convolution can reduce the amount of computation and increase nonlinear discriminative ability.
Example
Suppose we have a three-dimensional input tensor of height 30, width 40, and depth 200, and we convolve it with 55 same-padding convolution kernels, each of height 5, width 5, and depth 200, with stride 1. The result is a three-dimensional tensor of height 30, width 40, and depth 55, as shown:
This convolution requires about 5 × 5 × 200 × 30 × 40 × 55 = 330,000,000 multiplications, which is a large amount of computation.
Instead, we can convolve in two steps: first use 1×1 convolution kernels to reduce the depth dimension (here, 20 kernels of size 1 × 1 × 200, producing a 30 × 40 × 20 intermediate tensor), then apply the 5×5 convolution to restore the output depth:
The two-step convolution requires about the following numbers of multiplications:
Step 1: 1 × 1 × 200 × 30 × 40 × 20 = 4,800,000
Step 2: 5 × 5 × 20 × 30 × 40 × 55 = 33,000,000
Total: 37,800,000
Clearly, to obtain an output of the same shape, the second method, which first reduces the dimension in the depth direction, needs only 37,800,000 / 330,000,000 ≈ 0.11 of the multiplications of the first.
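The multiplication counts above can be verified with a short script. This is a minimal sketch using the exact shapes from the example (a 30 × 40 × 200 input, 5×5 kernels, 55 output channels, and an assumed bottleneck depth of 20); the helper name `conv_mults` is illustrative.

```python
# Multiplication counts for the two approaches described above.

def conv_mults(kernel_h, kernel_w, in_depth, out_h, out_w, out_depth):
    """Multiplications for a 'same' convolution with stride 1:
    each output value needs kernel_h * kernel_w * in_depth multiplies."""
    return kernel_h * kernel_w * in_depth * out_h * out_w * out_depth

# Direct 5x5 convolution: depth 200 -> 55.
direct = conv_mults(5, 5, 200, 30, 40, 55)

# Two-step version: 1x1 reduces depth 200 -> 20, then 5x5 maps 20 -> 55.
step1 = conv_mults(1, 1, 200, 30, 40, 20)
step2 = conv_mults(5, 5, 20, 30, 40, 55)
two_step = step1 + step2

print(direct)    # 330000000
print(two_step)  # 37800000
print(round(two_step / direct, 2))  # 0.11
```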
Furthermore, since the extra layer comes with its own activation function, it also introduces additional nonlinearity.
Application in the Inception module
In GoogLeNet, the Inception module uses 1×1 convolution kernels in exactly this way to reduce the amount of computation and increase nonlinear discriminative ability.
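The dimension-reduction step itself is easy to picture: a 1×1 convolution is just a per-pixel linear map across channels. Below is a minimal NumPy sketch of that step using the shapes from the example above (the sizes and function name are illustrative, not taken from GoogLeNet's actual configuration):

```python
# A 1x1 convolution applies the same channel-mixing matrix at every
# spatial position, so it can be written as a broadcast matrix multiply.
import numpy as np

def conv1x1(x, w):
    """x: (H, W, C_in) feature map; w: (C_in, C_out) weights, i.e.
    C_out kernels of shape 1 x 1 x C_in. No spatial mixing occurs."""
    return x @ w  # matmul broadcasts over the H and W axes

rng = np.random.default_rng(0)
x = rng.standard_normal((30, 40, 200))  # input tensor from the example
w = rng.standard_normal((200, 20))      # 20 kernels of shape 1x1x200

y = conv1x1(x, w)
print(y.shape)  # (30, 40, 20): depth reduced from 200 to 20
```

Because no spatial neighborhood is involved, the output at each pixel depends only on that pixel's 200 input channels, which is why the cost per output value is just 1 × 1 × 200 multiplications.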
References
"Illustrated Deep Learning and Neural Networks: From Tensor to TensorFlow", Zhang Ping
Inception V1: "Going Deeper with Convolutions"
"Deep Learning: Core Technologies and Practice"