2020-06-05: Comparative analysis of the MobileNet series of algorithms

1. MobileNet V1:

  • The blocks are plain (non-residual) depthwise separable convolutions, so the depthwise stage has exactly one filter per input channel — feature extraction is limited by the number of input channels;
  • Each block ends with the ReLU6 activation function; applied to these low-dimensional features, ReLU6 zeroes out activations and can effectively kill some of the depthwise convolution kernels;
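To make the channel limitation concrete, here is a small sketch (my own illustration, not code from the original post) comparing the parameter count of a standard 3×3 convolution with that of MobileNetV1's depthwise separable convolution:

```python
# Parameter counts: standard 3x3 conv vs. depthwise separable conv
# (illustrative sketch; channel sizes below are arbitrary examples).

def standard_conv_params(in_ch, out_ch, k=3):
    # one k x k filter spanning all input channels, per output channel
    return k * k * in_ch * out_ch

def depthwise_separable_params(in_ch, out_ch, k=3):
    # depthwise stage: one k x k filter per input channel — this is the
    # stage whose capacity is tied to the number of input channels
    depthwise = k * k * in_ch
    # pointwise stage: 1 x 1 convolution that mixes channels
    pointwise = in_ch * out_ch
    return depthwise + pointwise

in_ch, out_ch = 128, 256
print(standard_conv_params(in_ch, out_ch))        # 294912
print(depthwise_separable_params(in_ch, out_ch))  # 1152 + 32768 = 33920
```

The depthwise stage contributes only `k*k*in_ch` parameters, which is where the large savings come from — and also why its expressive power depends directly on the input channel count.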

2. MobileNet V2:

  • The inverted residual block first expands the channels with a 1×1 convolution before the depthwise convolution, which solves the problem of features being limited by the number of input channels;
  • The last 1×1 projection of each block replaces ReLU6 with a linear activation (the "linear bottleneck"), avoiding the further loss of feature information — and the dead depthwise kernels — that ReLU6 causes on low-dimensional features;
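The structure described above can be sketched in NumPy as follows (a minimal illustration, not the paper's code; the expansion factor t=6 matches the V2 paper, but the shapes and random weights here are placeholders):

```python
import numpy as np

# Inverted residual block sketch:
# 1x1 expand (ReLU6) -> 3x3 depthwise (ReLU6) -> 1x1 project (LINEAR),
# with a skip connection when input and output shapes match.

def relu6(x):
    return np.minimum(np.maximum(x, 0.0), 6.0)

def inverted_residual(x, w_expand, w_dw, w_project):
    # x: (H, W, C_in); 1x1 convolutions are per-pixel matmuls over channels
    h = relu6(x @ w_expand)                       # (H, W, C_in * t): expand first
    # 3x3 depthwise conv, stride 1, 'same' padding, one filter per channel
    H, W, C = h.shape
    padded = np.pad(h, ((1, 1), (1, 1), (0, 0)))
    dw = np.zeros_like(h)
    for i in range(3):
        for j in range(3):
            dw += padded[i:i + H, j:j + W, :] * w_dw[i, j]  # w_dw: (3, 3, C)
    h = relu6(dw)
    out = h @ w_project                           # linear bottleneck: no ReLU6 here
    return out + x if out.shape == x.shape else out

rng = np.random.default_rng(0)
c_in, t = 16, 6
x = rng.standard_normal((8, 8, c_in))
y = inverted_residual(
    x,
    rng.standard_normal((c_in, c_in * t)) * 0.1,
    rng.standard_normal((3, 3, c_in * t)) * 0.1,
    rng.standard_normal((c_in * t, c_in)) * 0.1,
)
print(y.shape)  # (8, 8, 16)
```

Because the depthwise convolution runs on the expanded (c_in × t) channels, its feature extraction is no longer capped by the block's input channel count.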

3. MobileNet V3:

  • Complementary network-search techniques (platform-aware NAS to search the block structure, then NetAdapt to fine-tune per-layer filter counts) reduce computational overhead;
  • The expensive last stage of V2 is redesigned: the final 1×1 convolution that produces the high-dimensional features for the classifier is moved after the global average pooling, so it runs on 1×1 feature maps instead of full-resolution ones, and the now-redundant projection and depthwise layers in front of it are removed — further reducing computational overhead;

  • The h-swish activation function is used in the network, improving accuracy while reducing latency;

  • Two network variants are proposed, MobileNetV3-Large and MobileNetV3-Small, targeting high- and low-resource use cases.
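The h-swish activation mentioned above has a simple closed form: x · ReLU6(x + 3) / 6, a piecewise-linear approximation of swish (x · sigmoid(x)) that avoids the costly sigmoid. A minimal sketch:

```python
def relu6(x):
    return min(max(x, 0.0), 6.0)

def h_swish(x):
    # hard swish: identical to swish at the extremes, cheap in between
    return x * relu6(x + 3.0) / 6.0

print(h_swish(0.0), h_swish(3.0), h_swish(6.0))  # 0.0 3.0 6.0
```

For x ≤ -3 the output is exactly 0, and for x ≥ 3 it is exactly x, so the function is cheap to evaluate and quantization-friendly compared with sigmoid-based swish.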

Origin blog.csdn.net/weixin_38192254/article/details/106564596