MobileNet v1 notes

**MobileNet v1**

Abstract
(1) Uses depthwise separable convolutions to build lightweight deep neural networks.
(2) Introduces two simple global hyper-parameters that trade off latency against accuracy.

Introduction
(1) The general trend is to make networks deeper and more complicated to achieve higher accuracy, but this does not necessarily make them better in terms of size and speed.
(2) The paper describes a network architecture and a set of two hyper-parameters for building small, low-latency models.

Prior work
(1) Two approaches to building small and efficient networks: compressing pretrained networks, and directly training small networks.

MobileNet Architecture
(1) A depthwise separable convolution factorizes a standard convolution into a depthwise convolution and a 1x1 pointwise convolution.
(2) The depthwise conv applies a single filter to each input channel; the pointwise conv then applies a 1x1 conv to combine the outputs of the depthwise conv.
(3) This factorization reduces computation and model size (the cost comparison below makes the reduction concrete).
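Using the paper's notation ($D_K$ the kernel size, $D_F$ the output feature-map size, $M$ input channels, $N$ output channels), the per-layer costs are

$$
\text{standard conv: } D_K \cdot D_K \cdot M \cdot N \cdot D_F \cdot D_F
$$

$$
\text{depthwise separable: } D_K \cdot D_K \cdot M \cdot D_F \cdot D_F + M \cdot N \cdot D_F \cdot D_F
$$

so the reduction factor is

$$
\frac{D_K \cdot D_K \cdot M \cdot D_F \cdot D_F + M \cdot N \cdot D_F \cdot D_F}{D_K \cdot D_K \cdot M \cdot N \cdot D_F \cdot D_F} = \frac{1}{N} + \frac{1}{D_K^2},
$$

which for 3x3 kernels means roughly 8 to 9 times less computation than a standard convolution.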
(4) All layers are followed by BatchNorm and ReLU, except the final fully connected layer, which has no nonlinearity and feeds into a softmax. Downsampling is handled by strided depthwise convolutions. A final average pooling reduces the spatial resolution to 1x1 before the fully connected layer. (A sketch of the basic block follows.)
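A minimal sketch of the depthwise-separable block described above, assuming PyTorch; the class name, default stride, and example shapes are illustrative, not taken from the paper's implementation:

```python
import torch
import torch.nn as nn

class DepthwiseSeparableBlock(nn.Module):
    """Depthwise 3x3 conv + BN + ReLU, then pointwise 1x1 conv + BN + ReLU."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        # Depthwise: one 3x3 filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.bn1 = nn.BatchNorm2d(in_ch)
        # Pointwise: 1x1 conv combines the depthwise outputs across channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.bn1(self.depthwise(x)))
        x = self.relu(self.bn2(self.pointwise(x)))
        return x

# Example: downsampling is done with stride=2 in the depthwise conv.
block = DepthwiseSeparableBlock(64, 128, stride=2)
out = block(torch.randn(1, 64, 56, 56))   # -> shape (1, 128, 28, 28)
```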
(5) About 95% of the computation and 75% of the parameters sit in the 1x1 pointwise convolutions. Little or no weight decay (L2 regularization) is put on the depthwise filters, because they contain very few parameters (one way to set this up is sketched below).
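One way to implement the "little or no weight decay on depthwise filters" idea, sketched with PyTorch optimizer parameter groups; the detection rule (groups == in_channels), the optimizer choice, and the hyper-parameter values are assumptions for illustration, not the paper's training setup:

```python
import torch.nn as nn
from torch.optim import SGD

def build_optimizer(model: nn.Module, lr: float, weight_decay: float):
    """Put depthwise conv filters into a parameter group with zero weight decay."""
    depthwise_params, other_params = [], []
    for m in model.modules():
        if isinstance(m, nn.Conv2d) and m.groups == m.in_channels and m.groups > 1:
            depthwise_params += list(m.parameters())   # depthwise filters: no L2 penalty
        elif isinstance(m, (nn.Conv2d, nn.Linear, nn.BatchNorm2d)):
            other_params += list(m.parameters())       # pointwise/standard conv, FC, BN
    return SGD(
        [{"params": other_params, "weight_decay": weight_decay},
         {"params": depthwise_params, "weight_decay": 0.0}],
        lr=lr, momentum=0.9,
    )
```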
(6) Two global hyper-parameters: the width multiplier α and the resolution multiplier ρ, both in (0, 1].
α thins the number of input and output channels of every layer, and ρ shrinks the input image (and therefore every internal feature map), giving smaller and faster models; a short sketch of both follows.
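A rough illustration of applying the two multipliers, again assuming PyTorch; the helper names and the rounding rule are my own, since the paper only defines α as scaling each layer's channel counts and ρ as scaling the input resolution:

```python
import torch
import torch.nn.functional as F

def scale_channels(channels: int, alpha: float) -> int:
    """Width multiplier: thin a layer's channel count by alpha (rounding is illustrative)."""
    return max(1, int(round(channels * alpha)))

def scale_input(images: torch.Tensor, rho: float) -> torch.Tensor:
    """Resolution multiplier: shrink the input image, which shrinks every feature map after it."""
    h, w = images.shape[-2:]
    new_size = (max(1, int(round(h * rho))), max(1, int(round(w * rho))))
    return F.interpolate(images, size=new_size, mode="bilinear", align_corners=False)

# Example: alpha = 0.75 thins a 128-channel layer to 96 channels;
# rho = 160/224 maps a 224x224 input down to 160x160.
print(scale_channels(128, 0.75))                                # 96
print(scale_input(torch.randn(1, 3, 224, 224), 160 / 224).shape)  # torch.Size([1, 3, 160, 160])
```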
