Links to write-ups on classic network papers (continuously updated~~)

ResNet

https://blog.csdn.net/weixin_43624538/article/details/85049699

CNN

https://my.oschina.net/u/876354/blog/1634322

YoloV2

https://blog.csdn.net/shanlepu6038/article/details/84778770

https://blog.csdn.net/yanhx1204/article/details/81017134

YoloV3

http://www.pianshen.com/article/3460315699/

BN

https://www.cnblogs.com/guoyaohua/p/8724433.html

GoogLeNet

Traditionally, the most direct way to improve a deep neural network is to increase its size, in both width and depth. Depth is the number of layers in the network; width is the number of neurons used in each layer. However, this simple, direct approach has two major drawbacks:
(1) A larger network means more parameters, which makes it more prone to overfitting.
(2) It demands far more computational resources.

The authors argue that the fundamental solution is to convert fully connected structures into sparsely connected ones.
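To make that idea concrete, below is a minimal sketch (my own illustration, not code from the linked posts) of an Inception-v1-style block in PyTorch: four parallel branches of 1x1, 3x3, and 5x5 convolutions plus pooling, whose outputs are concatenated along the channel axis, with 1x1 reductions keeping the parameter and compute cost in check. The class and argument names are illustrative; the branch widths follow the inception(3a) configuration of the GoogLeNet paper.

```python
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    """Inception-v1-style block: parallel 1x1 / 3x3 / 5x5 / pool branches."""
    def __init__(self, in_ch, c1, c3_reduce, c3, c5_reduce, c5, pool_proj):
        super().__init__()
        # Branch 1: plain 1x1 convolution
        self.b1 = nn.Sequential(nn.Conv2d(in_ch, c1, 1), nn.ReLU(inplace=True))
        # Branch 2: 1x1 reduction, then 3x3 convolution
        self.b2 = nn.Sequential(
            nn.Conv2d(in_ch, c3_reduce, 1), nn.ReLU(inplace=True),
            nn.Conv2d(c3_reduce, c3, 3, padding=1), nn.ReLU(inplace=True))
        # Branch 3: 1x1 reduction, then 5x5 convolution
        self.b3 = nn.Sequential(
            nn.Conv2d(in_ch, c5_reduce, 1), nn.ReLU(inplace=True),
            nn.Conv2d(c5_reduce, c5, 5, padding=2), nn.ReLU(inplace=True))
        # Branch 4: 3x3 max pooling, then 1x1 projection
        self.b4 = nn.Sequential(
            nn.MaxPool2d(3, stride=1, padding=1),
            nn.Conv2d(in_ch, pool_proj, 1), nn.ReLU(inplace=True))

    def forward(self, x):
        # Concatenate the four branch outputs along the channel dimension
        return torch.cat([self.b1(x), self.b2(x), self.b3(x), self.b4(x)], dim=1)

# Example with the inception(3a) widths: 64 + 128 + 32 + 32 = 256 output channels
block = InceptionBlock(192, 64, 96, 128, 16, 32, 32)
print(block(torch.randn(1, 192, 28, 28)).shape)  # torch.Size([1, 256, 28, 28])
```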

https://blog.csdn.net/weixin_43624538/article/details/84863685

https://www.jianshu.com/p/22e3af789f4e

Inception v2&v3

https://blog.csdn.net/sunbaigui/article/details/50807418

https://blog.csdn.net/weixin_43624538/article/details/84963116   

From Inception v1, v2, v3, v4, and ResNeXt to Xception, and on to MobileNets, ShuffleNet, MobileNetV2, ShuffleNetV2, and MobileNetV3

https://blog.csdn.net/qq_14845119/article/details/73648100

DeepID series

https://blog.csdn.net/weixin_42546496/article/details/88537882

CenterLoss

https://www.aiuai.cn/aifarm102.html

L-Softmax loss

https://blog.csdn.net/u014380165/article/details/76864572

A-Softmax loss

https://blog.csdn.net/weixin_42546496/article/details/88062272 (paper translation)

FaceNet

https://blog.csdn.net/qq_15192373/article/details/78490726 (paper explanation)

https://blog.csdn.net/ppllo_o/article/details/90707295 (paper translation)

ArcFace / InsightFace

https://blog.csdn.net/weixin_42546496/article/details/88387325

https://blog.csdn.net/hanjiangxue_wei/article/details/86368948 (loss function code walkthrough)

SqueezeNet

https://blog.csdn.net/csdnldp/article/details/78648543

Reposted from blog.csdn.net/weixin_42149550/article/details/102732132