Links to classic network papers (continuously updated)

ResNet

https://blog.csdn.net/weixin_43624538/article/details/85049699

CNN

https://my.oschina.net/u/876354/blog/1634322

YOLOv2

https://blog.csdn.net/shanlepu6038/article/details/84778770

https://blog.csdn.net/yanhx1204/article/details/81017134

YOLOv3

http://www.pianshen.com/article/3460315699/

BN (Batch Normalization)

https://www.cnblogs.com/guoyaohua/p/8724433.html

GoogLeNet

Traditionally, the most direct way to improve a deep neural network is to increase its size, in both depth and width. Depth is the number of layers in the network; width is the number of neurons in each layer. However, this simple and direct approach has two major drawbacks:
(1) A larger network means more parameters, which makes it easier to overfit.
(2) A larger network demands far more computing resources.

The authors propose that the fundamental solution is to move from fully connected structures to sparsely connected ones.
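To make the sparse-connection idea concrete: GoogLeNet's Inception module approximates a sparse structure with dense building blocks, running small 1x1, 3x3, and 5x5 convolutions in parallel and concatenating their outputs. Below is a minimal PyTorch sketch of such a module (the framework choice, class name, and argument names are mine for illustration; the real network puts a ReLU after every convolution, most of which are omitted here for brevity):

```python
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    """Inception-v1-style block: parallel 1x1, 3x3, 5x5 convolutions plus a
    pooling branch, concatenated along the channel dimension. Simplified
    sketch -- GoogLeNet itself applies a ReLU after every convolution."""

    def __init__(self, in_ch, c1, c3_red, c3, c5_red, c5, pool_proj):
        super().__init__()
        self.branch1 = nn.Conv2d(in_ch, c1, kernel_size=1)
        # 1x1 "reduce" convs shrink channels before the expensive 3x3/5x5 convs
        self.branch3 = nn.Sequential(
            nn.Conv2d(in_ch, c3_red, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(c3_red, c3, kernel_size=3, padding=1))
        self.branch5 = nn.Sequential(
            nn.Conv2d(in_ch, c5_red, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(c5_red, c5, kernel_size=5, padding=2))
        self.branch_pool = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_ch, pool_proj, kernel_size=1))

    def forward(self, x):
        # All branches see the same input and preserve its spatial size;
        # their outputs are stacked along the channel axis.
        return torch.cat([self.branch1(x), self.branch3(x),
                          self.branch5(x), self.branch_pool(x)], dim=1)

# Channel numbers of inception(3a) from the paper: 192 -> 64+128+32+32 = 256
block = InceptionModule(192, 64, 96, 128, 16, 32, 32)
out = block(torch.randn(1, 192, 28, 28))
print(out.shape)  # torch.Size([1, 256, 28, 28])
```

The 1x1 "reduce" convolutions are what keep the 3x3 and 5x5 branches cheap, which is how the module grows width without the parameter and compute blow-up described above.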

https://blog.csdn.net/weixin_43624538/article/details/84863685

https://www.jianshu.com/p/22e3af789f4e

Inception v2 & v3

https://blog.csdn.net/sunbaigui/article/details/50807418

https://blog.csdn.net/weixin_43624538/article/details/84963116   

From Inception v1, v2, v3, v4 and ResNeXt to Xception, then on to MobileNets, ShuffleNet, MobileNetV2, ShuffleNetV2, and MobileNetV3

https://blog.csdn.net/qq_14845119/article/details/73648100

DeepID series

https://blog.csdn.net/weixin_42546496/article/details/88537882

CenterLoss

https://www.aiuai.cn/aifarm102.html

L-Softmax loss

https://blog.csdn.net/u014380165/article/details/76864572

A-Softmax loss

https://blog.csdn.net/weixin_42546496/article/details/88062272 (translation of the paper)

FaceNet

https://blog.csdn.net/qq_15192373/article/details/78490726 (paper explanation)

https://blog.csdn.net/ppllo_o/article/details/90707295 (paper translation)

ArcFace/InsightFace

https://blog.csdn.net/weixin_42546496/article/details/88387325

https://blog.csdn.net/hanjiangxue_wei/article/details/86368948 (explanation of the loss-function code)

SqueezeNet

https://blog.csdn.net/csdnldp/article/details/78648543
