Neural Network Study Notes 2: Mnemonic Descent Method

This post is mainly based on a CVPR 2016 paper: "Mnemonic Descent Method: A Recurrent Process Applied for End-to-End Face Alignment". The paper proposes the Mnemonic Descent Method (MDM), which maintains a memory unit that accumulates information from all past inputs. The method works as follows: given an input image and an initial shape estimate x0, local patches are extracted around each landmark point and fed into a convolutional network; fc denotes the resulting convolutional features, and fr is the fully connected layer. Its key feature is that the output h of the first fully connected layer at each stage is combined with the convolutional features fc of the next stage and fed into that stage's fully connected layer, so information from the previous gradient-descent step is not lost. Moreover, all stages can be trained jointly end to end, rather than trained separately.
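The recurrent stage described above can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the dimensions, weight matrices, and the random vector standing in for the patch-based conv features fc are all hypothetical; the point is only how the memory h is concatenated with fc at each unrolled stage, and how all stages share the same weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: conv-feature size, hidden (memory) size, landmark count
F, H, L = 32, 16, 68

# Hypothetical shared weights of one recurrent stage
W_h = rng.standard_normal((H, F + H)) * 0.01   # first FC layer -> new memory h
W_x = rng.standard_normal((2 * L, H)) * 0.01   # second FC layer -> shape update dx

def mdm_stage(f_c, h_prev):
    """One MDM-style stage: fuse conv features f_c with the memory h_prev,
    return the updated memory and a landmark displacement."""
    z = np.concatenate([f_c, h_prev])          # carry past descent info forward
    h = np.tanh(W_h @ z)                       # output of the first FC layer
    dx = W_x @ h                               # predicted update to the shape
    return h, dx

# Unrolled cascade: every stage shares weights, so the whole thing trains jointly
x = np.zeros(2 * L)                            # x0: initial (e.g. mean) shape
h = np.zeros(H)                                # empty memory before stage 1
for stage in range(3):
    f_c = rng.standard_normal(F)               # stands in for conv features of patches at x
    h, dx = mdm_stage(f_c, h)
    x = x + dx                                 # one descent step on the landmarks
```

Because h is threaded through every stage, the second and third descent steps see a summary of what the earlier steps already did, which is exactly the "memory" the paper's name refers to.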
Next, let's look at another paper with similar content:
Cascaded convolutional networks: as the network structure shows, the first level contains three deep CNNs: F1, EN1, and NM1, whose input regions cover the whole face (F1), the eyes and nose (EN1), and the nose and mouth (NM1). Networks at the second and third levels take local patches centered at the predicted positions of facial points from previous levels as input and are only allowed to make small changes to previous predictions; the sizes of the patches and the search ranges keep reducing along the cascade. Predictions at the last two levels are strictly restricted because local appearance is sometimes ambiguous and unreliable, and the predicted position of each point at the last two levels is given by the average of two networks with different patch sizes.
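The refinement rule of the last two levels can be sketched as follows. This is a hedged illustration, not the paper's code: `cascade_refine`, the coordinates, and the `search_range` value are made up; it only shows the two ideas in the paragraph above, i.e. averaging two networks that use different patch sizes, and restricting the change to a small range around the previous level's prediction.

```python
import numpy as np

def cascade_refine(prev_pred, pred_a, pred_b, search_range):
    """Level-2/3 refinement (sketch): average the predictions of two
    networks with different patch sizes, then clip the change so it stays
    within a small search range around the previous level's prediction."""
    avg = (pred_a + pred_b) / 2.0
    delta = np.clip(avg - prev_pred, -search_range, search_range)
    return prev_pred + delta

prev = np.array([50.0, 60.0])        # previous level's landmark position (x, y)
p_small = np.array([52.0, 61.0])     # prediction from the smaller-patch network
p_large = np.array([51.0, 59.0])     # prediction from the larger-patch network
refined = cascade_refine(prev, p_small, p_large, search_range=1.0)
```

Here the averaged prediction is (51.5, 60.0); the x-shift of 1.5 exceeds the search range and is clipped to 1.0, so the refined point is (51.0, 60.0), a small, conservative move away from the previous level's estimate.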


Reposted from blog.csdn.net/Antonio_Salieri/article/details/104095632