RefineNet Notes

RefineNet: Multi-Path Refinement Networks for High-Resolution Semantic Segmentation

https://arxiv.org/pdf/1611.06612.pdf

Thoughts

1. Make heavy use of residual structures, including in a generalized sense; this helps optimization.
2. Appeared around the same time as FPN, when feature fusion was a popular theme.
3. Chained residual pooling amounts to self-fusion of a feature map; when is it actually effective?

Motivation

1. Subsampling loses spatial information.
2. Deconvolution cannot recover the low-level visual features lost by subsampling (and it increases computation and memory cost).

Solution

1. An encoder-decoder structure with extra processing before and after feature fusion, to better recover spatial information.
2. Extensive use of residual structures: short-range residual connections act locally, long-range residual connections act between blocks (a wiring sketch follows below).
Short-range residual connections refer to local shortcut connections within one RCU or the residual pooling component, while long-range residual connections refer to the connections between RefineNet modules and the ResNet blocks.
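As a rough, hypothetical sketch of how the long-range connections wire the backbone into the refinement cascade (function and variable names are my own assumptions; the RefineNet modules themselves are sketched in the next section):

```python
def refine_cascade(resnet_feats, refinenets):
    """resnet_feats: [f1, f2, f3, f4], ResNet block outputs at 1/4, 1/8, 1/16, 1/32 resolution.
    refinenets: [RefineNet1, ..., RefineNet4] modules, each taking a list of feature maps."""
    f1, f2, f3, f4 = resnet_feats
    r4 = refinenets[3]([f4])       # deepest module: single input, no multi-resolution fusion
    r3 = refinenets[2]([f3, r4])   # long-range connection: ResNet block 3 + refined deeper path
    r2 = refinenets[1]([f2, r3])
    r1 = refinenets[0]([f1, r2])   # 1/4-resolution map fed to the final prediction layer
    return r1
```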

RefineNet
RCU (Residual Conv Unit): batch-normalization layers are removed; the channel count is set to 512 for RefineNet-4 and 256 for the remaining modules.
Multi-resolution fusion: the output channel count follows the smallest among the inputs, the output resolution follows the largest among the inputs, and a single input needs no fusion.
Chained residual pooling: aims to capture background context from a large image region; pooling uses stride 1, which makes it resemble self-fusion of the feature map.
The ReLU before the chain is important for the effectiveness of the subsequent pooling operations, and it also makes the model less sensitive to changes in the learning rate.
Fusion block: only linear transformations are employed.
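A minimal PyTorch sketch of the three components described above, written from these notes; the class names, the two-block chain length, and other details are my own assumptions, not the authors' released code:

```python
import torch.nn as nn
import torch.nn.functional as F


class RCU(nn.Module):
    """Residual Conv Unit: ReLU-conv-ReLU-conv with an identity shortcut (no batch norm)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)

    def forward(self, x):
        out = self.conv1(F.relu(x))
        out = self.conv2(F.relu(out))
        return x + out  # short-range residual connection


class MultiResolutionFusion(nn.Module):
    """Adapt every input with a conv, upsample all to the largest input resolution, then sum.
    The caller passes out_channels = smallest channel count among the inputs."""
    def __init__(self, in_channels_list, out_channels):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(c, out_channels, 3, padding=1, bias=False) for c in in_channels_list
        )

    def forward(self, inputs):
        if len(inputs) == 1:  # single input: no fusion needed
            return inputs[0]
        target = max((x.shape[-2:] for x in inputs), key=lambda s: s[0] * s[1])
        out = 0
        for conv, x in zip(self.convs, inputs):
            y = conv(x)
            if y.shape[-2:] != target:
                y = F.interpolate(y, size=target, mode="bilinear", align_corners=False)
            out = out + y  # linear combination only, no nonlinearity in the fusion
        return out


class ChainedResidualPooling(nn.Module):
    """Chain of {5x5 max-pool (stride 1) -> 3x3 conv} blocks whose outputs are
    accumulated onto the input, so the effective pooling window grows along the chain."""
    def __init__(self, channels, n_blocks=2):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(
                nn.MaxPool2d(kernel_size=5, stride=1, padding=2),
                nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            )
            for _ in range(n_blocks)
        )

    def forward(self, x):
        x = F.relu(x)  # the ReLU noted above
        out, path = x, x
        for block in self.blocks:
            path = block(path)
            out = out + path  # residual accumulation ("self-fusion")
        return out
```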

Experiments




Reposted from blog.csdn.net/DreamLike_zzg/article/details/104113615