Module 1: The Feature Pyramid Attention Module (FPA)
combines the spatial pyramid structure of PSPNet/DeepLab with the attention mechanism of SENet. It fuses context information at different scales while providing better pixel-level attention for high-level feature maps, enlarging the receptive field and effectively improving the classification of small objects.
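A minimal PyTorch sketch of this idea follows. The channel counts, kernel sizes, and exact branch layout are assumptions based on the paper's description, not the authors' code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FPA(nn.Module):
    """Feature Pyramid Attention (simplified sketch)."""
    def __init__(self, channels):
        super().__init__()
        # Master branch: 1x1 conv on the high-level feature map.
        self.master = nn.Conv2d(channels, channels, kernel_size=1)
        # Global pooling branch: image-level context, added back at the end.
        self.global_proj = nn.Conv2d(channels, channels, kernel_size=1)
        # Pyramid branch: downsample with progressively smaller large kernels.
        self.down1 = nn.Conv2d(channels, channels, 7, stride=2, padding=3)
        self.down2 = nn.Conv2d(channels, channels, 5, stride=2, padding=2)
        self.down3 = nn.Conv2d(channels, channels, 3, stride=2, padding=1)
        self.conv1 = nn.Conv2d(channels, channels, 7, padding=3)
        self.conv2 = nn.Conv2d(channels, channels, 5, padding=2)
        self.conv3 = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        h, w = x.shape[2:]
        master = self.master(x)
        # Global context: pool to 1x1, project, broadcast back to full size.
        g = F.adaptive_avg_pool2d(x, 1)
        g = F.interpolate(self.global_proj(g), size=(h, w),
                          mode='bilinear', align_corners=False)
        # U-shaped pyramid: downsample, convolve, then fuse bottom-up by addition.
        d1 = self.down1(x)
        d2 = self.down2(d1)
        d3 = self.down3(d2)
        p3 = self.conv3(d3)
        p2 = self.conv2(d2) + F.interpolate(p3, size=d2.shape[2:],
                                            mode='bilinear', align_corners=False)
        p1 = self.conv1(d1) + F.interpolate(p2, size=d1.shape[2:],
                                            mode='bilinear', align_corners=False)
        attn = F.interpolate(p1, size=(h, w), mode='bilinear', align_corners=False)
        # Pixel-wise multiplication: the pyramid acts as spatial attention over
        # the master branch; global context is then added back.
        return master * attn + g
```

A 16x16 high-level map is enough to exercise all three pyramid levels, e.g. `FPA(64)(torch.randn(1, 64, 16, 16))`.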
Module 2: Global Attention Upsampling Module (GAU)
extracts global context information from the high-level features and uses it as guidance when weighting the low-level features (i.e., high-level features guide low-level features).
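A matching sketch of GAU under the same assumptions; the sigmoid gating here is my simplification of the paper's 1x1 convolution with normalization and nonlinearity:

```python
import torch.nn as nn
import torch.nn.functional as F

class GAU(nn.Module):
    """Global Attention Upsample (simplified sketch)."""
    def __init__(self, low_channels, high_channels, out_channels):
        super().__init__()
        # Refine the low-level features with a 3x3 conv.
        self.low_conv = nn.Sequential(
            nn.Conv2d(low_channels, out_channels, 3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )
        # Map globally pooled high-level features to per-channel weights.
        self.gate = nn.Sequential(
            nn.Conv2d(high_channels, out_channels, 1),
            nn.Sigmoid(),
        )
        self.high_conv = nn.Conv2d(high_channels, out_channels, 1)

    def forward(self, low, high):
        low = self.low_conv(low)
        # Global context of the high-level map -> channel attention weights.
        w = self.gate(F.adaptive_avg_pool2d(high, 1))
        # Weight the low-level features, then add the upsampled high-level ones.
        high_up = F.interpolate(self.high_conv(high), size=low.shape[2:],
                                mode='bilinear', align_corners=False)
        return low * w + high_up
```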
Overall network model: Pyramid Attention Network (PAN)
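How the two modules might fit together, reusing the FPA and GAU sketches above; the backbone interface and ResNet-style channel counts (c2..c5) are assumptions:

```python
import torch.nn as nn
import torch.nn.functional as F

class PAN(nn.Module):
    """PAN-style decoder: FPA on the deepest features, GAU for upsampling."""
    def __init__(self, backbone, num_classes, chans=(256, 512, 1024, 2048)):
        super().__init__()
        self.backbone = backbone          # assumed to return (c2, c3, c4, c5)
        self.fpa = FPA(chans[3])
        self.gau3 = GAU(chans[2], chans[3], chans[2])
        self.gau2 = GAU(chans[1], chans[2], chans[1])
        self.gau1 = GAU(chans[0], chans[1], chans[0])
        self.classifier = nn.Conv2d(chans[0], num_classes, 1)

    def forward(self, x):
        c2, c3, c4, c5 = self.backbone(x)
        f = self.fpa(c5)                  # global context + spatial attention
        f = self.gau3(c4, f)              # high-level guides low-level
        f = self.gau2(c3, f)
        f = self.gau1(c2, f)
        return F.interpolate(self.classifier(f), size=x.shape[2:],
                             mode='bilinear', align_corners=False)
```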
Ablation experiments:
1. FPA module
2. GAU module
3. Comparison with other classic network models
Article link: https://blog.csdn.net/guleileo/article/details/80544835
Question: Why does FPA multiply pixel by pixel, and what is the significance? When should addition be used, and when multiplication?
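One way to picture the distinction the question is getting at (a hedged illustration with hypothetical variable names, not the paper's answer): element-wise multiplication lets one tensor gate another, while addition fuses two tensors as equal contributors.

```python
import torch

feat = torch.randn(1, 8, 4, 4)                  # a feature map
attn = torch.sigmoid(torch.randn(1, 8, 4, 4))   # an attention map in (0, 1)
lateral = torch.randn(1, 8, 4, 4)               # a second feature map

# Multiplication = gating: attn rescales feat pixel by pixel; values near 0
# suppress a location, values near 1 pass it through. This is the sense in
# which FPA's pyramid output acts as attention over the master branch.
gated = feat * attn

# Addition = fusion: both maps contribute on equal footing, as in residual
# connections or FPN-style lateral merges; neither input can silence the other.
fused = feat + lateral
```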