[Papers understanding] Squeeze-and-Excitation Networks

Brief introduction

SENet improves feature representation by adding a branch structure that learns the relationships between channels and applies what it learns back to the original feature map, i.e., it recalibrates the input. To let the network measure channel relationships from global information, the branch first captures global context with global pooling, then passes it through two fully connected layers, and finally applies the result back to the input. This channel-wise recalibration of the input allows the network to learn better representations.

SQUEEZE-AND-EXCITATION BLOCKS

The structure of an SE block is as follows:

In the figure above, Fsq is the Squeeze operation and Fex is the Excitation operation; Fscale then applies the learned weights back to the input to reweight its channels.

Squeeze: Global Information Embedding

The Squeeze step is called global information embedding because it literally squeezes each feature map: global average pooling aggregates the spatial information of every channel into a single global descriptor.
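As a minimal sketch of the Squeeze step (in PyTorch, which is my choice here, not something the post specifies), global average pooling collapses an (N, C, H, W) feature map into an (N, C) descriptor:

```python
import torch

# Hypothetical input: a batch of 8 feature maps with C=64 channels of size 32x32
x = torch.randn(8, 64, 32, 32)

# Squeeze: average each HxW map down to a single scalar per channel,
# giving one global descriptor z of shape (N, C)
z = x.mean(dim=(2, 3))
print(z.shape)  # torch.Size([8, 64])
```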

Excitation: Adaptive Recalibration

The Excitation step is called adaptive recalibration because this branch learns a weight for each channel and applies it back to the original input, i.e., it produces a score per channel. Since the mapping from the global descriptor to these channel scores must be able to capture a nonlinear interaction between channels, the authors implement the excitation branch as fc-relu-fc-sigmoid to produce the score map.
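A sketch of that excitation branch, again in PyTorch; the reduction ratio r=16 is the paper's default, while the variable names are mine:

```python
import torch
import torch.nn as nn

C, r = 64, 16  # channel count and reduction ratio (r=16 is the paper's default)

# Excitation: fc -> relu -> fc -> sigmoid, producing one score in (0, 1) per channel
excitation = nn.Sequential(
    nn.Linear(C, C // r),  # bottleneck FC reduces the channel dimension
    nn.ReLU(inplace=True),
    nn.Linear(C // r, C),  # second FC restores the channel dimension
    nn.Sigmoid(),          # squashes each score into (0, 1)
)

z = torch.randn(8, C)  # squeezed global descriptor from the previous step
s = excitation(z)      # per-channel scores, shape (8, 64)
```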

Taking the SE-Inception module given by the authors as an example, the Squeeze and Excitation steps can be seen clearly:

The Fscale step then multiplies each channel of the input feature map by its corresponding learned weight.
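Putting the three steps together, a complete SE block could look like the sketch below (the module name and layout are my own, not taken from the authors' released code):

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation block: channel-wise recalibration of an (N, C, H, W) input."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, _, _ = x.shape
        z = x.mean(dim=(2, 3))         # Squeeze: (N, C) global descriptor
        s = self.fc(z)                 # Excitation: per-channel scores (N, C)
        return x * s.view(n, c, 1, 1)  # Fscale: broadcast multiply over H and W

# Usage: wrap any feature map, e.g. the output of an Inception or residual branch
se = SEBlock(channels=64)
y = se(torch.randn(8, 64, 32, 32))  # same shape as the input, channels reweighted
```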

The paper itself is fairly easy to follow.

Original paper: https://arxiv.org/pdf/1709.01507.pdf

Source: www.cnblogs.com/aoru45/p/11486528.html