HVS / Attention notes

Human visual system (HVS)

Mechanisms of Visual Attention in the Human Cortex
(Kastner & Ungerleider, Annual Review of Neuroscience, 2000)


1. Competition among multiple stimuli

Evidence indicates that, first, there is competition among multiple stimuli for representation in visual cortex.

  • Thus, multiple stimuli presented at the same time are not processed independently but rather interact with each other in a mutually suppressive way.

2. Different mechanisms

Second, competition among multiple stimuli can be biased by both bottom-up, sensory-driven mechanisms and top-down feedback mechanisms.

Selective Attention:

  • the enhancement of neural responses to attended stimuli;

  • the filtering of unwanted information by counteracting the suppression induced by nearby distracters;

  • the biasing of signals in favor of an attended location by increases of baseline activity in the absence of visual stimulation;

    Thus, stimuli at attended locations are biased to “win” the competition for processing resources.

  • the increase of stimulus salience by enhancing the neuron’s sensitivity to stimulus contrast.

In conclusion, selective attention operates in four ways:

  • bottom-up: enhancing responses to attended stimuli / weakening responses to irrelevant stimuli

  • top-down: baseline shifts / strengthened contrast sensitivity

    Note that this modulation of visual cortex activity can also occur when there is no visual stimulation.


3. Where do top-down biasing signals come from?

Third, although competition is ultimately resolved within visual cortex, the top-down biasing signals derive from a network of areas outside visual cortex.


4. Signals for guiding action

Fourth, and finally, the stimulus that wins the competition for representation in visual cortex will gain further access to memory systems for mnemonic encoding and retrieval and to motor systems for guiding action and behavior.

* The nature of attention in the HVS

Attention provides the HVS with a processing strategy for handling mutually competing signals.


Attention

The concept of attention originates from the human visual system (HVS): given an external stimulus, the HVS first produces a corresponding saliency map, and attention corresponds to the salient regions of that map.

Overall, attention is a region-weight learning problem:

  • Hard attention: a 0/1 problem; each region is either attended or ignored
  • Soft attention: a continuous distribution over [0, 1]; the degree of interest in each region is expressed as a score from 0 to 1
  • Self-attention: weights are learned from the feature map itself (they can be spatial, temporal, or channel-wise)
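
As a minimal illustration of the hard/soft distinction above (a hypothetical sketch; the scores and threshold are invented for illustration), the snippet below turns region scores into soft weights with a softmax and into hard 0/1 weights by thresholding:

```python
import torch
import torch.nn.functional as F

# Hypothetical scores for four image regions (invented numbers).
scores = torch.tensor([2.0, 0.5, -1.0, 1.2])

# Soft attention: a continuous distribution; every region gets a weight in [0, 1].
soft_weights = F.softmax(scores, dim=0)  # approx. [0.58, 0.13, 0.03, 0.26]

# Hard attention: a 0/1 decision per region. A simple threshold is used here;
# in practice hard attention is usually sampled and trained with REINFORCE.
hard_weights = (soft_weights > 0.25).float()  # [1., 0., 0., 1.]
```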

SE-Net (channel attention)

  1. Squeeze operation: compress the features along the spatial dimensions, turning each channel’s 2D feature map into a single real number. This number has, in some sense, a global receptive field, and the output dimension matches the number of input channels. It characterizes the global distribution of responses across feature channels, and it lets layers close to the input obtain a global receptive field, which is useful for many tasks.

     In implementation, the input features go through global average pooling, yielding a 1 × 1 × C tensor.

  2. Excitation operation: a gating mechanism similar to the gates in recurrent neural networks. A parameter W generates a weight for each feature channel, where W is learned to explicitly model the correlations between feature channels.

  3. Reweight operation: treat the output weights of the Excitation step as the importance of each feature channel after feature selection, and multiply them channel-wise onto the earlier features, completing the recalibration of the original features along the channel dimension.
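
Putting the three steps together, below is a minimal PyTorch sketch of an SE block (the reduction ratio of 16 follows the paper; the rest is an illustrative implementation, not the authors' reference code):

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze -> Excitation -> Reweight, as described in the three steps above."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Squeeze: global average pooling to a 1 x 1 map per channel.
        self.squeeze = nn.AdaptiveAvgPool2d(1)
        # Excitation: a small gating network producing one weight per channel.
        self.excitation = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.squeeze(x).view(b, c)            # B x C
        w = self.excitation(w).view(b, c, 1, 1)   # per-channel weights in [0, 1]
        return x * w                              # Reweight: channel-wise multiply

# Usage: recalibrate a 64-channel feature map (shape is unchanged).
out = SEBlock(64)(torch.randn(2, 64, 32, 32))
```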


CBAM: Convolutional Block Attention Module, ECCV 2018 (+ spatial attention)

Channel-wise attention teaches the network to look at “what”, while spatial attention teaches the network to look at “where”.

Spatial attention module

Unlike channel attention, spatial attention focuses on location (where).

The input features are average-pooled and max-pooled along the channel dimension, the two resulting maps are concatenated, passed through a large 7 × 7 convolution, and finally through a sigmoid.
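
A minimal PyTorch sketch of that spatial attention module (an illustration of the description above, not the official CBAM code):

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Channel-wise avg + max pooling -> concat -> 7x7 conv -> sigmoid."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        # 2 input maps (avg and max), 1 output attention map.
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_map = x.mean(dim=1, keepdim=True)        # B x 1 x H x W
        max_map = x.max(dim=1, keepdim=True).values  # B x 1 x H x W
        attn = self.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn                              # broadcast over channels

# Usage: spatially reweight a feature map (shape is unchanged).
out = SpatialAttention()(torch.randn(2, 64, 32, 32))
```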


DANet, CVPR 2019 (channel & spatial attention)

DANet applies the idea of self-attention to image segmentation; long-range contextual relationships enable more precise segmentation.

It applies spatial-wise self-attention to the deep feature map and, in parallel, channel-wise self-attention, then fuses the two results with an element-wise sum.
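
As a sketch of the spatial-wise branch, here is a simplified position (spatial) self-attention module in PyTorch (the 1 × 1-conv query/key/value and the channel reduction by 8 follow common practice; this is an illustration, not DANet's exact code):

```python
import torch
import torch.nn as nn

class PositionAttention(nn.Module):
    """Spatial self-attention: every pixel attends to every other pixel."""

    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual scale
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).view(b, -1, h * w).permute(0, 2, 1)  # B x HW x C'
        k = self.key(x).view(b, -1, h * w)                     # B x C' x HW
        attn = self.softmax(torch.bmm(q, k))                   # B x HW x HW
        v = self.value(x).view(b, -1, h * w)                   # B x C x HW
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, h, w)
        return self.gamma * out + x                            # residual fusion

# The channel-wise branch is analogous (attention over channels instead of
# positions); DANet fuses the two branch outputs with an element-wise sum.
out = PositionAttention(64)(torch.randn(2, 64, 16, 16))
```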

Reference links:

SE-Net: https://www.jianshu.com/p/7244f64250a8
CBAM: https://blog.csdn.net/qq_21612131/article/details/83217371
These notes are quite old and some of the original reference links can no longer be found, sorry.
Hope there is something useful here.
