Rain, Snow, and Fog Removal: Paper Study Notes

All in One Bad Weather Removal Using Architecture Search

This paper, published at CVPR 2020, proposes a denoising model that can cope with multiple types of severe weather, performing rain, snow, and fog removal simultaneously. However, the code does not appear to be open source.

Problems raised: existing models can only handle one kind of severe weather and cannot be applied to varied, complex weather conditions; and current denoising datasets are all artificially synthesized, which differs from real data.

Innovation 1: All-in-one denoising model

The overall structure of the method is shown in the figure below. It is designed as a generative adversarial network (GAN), consisting of a generator and a discriminator. Unlike previous methods, which can only handle one kind of severe weather, this paper proposes an all-in-one denoising model that performs rain removal, snow removal, and fog removal simultaneously.

[Figure: overall generator and discriminator architecture]
The generator mainly contains three feature extraction modules (FE, Feature Extractor; one each for rain, snow, and fog), a feature selection module (Feature Search), and a decoder module (Decoder). The discriminator judges whether the generated image is real and returns the result to the generator; the resulting loss is then backpropagated to update the generator's parameters.

The generator consists of multiple task-specific encoders, each associated with a particular type of severe weather, and neural architecture search is used to optimize the image features extracted by each encoder and convert them into clean images. The idea is: feed the image containing rain, snow, or fog into the generator; extract features with the encoders (FE); optimize the extracted features through neural architecture search to select the useful feature information; and send the selected features to the decoder to generate a clean image, completing the denoising process.
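As a rough illustration of this data flow, the pipeline above can be sketched in PyTorch. All module names, channel counts, and layer depths below are illustrative assumptions, not the paper's actual configuration; in particular, the searched Feature Search module is stood in for by a simple 1x1 fusion convolution.

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """One weather-specific encoder (FE); depth and width are illustrative."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    """Three weather-specific encoders -> feature fusion -> shared decoder."""
    def __init__(self, ch=16):
        super().__init__()
        self.encoders = nn.ModuleDict(
            {w: FeatureExtractor(ch) for w in ("rain", "snow", "fog")}
        )
        # Stand-in for the searched Feature Search module: a 1x1 fusion conv.
        self.fuse = nn.Conv2d(3 * ch, ch, 1)
        self.decoder = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, x):
        feats = [enc(x) for enc in self.encoders.values()]
        fused = self.fuse(torch.cat(feats, dim=1))
        return self.decoder(fused)

g = Generator()
clean = g(torch.randn(1, 3, 32, 32))
print(clean.shape)  # torch.Size([1, 3, 32, 32])
```

The clean output has the same spatial size as the input, since every convolution here preserves resolution.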

Generator module

Multiple encoders are used to extract clean features from images degraded by different kinds of severe weather; these features are then used to restore and produce clean images.
[Figure: generator encoder structure]

Innovation point 2: Feature Search module

Neural architecture search here essentially means finding clean features and converting those clean features into a clean image.

[Figure: Feature Search module and its candidate operations]
As can be seen, in addition to conventional convolution operations, the candidate operations in the Feature Search module include residual connections, self-attention mechanisms, and others.
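One common way to make such a candidate set searchable is a DARTS-style mixed operation, where each candidate is weighted by a softmax over learnable architecture parameters. The sketch below uses this idea with a simplified candidate set (two convolutions and an identity/skip path); it is an illustrative assumption, not the paper's actual search space.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """DARTS-style mixed operation: the output is a weighted sum of all
    candidate ops, with weights given by a softmax over learnable
    architecture parameters alpha."""
    def __init__(self, ch=8):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(ch, ch, 3, padding=1),  # plain 3x3 convolution
            nn.Conv2d(ch, ch, 5, padding=2),  # larger receptive field
            nn.Identity(),                    # skip (residual) path
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        w = F.softmax(self.alpha, dim=0)
        return sum(wi * op(x) for wi, op in zip(w, self.ops))

cell = MixedOp()
y = cell(torch.randn(1, 8, 16, 16))
print(y.shape)  # torch.Size([1, 8, 16, 16])
```

After search, the op with the largest alpha is typically kept and the rest are discarded, turning the soft mixture into a discrete architecture.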
Conventional dehazing models are defined as follows:
[Formula: conventional dehazing model]

It can also be expressed as follows: M is extracted and learned through a 1x1 convolution so as to estimate M; the corresponding operation is shown in (4.1).

[Formula 4.1]

Innovation point 3: multi-class auxiliary discriminator

The discriminator of a generative adversarial network (GAN) is trained to judge the quality of the restored image (that is, whether the generated image is real), but it provides no error signal about the degradation type. For an all-in-one model, knowing only real versus fake is not enough: the type of the generated image must also be known, so that each encoder can update its parameters according to its own weather type. The paper therefore proposes a multi-class auxiliary discriminator that classifies the image, so that when the discriminant loss is backpropagated, only the parameters of the corresponding branch are updated.
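A minimal sketch of such a discriminator: alongside the usual real/fake head, an auxiliary head classifies the weather type (rain/snow/fog here), so a classification loss can be routed back per type. Layer sizes and the three-way class set are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class AuxDiscriminator(nn.Module):
    """Discriminator with two heads: an adversarial real/fake score and an
    auxiliary weather-type classifier, in the spirit of the multi-class
    auxiliary discriminator. Layer sizes are illustrative."""
    def __init__(self, ch=16, n_types=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.adv_head = nn.Linear(ch, 1)        # real vs. fake score
        self.cls_head = nn.Linear(ch, n_types)  # weather-type logits

    def forward(self, x):
        h = self.features(x)
        return self.adv_head(h), self.cls_head(h)

d = AuxDiscriminator()
adv, cls = d(torch.randn(2, 3, 32, 32))
print(adv.shape, cls.shape)  # torch.Size([2, 1]) torch.Size([2, 3])
```

In training, the adversarial head would feed the usual GAN loss while the classification head's cross-entropy loss is backpropagated only through the branch matching the image's weather label.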

[Figure: multi-class auxiliary discriminator]

Specific ideas

Haze Image Modeling

I(x) = J(x)t(x) + A(1 − t(x))    (1)

Here I(x) is the hazy image at position x, J(x) is the light reflected from the observed target, i.e., the dehazed image, A is the atmospheric light, and t(x) is the atmospheric transmittance, t(x) = e^(−βd(x)), where d(x) is the scene depth map and β is the atmospheric scattering coefficient. From formula (1) it is clear that the haze-free image J(x) can be recovered from the hazy image I(x) once t(x) and A are obtained.
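Under these definitions, formula (1) can be used directly to synthesize a hazy image from a clean image and a depth map. The NumPy sketch below assumes images with values in [0, 1] and uses illustrative values for A and β.

```python
import numpy as np

def synthesize_haze(J, d, A=0.9, beta=1.0):
    """Apply the atmospheric scattering model I = J*t + A*(1 - t),
    with transmittance t = exp(-beta * d).
    J: clean image (H, W, 3) in [0, 1]; d: depth map (H, W)."""
    t = np.exp(-beta * d)[..., None]  # broadcast t over the color channels
    return J * t + A * (1.0 - t)

J = np.full((4, 4, 3), 0.5)   # uniform gray scene
d = np.zeros((4, 4))          # zero depth -> t = 1, image is unchanged
assert np.allclose(synthesize_haze(J, d), J)

d_far = np.full((4, 4), 100.0)  # very distant -> t ~ 0, pure airlight A
print(synthesize_haze(J, d_far)[0, 0])  # ~ [0.9 0.9 0.9]
```

The two extremes confirm the model's behavior: zero depth leaves the scene untouched, while infinite depth washes everything out to the atmospheric light A.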

The physical model of a rainy image is very similar to that of a foggy image, so it can be defined as:

I(x) = t(x)(J(x) + Σᵢ Rᵢ(x)) + A(1 − t(x))
where Rᵢ denotes the rain-streak layer of the i-th layer.

Raindrop Image Modeling

I(x) = (1 − M(x)) ⊙ J(x) + K(x)
where I(x) is the image with colored raindrops and M(x) is a binary mask marking raindrop regions. J(x) is the background, i.e., the clean image, and K represents the adherent raindrops attached to the image, the blurred imagery formed by light reflected from the environment.
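This composition can be checked numerically. The sketch below assumes K is nonzero only inside the raindrop mask, so that outside the mask the image equals the clean background.

```python
import numpy as np

def raindrop_image(J, M, K):
    """Compose I = (1 - M) * J + K: the background J is kept where the
    binary mask M is 0, and K (assumed nonzero only inside raindrop
    regions) supplies the raindrop appearance where M is 1."""
    M3 = M[..., None]  # broadcast the mask over the color channels
    return (1.0 - M3) * J + K

J = np.full((2, 2, 3), 0.4)   # clean background
M = np.zeros((2, 2))          # no raindrops anywhere
K = np.zeros((2, 2, 3))       # hence no raindrop appearance either
assert np.allclose(raindrop_image(J, M, K), J)
```

With an empty mask the composite reduces to the clean image, matching the equation above.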

Snowflake Image Modeling

I(x) = S(x) ⊙ z(x) + J(x) ⊙ (1 − z(x))
where S represents snowflakes and z is a binary mask representing the location of the snow.
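The snow model is a simple mask-based composite, which can be verified with a small NumPy example (all values illustrative).

```python
import numpy as np

def snow_image(J, S, z):
    """Compose I = S*z + J*(1 - z): paste snowflake appearance S wherever
    the binary mask z is 1, keep the clean background J elsewhere."""
    z3 = z[..., None]  # broadcast the mask over the color channels
    return S * z3 + J * (1.0 - z3)

J = np.full((2, 2, 3), 0.2)              # dark clean background
S = np.ones((2, 2, 3))                   # white snowflakes
z = np.array([[1.0, 0.0], [0.0, 1.0]])   # snow on the diagonal only
I = snow_image(J, S, z)
```

Pixels under the mask take the snowflake value (1.0) while the rest keep the background value (0.2).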

According to the physical model formulas above, images degraded by different kinds of severe weather are defined differently, which is why earlier models each handle only one kind of severe weather. However, the formulas also reveal an internal connection among them, so a general severe-weather noise image model can be defined as follows:
[Formula: general severe-weather degradation model]

Origin blog.csdn.net/pengxiang1998/article/details/132262417