Reference-Based Image Enhancement: Paper Notes

Enhance Images as You Like with Unpaired Learning


  • This is a paper from IJCAI 2021.
  • The paper proposes a conditional GAN model that takes a reference image as the condition, so that a low-light enhancement model can be trained on unpaired images and the enhanced result follows the reference's hue, brightness, and contrast. Training supervision has four parts: (1) when the input image itself is used as the condition, the GAN must reproduce the input image; (2) spatial consistency between the enhanced result and the input image; (3) global tonal consistency between the enhanced result and the reference image; (4) the adversarial (GAN) loss.
  • The network structure diagram is shown below; some module names are not consistent between the figure and the text: "self-Mod" in the figure is PSM, and "cond-Mod" is CCM. PSM combines U-Net skip connections with some normalization tricks, while CCM uses condition vectors extracted from the reference to modulate the feature maps.
    (figures: PSM and CCM module diagrams)
  • CCM works as follows: the modulation code passes through two fully connected layers to predict 4 vectors, which operate on the feature map x to produce the modulated output m(x):
    (equation: CCM modulation)
  • The method's experimental results are as follows:
    (figures: experimental results)
  • The model has 8,915,727 parameters. For the quantitative results, the paper claims the metrics are obtained by testing with different reference images and reporting the average and minimum PSNR, but it does not explain where the reference images come from.
  • Overall, the network structure is somewhat complicated, and applications are not explored further.
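The CCM modulation described above can be sketched in code. The following is a minimal NumPy illustration, not the paper's implementation: the layer sizes, the ReLU placement, and the way the four predicted vectors are applied (as two scale/shift pairs) are all assumptions, since these notes only state that two fully connected layers map the condition code to 4 vectors that act on x to produce m(x).

```python
import numpy as np

rng = np.random.default_rng(0)

def fc(x, w, b):
    """A plain fully connected layer."""
    return x @ w + b

def ccm_modulate(x, cond, w1, b1, w2, b2):
    """CCM-style conditional modulation sketch: a condition vector passes
    through two fully connected layers and predicts 4 per-channel vectors
    (assumed here to be two scale/shift pairs) that modulate feature map x."""
    h = np.maximum(fc(cond, w1, b1), 0.0)   # first FC + ReLU
    params = fc(h, w2, b2)                   # second FC -> 4 * channels values
    g1, s1, g2, s2 = np.split(params, 4)     # four per-channel vectors
    # broadcast the per-channel vectors over the spatial dimensions of x
    y = g1[:, None, None] * x + s1[:, None, None]
    y = np.maximum(y, 0.0)
    return g2[:, None, None] * y + s2[:, None, None]

c, d, hdim = 8, 16, 32                       # channels, cond dim, hidden dim
x = rng.standard_normal((c, 4, 4))           # feature map (C, H, W)
cond = rng.standard_normal(d)                # condition code from the reference
w1 = rng.standard_normal((d, hdim)); bias1 = np.zeros(hdim)
w2 = rng.standard_normal((hdim, 4 * c)); bias2 = np.zeros(4 * c)
m_x = ccm_modulate(x, cond, w1, bias1, w2, bias2)
print(m_x.shape)  # → (8, 4, 4)
```

With the input image itself supplied as the condition, the identity constraint in the loss above would require the modulated output to reproduce the input.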

Exemplar‑guided low‑light image enhancement


  • This is a 2022 paper published in Multimedia Systems (a CAS Zone 4 journal). It proposes a method that uses reference images to guide enhancement and builds a dataset for it: in practice it takes paired data such as LOL and applies rotation/scale/padding operations to the ground truth, using the transformed image as a reference that differs from the GT to guide low-light enhancement. The structure is shown below:
    (figure: network structure)
  • The AFS module here is an attention-like mechanism.
  • The experimental results are shown in the figure below:
    (figure: experimental results)
  • Frankly, this is a filler paper; it is painful to read and riddled with errors and omissions.
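The reference-construction trick above (perturbing the GT so the reference differs from it spatially while keeping its tone) can be sketched as follows. The concrete rotation angle, scale factor, and padding amount are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def make_reference(gt, angle_k=1, scale=2, pad=4):
    """Build a reference image from a ground-truth image by rotation,
    scaling, and padding, so the reference shares GT's tonal content
    but is no longer pixel-aligned with it (parameters are assumptions)."""
    ref = np.rot90(gt, k=angle_k, axes=(0, 1))            # rotate by 90*k degrees
    ref = ref[::scale, ::scale]                           # naive downscale
    ref = np.pad(ref, ((pad, pad), (pad, pad), (0, 0)),   # pad spatial dims only
                 mode="reflect")
    return ref

gt = np.arange(32 * 32 * 3, dtype=np.float32).reshape(32, 32, 3)
ref = make_reference(gt)
print(ref.shape)  # → (24, 24, 3)
```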

Enhancement by Your Aesthetic: An Intelligible Unsupervised Personalized Enhancer for Low-Light Images


  • This is a paper from ACM MM 2022. It proposes a reference-image-based enhancement method, shown below:
    (figure: method overview)
  • The method decomposes the image into L (illumination) and R (reflectance) components. The histogram of the reference image's L component guides enhancement of the input's L component. For the R component, histograms of the hue map and of the color-saturation map are extracted from both the input and the reference image; the similarities between the corresponding histograms are fed to fully connected layers, whose predicted coefficients scale and bias the features of the R-component enhancement network, as shown below. Here mu and sigma are the mean and variance of the feature map itself (so this can be viewed as instance normalization), while gamma and beta are the guidance values derived from the similarities.
    (equation: feature modulation with mu, sigma, gamma, beta)
  • For the loss, to make the enhanced result resemble the reference image, an L1 loss between the hue and saturation histograms of the result's R component and those of the reference is added.
  • The denoiseNet below uses the method from LLFlow to estimate a noise map, and then computes the histogram similarity of the noise map in the same way as above. The denoising module is not the focus of this survey and is not expanded here.
  • The experimental results are as follows. The references are randomly selected from LOL, FiveK, and ExDark (this is odd: why pick reference images from ExDark, a low-light dataset?), and the average result of 50% is taken.
    (figure: experimental results)
  • In addition, user-adjustable parameters for hue, noise level, brightness, and color saturation are provided (in practice, they adjust the similarity values fed into the network).
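The histogram-similarity-driven modulation can be sketched as below. This is an illustrative NumPy version under stated assumptions: histogram intersection stands in for the paper's similarity measure (which these notes do not specify), and a constant gamma replaces the fully-connected prediction from the similarities; mu and sigma are the feature map's own per-channel statistics, as in instance normalization.

```python
import numpy as np

def hist_similarity(a, b, bins=32):
    """Histogram intersection of two channels in [0, 1]; a simple
    stand-in for the paper's similarity measure (an assumption)."""
    ha, _ = np.histogram(a, bins=bins, range=(0.0, 1.0))
    hb, _ = np.histogram(b, bins=bins, range=(0.0, 1.0))
    ha = ha / ha.sum()
    hb = hb / hb.sum()
    return np.minimum(ha, hb).sum()   # in [0, 1]

def modulate(feat, gamma, beta, eps=1e-5):
    """Instance-norm-style modulation: normalize each channel by its own
    mean/std (mu, sigma), then scale/shift with gamma/beta, which the paper
    predicts from the histogram similarities via fully connected layers."""
    mu = feat.mean(axis=(1, 2), keepdims=True)
    sigma = feat.std(axis=(1, 2), keepdims=True)
    return gamma[:, None, None] * (feat - mu) / (sigma + eps) + beta[:, None, None]

rng = np.random.default_rng(0)
inp_hue = rng.uniform(size=(64, 64))       # hue map of input (toy data)
ref_hue = rng.uniform(size=(64, 64))       # hue map of reference (toy data)
s = hist_similarity(inp_hue, ref_hue)
feat = rng.standard_normal((4, 8, 8))      # R-branch feature map (C, H, W)
out = modulate(feat, gamma=np.full(4, s), beta=np.zeros(4))
print(out.shape)
```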

(figure)

  • Summary: the method adopts reference guidance based on histogram similarity and provides adjustable parameters built on that similarity. The random-reference experiment is not very convincing, the PSNR is not high, and no metric is reported to verify similarity with the reference image. Editing is done through a decomposition-and-modulation approach.

StarEnhancer: Learning Real-Time and Style-Aware Image Enhancement


  • This is a paper from ICCV 2021. It first trains a style classifier, then removes the classification head and uses the network to extract a style code; a fully connected layer then maps the style code to the parameters of the enhancement network's normalization layers. Images of similar style therefore get similar style codes, and different style clusters produce diverse enhancement results.
    (figure: method overview)
    Experimental results on FiveK are given. The average style code is used when generating test images (though it is not clear over which dataset the average is taken):
    (figure: experimental results on FiveK)
  • Many experimental details feel under-explained. Overall this still looks like personalized image enhancement, but one can supply an image the user likes, extract its style code, and use it for enhancement; the paper does not place much emphasis on the reference image.
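The style-code pipeline can be sketched as follows. All weights here are random stand-ins, and applying the predicted scale/shift in AdaIN/instance-norm fashion is an assumption based on the description above: a classifier backbone minus its head yields the style code, and a fully connected layer maps that code to normalization parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_style_code(image_feat, w_backbone):
    """Use the penultimate-layer features of a (pretrained) style classifier
    as the style code; the classification head is discarded. The weights
    here are random stand-ins for illustration."""
    return np.maximum(image_feat @ w_backbone, 0.0)

def style_to_norm_params(code, w_map, b_map):
    """A fully connected layer maps the style code to per-channel scale and
    shift for the enhancer's normalization layers (assumed AdaIN-style)."""
    p = code @ w_map + b_map
    gamma, beta = np.split(p, 2)
    return gamma, beta

d_in, d_code, c = 16, 8, 4                 # feature dim, code dim, channels
feat = rng.standard_normal(d_in)           # backbone features of a liked image
wb = rng.standard_normal((d_in, d_code))
wm = rng.standard_normal((d_code, 2 * c)); bm = np.zeros(2 * c)
code = extract_style_code(feat, wb)
gamma, beta = style_to_norm_params(code, wm, bm)

# apply the style as instance-norm scale/shift inside the enhancer
x = rng.standard_normal((c, 5, 5))
mu = x.mean(axis=(1, 2), keepdims=True)
sd = x.std(axis=(1, 2), keepdims=True)
styled = gamma[:, None, None] * (x - mu) / (sd + 1e-5) + beta[:, None, None]
print(code.shape, styled.shape)  # → (8,) (4, 5, 5)
```

Because similar styles map to nearby codes, averaging the codes of a style cluster (as the paper does at test time) yields one set of normalization parameters per cluster.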

Origin blog.csdn.net/weixin_44326452/article/details/132647434