[Image denoising research] study notes on existing mainstream image denoising research results


Brief description

Key causes of image degradation: atmospheric interference, lighting interference, noise, camera shake, and time-lapse shooting. An image consists of different components: smooth regions, edges, and details. To obtain high-quality denoising results, different pixels of the same feature map should be treated differently: stronger constraints are placed on smooth regions, while as much useful information as possible is retained in edge and detail regions. The goal is therefore to preserve detail and edge information while removing noise. A good denoising network should apply discriminative learning to different image regions and denoise them accordingly.

Common noise types: Gaussian white noise, salt and pepper noise, Poisson noise

Main building blocks used by existing denoising networks: NLM (non-local block), ResB (residual block)

NLM captures the global information of the image and improves the expressive ability of the network;

ResB ensures the stability of network training;

Research status: the field comprises two main categories: traditional image denoising based on modeling and optimization, and deep learning denoising


List of various common denoising algorithms

Traditional - Image Denoising Based on Modeling and Optimization

Mean filtering, non-local means filtering (NLM), wavelet shrinkage estimation filtering, empirical Wiener filtering, block-matching 3D filtering (BM3D)

  • Spatial domain filtering: operates directly on the pixel values of the noisy image; examples include low-pass filtering, neighborhood averaging, and median filtering

  • Transform domain filtering: the image is mapped from the spatial domain into a transform domain, where the noise exhibits distinct characteristics that make it easier to separate; common transforms: Fourier transform, Walsh-Hadamard transform, cosine transform (DCT), Karhunen-Loève (KL) transform, wavelet transform

Deep Learning Denoising

CNN-based methods break through the bottleneck of optimization-based methods: region-aware image denoising (RAID), Gaussian denoising (DnCNN), Noise2Noise, Noise2Void, Noise2Self, the convolutional blind denoising network (CBDNet)

Variational methods: total variation (TV) denoises an image by optimizing an energy function. Later TV variants differ in the choice of diffusion parameters, which are key to the quality of the denoised image.
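The TV energy minimization described above can be sketched with plain NumPy gradient descent. This is only an illustration: the function name `tv_denoise` and all parameter values are choices made here, and a smoothed TV term (small `eps`) is used so the energy is differentiable everywhere.

```python
import numpy as np

def tv_denoise(noisy, lam=0.12, step=0.1, n_iters=100):
    """Minimise E(u) = 0.5*||u - noisy||^2 + lam * TV(u) by gradient descent.

    `lam` plays the role of the regularisation/diffusion parameter the text
    mentions; a small smoothing constant keeps TV differentiable at zero.
    """
    u = noisy.astype(np.float64).copy()
    eps = 1e-8
    for _ in range(n_iters):
        # forward differences: image gradients along x and y
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        # divergence of the normalised gradient field (the TV gradient)
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        # gradient step on the energy: data term pulls toward the noisy
        # input, the TV term smooths while preserving edges
        u -= step * ((u - noisy) - lam * div)
    return u
```

Larger `lam` smooths more aggressively; too large a value erases fine detail, which is exactly the trade-off the diffusion parameters control.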

Spatial Domain Filtering Algorithm

  • Median filtering: works well against impulse (salt-and-pepper) noise
  • Weighted median filtering: assign larger weights to similar pixels and smaller weights to pixels with large differences, then take the weighted average
  • Non-local means filtering (NLM): traverse the image blocks of the whole image, measure the similarity between each block and the target block with the Euclidean distance, and select similar blocks by similarity to assist denoising
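A minimal NumPy sketch of the median filter from the list above. The helper name `median_filter3x3`, the fixed 3x3 window, and the reflection padding are choices made here for illustration, not details from the source.

```python
import numpy as np

def median_filter3x3(img):
    """3x3 median filter; border pixels are handled by reflection padding."""
    padded = np.pad(img, 1, mode="reflect")
    h, w = img.shape
    # stack the nine shifted views of the padded image, then take the
    # per-pixel median across the stack
    stack = np.stack([padded[i:i + h, j:j + w]
                      for i in range(3) for j in range(3)])
    return np.median(stack, axis=0)
```

Because the median ignores extreme values inside the window, isolated impulse (salt-and-pepper) outliers are removed almost completely, which matches the first bullet above.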

Transform Domain Filtering Algorithm

  • Wavelet transform: denoise in the frequency domain by decomposing the signal, suppressing the noise, and then reconstructing; works best on the low-frequency part of the image
  • Wiener filter denoising
  • BM3D: combines the advantages of spatial-domain and transform-domain filtering; it first borrows the similar-block matching idea from NLM, then applies wavelet-transform denoising and Wiener-filter denoising, making it a multi-step denoising method
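To make the transform-domain idea concrete, here is a hedged sketch of one-level Haar wavelet soft-threshold denoising in NumPy: transform, shrink the high-frequency sub-bands, transform back. The function names, the even-sized-input assumption, and the threshold value are all assumptions of this sketch; real code would normally use a wavelet library such as PyWavelets.

```python
import numpy as np

def haar2d(x):
    """One-level 2D Haar transform (assumes even height and width)."""
    a = (x[0::2] + x[1::2]) / 2.0   # row averages
    d = (x[0::2] - x[1::2]) / 2.0   # row details
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0  # low-low: coarse approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def soft(c, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def wavelet_denoise(noisy, thresh=0.1):
    """Shrink only the detail sub-bands; the low-frequency band is kept."""
    ll, lh, hl, hh = haar2d(noisy)
    return ihaar2d(ll, soft(lh, thresh), soft(hl, thresh), soft(hh, thresh))
```

Keeping the LL band untouched while shrinking the detail bands is why, as noted above, wavelet denoising is gentler on the low-frequency content of the image.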

Deep learning denoising algorithm

  • CNN

  • CBDNet: proposes a total variation loss; while using prior knowledge, weight coefficients are added to the loss term, which helps impose smoothness constraints on the image.

  • Noise2Self: a mask covers the target pixel value, and through network learning the target pixel is represented by the weights of the surrounding pixels.

  • RIDNet: targets real noise removal. General denoising uses two stages (noise estimation, then noise removal); RIDNet uses a single model with only one noise-removal stage. RIDNet is also the first work to apply the channel attention mechanism to image denoising.
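The blind-spot/mask idea behind Noise2Void and Noise2Self can be illustrated without any training at all: estimate every pixel from its neighbours only, never from itself, so independent zero-mean noise cannot be copied through. This toy NumPy sketch replaces the learned network with a fixed 4-neighbour average; the function name and the padding mode are illustrative assumptions.

```python
import numpy as np

def masked_neighbor_estimate(img):
    """Estimate each pixel from its 4 neighbours only.

    The centre pixel is 'masked out', mimicking the blind-spot principle
    of Noise2Self/Noise2Void (a real method learns the combination weights).
    """
    p = np.pad(img, 1, mode="reflect")
    h, w = img.shape
    up    = p[0:h,     1:w + 1]
    down  = p[2:h + 2, 1:w + 1]
    left  = p[1:h + 1, 0:w]
    right = p[1:h + 1, 2:w + 2]
    return (up + down + left + right) / 4.0
```

Because each output never sees its own noisy value, the only way to predict it well is to exploit the image structure around it, which is exactly what the self-supervised losses in these papers reward.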


Evaluation index of image denoising

  • Subjective evaluation: observe with the naked eye the sharpness, color saturation, closeness to the real clean image, and smoothness of the denoised result. Evaluators are split into two groups: those with and those without professional image-processing knowledge.

  • Objective evaluation: image quality is estimated with full-reference methods so that accuracy can be measured. Evaluation indicators: mean square error (MSE), signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), structural similarity (SSIM)
    Mean square error (MSE): the mean squared difference between corresponding pixels of the denoised image and the clean image

    $$\mathrm{MSE} = \frac{1}{HW}\sum_{i=1}^{H}\sum_{j=1}^{W}\bigl(Y(i,j) - I_{GT}(i,j)\bigr)^2$$

    Signal-to-noise ratio (SNR): a common parameter for evaluating images; the larger the value, the higher the image quality

    $$\mathrm{SNR} = 10\log_{10}\frac{\sum_{i=1}^{H}\sum_{j=1}^{W} I_{GT}(i,j)^2}{\sum_{i=1}^{H}\sum_{j=1}^{W}\bigl(Y(i,j) - I_{GT}(i,j)\bigr)^2}$$

    Peak signal-to-noise ratio (PSNR): positively correlated with image quality

    $$\mathrm{PSNR} = 10\log_{10}\frac{255^2}{\mathrm{MSE}}$$

    Here Y(i,j) is a pixel of the restored image, I_GT(i,j) the corresponding pixel of the ground-truth image, and H and W are the image height and width. Both SNR and PSNR reflect image sharpness; the higher the value, the better the image quality.
    

    Structural similarity (SSIM): computed from the brightness, contrast, and structure of the samples Y and I_GT

    $$\mathrm{SSIM}(Y, I_{GT}) = \frac{(2\mu_Y\mu_{I_{GT}} + C_1)(2\sigma_{Y I_{GT}} + C_2)}{(\mu_Y^2 + \mu_{I_{GT}}^2 + C_1)(\sigma_Y^2 + \sigma_{I_{GT}}^2 + C_2)}$$
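The objective metrics above can be sketched in a few lines of NumPy. Note that `ssim_global` below computes SSIM over a single global window for brevity; standard implementations slide a Gaussian window over the image and average the local scores.

```python
import numpy as np

def mse(y, gt):
    """Mean squared error between corresponding pixels."""
    return np.mean((y.astype(np.float64) - gt.astype(np.float64)) ** 2)

def psnr(y, gt, max_val=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    m = mse(y, gt)
    return float("inf") if m == 0 else 10.0 * np.log10(max_val ** 2 / m)

def ssim_global(y, gt, max_val=255.0):
    """Single-window (global) SSIM from luminance, contrast and structure."""
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mu_y, mu_g = y.mean(), gt.mean()
    var_y, var_g = y.var(), gt.var()
    cov = ((y - mu_y) * (gt - mu_g)).mean()
    return ((2 * mu_y * mu_g + c1) * (2 * cov + c2)) / (
        (mu_y ** 2 + mu_g ** 2 + c1) * (var_y + var_g + c2))
```

For pixel range [0, 255], `max_val` is 255, matching the PSNR formula above; SSIM of an image with itself is exactly 1.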


Analysis of Datasets and Experimental Results

Synthetic color datasets: CBSD68, Kodak24, Urban100, Set14

Gray datasets: Set12, BSD68 (grayscale version of CBSD68)

Real Noise Image Dataset: RNI15

Three test sets are used: CBSD68, Kodak24, and Urban100, with the Berkeley Segmentation Dataset (BSD500) as the training set. Tests are run at three noise standard deviations: 30, 50, and 70.

Glossary

image denoising

convolutional neural network

spatial attention mechanism

  • Channel attention: the attention mechanism originates from the study of human vision. In cognitive science, because of a bottleneck in information processing, humans selectively focus on part of the available information while ignoring the rest. To make rational use of limited visual processing resources, humans select a specific region of the visual field and concentrate on it.
    The attention mechanism has no strict mathematical definition; traditional local feature extraction and sliding-window methods, for example, can be regarded as forms of attention. In neural networks, an attention mechanism is usually an additional subnetwork that either hard-selects certain parts of the input or assigns different weights to different parts, filtering the important information out of a large volume of data.
    There are many ways to introduce attention into a neural network. Taking CNNs as an example, attention can be added along the spatial dimension, along the channel dimension (SE), or along both at once (CBAM).
    The channel attention mechanism originated in SENet. It applies not only to image segmentation, recognition, and classification, but also to image restoration (deblurring, super-resolution reconstruction, denoising) and other fields. Through a series of pooling and fully connected operations it fuses the correlations between channels, which helps achieve image-processing goals without destroying image features. After channel attention appeared, spatial attention mechanisms followed, helping networks better understand correlations across both channels and spatial locations.
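A minimal NumPy sketch of an SE-style channel attention block as described above: squeeze by global average pooling, excitation through two fully connected layers with ReLU and sigmoid, then channel-wise rescaling. The function signature, weight shapes, and reduction ratio are illustrative assumptions, not SENet's exact configuration.

```python
import numpy as np

def se_block(x, w1, w2):
    """Squeeze-and-Excitation on a feature map x of shape (C, H, W).

    w1 has shape (C // r, C) and w2 has shape (C, C // r), where r is the
    reduction ratio of the two fully connected layers.
    """
    # squeeze: global average pooling -> one descriptor per channel, (C,)
    z = x.mean(axis=(1, 2))
    # excitation: FC -> ReLU -> FC -> sigmoid gives per-channel weights
    s = np.maximum(w1 @ z, 0.0)
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))
    # scale: reweight each channel of the original feature map
    return x * s[:, None, None]
```

The learned weights `w1`/`w2` let the network emphasize informative channels and suppress less useful ones, which is the channel-correlation fusion described in the paragraph above.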

Feature maps: https://blog.csdn.net/MengYa_Dream/article/details/123705503

Residual blocks: https://zhuanlan.zhihu.com/p/42706477

Wavelet denoising algorithm: frequency-domain denoising; decompose the signal, remove the noise, then reconstruct; works best on the low-frequency part of the image

Wiener filter: a very classic frequency-domain image-enhancement algorithm; it can be used not only for image denoising but also in the field of image deblurring.

Vanishing gradients: as the network deepens, the distribution of intermediate activations shifts or changes; DnCNN uses batch normalization (BN) to address this problem.

Ablation study: set up a control group that removes a given module from the designed system to demonstrate the module's necessity. If the system's performance drops significantly or its results degrade once the module is removed, the module plays an important role in the system.

Ablation studies are crucial to deep learning research. Understanding causality in a system is the most direct way to generate reliable knowledge (the goal of any research), and ablation is a very low-effort way to study causality: take any complex deep learning experimental setup and remove some modules (or replace some trained features with random ones), and often performance barely changes.
Remove the noise from the research process by conducting ablation studies. Can't fully understand your system? Lots of moving parts, and you wonder whether the reason it works really matches your hypothesis? Try deleting something. Spend at least ~10% of your experimental time honestly trying to refute your thesis.


Origin blog.csdn.net/weixin_47407066/article/details/128540108