This is a note on a traditional (non-deep-learning) low-light image enhancement paper published in IEEE TMM 2020, which proposes estimating the illumination layer with a Gaussian Total Variation (GTV) prior.
-
The GTV term is initially written with a Gaussian filter kernel and a Gaussian function that normalizes the image gradient:
This form is too difficult to optimize directly, so it is relaxed to the following form:
Denoting the leading fraction by ω, the final formula below is obtained. This is the GTV loss that Retinex-based papers have used over the past couple of years, where S is the original image:
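The note does not preserve the exact symbols of the objective, but a ω of this kind (a Gaussian-smoothed gradient magnitude in the denominator, as in LIME-style weighted TV) can be sketched as follows. The function name `gtv_weights` and the parameters `sigma` and `eps` are illustrative choices, not the paper's notation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gtv_weights(S, sigma=2.0, eps=1e-3):
    """Sketch of a GTV-style gradient weight (assumed LIME-like form):
    omega = 1 / (Gaussian-smoothed |grad S| + eps), one weight map per
    gradient direction."""
    gx = np.diff(S, axis=1, append=S[:, -1:])  # horizontal forward difference
    gy = np.diff(S, axis=0, append=S[-1:, :])  # vertical forward difference
    wx = 1.0 / (gaussian_filter(np.abs(gx), sigma) + eps)
    wy = 1.0 / (gaussian_filter(np.abs(gy), sigma) + eps)
    return wx, wy
```

Large weights fall on flat regions (small smoothed gradient), so the TV penalty smooths them aggressively, while strong edges of S keep small weights and survive in I.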
-
The optimal I can then be found iteratively:
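The paper's update equations are not preserved in this note; as a stand-in, here is a minimal gradient-descent sketch of a weighted-TV objective of the form ||I - S||^2 + λ Σ ω|∇I|, using a Charbonnier smoothing of |·| and periodic boundaries for brevity. The names `estimate_illumination`, `lam`, and `delta` are hypothetical, and this is not the paper's solver:

```python
import numpy as np

def estimate_illumination(S, wx, wy, lam=0.15, iters=50, step=0.1, delta=1e-3):
    """Sketch: minimize ||I - S||^2 + lam * sum(w * |grad I|) by gradient
    descent on a Charbonnier-smoothed absolute value (assumed scheme)."""
    I = S.copy()
    for _ in range(iters):
        gx = np.diff(I, axis=1, append=I[:, -1:])
        gy = np.diff(I, axis=0, append=I[-1:, :])
        # smoothed-L1 subgradients of the weighted TV term
        px = wx * gx / np.sqrt(gx**2 + delta)
        py = wy * gy / np.sqrt(gy**2 + delta)
        # divergence via backward differences (periodic boundary for brevity)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        grad = 2.0 * (I - S) - lam * div
        I -= step * grad
    return np.clip(I, 0.0, 1.0)
```

Papers in this family more often solve the reweighted linear system in closed form per iteration; plain descent is used here only to keep the sketch short.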
-
After solving for I, R is estimated next. The same optimization scheme can be used to obtain a smooth R, which achieves denoising:
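In the paper R is obtained with a similar smoothing optimization; the sketch below replaces that with a simple Gaussian filter just to show the idea: divide the original image by the estimated illumination, then smooth the result to suppress the noise that the division amplifies. The function and parameter names are made up:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_reflectance(S, I, eps=1e-3, sigma=1.0):
    """Sketch (not the paper's exact scheme): Retinex decomposition
    S = I * R, so R = S / I, followed by a light Gaussian smoothing
    standing in for the paper's optimization-based smoothing of R."""
    R = S / (I + eps)              # eps guards against division by zero
    R = gaussian_filter(R, sigma)  # suppress noise amplified in dark regions
    return np.clip(R, 0.0, 1.0)
```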
-
I won't go into the details of the optimization here. The experimental results look quite good, both in NIQE scores and visually.
The method can also remove haze:
-
The takeaway is to first estimate a smooth illumination I, and then estimate a smooth reflectance R from the smooth I and the original image, which achieves denoising to a certain extent.
Low-Light Image Enhancement With Semi-Decoupled Decomposition
Origin: blog.csdn.net/weixin_44326452/article/details/131754001