Image processing --- inverse filtering and Wiener filtering


Foreword

This post mainly introduces two methods of restoring degraded images: inverse filtering and Wiener filtering.


1. Inverse filtering

The image degradation model is:

$$g(x,y)=h(x,y)\odot f(x,y)+\eta(x,y)$$

$f(x,y)$: input image
$h(x,y)$: degradation function
$\eta(x,y)$: noise term
$g(x,y)$: degraded image

By the convolution theorem, in the frequency domain this becomes:

$$G(u,v)=H(u,v)\,F(u,v)+N(u,v)$$

Inverse filtering removes the degradation function by division:

$$\hat{F}(u,v)=\frac{G(u,v)}{H(u,v)}=F(u,v)+\frac{N(u,v)}{H(u,v)}$$

According to the formula above, inverse filtering has two key issues:

  1. The accuracy of the estimate of the degradation function $H(u,v)$ determines the result: the more accurate the estimate, the clearer the restored image.
  2. When $H(u,v)$ tends to 0, the second term $N(u,v)/H(u,v)$ becomes very large, so the result is increasingly dominated by the noise term $N(u,v)$, as the short sketch after this list illustrates.
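To make the second point concrete, here is a minimal NumPy sketch (my own toy 1-D example, not part of the original restoration code): a noisy spectrum is divided by a transfer function that decays toward zero, and the restored values blow up exactly where $H$ is small.

import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "spectrum": constant true signal F, a transfer function H that
# decays toward zero, and a small additive noise term N.
F = np.ones(8)
H = np.array([1.0, 0.8, 0.5, 0.2, 0.05, 0.01, 0.001, 0.0001])
N = 0.01 * rng.standard_normal(8)

G = H * F + N        # degraded spectrum
F_hat = G / H        # direct inverse filtering

print(np.round(F_hat, 2))   # the last entries are dominated by N/H and explode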

1.1 Estimating the degradation function $H(u,v)$

There are three main methods:
(1) observation; (2) experimentation; (3) mathematical modeling.
All of these methods work in the frequency domain.

1.1.1 Observation method

Choose a sub-image region with strong signal content (a high-contrast region), so that the influence of the noise term can be ignored, and estimate:

$$H_{s}(u,v)=\frac{G_{s}(u,v)}{\hat{F}_{s}(u,v)}$$

$G_{s}(u,v)$: the Fourier transform of the observed (degraded) sub-image.
$\hat{F}_{s}(u,v)$: the Fourier transform of an estimate of the original sub-image, obtained by processing the observation (sharpening, mean filtering, etc.), as the sketch below shows.
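A minimal sketch of this idea (my own illustration; the file name 'A.png', the sub-image location and size, and the use of a median filter to approximate the original sub-image are all assumptions):

import cv2
import numpy as np

img = cv2.imread('A.png', 0).astype(np.float64)    # degraded image (assumed file)

# Pick a small high-contrast sub-image g_s and build a rough estimate of the
# original sub-image by simple processing (here: median filtering).
gs = img[100:164, 100:164]                          # assumed location and size
fs_hat = cv2.medianBlur(gs.astype(np.uint8), 3).astype(np.float64)

Gs = np.fft.fftshift(np.fft.fft2(gs))
Fs_hat = np.fft.fftshift(np.fft.fft2(fs_hat))

Hs = Gs / (Fs_hat + 1e-8)    # small constant avoids division by zero

The behaviour of Hs over this small region can then be used to posit a degradation function for the whole image (assuming the degradation is position invariant).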

1.1.2 Test method

In short, an experiment is set up to reproduce the real degradation process, and the degradation function is then obtained from:

$$H(u,v)=\frac{G(u,v)}{A}$$

Assume a device is available that produces degradation very similar to that of the image to be restored. With this device, image an impulse (a small point of light, bright enough to suppress the influence of noise). The Fourier transform of the degraded impulse is the $G(u,v)$ above, while the Fourier transform of the impulse itself is a constant $A$ (the impulse strength), as the sketch below shows.
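A minimal numerical sketch of this procedure (my own illustration: a Gaussian transfer function stands in for the physical device, which is an assumption made only so the demo is self-contained):

import numpy as np

M = N = 256

# An impulse at the origin: its 2-D DFT is the constant A everywhere.
A = 255.0
impulse = np.zeros((M, N))
impulse[0, 0] = A

# Stand-in for the imaging device: a Gaussian transfer function, built in
# centred coordinates and then unshifted to match np.fft's layout.
u, v = np.meshgrid(np.arange(M), np.arange(N), indexing='ij')
D2 = (u - M//2)**2 + (v - N//2)**2
H_true = np.fft.ifftshift(np.exp(-D2 / (2 * 40.0**2)))

G = H_true * np.fft.fft2(impulse)   # spectrum of the degraded impulse
H_est = G / A                       # recovers the transfer function

print(np.allclose(H_est, H_true))   # True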

1.1.3 Modeling method ★

(1) Atmospheric turbulence model

$$H(u,v)=e^{-k(u^{2}+v^{2})^{5/6}}$$

$k$ is the turbulence constant, which reflects the severity of the turbulence:

$$k=\begin{cases}0.00025 & \text{mild turbulence}\\ 0.001 & \text{medium turbulence}\\ 0.0025 & \text{severe turbulence}\end{cases}$$
code:

import cv2
import numpy as np
from matplotlib import pyplot as plt

# Read the image
img = cv2.imread('A.png', 0)

# Fourier transform
f = np.fft.fft2(img)
fshift = np.fft.fftshift(f)

#------------- Atmospheric turbulence model -----------------
k = 0.001  # turbulence severity
degeneration_img = np.zeros(fshift.shape, dtype=complex)   # the dtype must be specified
for u in range(fshift.shape[0]):
    for v in range(fshift.shape[1]):
        H = np.exp(-k*np.power(np.float64((u-fshift.shape[0]//2)**2+(v-fshift.shape[1]//2)**2), 5/6))
        degeneration_img[u][v] = fshift[u][v] * H
#-------------------------------------------------------------

# Bring the frequency-domain result back to the spatial domain for display
img1 = np.fft.ifftshift(degeneration_img)
img1 = np.fft.ifft2(img1)
img1 = np.abs(img1)
img1 = cv2.normalize(img1, None, 0, 255, cv2.NORM_MINMAX)

# Show the original image and the degraded image
plt.figure(figsize=(20, 20))
plt.subplot(121), plt.imshow(img, cmap='gray')
plt.title('Input Image'), plt.xticks([]), plt.yticks([])
plt.subplot(122), plt.imshow(img1, cmap='gray')
plt.title('degeneration img'), plt.xticks([]), plt.yticks([])
plt.show()

[Figure: original image and the turbulence-degraded image]
Note: when initializing `degeneration_img`, be sure to specify `dtype=complex`. While writing this I forgot to specify the dtype and got a wrong degraded image (you can try it yourself). Usually leaving the dtype unspecified is not a big problem, but here the data are complex: if the array is not a complex type, the imaginary part is silently discarded during the assignment `degeneration_img[u][v] = fshift[u][v] * H`. See the figures below:

[Figures: the degraded result with and without dtype=complex]

I had not run into this pitfall of dynamic typing before, but having fallen into it once, I will make sure to specify types (and add type annotations) when writing Python from now on!
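A two-line demonstration of the pitfall (my own minimal example, not from the original code): assigning a complex value into a default float64 array keeps only the real part, while a complex array keeps both parts.

import numpy as np

a = np.zeros(2)                  # float64 by default
a[0] = 3 + 4j                    # NumPy warns (ComplexWarning) and keeps only the real part
print(a[0])                      # 3.0

b = np.zeros(2, dtype=complex)   # complex array keeps both parts
b[0] = 3 + 4j
print(b[0])                      # (3+4j)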

Alternatively, the transfer function can be generated with matrix (vectorized) operations. (The code below is adapted from https://blog.csdn.net/youcans/article/details/123027287.)
def turbulenceBlur(img, k=0.001):
    # Turbulence blur transfer function: H(u,v) = exp(-k(u^2+v^2)^(5/6))
    M, N = img.shape[1], img.shape[0]
    u, v = np.meshgrid(np.arange(M), np.arange(N))
    radius = (u - M//2)**2 + (v - N//2)**2
    kernel = np.exp(-k * np.power(radius, 5/6))
    return kernel
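For example (my own usage sketch, reusing the img and fshift variables from the loop version above), the kernel can be applied to the shifted spectrum with a single elementwise multiplication:

# Vectorized equivalent of the double loop above
H = turbulenceBlur(img, k=0.001)
degeneration_img = fshift * H

img1 = np.abs(np.fft.ifft2(np.fft.ifftshift(degeneration_img)))
img1 = cv2.normalize(img1, None, 0, 255, cv2.NORM_MINMAX)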

(2) Motion blur model
Suppose that during the exposure time $T$ the image undergoes uniform linear motion, with displacement $x_{0}(t)=at/T$ in the $x$ direction and $y_{0}(t)=bt/T$ in the $y$ direction. Then:

$$H(u,v)=\frac{T}{\pi(ua+vb)}\sin[\pi(ua+vb)]\,e^{-j\pi(ua+vb)}$$

code:

# Purpose: simulate the effect of motion blur
import cv2
import numpy as np
from matplotlib import pyplot as plt

# Read the image
img = cv2.imread('0526.tif', 0)

# Fourier transform
f = np.fft.fft2(img)
fshift = np.fft.fftshift(f)

#------------- Motion blur model -----------------
a = b = 0.1
T = 1
degeneration_img = np.zeros(fshift.shape, dtype=np.complex64)   # the dtype must be specified
for u in range(fshift.shape[0]):
    for v in range(fshift.shape[1]):
        uv = ((u-fshift.shape[0]//2)*a + (v-fshift.shape[1]//2)*b) * np.pi
        if uv == 0:
            degeneration_img[u][v] = fshift[u][v]
        else:
            H = T * np.sin(uv) * np.exp(np.complex64(-1j)*uv) / uv
            degeneration_img[u][v] = fshift[u][v] * H
#--------------------------------------------------

# Bring the frequency-domain result back to the spatial domain for display
img1 = np.fft.ifftshift(degeneration_img)
img1 = np.fft.ifft2(img1)
img1 = np.abs(img1)
img1 = cv2.normalize(img1, None, 0, 1, cv2.NORM_MINMAX)

# Show the original image and the degraded image
plt.figure(figsize=(20, 20))
plt.subplot(121), plt.imshow(img, cmap='gray')
plt.title('Input Image'), plt.xticks([]), plt.yticks([])
plt.subplot(122), plt.imshow(img1, cmap='gray')
plt.title('degeneration img'), plt.xticks([]), plt.yticks([])
plt.show()

[Figure: original image and the motion-blurred image]

(3) Gaussian blur model

$$H(u,v)=e^{-D^{2}(u,v)/2D_{0}^{2}},\qquad D(u,v)=\left[(u-P/2)^{2}+(v-Q/2)^{2}\right]^{1/2}$$
code:

import cv2
import numpy as np

# Read the input image
img = cv2.imread('A.png', 0)

# Fourier transform
f = np.fft.fft2(img)
fshift = np.fft.fftshift(f)

# Build the Gaussian filter
rows, cols = img.shape
crow, ccol = rows//2, cols//2
d = 60  # filter cutoff (D0)
gauss = np.zeros((rows, cols))
for i in range(rows):
    for j in range(cols):
        gauss[i, j] = np.exp(-((i-crow)**2+(j-ccol)**2)/(2*d**2))

# Apply the Gaussian filter to the frequency-domain image
filtered = np.multiply(fshift, gauss)

# Inverse Fourier transform
f_ishift = np.fft.ifftshift(filtered)
img_back = np.fft.ifft2(f_ishift)
img_back = np.abs(img_back)
img_back = cv2.normalize(img_back, None, 0, 1, cv2.NORM_MINMAX)

# Show the result
cv2.imshow('Input', img)
cv2.imshow('Output', img_back)
cv2.waitKey(0)
cv2.destroyAllWindows()

1.2 Direct inverse filtering

Disregarding the noise term:

$$\hat{F}(u,v)=\frac{G(u,v)}{H(u,v)}$$
noise1.png: an image degraded by Gaussian blur and Gaussian noise.
Without any prior knowledge, the degradation function is estimated with the atmospheric turbulence model, and inverse filtering is then applied.
The code is shown below:

import cv2
import numpy as np
from matplotlib import pyplot as plt

# Read the image
img = cv2.imread('noise1.png', 0)

# Fourier transform
f = np.fft.fft2(img)
fshift = np.fft.fftshift(f)

#------- Estimate the degradation transfer function with the atmospheric turbulence model -------
k = 0.001  # turbulence severity
degeneration = np.zeros(fshift.shape, dtype=complex)
for u in range(fshift.shape[0]):
    for v in range(fshift.shape[1]):
        # centre the frequency coordinates on the middle of the spectrum
        H = np.exp(-k*np.power(np.float64((u-fshift.shape[0]//2)**2+(v-fshift.shape[1]//2)**2), 5/6))
        degeneration[u][v] = H
#-------------------------------------------------------------------------------------------------

# Inverse filtering
img1 = np.divide(fshift, degeneration)

# Bring the frequency-domain result back to the spatial domain for display
img1 = np.fft.ifftshift(img1)
img1 = np.fft.ifft2(img1)
img1 = np.abs(img1)
img1 = cv2.normalize(img1, None, 0, 255, cv2.NORM_MINMAX)

# Show the original image and the restored image
plt.figure(figsize=(20, 20))
plt.subplot(121), plt.imshow(img, cmap='gray')
plt.title('Input Image'), plt.xticks([]), plt.yticks([])
plt.subplot(122), plt.imshow(img1, cmap='gray')
plt.title('rec img'), plt.xticks([]), plt.yticks([])
plt.show()

Filtering effect:
[Figure: result of direct inverse filtering]
However, if the statement `degeneration[u][v] = H` above is simply changed to `degeneration[u][v] = H + 0.001`, using 0.001 as a rough estimate of the noise, the restoration improves noticeably:
[Figure: result of inverse filtering with the denominator H + 0.001]

1.3 Radially limited inverse filtering

Taking noise into account:

$$\hat{F}(u,v)=F(u,v)+\frac{N(u,v)}{H(u,v)}$$

The idea is to first suppress the high-frequency noise with a low-pass filter and then divide by $H(u,v)$.

code:

import cv2
import numpy as np
from matplotlib import pyplot as plt

# Read the image
img = cv2.imread('noise3.png', 0)

# Fourier transform
f = np.fft.fft2(img)
fshift = np.fft.fftshift(f)

#------------- Atmospheric turbulence model -----------------
k = 0.001  # turbulence severity
degeneration = np.zeros(fshift.shape, dtype=complex)   # the dtype must be specified
for u in range(fshift.shape[0]):
    for v in range(fshift.shape[1]):
        # centre the frequency coordinates on the middle of the spectrum
        H = np.exp(-k*np.power(np.float64((u-fshift.shape[0]//2)**2+(v-fshift.shape[1]//2)**2), 5/6))
        degeneration[u][v] = H
#-------------------------------------------------------------

# Build a low-pass filter --- ideal low-pass
# rows, cols = fshift.shape
# radius = 1
# mask = np.zeros((rows, cols), dtype=np.float64)
# for i in range(rows):
#     for j in range(cols):
#         if np.sqrt((i-rows//2)**2 + (j-cols//2)**2) <= radius:
#             mask[i, j] = 0.1
# fshift = np.multiply(fshift, mask)

# # Build a Gaussian low-pass filter
# rows, cols = img.shape
# crow, ccol = rows//2, cols//2
# d = 50  # filter cutoff
# gauss = np.zeros((rows, cols))
# for i in range(rows):
#     for j in range(cols):
#         gauss[i, j] = np.exp(-((i-crow)**2+(j-ccol)**2)/(2*d**2))

# Build a Butterworth low-pass filter
rows, cols = img.shape
crow, ccol = rows//2, cols//2
d = 70  # filter cutoff
butter = np.zeros((rows, cols))
for i in range(rows):
    for j in range(cols):
        butter[i, j] = 1/(1 + np.power((np.power((i-crow)**2+(j-ccol)**2, 0.5)/d), 20))

fshift = np.multiply(fshift, butter)
# Inverse filtering: divide by the estimated degradation function
img1 = np.divide(fshift, degeneration)

# Bring the frequency-domain result back to the spatial domain for display
img1 = np.fft.ifftshift(img1)
img1 = np.fft.ifft2(img1)
img1 = np.abs(img1)
img1 = cv2.normalize(img1, None, 0, 1, cv2.NORM_MINMAX)

# Show the original image and the restored image
plt.figure(figsize=(20, 20))
plt.subplot(121), plt.imshow(img, cmap='gray')
plt.title('Input Image'), plt.xticks([]), plt.yticks([])
plt.subplot(122), plt.imshow(img1, cmap='gray')
plt.title('rec img'), plt.xticks([]), plt.yticks([])
plt.show()

cv2.imshow('org', img)
cv2.imshow('rev', img1)
cv2.waitKey(0)

Filter effect:

[Figure: result of radially limited inverse filtering]

2. Minimum mean square error (Wiener) filtering

The Wiener filter estimates the Fourier transform of the original (undegraded) image as:

$$\hat{F}(u,v)=\left[\frac{1}{H(u,v)}\cdot\frac{|H(u,v)|^{2}}{|H(u,v)|^{2}+K}\right]G(u,v)$$

import cv2
import numpy as np
from matplotlib import pyplot as plt

# Read the image
img = cv2.imread('noise2.png', 0)

# Fourier transform
f = np.fft.fft2(img)
fshift = np.fft.fftshift(f)

#------------- Atmospheric turbulence model -----------------
k = 0.001  # turbulence severity
degeneration = np.zeros(fshift.shape, dtype=complex)   # the dtype must be specified
for u in range(fshift.shape[0]):
    for v in range(fshift.shape[1]):
        # centre the frequency coordinates on the middle of the spectrum
        H = np.exp(-k*np.power(np.float64((u-fshift.shape[0]//2)**2+(v-fshift.shape[1]//2)**2), 5/6))
        degeneration[u][v] = H
#-------------------------------------------------------------

# Wiener filtering
K = 0.001
degeneration += 0.1   # add the estimated noise (this also keeps H away from zero)
img1 = (np.conj(degeneration)/(np.conj(degeneration)*degeneration + K)) * fshift

# Bring the frequency-domain result back to the spatial domain for display
img1 = np.fft.ifftshift(img1)
img1 = np.fft.ifft2(img1)
img1 = np.abs(img1)
img1 = cv2.normalize(img1, None, 0, 1, cv2.NORM_MINMAX)

# Show the original image and the restored image
plt.figure(figsize=(20, 20))
plt.subplot(121), plt.imshow(img, cmap='gray')
plt.title('Input Image'), plt.xticks([]), plt.yticks([])
plt.subplot(122), plt.imshow(img1, cmap='gray')
plt.title('rec img'), plt.xticks([]), plt.yticks([])
plt.show()

Filter effect:
[Figure: result of Wiener filtering]

Summary

This post mainly demonstrated inverse filtering and Wiener filtering. The restoration effect is not very pronounced, probably because the estimated degradation function does not match the true degradation function.

