Image enhancement practice: spatial filtering, frequency-domain filtering, and degradation restoration filtering (with full code)

Airshow image enhancement

1. Design background

Affected by factors such as weather conditions, air quality, imaging distance, imaging equipment performance, and relative motion, the images of the aerial flight display at the 14th China International Aviation and Aerospace Exhibition (November 2022) are degraded and not "clear", as shown in Figure 1. In digital image processing, spatial-domain enhancement, frequency-domain enhancement, and image restoration are commonly used to improve image quality and "clarity".

Figure 1 Unclear image

2. Design goals

Observe and analyze the characteristics of Airshow images and corresponding videos, make full use of the knowledge learned in this course, design relevant algorithms, improve image quality, and conduct subjective quality evaluation.

3. Design content

3.1 Spatial filtering - histogram equalization

3.1.1 Principle

Histogram equalization is a simple and effective image enhancement technique. It changes the gray level of each pixel by transforming the histogram of the image, and is mainly used to enhance the contrast of images with a small dynamic range. The gray levels of the original image may be concentrated in a narrow range, making the image look unclear: for example, an overexposed image has its gray levels concentrated in the high-brightness range, while underexposure concentrates them in the low-brightness range. Histogram equalization transforms the histogram of the original image into an (approximately) uniform distribution, which increases the dynamic range of the gray-value differences between pixels and thus enhances the overall contrast of the image. In other words, the basic idea of histogram equalization is to stretch the gray levels that contain many pixels (the levels that dominate the picture) and to merge the gray levels that contain few pixels (the levels that play a minor role), thereby increasing contrast, making the image clearer, and achieving the enhancement goal.

The concept of the histogram: for a grayscale image, the histogram is a statistic of how often each gray level occurs in the image. Figure 2 shows an example, where picture (a) is an image and picture (b) is its gray-level histogram; the horizontal axis represents the gray levels of the image and the vertical axis represents the number of pixels at each gray level. (Note that the gray-level histogram describes the distribution of individual gray levels, whereas the contrast of the image depends on the relationship between the gray levels of neighboring pixels.)

Figure 2 Histogram of the image

Theoretical basis of histogram equalization: for convenience, let r and s denote the normalized gray level of the original image and of the equalized image, respectively (because of the normalization, both r and s lie between 0 and 1). When r = s = 0 the pixel is black; when r = s = 1 it is white; when r, s ∈ (0, 1) the pixel gray level varies between black and white. (Histogram equalization transforms each pixel's gray value according to the histogram, so it is a point operation: given r, find the corresponding s.) For any r in the interval [0, 1], the transformation function T(r) produces a corresponding s, i.e. s = T(r).

In the formula, T(r) should meet the following two conditions:

  1. Within 0 ≤ r ≤ 1, T(r) is a monotonically increasing function (this condition ensures that the order of the gray levels, from black to white, is preserved after equalization);

  2. 0 ≤ T(r) ≤ 1 within 0 ≤ r ≤ 1 (this condition ensures that the gray values of the equalized image stay within the allowed range).

According to probability theory, if the probability density $p_r(r)$ of the random variable $r$ is known, and the random variable $s$ is a function of $r$, then the probability density $p_s(s)$ of $s$ can be calculated from $p_r(r)$. Let $F_s(s)$ denote the distribution function of the random variable $s$; by the definition of the distribution function,

$$F_s(s) = \int_{-\infty}^{s} p_s(t)\,dt = \int_{-\infty}^{r} p_r(w)\,dw \tag{1}$$

Because the probability density function is the derivative of the distribution function, differentiating both sides of formula 1 with respect to $s$ gives:

$$p_s(s) = \frac{dF_s(s)}{ds} = p_r(r)\,\frac{dr}{ds} = \frac{p_r(r)}{dT(r)/dr} \tag{2}$$

It can be seen from formula 2 that the probability density function of the output gray levels can be controlled through the transformation function T(r), thereby reshaping the gray-level distribution of the image; this is the theoretical basis of histogram equalization.

Moreover, considering the visual characteristics of the human eye, an image whose gray-level histogram is uniformly distributed tends to look better (see Section 3.3 of Gonzalez, Digital Image Processing). Therefore, for histogram equalization the target density $p_s(s)$ should be the probability density function of a uniform distribution.

From probability theory, the probability density function of a uniform distribution on the interval $[a, b]$ equals $\frac{1}{b-a}$. If the gray levels were not normalized, i.e. $s \in [0, L-1]$, then $p_s(s) = \frac{1}{L-1}$; after normalization $s \in [0, 1]$, so here $p_s(s) = 1$.

From formula 2, $p_s(s)\,ds = p_r(r)\,dr$, and because $p_s(s) = 1$, we have $ds = p_r(r)\,dr$. Integrating both sides of this equation gives:

$$s = T(r) = \int_0^r p_r(w)\,dw \tag{3}$$

Equation 3 is the transformation function T(r) we seek. It shows that when the transformation function T(r) is the cumulative distribution function of the original image's histogram, histogram equalization is achieved. (Indeed, with this choice $ds/dr = p_r(r)$, so by formula 2 $p_s(s) = p_r(r)/p_r(r) = 1$, i.e. a uniform distribution.)

For digital images with discrete gray levels, the frequency is used instead of probability, and the discrete form of the transformation function can be expressed as:

$$s_k = T(r_k) = \sum_{j=0}^{k} \frac{n_j}{N} = \sum_{j=0}^{k} p_r(r_j), \qquad k = 0, 1, \ldots, L-1 \tag{4}$$

In the formula, $n_j$ is the number of pixels with gray level $r_j$, $N$ is the total number of pixels, $r_k$ is the normalized gray level, and $k$ is the gray level before normalization. Formula 4 shows that the equalized gray level of each pixel can be computed directly from the histogram of the original image. Note that $s_k$ is also a normalized gray level, with a value between 0 and 1; it sometimes needs to be multiplied by L−1 and then rounded, so that the gray levels range from 0 to L−1 and are consistent with the original image.
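As a small worked example with hypothetical counts, consider a 64-pixel image with $L = 8$ gray levels and histogram $n = (8, 16, 16, 8, 8, 4, 2, 2)$. Formula 4 gives the cumulative sums

$$s_0 = \tfrac{8}{64} = 0.125,\quad s_1 = \tfrac{24}{64} = 0.375,\quad s_2 = \tfrac{40}{64} = 0.625,\quad s_3 = \tfrac{48}{64} = 0.75,$$
$$s_4 = \tfrac{56}{64} = 0.875,\quad s_5 = \tfrac{60}{64} = 0.9375,\quad s_6 = \tfrac{62}{64} \approx 0.969,\quad s_7 = 1,$$

and multiplying by $L-1 = 7$ and rounding maps the original levels $0, 1, \ldots, 7$ to $1, 3, 4, 5, 6, 7, 7, 7$ respectively.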

3.1.2 Implementation

Since we are dealing with color pictures, we perform histogram equalization on the R, G, and B color components separately and then recombine the processed components to obtain the enhanced image.

The equalization of each component is implemented in the following steps (a compact sketch follows the list):

The first step is to calculate the gray histogram of the original image .

The second step is to calculate the total number of pixels in the original image.

The third step is to calculate the gray distribution frequency of the original image.

The fourth step is to calculate the gray cumulative distribution frequency of the original image.

In the fifth step, the normalized cumulative value is multiplied by L−1 and then rounded, so that the gray levels of the equalized image match those of the original image before normalization.

In the sixth step, according to the above mapping relationship, and referring to the pixels of the original image, the histogram-equalized image is written out.
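A compact sketch of these steps for a single uint8 channel (illustrative variable names only; the full per-channel function used in this design is in Section 6.1):

% Sketch of the six equalization steps for one uint8 channel "chan"
counts = imhist(chan);                  % step 1: 256-bin gray-level histogram
total  = numel(chan);                   % step 2: total number of pixels
p      = counts / total;                % step 3: gray-level frequencies
cdf    = cumsum(p);                     % step 4: cumulative distribution
lut    = uint8(round(cdf * 255));       % step 5: scale to 0..L-1 and round
eqImg  = lut(double(chan) + 1);         % step 6: map each original pixel to its equalized level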

It is worth noting that this per-channel equalization can sometimes distort the colors of the resulting image. Therefore, we also tried converting from RGB to HSV and equalizing only the V (brightness) component, so that the colors are not distorted. (H, S, and V refer to hue, saturation, and brightness, respectively; the code in Section 6.2 actually uses the closely related HSI model and equalizes the I component.)

The conversion formulas from RGB, as implemented by the rgb2hsi function in Section 6.2, are:
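(Reconstructed from the rgb2hsi code in Section 6.2; R, G, B are normalized to [0, 1].)

$$\theta = \arccos\!\left(\frac{\tfrac{1}{2}\big[(R-G)+(R-B)\big]}{\sqrt{(R-G)^2 + (R-B)(G-B)}}\right)$$

$$H = \begin{cases}\theta, & B \le G\\ 2\pi - \theta, & B > G\end{cases} \qquad S = 1 - \frac{3\,\min(R, G, B)}{R+G+B} \qquad I = \frac{R+G+B}{3}$$

with $H$ subsequently normalized by dividing by $2\pi$.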

3.2 Frequency domain filtering - Butterworth high-pass enhanced filtering

3.2.1 Principle

By observing the pictures that need to be enhanced, we found that the main cause of the fog or blur is the low-frequency component of the picture, so we consider filtering the picture in the frequency domain with a high-pass filter. A Butterworth high-pass filter was chosen for the experiments.

Before filtering, the image needs to be converted from the spatial domain to the frequency domain, that is, the image is converted using a two-dimensional fast Fourier transform (FFT).

The essence of the two-dimensional FFT is to decompose an image into a sum of complex plane waves.
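For reference, the two-dimensional discrete Fourier transform (which the FFT computes) of an $M \times N$ image $f(x, y)$ is

$$F(u, v) = \sum_{x=0}^{M-1}\sum_{y=0}^{N-1} f(x, y)\, e^{-j 2\pi \left(ux/M + vy/N\right)},$$

and each term corresponds to one such complex plane wave.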

Butterworth high-pass filter: for order $n$ and cutoff frequency $D_0$, the transfer function is

$$H(u, v) = \frac{1}{1 + \left[ D_0 / D(u, v) \right]^{2n}}$$

Here $D(u, v)$ is the distance from the point $(u, v)$ to the center of the frequency plane, and $D_0$ is the cutoff frequency. When $D(u, v)$ increases, $H(u, v)$ approaches 1, so the high-frequency components pass; when $D(u, v)$ decreases, $H(u, v)$ approaches 0, so the low-frequency components are filtered out.

Butterworth high-pass enhanced filter: through experiments we found that adding a constant to the Butterworth high-pass transfer function gives a much better result than direct Butterworth high-pass filtering. Direct high-pass filtering also removes useful low-frequency information and leaves the image unclear, whereas the enhanced filter retains the useful information while suppressing the low-frequency haze, achieving image enhancement. Its expression is

$$H_e(u, v) = \frac{1}{1 + \left[ D_0 / D(u, v) \right]^{2n}} + C$$

For the constant C we tried different values from 0.5 to 1, and finally chose 0.9 as the parameter with the best enhancement effect.
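For reference, a minimal per-channel sketch of this enhanced filter ('example.png' is a placeholder file name; the full script used in this design is in Section 6.3):

% Minimal sketch: enhanced Butterworth high-pass filtering of one channel
chan = imread('example.png');
if size(chan,3) == 3, chan = chan(:,:,1); end   % take a single channel
D0 = 5; n = 2; C = 0.9;                         % cutoff frequency, order, enhancement constant
F  = fftshift(fft2(double(chan)));              % centered 2-D spectrum
[rows, cols] = size(chan);
[u, v] = meshgrid(1:cols, 1:rows);              % frequency-plane coordinates
D  = sqrt((v - floor(rows/2)).^2 + (u - floor(cols/2)).^2);   % distance to the center
D(D == 0) = eps;                                % avoid division by zero at the center
H  = 1 ./ (1 + (D0 ./ D).^(2*n)) + C;           % enhanced Butterworth high-pass transfer function
out = uint8(real(ifft2(ifftshift(H .* F))));    % back to the spatial domain
imshow(out);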

3.2.2 Implementation

We traversed values of C from 0 to 1 (when C = 0 the filter reduces to the plain Butterworth high-pass filter) and observed the enhancement effect under different values of C to find the best parameter. The pre-test results are as follows:

(Figures: the original image and the enhancement results for C = 0, 0.5, 0.7, 0.9, and 1.)

It can be seen that as C increases, the brightness of the image increases, i.e. more of the low-frequency components are retained; subjectively, the enhancement effect is best at C = 0.9, so C = 0.9 is selected.

In addition, the cutoff frequency of the Butterworth filter defaults to 5 in our experiments. We also tried cutoff frequencies of 50 and 250, as shown below:

(Figures: the original image and the filtering results for cutoff frequency d = 5, 50, and 250.)

Clearly, a cutoff frequency of 5 gives the best enhancement, so the cutoff frequency d is set to 5 and the enhancement constant C to 0.9.

3.3 Degradation restoration filter - unsharp restoration filter

3.3.1 Principle

We model the process by which a clear image becomes blurred as an "un-sharpening" process, i.e. a reduction of the image's contrast that makes it look "smooth" and weakens contours and other details. We now restore the image by applying the inverse of this process, that is, an unsharp restoration filter.

The unsharp restoration filter is essentially an unsharp-masking convolution kernel: convolving the image with this kernel (i.e. passing the image through the filter) enhances the image.

3.3.2 Implementation

For the unsharp restoration filter we use MATLAB's built-in function fspecial('unsharp') to generate the kernel. It takes a parameter ALPHA, fspecial('unsharp', ALPHA), with ALPHA in [0, 1]; it controls the shape of the filter and its default value is 0.2, which we adopt. The generated matrix is shown in the figure.
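For reference, a minimal usage sketch ('example.png' is a placeholder file name; the full script is in Section 6.4):

% Minimal usage sketch of the unsharp restoration filter described above
I = imread('example.png');
H = fspecial('unsharp', 0.2);   % 3x3 unsharp (contrast-enhancement) kernel, ALPHA = 0.2
disp(H);                        % inspect the generated offset matrix
out = imfilter(I, H);           % convolve each color channel with the kernel
imshow(out);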

4. Result testing and analysis

4.1 Spatial filtering - histogram equalization

The histogram comparison is shown in the figure.

It can be seen that our scheme achieves histogram equalization well, and the image enhancement effect is also good.

If RGB is converted to HSV before histogram equalization, the comparison is as shown in the figure:

RGB histogram equalization

HSV histogram equalization

Clearly, direct RGB histogram equalization gives the better result.

4.2 Comparison of the three methods with their optimal parameters

The image enhancement results of the three methods with their optimal parameters are shown in the figure:

Original image

Spatial filtering - histogram equalization

Frequency-domain filtering - Butterworth enhanced filtering

Degradation restoration filter - unsharp restoration filter

For this test sample, spatial filtering works best, followed by frequency-domain filtering; the unsharp restoration filter only strengthens the contours and does not remove the haze well, but it still has some effect.

5. Design summary and reflection

For different images, different enhancement methods may sometimes be needed to achieve the best result; no single method works for every image.

6. Appendix (code)

6.1 Spatial filtering (1): histogram equalization

% Main function
function coloraverage()
I=imread('g.png');
imshow(I);
I1=I(:,:,1);        % extract the red component
I2=I(:,:,2);        % extract the green component
I3=I(:,:,3);        % extract the blue component
I1=histogram(I1);   % local equalization function defined below
I2=histogram(I2);
I3=histogram(I3);
c=cat(3,I1,I2,I3);  % cat recombines the three channels into an RGB image
subplot(2,2,1);imshow(I);title('Input image');
subplot(2,2,2);imhist(rgb2gray(I));title('Histogram of the input image');   % imhist requires a 2-D image, so show the luminance histogram
subplot(2,2,3);imshow(c);title('Processed image');
subplot(2,2,4);imhist(rgb2gray(c));title('Histogram of the processed image');
imwrite(c,'g1.png');


% Local function called above (histogram equalization)
function d=histogram(I)     % note: within this file the local function takes precedence over MATLAB's built-in histogram
J=I;
[m,n]=size(I);      % matrix size
area=m*n;
a=zeros(1,256);     % 1x256 array holding the pixel count of each gray level in the original image
b=zeros(1,256);
for i=1:m           % count the occurrences of each gray level
    for j=1:n
        d=double(I(i,j))+1;   % gray value at (i,j); cast to double so 255+1 does not saturate (levels 0-255 map to indices 1-256)
        a(1,d)=a(1,d)+1;      % increment the count of this gray level
    end
end
for i=1:256         % equalization: cumulative distribution scaled to 0-255
    s=0;
    for j=1:i
        s=s+a(1,j);
    end
    b(1,i)=s*255/area;
end
for i=1:m           % replace each pixel with its equalized gray value
    for j=1:n
        d=double(J(i,j))+1;
        J(i,j)=b(1,d);
    end
end
d=J;

6.2 Spatial filtering (2): equalization after RGB-to-HSV conversion (implemented with the HSI model)

I=imread('tst1.png');
R=rgb2hsi(I);
H1=R(:,:,1);
S1=R(:,:,2);
X1=R(:,:,3);
g1=histeq(H1);
g2=histeq(S1);
g3=histeq(X1);
I1=cat(3,H1,S1,g3);     % keep H and S, use only the equalized intensity component
f1=hsi2rgb(I1);
subplot(2,2,1);imshow(I);title('Input image');
subplot(2,2,2);imhist(rgb2gray(I));title('Histogram of the input image');   % imhist requires a 2-D image
subplot(2,2,3);imshow(f1);title('HSI-equalized image');
subplot(2,2,4);imhist(rgb2gray(f1));title('Histogram of the HSI-equalized image');
imwrite(f1,'05HSI均衡化图像.jpg');

function hsi = rgb2hsi(rgb)
 
% extract the image components
rgb = im2double(rgb);
r = rgb(:, :, 1);
g = rgb(:, :, 2);
b = rgb(:, :, 3);
 
% apply the conversion equations
num = 0.5*((r - g) + (r - b));
den = sqrt((r - g).^2 + (r - b).*(g - b));
theta = acos(num./(den + eps)); % eps prevents division by zero
 
H = theta;
H(b > g) = 2*pi - H(b > g);
H = H/(2*pi);
 
num = min(min(r, g), b);
den = r + g + b;
den(den == 0) = eps; % prevent division by zero
S = 1 - 3.* num./den;
 
H(S == 0) = 0;
 
I = (r + g + b)/3;
 
% combine the 3 components into a single HSI image
hsi = cat(3, H, S, I);
end


function rgb=hsi2rgb(hsi)
H=hsi(:,:,1)*2*pi;
S=hsi(:,:,2);
I=hsi(:,:,3);
% apply the conversion equations
R=zeros(size(hsi,1),size(hsi,2));
G=zeros(size(hsi,1),size(hsi,2));
B=zeros(size(hsi,1),size(hsi,2));
% RG sector (0<=H<2*pi/3).
idx=find((0<=H)&(H<2*pi/3));
B(idx)=I(idx).*(1-S(idx));
R(idx)=I(idx).*(1+S(idx).*cos(H(idx))./cos(pi/3-H(idx)));
G(idx)=3*I(idx)-(R(idx)+B(idx));
% GB sector (2*pi/3<=H<4*pi/3).
idx=find((2*pi/3<=H)&(H<4*pi/3));
R(idx)=I(idx).*(1-S(idx));
G(idx)=I(idx).*(1+S(idx).*cos(H(idx)-2*pi/3)./cos(pi-H(idx)));
B(idx)=3*I(idx)-(R(idx)+G(idx));
% BR sector (4*pi/3<=H<=2*pi).
idx=find((4*pi/3<=H)&(H<=2*pi));
G(idx)=I(idx).*(1-S(idx));
B(idx)=I(idx).*(1+S(idx).*cos(H(idx)-4*pi/3)./cos(5*pi/3-H(idx)));
R(idx)=3*I(idx)-(G(idx)+B(idx));
% combine the three results into an RGB image; clip to [0,1] to compensate for floating-point rounding
rgb=cat(3,R,G,B);
rgb=max(min(rgb,1),0);
end

6.3 Butterworth frequency-domain filtering

clc;
clear all;
close all;
J=imread('j.png');
% if size(J, 3)==3
%     J = rgb2gray(J);
% end 
J1=J(:,:,1);% extract the red component
J2=J(:,:,2);% extract the green component
J3=J(:,:,3);% extract the blue component
subplot(1,3,1);imshow(uint8(J));xlabel('input');
con=0.9;
dc=5;
%------------------------------- Color component R ----------------------------------
J1=double(J1);
f=fft2(J1);      % Fourier transform
g=fftshift(f);   % shift the zero-frequency component to the center
[M,N]=size(f);
n1=floor(M/2);
n2=floor(N/2);
n=2;

d1=dc;% cutoff frequency
for i=1:M        % Butterworth high-pass filtering and Butterworth high-pass enhanced filtering
    for j=1:N
        d=sqrt((i-n1)^2+(j-n2)^2);
        if d==0
            h1=0;
            h2=con;
        else
            h1=1/(1+(d1/d)^(2*n));
            h2=1/(1+(d1/d)^(2*n))+con;
        end
        gg1(i,j)=h1*g(i,j);
        gg2(i,j)=h2*g(i,j);
    end
end
gg1=ifftshift(gg1);
gg1=uint8(real(ifft2(gg1))); 

gg2=ifftshift(gg2); 
gg2=uint8(real(ifft2(gg2))); 
%-------------------------- End of Butterworth high-pass filtering -----------------------------




%-------------------------- Color component G ---------------------------------------
J2=double(J2);
f=fft2(J2);      % Fourier transform
g=fftshift(f);   % shift the zero-frequency component to the center
[M,N]=size(f);
n1=floor(M/2);
n2=floor(N/2);
n=2;

d1=dc;% cutoff frequency
for i=1:M        % Butterworth high-pass filtering and Butterworth high-pass enhanced filtering
    for j=1:N
        d=sqrt((i-n1)^2+(j-n2)^2);
        if d==0
            h1=0;
            h2=con;
        else
            h1=1/(1+(d1/d)^(2*n));
            h2=1/(1+(d1/d)^(2*n))+con;
        end
        gg3(i,j)=h1*g(i,j);
        gg4(i,j)=h2*g(i,j);
    end
end
gg3=ifftshift(gg3);
gg3=uint8(real(ifft2(gg3))); 

gg4=ifftshift(gg4); 
gg4=uint8(real(ifft2(gg4))); 
%--------------------------------------------------------------------------
%----------------------------- Color component B ------------------------------------
J3=double(J3);
f=fft2(J3);      % Fourier transform
g=fftshift(f);   % shift the zero-frequency component to the center
[M,N]=size(f);
n1=floor(M/2);
n2=floor(N/2);
n=2;

d1=dc;% cutoff frequency
for i=1:M        % Butterworth high-pass filtering and Butterworth high-pass enhanced filtering
    for j=1:N
        d=sqrt((i-n1)^2+(j-n2)^2);
        if d==0
            h1=0;
            h2=con;
        else
            h1=1/(1+(d1/d)^(2*n));
            h2=1/(1+(d1/d)^(2*n))+con;
        end
        gg5(i,j)=h1*g(i,j);
        gg6(i,j)=h2*g(i,j);
    end
end
gg5=ifftshift(gg5);
gg5=uint8(real(ifft2(gg5))); 

gg6=ifftshift(gg6); 
gg6=uint8(real(ifft2(gg6))); 
%--------------------------------------------------------------------------
c1=cat(3,gg1,gg3,gg5);  % cat recombines the three channels into an RGB image
c2=cat(3,gg2,gg4,gg6);  % cat recombines the three channels into an RGB image
subplot(1,3,2);imshow(c1);   % display the Butterworth high-pass result
xlabel('Butterworth high-pass filtering, D0=5');
% imwrite(c1,'05 巴特沃兹高通滤波 5.jpg');

subplot(1,3,3);imshow(c2);   % display the Butterworth high-pass enhanced result
xlabel('Butterworth high-pass enhanced filtering, D0=5');
imwrite(c2,'j2.png');



6.4 Unsharp restoration filter

I=imread('c.png');
subplot(221);
imshow(I);
title('src');
H=fspecial('motion',20,30); % motion-blur convolution kernel
MotionBlur=imfilter(I,H); % convolution
subplot(222);
imshow(MotionBlur);
title('MotionBlur')
H1=fspecial('disk',10); % disk-shaped averaging kernel
disk=imfilter(I,H1); % convolution
subplot(223);
imshow(disk);
title('disk')
H2=fspecial('unsharp'); % unsharp (contrast-enhancement) kernel
unsharp=imfilter(I,H2); % convolution
subplot(224);% H = fspecial('unsharp',alpha) is a contrast-enhancement filter; alpha controls the filter shape, range [0,1], default 0.2; it returns a 3x3 matrix
imshow(unsharp);
title('unsharp')
imwrite(unsharp,'c3.png');

Origin blog.csdn.net/m0_63859672/article/details/128751753