Research on Infrared and Visible Light Image Fusion Algorithm

Infrared and visible light image fusion system based on MATLAB [with evaluation index]

1. Subject introduction

Infrared technology, as a modern tool for understanding and exploring nature, has been widely used in biology, medicine, geology, and military reconnaissance. An infrared image directly reflects the temperature distribution on the surface of an object, but because a target's infrared radiation is complex and affected by many factors, infrared thermal images are far less sharp than visible light images. Visible light images, in contrast, describe the shape and structure of the objects in a scene well and express contours clearly, so combining infrared and visible light images is highly effective, and image fusion is an effective way to achieve this combination. A fused image has higher reliability, less blur, and better understandability, and is more suitable for human vision and for further analysis, understanding, detection, identification, or tracking of targets in the source images. Image fusion makes full use of the redundant and complementary information contained in multiple source images; it differs from image enhancement in the usual sense and is a relatively new technique in computer vision and image understanding.

This paper studies fusion algorithms for infrared and visible light images. Using computer image processing, an infrared image and a visible light image of the same scene are fused into a single image that contains the information of both sources. The work uses MATLAB to process and fuse the images with several methods: taking the maximum, minimum, average, and weighted average of corresponding pixels, and comparing regional energy and regional contrast. The fusion results are then analyzed and compared using the evaluation indexes of information entropy, standard deviation, average gradient, and spatial frequency. The results show that the fused image not only preserves the clear contour information of the visible light image but also shows the surface temperature distribution of the target object.


2. The concept of image fusion

Image fusion refers to combining image data of the same target collected through multiple source channels, using image processing and computer technology, so as to extract the useful information of each channel as fully as possible and synthesize a high-quality image. The goals are to improve the utilization of image information, improve the accuracy and reliability of computer interpretation, and improve the spatial and spectral resolution of the original images, which in turn facilitates monitoring. A schematic diagram of image fusion is shown in Figure 1-4. The images to be fused must already be registered and have a consistent pixel bit width; the information of two or more multi-source images is then synthesized and extracted.






[Figure: two registered source images contribute redundant information and complementary information to the fused image]

Figure 1-4 Schematic diagram of image fusion

The two (or more) source images to be fused must be registered and have the same pixel bit width; if the registration is poor or the bit widths are inconsistent, the fusion result will be poor. The main purpose of image fusion is to improve the reliability of the image by processing the redundant data between multiple images, and to improve the clarity of the image by processing the complementary information between them.

The main purposes of image fusion include:

(1) Increase the content of useful information in the image, improve its clarity, and enhance features that cannot be seen, or cannot be seen clearly, in a single sensor's image;

(2) Improve the spatial resolution of images, increase the content of spectral information, and obtain supplementary image information to improve detection, classification, understanding, and recognition performance;

(3) Detect changes in scenes or targets through the fusion of image sequences at different times;

(4) Generate a three-dimensional image with stereo vision by fusing multiple two-dimensional images, which can be used for three-dimensional reconstruction or stereo projection, measurement, etc.;

(5) Use images from other sensors to replace or make up for missing or faulty information in one sensor image.





3. Simulation results and source code


MATLAB is an important tool for image processing; it encapsulates a series of algorithm functions that improve the speed and efficiency of image processing. Therefore, this paper uses the MATLAB toolbox for image fusion.

In this paper, a pair of visible light and infrared images is fused by taking the maximum and minimum of corresponding pixels, taking the average and weighted average of corresponding pixels, and by region-based energy and contrast comparison. The experimental results are shown below, and the fused images are objectively evaluated for fusion quality according to the evaluation methods of the previous section; see Table 5-1 for details.
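The pixel-wise rules just listed are simple element-wise operations, so their logic can be sketched compactly outside MATLAB as well. The following is a minimal pure-Python sketch of the four pixel-wise rules (maximum, minimum, average, weighted average) on tiny nested-list "images"; the function name, the weight, and the toy data are illustrative, not part of the original program.

```python
def fuse_pixelwise(a, b, rule="max", w=0.5):
    """Fuse two registered, equal-size grayscale images (lists of rows) pixel by pixel."""
    rules = {
        "max":      lambda p, q: max(p, q),           # keep the brighter pixel
        "min":      lambda p, q: min(p, q),           # keep the darker pixel
        "mean":     lambda p, q: (p + q) / 2.0,       # plain average
        "weighted": lambda p, q: w * p + (1 - w) * q, # weighted average
    }
    f = rules[rule]
    return [[f(p, q) for p, q in zip(ra, rb)] for ra, rb in zip(a, b)]

# toy 2x2 "visible" and "infrared" images
A = [[10, 200], [50, 100]]
B = [[20, 100], [40, 160]]
print(fuse_pixelwise(A, B, "max"))   # [[20, 200], [50, 160]]
```

The region-based methods later in this section differ only in that the per-pixel decision is driven by a 3x3 neighborhood statistic instead of the pixel values themselves.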

1. Taking the maximum and minimum of corresponding pixels

1) Taking the maximum of corresponding pixels:

MATLAB simulation program:

clear all;
A=imread('p.jpg');                   % read visible (grayscale) image
B=imread('q.jpg');                   % read infrared image
A1=double(A);B1=double(B);           % convert to double
[x,y]=size(A);
C=zeros(x,y);                        % preallocate the fused image
for i=1:x
    for j=1:y
        if A1(i,j)>B1(i,j)
            C(i,j)=A1(i,j);          % keep the larger of the two
        else
            C(i,j)=B1(i,j);
        end
    end
end
subplot(2,2,1);imshow(A);
xlabel('a) visible light image')
subplot(2,2,2);imshow(B);
xlabel('b) infrared image')
subplot(2,2,3);imshow(C,[])
xlabel('c) fused image, pixel-wise maximum')



Fusion result:

Figure 5- Fused image after taking the maximum of corresponding pixels


2) Taking the minimum of corresponding pixels:

clear all;
A=imread('p.jpg');                  % read visible (grayscale) image
B=imread('q.jpg');                  % read infrared image
A1=double(A);B1=double(B);          % convert to double
[x,y]=size(A);
C=zeros(x,y);                       % preallocate the fused image
for i=1:x
    for j=1:y
        if A1(i,j)>B1(i,j)
            C(i,j)=B1(i,j);         % keep the smaller of the two
        else
            C(i,j)=A1(i,j);
        end
    end
end
subplot(2,2,1);imshow(A);
xlabel('a) visible light image')
subplot(2,2,2);imshow(B);
xlabel('b) infrared image')
subplot(2,2,3);imshow(C,[])
xlabel('c) fused image, pixel-wise minimum')

Fusion result:




Figure 5- Fused image after taking the minimum of corresponding pixels

2. Taking the average of corresponding pixels

MATLAB simulation program:

clear all;
A=imread('p.jpg');                  % read visible (grayscale) image
B=imread('q.jpg');                  % read infrared image
K=imadd(A,B,'double');              % add the two images with double output
C=imdivide(K,2);                    % pixel-wise average
subplot(2,2,1);imshow(A);
xlabel('a) visible light image')
subplot(2,2,2);imshow(B);
xlabel('b) infrared image')
subplot(2,2,3);imshow(C,[]);
xlabel('c) fused image, pixel-wise average')

Fusion result:




Figure 5- Fused image after averaging corresponding pixels

3. Taking the weighted average of corresponding pixels

MATLAB simulation program:

clear all;
P1=imread('p.jpg');                   % read visible (grayscale) image
P2=imread('q.jpg');                   % read infrared image
L1=double(P1);L2=double(P2);          % convert to double
% weight 0.3 for the visible image; the complementary weight 0.7 for the
% infrared image is assumed here so that the weights sum to 1
C=immultiply(L1,0.3)+immultiply(L2,0.7);
subplot(2,2,1);imshow(P1);
xlabel('a) visible light image')
subplot(2,2,2);imshow(P2);
xlabel('b) infrared image')
subplot(2,2,3);imshow(C,[])
xlabel('c) fused image, pixel-wise weighted average')

Fusion result:




Figure 5- Fused image after the weighted average of corresponding pixels

4. Fusion based on regional energy comparison

1) Fusion taking the larger regional energy:

MATLAB simulation program:

clear all;
P1=imread('p.jpg');                   % read visible (grayscale) image
P2=imread('q.jpg');                   % read infrared image
L1=double(P1);L2=double(P2);          % convert to double
A=L1.^2;B=L2.^2;                      % pixel energies
[x,y]=size(P1);
C=P1;                                 % border pixels default to the visible image
for m=2:x-1
    for n=2:y-1
        a=m-1;b=m+1;c=n-1;d=n+1;      % 3x3 window bounds
        if sum(sum(A(a:b,c:d)))>sum(sum(B(a:b,c:d)))
            C(m,n)=P1(m,n);
        else
            C(m,n)=P2(m,n);
        end
    end
end
subplot(2,2,1);imshow(P1);
xlabel('a) visible light image')
subplot(2,2,2);imshow(P2);
xlabel('b) infrared image')
subplot(2,2,3);imshow(C,[])
xlabel('c) fused image, larger regional energy')

Fusion result:




Figure 5- Fused image based on taking the larger regional energy
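The region-energy rule above compares the sum of squared intensities in a 3x3 window and copies the pixel from whichever source wins. The following pure-Python sketch mirrors that logic for both the "larger" and "smaller" variants, with border pixels defaulting to the first (visible) image just as the MATLAB listings do; the function name and toy data are illustrative.

```python
def fuse_region_energy(a, b, take="max"):
    """Fuse by comparing 3x3 local energy (sum of squared intensities).

    Border pixels, which have no full 3x3 neighborhood, are copied from image a.
    """
    x, y = len(a), len(a[0])

    def energy(img, m, n):
        # sum of squared intensities over the 3x3 window centered at (m, n)
        return sum(img[i][j] ** 2 for i in range(m - 1, m + 2)
                                  for j in range(n - 1, n + 2))

    out = [row[:] for row in a]          # start from a copy of image a
    for m in range(1, x - 1):
        for n in range(1, y - 1):
            ea, eb = energy(a, m, n), energy(b, m, n)
            pick_a = ea > eb if take == "max" else ea < eb
            out[m][n] = a[m][n] if pick_a else b[m][n]
    return out

# toy 3x3 images: a has one strong hot spot, b is a flat mid-level field
A = [[1, 1, 1], [1, 9, 1], [1, 1, 1]]
B = [[2, 2, 2], [2, 2, 2], [2, 2, 2]]
print(fuse_region_energy(A, B, "max"))   # center pixel comes from A (energy 89 > 36)
```

Because the decision uses a neighborhood statistic rather than a single pixel, a strong local feature in either source tends to be carried into the fused image whole, which matches the sharper results reported for this method in Table 5-1.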


2) Fusion taking the smaller regional energy:

MATLAB simulation program:

clear all;
P1=imread('p.jpg');                   % read visible (grayscale) image
P2=imread('q.jpg');                   % read infrared image
L1=double(P1);L2=double(P2);          % convert to double
A=L1.^2;B=L2.^2;                      % pixel energies
[x,y]=size(P1);
C=P1;                                 % border pixels default to the visible image
for m=2:x-1
    for n=2:y-1
        a=m-1;b=m+1;c=n-1;d=n+1;      % 3x3 window bounds
        if sum(sum(A(a:b,c:d)))<sum(sum(B(a:b,c:d)))
            C(m,n)=P1(m,n);
        else
            C(m,n)=P2(m,n);
        end
    end
end
subplot(2,2,1);imshow(P1);
xlabel('a) visible light image')
subplot(2,2,2);imshow(P2);
xlabel('b) infrared image')
subplot(2,2,3);imshow(C,[])
xlabel('c) fused image, smaller regional energy')




Figure 5- Fused image based on taking the smaller regional energy

5. Fusion based on regional contrast comparison

MATLAB simulation program:

clear all;
P1=imread('p.jpg');                   % read visible (grayscale) image
P2=imread('q.jpg');                   % read infrared image
A=double(P1);B=double(P2);            % convert to double
[x,y]=size(P1);
C1=A;C2=B;                            % background estimates (here simply copies of the sources)
D1=minus(A,C1);D2=minus(B,C2);        % high-frequency components, A-C1 and B-C2
E1=rdivide(D1,C1);E2=rdivide(D2,C2);  % local contrast, D./C
% Note: with C1=A and C2=B the differences D1,D2 are zero everywhere, so E1,E2
% carry no contrast information; this is consistent with the poor result for
% this method reported in the evaluation below.
F=A;
for m1=2:x-1
    for n1=2:y-1
        a=m1-1;b=m1+1;c=n1-1;d=n1+1;  % 3x3 window bounds
        if mean(mean(E1(a:b,c:d)))>mean(mean(E2(a:b,c:d)))
            F(m1,n1)=P1(m1,n1);
        else
            F(m1,n1)=P2(m1,n1);
        end
    end
end
subplot(2,2,1);imshow(P1);
xlabel('a) visible light image')
subplot(2,2,2);imshow(P2);
xlabel('b) infrared image')
subplot(2,2,3);imshow(F,[])
xlabel('c) fused image, larger regional contrast')

Fusion result:




Figure 5- Fused image based on regional contrast
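The listing above takes the "background" images C1, C2 as direct copies of the sources, so the differences D1, D2 are zero and the contrast comparison is uninformative. A more meaningful regional contrast needs a low-frequency background estimate; the pure-Python sketch below shows one plausible corrected variant under that assumption (3x3 local mean as the background, contrast = |I - mean| / mean). It is an illustrative reconstruction, not the author's program.

```python
def fuse_region_contrast(a, b):
    """Per interior pixel, keep the source with the larger local contrast.

    Local contrast is |I - mean| / mean, with the mean taken over the 3x3
    neighborhood as a simple background (low-frequency) estimate.
    Border pixels are copied from image a.
    """
    x, y = len(a), len(a[0])

    def local_mean(img, m, n):
        return sum(img[i][j] for i in range(m - 1, m + 2)
                             for j in range(n - 1, n + 2)) / 9.0

    def contrast(img, m, n):
        mu = local_mean(img, m, n)
        return abs(img[m][n] - mu) / mu if mu > 0 else 0.0

    out = [row[:] for row in a]
    for m in range(1, x - 1):
        for n in range(1, y - 1):
            out[m][n] = a[m][n] if contrast(a, m, n) > contrast(b, m, n) else b[m][n]
    return out

# toy images: a has a strong local feature, b is nearly flat
A = [[1, 1, 1], [1, 9, 1], [1, 1, 1]]
B = [[5, 5, 5], [5, 6, 5], [5, 5, 5]]
print(fuse_region_contrast(A, B))   # center pixel comes from A, whose contrast is larger
```

Dividing by the local mean also keeps the contrast measure bounded as long as the background is nonzero, avoiding the near-infinite values suspected in the analysis below.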


6. The following table lists the evaluation results of the images obtained by the different fusion methods.


Table 5-1 Fusion result image evaluation

Image fusion method               Information entropy E   Standard deviation std   Average gradient   Spatial frequency
Corresponding pixel maximum       6.6948                  29.5083                  4.2927             9.1607
Corresponding pixel minimum       6.5223                  23.8603                  4.3085             8.4296
Corresponding pixel average       6.2178                  22.2666                  3.2953             6.4550
Corresponding pixel weighted avg  6.2819                  21.4988                  3.3771             6.6744
Regional energy, larger           6.7375                  29.9311                  4.6958             11.4168
Regional energy, smaller          6.5445                  24.3389                  4.5668             9.5732
Regional contrast                 1.8530e-04              33.9577                  16.2664            36.5167
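The four indexes in Table 5-1 can each be computed from the fused image alone. The pure-Python sketch below shows one common definition of each (Shannon entropy of the gray-level histogram, population standard deviation, average gradient from forward differences, and spatial frequency from row/column first differences); normalizations vary between papers, so values computed this way may differ slightly from the table's.

```python
import math

def entropy(img):
    """Shannon entropy (bits) of the gray-level histogram."""
    flat = [p for row in img for p in row]
    n = len(flat)
    hist = {}
    for p in flat:
        hist[p] = hist.get(p, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in hist.values())

def std_dev(img):
    """Population standard deviation of the intensities."""
    flat = [float(p) for row in img for p in row]
    mu = sum(flat) / len(flat)
    return math.sqrt(sum((p - mu) ** 2 for p in flat) / len(flat))

def avg_gradient(img):
    """Mean of sqrt((dx^2 + dy^2) / 2) over forward differences."""
    x, y = len(img), len(img[0])
    total = 0.0
    for m in range(x - 1):
        for n in range(y - 1):
            dx = img[m + 1][n] - img[m][n]
            dy = img[m][n + 1] - img[m][n]
            total += math.sqrt((dx * dx + dy * dy) / 2.0)
    return total / ((x - 1) * (y - 1))

def spatial_frequency(img):
    """SF = sqrt(RF^2 + CF^2) from mean-squared row and column differences."""
    x, y = len(img), len(img[0])
    rf = sum((img[m][n] - img[m][n - 1]) ** 2 for m in range(x) for n in range(1, y))
    cf = sum((img[m][n] - img[m - 1][n]) ** 2 for m in range(1, x) for n in range(y))
    return math.sqrt(rf / (x * (y - 1)) + cf / ((x - 1) * y))

print(entropy([[0, 1], [0, 1]]))   # 1.0: two equally likely gray levels
```

A uniform image scores zero on all four indexes, which is why larger values are read as "richer information, sharper image" in the comparison that follows.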


From the fused images and the table above, the region-energy-based methods outperform the pixel-wise methods. The image obtained by taking the larger regional energy has the best quality: its information entropy, standard deviation, average gradient, and spatial frequency are all the largest, the fused image is rich in information, and the image is clear.

Among the pixel-wise methods, taking the average of corresponding pixels gives the poorest image quality: the image is blurred and carries the least information. Taking the weighted average gives the smallest standard deviation; its tone is flat and uniform, and little detail can be seen. The images fused by taking the pixel-wise maximum or minimum do not balance the overall image well: the result is either dim overall or bright overall, with blurred edge information and low contrast.

The image obtained by the region-contrast-based method is blurred and its information entropy is very small. Analysis of the program and algorithm suggests the cause may be that gray values exceed the 0-255 range during computation, or that the computed high-frequency components are too small while the contrast values become too large (close to infinity), so parts of the image are brightened or dimmed and the quality of the fused image suffers.

Comparing the fused images in the above experiments with the original visible light and infrared images, it is easy to see that the fusion results retain the edge features of the visible image while also reflecting the temperature distribution of the target or scene from the infrared image. The characteristics of the two modalities are combined effectively, the information contained in the target or scene is expressed more comprehensively and accurately, and practical applications such as target recognition and monitoring can be well supported.


Origin blog.csdn.net/TuTu998/article/details/120177089