Simple machine vision with MATLAB image processing

Use MATLAB to perform simple image processing and analyze the characteristics of the different processing methods.

Equalize images with different exposure levels

Code:

% Histogram equalization
figure;
srcimage=imread('C:\Users\27019\Desktop\机器视觉\图1-2.jpg');
info=imfinfo('C:\Users\27019\Desktop\机器视觉\图1-2.jpg');
subplot(2,3,1);
imshow(srcimage);
title('Original grayscale image');
subplot(2,3,2);
imhist(srcimage);
title('Grayscale histogram');
subplot(2,3,3);
H1=histeq(srcimage);
imhist(H1);
title('Equalized histogram');
subplot(2,3,4);
imshow(H1); % show the equalized image (histeq(H1) would equalize it a second time)
title('Equalized image');


The following three images are processed: Figure 1-1, Figure 1-2, and Figure 1-3. They are, respectively, an image with a wide gray-level distribution, an overexposed image, and an underexposed image.

The processing results are as follows
Processing results of Figure 1-1
Processing results of Figure 1-2
Processing results of Figure 1-3
From the processing results, the main differences are as follows: the gray histogram of the overexposed image is concentrated at high gray values, while that of the underexposed image is concentrated at low gray values. After equalization the histogram is distributed much more uniformly and the image brightness is moderate. Edges in the processed images are better defined and image detail is more visible, so the enhancement achieves its purpose.
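The equalization mapping itself is simple: each gray level is pushed through the normalized cumulative histogram. A minimal NumPy sketch of the idea (not the MATLAB histeq code; the low-contrast test image here is synthetic):

```python
import numpy as np

def equalize(gray):
    """Histogram-equalize an 8-bit image via its cumulative distribution."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map each gray level through the normalized CDF; rounding to integers
    # is exactly what stops the result from being perfectly flat.
    lut = np.round(np.clip(cdf - cdf_min, 0, None)
                   / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[gray]

# A synthetic low-contrast "underexposed" image: values squeezed into [40, 80].
rng = np.random.default_rng(0)
img = rng.integers(40, 81, size=(64, 64), dtype=np.uint8)
eq = equalize(img)
print(img.min(), img.max())   # narrow input range
print(eq.min(), eq.max())     # stretched to the full 0..255 range
# Rounding merges distinct output levels, so the level count never increases:
print(len(np.unique(eq)) <= len(np.unique(img)))
```

The lookup table stretches the occupied range to 0..255, which is why the equalized image looks brighter and higher-contrast, yet the number of distinct levels can only shrink, so the histogram stays uneven.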

Denoising the image
Denoise the image with different methods and compare the characteristics of each approach.
Figure 1-4

  1. Spatial domain template convolution (different templates, different sizes)
  2. Frequency-domain low-pass filter (different filtering models, different cut-off frequencies)
  3. Median filtering method (different windows)
  4. Mean filtering method (different windows)
    Remarks: Analyze and compare the results of different methods, and of different parameters within the same method. For example, spatial-domain convolution templates can include Gaussian, rectangular, and triangular templates, or templates you design yourself; the template size can be 3×3, 5×5, 7×7, or larger. Frequency-domain filtering can use low-pass models such as the ideal (rectangular) or Butterworth filter, and the cutoff frequency is also selectable.
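For reference, spatial-domain template convolution is just a weighted sum over each neighborhood. A minimal NumPy sketch of what MATLAB's filter2 computes, with a rectangular (mean) template and a Gaussian template built like fspecial('gaussian') with its default sigma of 0.5 (the 5×5 image and the spike are made up for illustration):

```python
import numpy as np

def conv2d(img, kernel):
    """'Same'-size 2-D correlation with zero padding (what MATLAB's filter2 does)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def gaussian_kernel(size, sigma=0.5):
    """Normalized Gaussian template, built like fspecial('gaussian', size)."""
    ax = np.arange(size) - (size - 1) / 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

img = np.zeros((5, 5)); img[2, 2] = 1.0   # a single bright noise spike
box = np.ones((3, 3)) / 9                 # rectangular (mean) template
smoothed_box = conv2d(img, box)
smoothed_gauss = conv2d(img, gaussian_kernel(3))
print(smoothed_box[2, 2])   # box spreads the spike evenly: 1/9
# The Gaussian keeps more weight at the centre, so it blurs less aggressively:
print(smoothed_gauss[2, 2] > smoothed_box[2, 2])
```

The same loop with a larger kernel spreads each pixel over more neighbors, which is why bigger templates blur more in every experiment below.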
    Code:
% Spatial-domain template convolution
clc
clear all % clear the workspace
b=imread('C:\Users\27019\Desktop\机器视觉\图1-4.jpg'); % load image
a1=double(b)/255;
figure;
subplot(5,3,1),imshow(a1);
title('Original image');
 
Q=ordfilt2(b,6,ones(3,3)); % 2-D order-statistic filtering
subplot(5,3,4),imshow(Q);
title('Order-statistic filter, 3x3 window')
 
Q=ordfilt2(b,6,ones(5,5));
subplot(5,3,5),imshow(Q);
title('Order-statistic filter, 5x5 window')
 
Q=ordfilt2(b,6,ones(7,7));
subplot(5,3,6),imshow(Q);
title('Order-statistic filter, 7x7 window')
 
m=medfilt2(b,[3,3]); % median filtering
subplot(5,3,7),imshow(m);
title('Median filter, 3x3 window')
 
m=medfilt2(b,[5,5]);
subplot(5,3,8),imshow(m);
title('Median filter, 5x5 window')
 
m=medfilt2(b,[7,7]);
subplot(5,3,9),imshow(m);
title('Median filter, 7x7 window')
 
x=1/16*[0 1 1 1;1 1 1 1;1 1 1 1;1 1 1 0];
a=filter2(x,a1); % neighborhood filtering, 1/16 weights
subplot(5,3,10),imshow(a);
title('Neighborhood filter, 1/16 weights')
 
x=1/32*[0 1 1 1;1 1 1 1;1 1 1 1;1 1 1 0];
a=filter2(x,a1); % neighborhood filtering, 1/32 weights (darker, weights sum to 14/32)
subplot(5,3,11),imshow(a);
title('Neighborhood filter, 1/32 weights')
 
h = fspecial('average',[3,3]);
A=filter2(h,a1); % semicolon added to suppress console output
subplot(5,3,13),imshow(A);
title('Mean filter, 3x3 window')
 
h = fspecial('average',[5,5]);
A=filter2(h,a1);
subplot(5,3,14),imshow(A);
title('Mean filter, 5x5 window')
 
h = fspecial('average',[7,7]);
A=filter2(h,a1);
subplot(5,3,15),imshow(A);
title('Mean filter, 7x7 window')
% Gaussian templates
figure;
A=imread('C:\Users\27019\Desktop\机器视觉\图1-4.jpg');
subplot(2,3,1);
imshow(A);
title('Original image');
p=[3,3];
h=fspecial('gaussian',p);
B1=filter2(h,A)/255;
subplot(2,3,2);
imshow(B1);
title('Gaussian template, 3x3 window');
p=[5,5];
h=fspecial('gaussian',p);
B2=filter2(h,A)/255;
subplot(2,3,3);
imshow(B2);
title('Gaussian template, 5x5 window');
p=[7,7];
h=fspecial('gaussian',p);
B3=filter2(h,A)/255;
subplot(2,3,4);
imshow(B3);
title('Gaussian template, 7x7 window');
% Mean templates
figure;
A=imread('C:\Users\27019\Desktop\机器视觉\图1-4.jpg');
subplot(2,3,1);
imshow(A);
title('Original image');
B1=filter2(fspecial('average',3),A)/255;
subplot(2,3,2);
imshow(B1);
title('Mean filter, 3x3 window');
B2=filter2(fspecial('average',5),A)/255;
subplot(2,3,3);
imshow(B2);
title('Mean filter, 5x5 window');
B3=filter2(fspecial('average',7),A)/255;
subplot(2,3,4);
imshow(B3);
title('Mean filter, 7x7 window');
% Median templates
figure;
A=imread('C:\Users\27019\Desktop\机器视觉\图1-4.jpg');
subplot(2,3,1);
imshow(A);
title('Original image');
B0=medfilt2(A);
subplot(2,3,2);
imshow(B0);
title('Median filter, [3,3] window');
p=[5,5];
B1=medfilt2(A,p);
subplot(2,3,3);
imshow(B1);
title('Median filter, [5,5] window');
p=[7,7];
B2=medfilt2(A,p);
subplot(2,3,4);
imshow(B2);
title('Median filter, [7,7] window');

The processing results are as follows.

Gaussian templates (windows 3, 5, 7): the image filtered with a Gaussian template is cleaner than the original, so the noise-removal goal is met. The larger the Gaussian window, the more blurred the result.

Median filtering (windows 3, 5, 7): median filtering replaces each pixel with the median of its neighborhood; the resulting image is cleaner, but the larger the window, the more blurred the image becomes.

Mean filtering (windows 3, 5, 7): mean filtering replaces each pixel with the average of its neighborhood and suppresses granular noise well. Again, the larger the window, the more blurred the image.
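The difference between median and mean filtering shows most clearly on salt-and-pepper noise. A small NumPy sketch (the 7×7 test image is synthetic): the median discards an outlier outright, while the mean only dilutes it.

```python
import numpy as np

def window_filter(img, size, reduce_fn):
    """Apply reduce_fn (np.median or np.mean) over each size-by-size neighborhood."""
    p = size // 2
    padded = np.pad(img, p, mode='edge')
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = reduce_fn(padded[i:i + size, j:j + size])
    return out

# A flat gray image with one "salt" (255) and one "pepper" (0) pixel.
img = np.full((7, 7), 100.0)
img[2, 2] = 255.0
img[4, 4] = 0.0

med = window_filter(img, 3, np.median)
avg = window_filter(img, 3, np.mean)
print(med[2, 2])   # the median discards the outlier entirely: 100.0
print(avg[2, 2])   # the mean only dilutes it: (8*100 + 255)/9
```

This is why the median filter is usually preferred for impulse noise, while the mean filter suits gentler, Gaussian-like noise.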

The following methods were also tried in later processing; to avoid repeating the description, only the results are shown and their characteristics analyzed.

Butterworth filter model

Sobel templates: filtering with the Sobel templates in the horizontal and vertical directions extracts the edges of the image.

Prewitt templates: the Prewitt results likewise highlight the edges of the image.

Image denoising result analysis

  1. For which type of image is histogram-equalization enhancement most effective? Why is the histogram not flat after equalization?
  2. How do the filtering effects of different spatial-domain convolution templates differ?
  3. How does the size of the spatial-domain convolution template affect the filtering effect?
  4. How do the effects of different frequency-domain filters differ?

answer:

  1. The enhancement is most obvious for images whose gray levels span a wide range. The equalized histogram is not flat because gray values are discrete: the mapping cannot split pixels that share one gray value into different output values, and distinct fractional output levels produced by the equalization transform merge when they are rounded. A perfectly flat histogram is therefore impossible in practice, but the equalized histogram is much flatter than the original.
  2. The best effect comes from mean filtering, and the worst from 2-D order-statistic filtering. The mean-filtered image is relatively smooth, while the order-statistic-filtered image still contains a lot of noise. Some noise also remains in the median-filtered image, and the neighborhood-filtered image is noticeably darker.
  3. The larger the template, the blurrier the image, while the smaller the template, the sharper the image.
  4. Different frequency-domain filters retain different parts of the image spectrum, so the filtered images preserve different information.

Extract the inner circle and calculate the distance.
In the figure, I-Kid robot A photographs another I-Kid robot B on the football field. First use a feature-extraction algorithm to extract the edge of the inner circle, whose radius on the ground is 4.37 m, and then use the camera projection model to solve for the distance ||OA|| between robot A and the center of the inner circle at this moment. (Assume robot A's camera is mounted vertically with its optical center on the robot's center line, 0.5 m above the ground; the camera resolution is 640×480 pixels, with the principal point at the image center; the horizontal effective focal length is 300 pixels and the vertical effective focal length is 1200 pixels.)

  1. Use functions from the MATLAB Image Processing Toolbox to extract the inner-circle edge of the stadium marking lines in Figure 2.
  2. Given that Figure 3 shows the approximate positions of the stadium marking lines and robot A, use the linear camera projection model to calculate the distance ||OA|| between robot A and the center point O of the inner circle.
    Remarks: translate robot A's camera coordinate system vertically down 0.5 m to robot A's foot, then translate it to the center point O. The distance from the robot's foot to the center O is the required distance ||OA||.
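Under the stated pinhole model the basic relation is simple: for a level optical axis at height h, a ground point imaged v − v0 rows below the principal point lies at ground distance d = f_v·h/(v − v0). A small Python sketch with the given parameters; the pixel rows passed in are hypothetical and must in practice be read off the extracted circle edge:

```python
# Given camera parameters from the problem statement.
f_v = 1200.0   # vertical effective focal length, in pixels
h = 0.5        # optical-centre height above the ground, in metres
v0 = 240.0     # principal-point row for a 640x480 image

def ground_distance(v):
    """Ground distance to a point imaged at pixel row v (v > v0), assuming a
    level optical axis, from the similar triangles (v - v0)/f_v = h/d."""
    return f_v * h / (v - v0)

# Hypothetical rows -- the real values must be read from the extracted edge.
print(ground_distance(360.0))  # 120 rows below centre -> 5.0 m
print(ground_distance(480.0))  # 240 rows below centre -> 2.5 m
```

Applying this to the nearest and farthest image rows of the extracted circle edge gives the ground distances to the near and far sides of the circle, from which ||OA|| can be recovered using the known 4.37 m radius.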
    Code:
% Prewitt templates
figure;
A=imread('C:\Users\27019\Desktop\机器视觉\图2.jpg');
subplot(2,3,1);
imshow(A);
title('Original image');
h=[1 1 1;0 0 0;-1 -1 -1];
B1=filter2(h,A)/255;
subplot(2,3,2);
imshow(B1);
title('Prewitt template, horizontal');
h=[1 0 -1;1 0 -1;1 0 -1];
B2=filter2(h,A)/255;
subplot(2,3,3)
imshow(B2);
title('Prewitt template, vertical');
% Sobel templates
figure; % new figure so the Prewitt results are not overwritten
A=imread('C:\Users\27019\Desktop\机器视觉\图2.jpg');
subplot(2,3,1);
imshow(A);
title('Original image');
h=[1 2 1;0 0 0;-1 -2 -1];
B1=filter2(h,A)/255;
subplot(2,3,2);
imshow(B1);
title('Sobel template, horizontal'); % was mislabeled 'Prewitt'
h=[1 0 -1;2 0 -2;1 0 -1];
B2=filter2(h,A)/255;
subplot(2,3,3)
imshow(B2);
title('Sobel template, vertical'); % was mislabeled 'Prewitt'
% Frequency-domain low-pass filter (Butterworth)
a=imread('C:\Users\27019\Desktop\机器视觉\图1-4.jpg'); % load image (assumed grayscale)
figure;
subplot(2,4,1);
imshow(a);
z=double(a)/255;
b=fft2(double(a)); % Fourier transform of the image
c=log(1+abs(b));
d=fftshift(b); % move the zero-frequency term to the center of the spectrum
e=log(1+abs(d));
% Apply the Butterworth formula with several cutoff frequencies
% (the original hard-coded a 256x256 image; the image size is used here instead)
[m,n]=size(d);
for i=1:m
    for j=1:n
        D=((i-m/2)^2+(j-n/2)^2)^0.5; % distance from the spectrum center
        d1(i,j)=(1/(1+D/500)^2)*d(i,j);  % cutoff frequency 500
        d2(i,j)=(1/(1+D/1000)^2)*d(i,j); % cutoff frequency 1000
        d3(i,j)=(1/(1+D/2000)^2)*d(i,j); % cutoff frequency 2000
        d4(i,j)=(1/(1+D/4000)^2)*d(i,j); % cutoff frequency 4000
    end
end
FF1=ifftshift(d1);
FF2=ifftshift(d2);
FF3=ifftshift(d3);
FF4=ifftshift(d4);
ff1=real(ifft2(FF1)); % inverse Fourier transform
ff2=real(ifft2(FF2));
ff3=real(ifft2(FF3));
ff4=real(ifft2(FF4));
subplot(2,4,5);imshow(uint8(ff1));xlabel('Cutoff frequency 500');
subplot(2,4,6);imshow(uint8(ff2));xlabel('Cutoff frequency 1000');
subplot(2,4,7);imshow(uint8(ff3));xlabel('Cutoff frequency 2000');
subplot(2,4,8);imshow(uint8(ff4));xlabel('Cutoff frequency 4000');
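The same frequency-domain pipeline (FFT, multiply by a Butterworth transfer function, inverse FFT) can be sketched in NumPy; here a first-order filter is applied to a synthetic noisy image. The DC term passes unchanged, so the mean brightness is preserved while the noise variance drops:

```python
import numpy as np

def butterworth_lowpass(img, d0, order=1):
    """Frequency-domain low-pass with H(u,v) = 1 / (1 + (D/d0)^(2*order))."""
    F = np.fft.fftshift(np.fft.fft2(img))
    m, n = img.shape
    u = np.arange(m) - m // 2
    v = np.arange(n) - n // 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)  # distance from the DC term
    H = 1.0 / (1.0 + (D / d0) ** (2 * order))
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))

rng = np.random.default_rng(1)
img = np.full((64, 64), 128.0) + rng.normal(0, 20, (64, 64))  # flat image + noise
out = butterworth_lowpass(img, d0=8)
# H = 1 at D = 0, so the DC term (mean brightness) is preserved exactly,
# while every other frequency is attenuated, shrinking the noise variance.
print(np.isclose(out.mean(), img.mean()))
print(out.std() < img.std())
```

Raising the cutoff d0 lets more high frequencies through: less smoothing but more residual noise, which is exactly the trade-off visible across the four subplots above.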
% Edge-detection operators
clear;clc;
close all;
I=imread('C:\Users\27019\Desktop\机器视觉\图2.jpg');
imshow(I,[]);
title('Original Image');
 
sobelBW=edge(I,'sobel');
figure;
imshow(sobelBW);
title('Sobel Edge');
 
robertsBW=edge(I,'roberts');
figure;
imshow(robertsBW);
title('Roberts Edge');
 
prewittBW=edge(I,'prewitt');
figure;
imshow(prewittBW);
title('Prewitt Edge');
 
logBW=edge(I,'log');
figure;
imshow(logBW);
title('Laplacian of Gaussian Edge');
 
cannyBW=edge(I,'canny');
figure;
imshow(cannyBW);
title('Canny Edge');

% Mark the inner circle
img=imread('C:\Users\27019\Desktop\机器视觉\图2.jpg'); % load the original image
i=img;
img=im2bw(img); % binarize
[B,L]=bwboundaries(img);
[L,N]=bwlabel(img);
img_rgb=label2rgb(L,'hsv',[.5 .5 .5],'shuffle');
imshow(i);
title('End result')
hold on;
k = 109; % index of the inner-circle boundary, found by inspection
boundary = B{k};
plot(boundary(:,2),boundary(:,1),'y','LineWidth',1); % draw the inner-circle marking line

The processing results were obtained for each method. Here I only marked the inner circle and did not manage to calculate the distance; I will add the result analysis in the comment area later.

  1. What are the differences in the effects of different edge extraction operators?
  2. Why can't the inner circle marking line be fitted as a standard arc on the image?

answer:

  1. The Sobel operator works well on images with gradual gray-level transitions and considerable noise: it smooths noise and gives fairly accurate edge-direction information. However, its edge localization is not very precise, and the detected edges are more than one pixel wide. When high accuracy is not required, it is a commonly used edge-detection method.
    The Roberts operator works well on steep, low-noise images, but the edges it extracts are relatively thick, so edge localization is not very accurate.
    The Prewitt operator also works well on images with gradual gray-level transitions and considerable noise, but its edges are wider and contain more breaks.
    The Laplacian operator is sensitive to noise, so it is rarely used to detect edges directly; instead it is used to judge whether an edge pixel lies on the bright or dark side of the image.
    The LoG operator combines Gaussian smoothing with Laplacian edge detection, keeping the advantages of the Laplacian while overcoming its sensitivity to noise.
    The Canny method is robust to noise and can detect genuinely weak edges. Its strength is that it uses two different thresholds to detect strong and weak edges separately, and includes a weak edge in the output only if it is connected to a strong edge.
  2. When the image was taken, the camera's optical axis was not perpendicular to the ground, so the ground circle is viewed obliquely; under perspective projection an obliquely viewed circle maps to an ellipse rather than a standard circular arc, and lens distortion deforms it further.
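Regarding answer 1, the only difference between the Prewitt and Sobel horizontal templates used earlier is the doubled center weight, which gives Sobel its extra smoothing. A tiny NumPy check on a synthetic step edge:

```python
import numpy as np

# Horizontal-edge templates used above: Prewitt weights all rows equally,
# while Sobel doubles the centre column, adding a little smoothing.
prewitt_h = np.array([[1, 1, 1], [0, 0, 0], [-1, -1, -1]], dtype=float)
sobel_h = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], dtype=float)

def response(patch, kernel):
    """Correlation response of a 3x3 template at the centre of a 3x3 patch."""
    return float(np.sum(patch * kernel))

# A clean horizontal step edge: bright rows on top, a dark row below.
step = np.array([[200, 200, 200],
                 [200, 200, 200],
                 [50, 50, 50]], dtype=float)
print(response(step, prewitt_h))  # 3*200 - 3*50 = 450.0
print(response(step, sobel_h))    # 4*200 - 4*50 = 600.0
```

Both respond strongly to the step, but Sobel's centre weighting makes it slightly less sensitive to isolated noisy pixels off the centre line.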


Origin blog.csdn.net/weixin_43659550/article/details/86657746