Binocular vision sphere positioning (MATLAB, with code)

Tags: Machine Vision


Introduction

Binocular vision positioning was one of our course design projects. Having recently finished it, I'm sharing it here. The goal of the experiment is to identify a sphere in the captured photos and to find the actual distance from the sphere to the camera. The requirement was to use MATLAB, but when MATLAB opens the binocular (USB) camera it can only ever open one of the two cameras, whereas Python with the OpenCV library can open both at the same time, so we chose Python for taking the pictures.

1. The basic flow


Note: Because it is somewhat difficult to identify the ball in the disparity map, we did not derive the sphere's depth from a disparity map or depth map. Instead, we plugged the pixel difference between the two rectified images directly into the depth formula.

1. Camera Calibration

First we need to understand the camera model, which generally involves four coordinate systems: the world coordinate system, the camera coordinate system, the image coordinate system and the pixel coordinate system. Camera calibration can be understood as finding the transformation between the world coordinate system and the image/pixel coordinate system.
For the details of the calibration procedure, refer to this article: binocular vision camera calibration

  • 1. Pixel coordinate system (u, v):
    As the name suggests, every point is a pixel. The origin is usually at the top-left corner of the image, and the two coordinates give the column and row of the pixel in the image.
  • 2. Image coordinate system (x, y):
    The origin is at the intersection of the camera's optical axis with the image plane (the center of the image plane), and its horizontal and vertical axes are parallel to the axes of the pixel coordinate system. It is introduced to describe the perspective projection of an object from the camera coordinate system onto the image, and to make the conversion to pixel coordinates easier. It expresses the position of a pixel in the image in physical units (e.g. mm).
  • 3. Camera coordinate system (x, y, z):
    The coordinates of an object measured from the camera's own point of view. The origin is at the camera's optical center, and the z axis is parallel to the camera's optical axis. It is the bridge between the photographed object and the image: an object given in the world coordinate system first undergoes a rigid transformation into the camera coordinate system, and is then related to the image coordinate system. It is the tie between image coordinates and world coordinates. Units are length units such as mm.
  • 4. World coordinate system (Xw, Yw, Zw):
    The coordinates of the object in the real world. The full chain from world coordinates to pixel coordinates is sketched in the snippet below.
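
As a minimal sketch of the projection chain just described (world → camera → image → pixel), the following MATLAB snippet uses a made-up intrinsic matrix K, rotation R, translation t and world point; it only illustrates the standard pinhole model, not our actual calibration values.

% Standard pinhole projection chain with made-up example values (not calibration results).
K = [800   0 320;      % intrinsic matrix: focal lengths in pixels and the principal point
       0 800 240;
       0   0   1];
R = eye(3);            % rotation from world to camera coordinates (hypothetical)
t = [0; 0; 0];         % translation from world to camera coordinates (hypothetical), in mm
Pw = [100; 50; 1000];  % a world point, in mm
Pc = R * Pw + t;       % world -> camera (rigid transformation)
p  = K * Pc;           % camera -> image plane -> pixel (perspective projection)
uv = p(1:2) / p(3)     % pixel coordinates (u, v); here uv = [400; 280]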

The detailed steps for calibrating with the MATLAB toolbox are as follows:
1. Print the calibration board:

2. Hold the calibration board and take photos (the capture code is at the end of this article); the result looks like this:

3. Split the photos above into left and right groups in two folders, then open the MATLAB calibration tool: enter stereoCameraCalibrator in the command window, and the following interface appears:

4. In the options at the top, check "Skew", "Tangential Distortion" and "2 Coefficients", and uncheck "3 Coefficients", as follows:

5. Load the images:

NOTE: The value shown in the figure (25) is the side length of one small square on the printed calibration board. It is the square's side length! Change it according to your own board.

6. Click the Calibrate button:

7. In the error histogram, select the bars with larger errors (the corresponding pictures on the left are highlighted at the same time), delete those image pairs, and the tool will automatically re-calibrate.

8. Once the result looks fine, export the calibration data (the green check mark). When you close the tool, a dialog pops up; choose Yes to save the result as a .mat file. Remember where you saved it, so you can load the file in MATLAB next time. Calibration is now complete.

The saved file contains many calibration parameters, including the camera intrinsics, extrinsics, rotation matrix, translation vector, and so on. You can inspect them yourself; a small example follows.
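
For example, you can load the saved .mat file and look at a few of these parameters. This is only a sketch; it assumes the session was saved as calibrationSession.mat, as in the code later in this post:

load('E:\image\calibrationSession.mat');              % creates the variable calibrationSession
stereoParams = calibrationSession.CameraParameters;   % stereoParameters object
stereoParams.CameraParameters1.IntrinsicMatrix        % intrinsics of the left camera
stereoParams.RotationOfCamera2                        % rotation of camera 2 relative to camera 1
stereoParams.TranslationOfCamera2                     % translation of camera 2 (the baseline)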

2. Image rectification

Image rectification uses the distortion coefficients obtained from calibration to correct the errors introduced when the camera takes a picture, including radial distortion and tangential distortion. Together they form a 1×5 vector, but the MATLAB calibration only shows two 1×2 vectors; after looking it up, I learned that the missing coefficient can simply be set to 0. If you do the binocular vision positioning following the article python双目测距 (Python binocular ranging), this remark will definitely help you.
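
For example, if you need the full 1×5 vector [k1 k2 p1 p2 k3] (e.g. to pass the coefficients to OpenCV), you can assemble it from the two 1×2 vectors MATLAB reports. A minimal sketch, assuming the calibration session was saved as in the code later in this post:

cam1 = calibrationSession.CameraParameters.CameraParameters1;  % left camera parameters
radial     = cam1.RadialDistortion;      % [k1 k2] when "2 Coefficients" is selected
tangential = cam1.TangentialDistortion;  % [p1 p2]
distCoeffs = [radial(1) radial(2) tangential(1) tangential(2) 0]  % [k1 k2 p1 p2 k3], missing k3 set to 0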

3. Detecting the sphere

Here we used the Hough transform to detect the ball. The basic idea of the Hough circle transform is that, for a circle (x − a)² + (y − b)² = r², we place it in a three-dimensional parameter space with axes a, b and r, instead of the two-dimensional (x, y) image space. Because any three non-collinear points in a plane determine a circle, the point A(a, b, r) in the figure corresponds to a circle center. If we searched this way, however, many such triples of points, and hence many circles, could be found in the image, so a "voting" step is needed: if many curves in parameter space intersect at the same point, that point is very likely a circle center:

The code is as follows (found on GitHub):

function [Hough_space,Hough_circle_result,Para] = Hough_circle(BW,Step_r,Step_angle,r_min,r_max,p)  
%---------------------------------------------------------------------------------------------------------------------------  
% input:  
% BW:          binary image  
% Step_r:      step size of the circle radius to detect  
% Step_angle:  angle step, in radians  
% r_min:       minimum circle radius  
% r_max:       maximum circle radius  
% p:           the threshold is p * max(Hough_space); p is a number between 0 and 1  
% a = x-r*cos(angle); b = y-r*sin(angle);  
%---------------------------------------------------------------------------------------------------------------------------  
% output:  
% Hough_space:         parameter space; h(a,b,r) is the number of points on the circle with center (a,b) and radius r  
% Hough_circle_result: binary image of the detected circle(s)  
% Para:                centers and radii of the detected circles  
%---------------------------------------------------------------------------------------------------------------------------  
circleParaXYR=[];  
Para=[];  
% size of the binary image  
[m,n] = size(BW);  
% number of radius and angle steps (loop counts), rounded  
size_r = round((r_max-r_min)/Step_r)+1;  
size_angle = round(2*pi/Step_angle);  
% build the parameter space (accumulator)  
Hough_space = zeros(m,n,size_r);  
% row/column coordinates of the nonzero pixels  
[rows,cols] = find(BW);  
% number of nonzero pixels  
ecount = size(rows);  
% Hough transform:  
% map image space (x,y) to parameter space (a,b,r)  
% a = x-r*cos(angle)  
% b = y-r*sin(angle)  
ecount = ecount(1);
for i=1:ecount
    for r=1:size_r % for each radius step, sweep the circle through the angle steps  
        for k=1:size_angle  
            a = round(rows(i)-(r_min+(r-1)*Step_r)*cos(k*Step_angle));  
            b = round(cols(i)-(r_min+(r-1)*Step_r)*sin(k*Step_angle));  
            if (a>0&&a<=m&&b>0&&b<=n)  
                Hough_space(a,b,r)=Hough_space(a,b,r)+1; % vote for center (a,b) and radius r  
            end  
        end  
    end  
end  
% Search for accumulation points above the threshold. To detect several circles, set the
% threshold a bit lower! By tuning p, the centers and radii of all circles can be found;
% max_para is the maximum value of the accumulator.  
max_para = max(max(max(Hough_space)));  
% find the positions in the accumulator that are >= max_para*p  
index = find(Hough_space>=max_para*p);  
length = size(index); % number of entries above the threshold (note: this shadows the built-in length)  
Hough_circle_result=zeros(m,n);  
% recover the radius and center from each linear index  
length = length(1);
for i=1:ecount  
    for k=1:length  
        par3 = floor(index(k)/(m*n))+1;  
        par2 = floor((index(k)-(par3-1)*(m*n))/m)+1;  
        par1 = index(k)-(par3-1)*(m*n)-(par2-1)*m;  
        if((rows(i)-par1)^2+(cols(i)-par2)^2<(r_min+(par3-1)*Step_r)^2+5&&...  
          (rows(i)-par1)^2+(cols(i)-par2)^2>(r_min+(par3-1)*Step_r)^2-5)  
            Hough_circle_result(rows(i),cols(i)) = 1; % pixels lying on a detected circle  
        end  
    end  
end  
% recover centers and radii from the above-threshold entries    
for k=1:length    
    par3 = floor(index(k)/(m*n))+1; % radius index (integer part)    
    par2 = floor((index(k)-(par3-1)*(m*n))/m)+1;    
    par1 = index(k)-(par3-1)*(m*n)-(par2-1)*m;    
    circleParaXYR = [circleParaXYR;par1,par2,par3];    
    Hough_circle_result(par1,par2)= 1; % many candidate centers and radii cluster around each true center, because the detected circle is not a perfect circle    
end   
% average the points clustered around each center to get an accurate center and radius for every circle  
while size(circleParaXYR,1) >= 1  
    num=1;  
    XYR=[];  
    temp1=circleParaXYR(1,1);  
    temp2=circleParaXYR(1,2);  
    temp3=circleParaXYR(1,3);  
    c1=temp1;  
    c2=temp2;  
    c3=temp3;  
    temp3= r_min+(temp3-1)*Step_r;  
    if size(circleParaXYR,1)>1  
        for k=2:size(circleParaXYR,1)  
            if (circleParaXYR(k,1)-temp1)^2+(circleParaXYR(k,2)-temp2)^2 > temp3^2  
                XYR=[XYR;circleParaXYR(k,1),circleParaXYR(k,2),circleParaXYR(k,3)];  % keep the centers/radii belonging to the remaining circles  
            else  
                c1=c1+circleParaXYR(k,1);  
                c2=c2+circleParaXYR(k,2);  
                c3=c3+circleParaXYR(k,3);  
                num=num+1;  
            end  
        end  
    end  
    c1=round(c1/num);  
    c2=round(c2/num);  
    c3=round(c3/num);  
    c3=r_min+(c3-1)*Step_r;  
    Para=[Para;c1,c2,c3]; % store the center and radius of each circle  
    circleParaXYR=XYR;  
end

4. Computing the distance

The distance is computed with the formula Z = f*T / |v1 - v2|, where f is the focal length, T is the distance between the two cameras (the baseline), and the denominator is the pixel difference between the two images. For example, after rectification, if the center of the ball is at pixel (u, v1) in the first image and at (u, v2) in the second image, the pixel difference is |v1 - v2|.
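
As a quick numeric check of the formula (v1 and v2 are hypothetical pixel coordinates; f and T match the values used in the code below):

f = 3.6;             % focal length, the same value as jiaoju in the code below
T = 12.2;            % distance between the two cameras, the same value as jixian below
v1 = 152; v2 = 148;  % hypothetical coordinates of the ball center in the two rectified images
d = abs(v1 - v2);    % pixel difference (disparity)
Z = f * T / d        % estimated distance of the ball, here 10.98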

Pay attention to this parameter: the rectifyStereoImages() function needs it:

The code is as follows. It reads both images, but the distance is only annotated on one of them; the other image only provides the circle-center coordinates.
Note: if the ball cannot be detected reliably, you need to adjust the minimum and maximum circle radius parameters.

I1 = imread('E:\image\left_0.jpg'); % read the left and right images
I2 = imread('E:\image\right_0.jpg');
load('E:\image\calibrationSession.mat'); % load the saved calibration session (the same file used in the disparity code below)
%%% Note the third argument of the rectification: its name comes from the calibration file you saved, see the figure above %%%%
[J1, J2] = rectifyStereoImages(I1,I2,calibrationSession.CameraParameters);
I1=rgb2gray(J1);
I=rgb2gray(J2);
%------------------------------------------
BW1=edge(I1,'sobel');
BW=edge(I,'sobel'); % edge map of the rectified right image
%----------
% radius step
Step_r = 0.5;  
% angle step, in radians  
Step_angle = 0.1;  
% minimum circle radius  
minr = 5;  
% maximum circle radius  
maxr = 8;  
% the threshold is thresh * max(Hough_space); thresh is a number between 0 and 1  
thresh = 1;  
%----------- this call only provides the circle-center coordinates from one of the images; Hough_circle is the function from the previous section 
[Hough_space,Hough_circle_result,Para] = Hough_circle(BW1,Step_r,Step_angle,minr,maxr,thresh);
% detect the circle in the other image
[Hough_space,Hough_circle_result,Para1] = Hough_circle(BW,Step_r,Step_angle,minr,maxr,thresh);  
% pixel difference (disparity) between the two images
sub = (Para(2)-Para1(2));  % *100/67 if the images were scaled by 67%
% focal length
jiaoju = 3.6;
% distance between the two cameras (baseline)
jixian = 12.2;
depth=(jiaoju*jixian)/abs(sub);
% convert the distance to a string so it can be drawn on the image
depth=num2str(depth);
axis equal  
figure(1);  
imshow(BW,[]),title('Edges (image 1)');  
axis equal  
figure(2);  
imshow(Hough_circle_result,[]),title('Detection result (image 1)');  
axis equal  
figure(3),imshow(I,[]),title('Detected circle in the image (image 1)')  
hold on;  
%-------------------------------------------------------------- 
% mark the detected center and circle in red (Para1 holds the centers/radii found in the displayed image)  
plot(Para1(:,2), Para1(:,1), 'r+');
% draw the distance value next to the circle
text(Para1(:,2)+Para1(:,3), Para1(:,1)-Para1(:,3),depth,'color','red')
for k = 1 : size(Para1, 1)
    t=0:0.01*pi:2*pi;  
    x=cos(t).*Para1(k,3)+Para1(k,2);  
    y=sin(t).*Para1(k,3)+Para1(k,1);  
    plot(x,y,'r-');
end

The resulting image:

With a few casual tweaks to the code you can even have a bit of fun with it:

5. Generating the disparity map

The disparity map came out reasonably well, but the depth map was quite weird, so we commented the depth-map part out.

clear all
clc
 
I1 = imread('E:\image\left_0.jpg'); % read the left and right images
I2 = imread('E:\image\right_0.jpg');
%I1=imcrop(I1,[0,118,320,320]) % crop away the black borders
%I2=imcrop(I2,[0,118,320,320])
figure
imshowpair(I1, I2, 'montage');
title('Original Images');
%---------------------------------------------------------------
load('E:\image\calibrationSession.mat'); % load the .mat file with your saved camera calibration
% rectify the images
[J1, J2] = rectifyStereoImages(I1,I2,calibrationSession.CameraParameters);
% figure
% imshow(J1)
figure
imshowpair(J1, J2, 'montage');
title('Undistorted Images');
%-----------------------------------------
figure; imshow(cat(3, J1(:,:,1), J2(:,:,2:3)), 'InitialMagnification', 100)
%----------------------------------------------------------------------------
%disparityRange = [-6 10]; % uncomment this (and the related lines below) to generate a colored disparity map
disparityMap1 = disparity(rgb2gray(J1),rgb2gray(J2),'DisparityRange',[0,16],'BlockSize',5,'ContrastThreshold',0.3,'UniquenessThreshold',5);
figure 
imshow(disparityMap1) %%% ,disparityRange); uncomment to generate a colored disparity map
% title('Disparity Map');   % uncomment to generate a colored disparity map
% colormap(gca,jet)         % uncomment to generate a colored disparity map
% colorbar                  % uncomment to generate a colored disparity map
%--- depth map commented out
% pointCloud3D = reconstructScene(disparityMap1, calibrationSession.CameraParameters); % depth map (3-D point cloud)
% figure;
% imshow(pointCloud3D);
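
If you do want to look at the depth anyway, the array returned by reconstructScene stores the depth of each pixel in its third channel. The following is only a sketch we did not use in the report, and the display range of 2000 is a guessed value in the calibration units (mm):

points3D = reconstructScene(disparityMap1, calibrationSession.CameraParameters); % M-by-N-by-3 point cloud
Z = points3D(:,:,3);   % the Z channel is the depth of every pixel
figure;
imshow(Z, [0 2000]);   % clip the display range so the depth variation is visible
title('Depth (Z channel)');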

The result:

6. Python code for taking the photos, based on the article python双目测距 (Python binocular ranging)

import cv2
import time
AUTO = True  # take photos automatically, or press the 's' key to take them manually
INTERVAL = 2 # interval between automatic shots, in seconds
cv2.namedWindow("shuangmu")
cv2.moveWindow("shuangmu", 400, 0)
shuangmu_camera = cv2.VideoCapture(1) # open the binocular camera
counter = 0
utc = time.time()
### pattern = (12, 8)  # checkerboard size
folder = "e:/keshe/" # directory where the photos are saved
def shot(pos, frame):
    global counter
    path = folder + pos + "_" + str(counter) + ".jpg"

    cv2.imwrite(path, frame)  # save the image
    print("snapshot saved into: " + path)

while True:
    # ret is a boolean: True if the frame was read correctly; double_frame is the combined frame from the binocular camera
    ret,double_frame = shuangmu_camera.read()
    heigh=len(double_frame)
    width=len(double_frame[0])
    width2=int(width/2)
    left1=double_frame[:heigh,0:width2]
    right1=double_frame[:heigh,width2:width]

    cv2.imshow("shuangmu", double_frame)

    now = time.time()
    if AUTO and now - utc >= INTERVAL:
        shot("left", left1)
        shot("right", right1)
        counter += 1
        utc = now
    key = cv2.waitKey(1)  # wait 1 ms for a key press before moving on to the next frame
    if key == ord("q"):
        break
    elif key == ord("s"):
        shot("left", left1)
        shot("right", right1)
        counter += 1

cv2.destroyWindow("shuangmu") # close the window

Source: www.cnblogs.com/starstrrys/p/11119017.html