Binocular ranging with OpenCV after MATLAB calibration


Preface

Recently a project required camera-based ranging, so I started getting into OpenCV machine vision. After studying camera ranging for several days, I began verifying it, starting with monocular ranging: I set up a Raspberry Pi development board and downloaded some code from the Internet to test. It turned out that monocular ranging needs to know in advance what the measured object is. Since the project needs obstacle avoidance, I switched to binocular ranging. The current progress is that binocular ranging works on a PC, and its accuracy at close range is acceptable. Next, the code needs to be ported to a Raspberry Pi 4B board to further demonstrate the feasibility of the project.


Tip: the main text follows; the examples are for reference.

1. What is photo ranging?

As I understand it, there are two ways to measure distance from images: monocular ranging and binocular ranging. Monocular ranging uses only one camera, but as a prerequisite it must know what the measured object is. Binocular ranging requires a binocular (stereo) camera and recovers distance from the disparity between the two views.
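For intuition, the geometry behind binocular ranging is the disparity-to-depth relation Z = f × B / d, where f is the focal length in pixels, B is the baseline between the two cameras, and d is the disparity of a matched point. A minimal sketch; the numbers are illustrative only, loosely echoing the calibration results later in this post:

#include <iostream>

// Minimal sketch of depth from disparity: Z = f * B / d.
int main()
{
	double f = 337.7;     // focal length in pixels (from calibration)
	double B = 41.6;      // baseline between the two cameras, in mm
	double d = 28.0;      // disparity of a matched pixel pair, in pixels
	double Z = f * B / d; // depth in mm, same unit as the baseline
	std::cout << "depth: " << Z / 10.0 << " cm" << std::endl; // about 50 cm
	return 0;
}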

2. Binocular ranging steps

1. Binocular calibration

The binocular calibration was originally done directly with the calibration functions that come with OpenCV, but I found they are fairly demanding about the input pictures, and the resulting parameters gave inaccurate distances later (possibly due to my own sloppiness during calibration). MATLAB is used for calibration instead; for reference, a sketch of the pure-OpenCV route is shown below.
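A minimal sketch of that pure-OpenCV route (not the method used in the rest of this post), assuming the chessboard corners have already been collected per view with findChessboardCorners/cornerSubPix:

// Sketch: calibrate each camera on its own, then solve for the relative
// pose of the pair. Assumes objPts/imgPtsL/imgPtsR were filled beforehand.
static double calibrateStereoPair(
	const vector<vector<Point3f>>& objPts,  // board corners, one set per view
	const vector<vector<Point2f>>& imgPtsL, // detected left-image corners
	const vector<vector<Point2f>>& imgPtsR, // detected right-image corners
	Size imgSize, Mat& KL, Mat& DL, Mat& KR, Mat& DR, Mat& Rst, Mat& Tst)
{
	vector<Mat> rvecs, tvecs;
	// calibrate each camera individually first...
	calibrateCamera(objPts, imgPtsL, imgSize, KL, DL, rvecs, tvecs);
	calibrateCamera(objPts, imgPtsR, imgSize, KR, DR, rvecs, tvecs);
	// ...then solve for the left-to-right pose with intrinsics held fixed;
	// the return value is the RMS reprojection error.
	Mat E, F;
	return stereoCalibrate(objPts, imgPtsL, imgPtsR, KL, DL, KR, DR,
		imgSize, Rst, Tst, E, F, CALIB_FIX_INTRINSIC);
}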

I bought a binocular camera on Taobao and printed a checkerboard pattern on A4 paper. The photos were captured with the image-acquisition tool provided by the manufacturer: 15 photos each for the left and right eyes. During MATLAB calibration, some pictures were deleted because their errors were a bit large, leaving 12 pairs for the final calibration. I won't go through the calibration process itself; you can learn about it through the following link:
https://blog.csdn.net/u013451663/article/details/89954646

Let's focus on how the parameters obtained from MATLAB correspond to the code.

[MATLAB stereo calibration parameter screenshots]

First, the distortion parameters. MATLAB reports the radial coefficients K1, K2, K3 and the tangential coefficients P1, P2. When assigning them in the code, OpenCV expects the order K1, K2, P1, P2, K3:

Mat distCoeffL = (Mat_<double>(5, 1) << 0.310458259610102, -2.26297309194147, -0.00764246381079933, 0.00634878271949189, 6.11556840747635);

Now for the intrinsic parameters. The intrinsics form a 3x3 matrix; double-clicking the value in MATLAB brings up a table. Note that MATLAB displays this matrix as the transpose of OpenCV's layout, so the table's data must be transposed when filled into the code. OpenCV expects:

fx 0 cx
0 fy cy
0  0  1

The code assignment is as follows:

Mat cameraMatrixL = (Mat_<double>(3, 3) << 337.709436280709, 0.741962718557881, 150.799143051710,
	0, 337.863855604370, 123.314225146904,
	0, 0, 1);

The rotation matrix is also 3x3, so like the intrinsic matrix it is transposed when copied from MATLAB. The translation vector can be used directly. A sketch of this mapping follows the code below.
The assignment code is attached as follows:

Mat rec = (Mat_<double>(3, 3) << 0.999989526750612, 0.00232022184998904, 0.00394499171785794,
	-0.00228582974058393, 0.999959540028791, -0.00870018895234802,
	-0.00396501847211287, 0.00869108025370437, 0.999954370835259); 
	  
Mat T = (Mat_<double>(3, 1) << -41.5775139873461, 0.241065317743041, 0.408815384344912);
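For clarity, a hedged sketch of the mapping, assuming MATLAB's classic IntrinsicMatrix/RotationOfCamera2 convention (which stores these matrices as the transpose of OpenCV's layout; newer MATLAB releases can also report them directly in OpenCV form):

// Sketch only: MATLAB's table is the transpose of OpenCV's layout, so
// transpose the intrinsic and rotation matrices when copying; the
// translation vector is used as-is.
Mat matlabK = (Mat_<double>(3, 3) <<
	337.709436280709, 0, 0,
	0.741962718557881, 337.863855604370, 0,
	150.799143051710, 123.314225146904, 1); // as displayed by MATLAB
Mat K_opencv = matlabK.t(); // equals the cameraMatrixL assigned above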

At this point, the calibration is over.

2. Ranging

The ranging code below is adapted from this blogger's post:
https://guo-pu.blog.csdn.net/article/details/86744936

#include <opencv2/opencv.hpp>  
#include <iostream>  
#include <math.h> 

using namespace std;
using namespace cv;

const int imageWidth = 320;                             //image size  
const int imageHeight = 240;
Vec3f  point3;
float d;
Size imageSize = Size(imageWidth, imageHeight);

Mat rgbImageL, grayImageL;
Mat rgbImageR, grayImageR;
Mat rectifyImageL, rectifyImageR;

Rect validROIL;//after rectification the image is cropped; validROI is the cropped region  
Rect validROIR;

Mat mapLx, mapLy, mapRx, mapRy;     //remap lookup tables  
Mat Rl, Rr, Pl, Pr, Q;              //rectification rotations R, projection matrices P, reprojection matrix Q
Mat xyz;              //3D coordinates

Point origin;         //point where the mouse button was pressed
Rect selection;      //rectangular selection box
bool selectObject = false;    //whether an object is being selected

int blockSize = 0, uniquenessRatio = 0, numDisparities = 0;
Ptr<StereoBM> bm = StereoBM::create(16, 9);

Mat cameraMatrixL = (Mat_<double>(3, 3) << 337.709436280709, 0.741962718557881, 150.799143051710,
	0, 337.863855604370, 123.314225146904,
	0, 0, 1);

Mat distCoeffL = (Mat_<double>(5, 1) << 0.310458259610102, -2.26297309194147, -0.00764246381079933, 0.00634878271949189, 6.11556840747635);

Mat cameraMatrixR = (Mat_<double>(3, 3) << 337.093687218177, 0.587984682766586, 152.848985873868,
	0, 336.971855005274, 117.527331021388,
	0, 0, 1);

Mat distCoeffR = (Mat_<double>(5, 1) << 0.439757633387106, -5.17644381384173, -0.0103643563681475, 0.00184932125612765, 23.4806041578594);

Mat T = (Mat_<double>(3, 1) << -41.5775139873461, 0.241065317743041, 0.408815384344912);//T: translation vector

Mat rec = (Mat_<double>(3, 3) << 0.999989526750612, 0.00232022184998904, 0.00394499171785794,
	-0.00228582974058393, 0.999959540028791, -0.00870018895234802,
	-0.00396501847211287, 0.00869108025370437, 0.999954370835259);                //rec: rotation from MATLAB (converted via Rodrigues below)

Mat R;//R: rotation, filled by Rodrigues below

/***** stereo matching *****/
void stereo_match(int, void*)
{
	bm->setBlockSize(2 * blockSize + 5);     //SAD window size; 5~21 is a good range
	bm->setROI1(validROIL);
	bm->setROI2(validROIR);
	bm->setPreFilterCap(31);
	bm->setMinDisparity(0);  //minimum disparity, default 0, may be negative, int
	bm->setNumDisparities(numDisparities * 16 + 16);//disparity range, i.e. max disparity minus min disparity; must be a multiple of 16, int
	bm->setTextureThreshold(10);
	bm->setUniquenessRatio(uniquenessRatio);//uniquenessRatio mainly guards against false matches
	bm->setSpeckleWindowSize(100);
	bm->setSpeckleRange(32);
	bm->setDisp12MaxDiff(-1);
	Mat disp, disp8;
	bm->compute(rectifyImageL, rectifyImageR, disp);//input images must be grayscale
	disp.convertTo(disp8, CV_8U, 255 / ((numDisparities * 16 + 16) * 16.));//the computed disparity is CV_16S (fixed point, scaled by 16)
	reprojectImageTo3D(disp, xyz, Q, true); //because the disparity is scaled by 16, the X/W, Y/W, Z/W from reprojectImageTo3D must be multiplied by 16 to get correct 3D coordinates
	xyz = xyz * 16;
	imshow("disparity", disp8);
}

/***** mouse callback *****/
static void onMouse(int event, int x, int y, int, void*)
{
	if (selectObject)
	{
		selection.x = MIN(x, origin.x);
		selection.y = MIN(y, origin.y);
		selection.width = std::abs(x - origin.x);
		selection.height = std::abs(y - origin.y);
	}

	switch (event)
	{
	case EVENT_LBUTTONDOWN:   //left mouse button pressed
		origin = Point(x, y);
		selection = Rect(x, y, 0, 0);
		selectObject = true;
		//cout << origin << "in world coordinate is: " << xyz.at<Vec3f>(origin) << endl;
		point3 = xyz.at<Vec3f>(origin);
		cout << "world coordinates:" << endl;
		cout << "x: " << point3[0] << "  y: " << point3[1] << "  z: " << point3[2] << endl;
		d = point3[0] * point3[0] + point3[1] * point3[1] + point3[2] * point3[2];
		d = sqrt(d);   //mm
		d = d / 10.0;   //cm
		cout << "distance: " << d << "cm" << endl;
		break;
	case EVENT_LBUTTONUP:    //left mouse button released
		selectObject = false;
		if (selection.width > 0 && selection.height > 0)
			break;
	}
}


/***** main *****/
int main()
{
	/*
	stereo rectification
	*/
	Rodrigues(rec, R); //Rodrigues transform
	stereoRectify(cameraMatrixL, distCoeffL, cameraMatrixR, distCoeffR, imageSize, R, T, Rl, Rr, Pl, Pr, Q, CALIB_ZERO_DISPARITY,
		0, imageSize, &validROIL, &validROIR);
	initUndistortRectifyMap(cameraMatrixL, distCoeffL, Rl, Pl, imageSize, CV_32FC1, mapLx, mapLy); //note: Pl for the left map (the original code mistakenly passed Pr here)
	initUndistortRectifyMap(cameraMatrixR, distCoeffR, Rr, Pr, imageSize, CV_32FC1, mapRx, mapRy);

	/*
	read the images
	*/
	rgbImageL = imread("C:\\Users\\DELL\\picture\\left_33.png", IMREAD_COLOR);
	cvtColor(rgbImageL, grayImageL, COLOR_BGR2GRAY);
	rgbImageR = imread("C:\\Users\\DELL\\picture\\right_33.png", IMREAD_COLOR);
	cvtColor(rgbImageR, grayImageR, COLOR_BGR2GRAY);

	imshow("ImageL Before Rectify", grayImageL);
	imshow("ImageR Before Rectify", grayImageR);

	/*
	after remap, the left and right images are coplanar and row-aligned
	*/
	remap(grayImageL, rectifyImageL, mapLx, mapLy, INTER_LINEAR);
	remap(grayImageR, rectifyImageR, mapRx, mapRy, INTER_LINEAR);

	/*
	display the rectification results
	*/
	Mat rgbRectifyImageL, rgbRectifyImageR;
	cvtColor(rectifyImageL, rgbRectifyImageL, COLOR_GRAY2BGR);  //gray to 3-channel BGR for drawing
	cvtColor(rectifyImageR, rgbRectifyImageR, COLOR_GRAY2BGR);

	//show each image separately
	imshow("ImageL After Rectify", rgbRectifyImageL);
	imshow("ImageR After Rectify", rgbRectifyImageR);

	//show both on one canvas
	Mat canvas;
	double sf;
	int w, h;
	sf = 600. / MAX(imageSize.width, imageSize.height);
	w = cvRound(imageSize.width * sf);
	h = cvRound(imageSize.height * sf);
	canvas.create(h, w * 2, CV_8UC3);   //note the channel count

	//draw the left image onto the canvas
	Mat canvasPart = canvas(Rect(w * 0, 0, w, h));                                //take part of the canvas  
	resize(rgbRectifyImageL, canvasPart, canvasPart.size(), 0, 0, INTER_AREA);     //scale the image to the size of canvasPart  
	Rect vroiL(cvRound(validROIL.x * sf), cvRound(validROIL.y * sf),                //the cropped valid region    
		cvRound(validROIL.width * sf), cvRound(validROIL.height * sf));
	cout << "Painted ImageL" << endl;

	//draw the right image onto the canvas
	canvasPart = canvas(Rect(w, 0, w, h));                                      //the other part of the canvas  
	resize(rgbRectifyImageR, canvasPart, canvasPart.size(), 0, 0, INTER_LINEAR);
	Rect vroiR(cvRound(validROIR.x * sf), cvRound(validROIR.y * sf),
		cvRound(validROIR.width * sf), cvRound(validROIR.height * sf));
	cout << "Painted ImageR" << endl;

	//draw horizontal lines to check row alignment
	for (int i = 0; i < canvas.rows; i += 16)
		line(canvas, Point(0, i), Point(canvas.cols, i), Scalar(0, 255, 0), 1, 8);
	imshow("rectified", canvas);

	/*
	stereo matching
	*/
	namedWindow("disparity", WINDOW_AUTOSIZE);
	// trackbar for the SAD window size
	createTrackbar("BlockSize:\n", "disparity", &blockSize, 8, stereo_match);
	// trackbar for the uniqueness ratio (percent)
	createTrackbar("UniquenessRatio:\n", "disparity", &uniquenessRatio, 50, stereo_match);
	// trackbar for the number of disparities
	createTrackbar("NumDisparities:\n", "disparity", &numDisparities, 16, stereo_match);
	//mouse callback: setMouseCallback(window name, callback, user data, usually 0)
	setMouseCallback("disparity", onMouse, 0);
	stereo_match(0, 0);

	waitKey(0);
	return 0;
}
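To try it, substitute your own calibration values and image paths. On Linux, a typical build line is something like the following (the file name is hypothetical; the pkg-config package is opencv4 on OpenCV 4 and opencv on older versions):

g++ stereo_ranging.cpp -o stereo_ranging $(pkg-config --cflags --libs opencv4)

After it runs, tune the three trackbars until the disparity map looks clean, then click a point in the "disparity" window to print its 3D coordinates and distance to the console.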

Test Results

The measured results are reasonable, and accuracy at close range is quite good. The test result chart is attached below. I'll record this much for now, and port it to the Raspberry Pi next.

[Test result screenshot]


Reprinted from: blog.csdn.net/weixin_30072103/article/details/110549798