[Vision: Stereo Vision] Stereo matching algorithms StereoBM/StereoSGBM/StereoVar (OpenCV source analysis) + SAD block matching + GC algorithm + HH algorithm


OpenCV 2 source code:

// OpenCVTest.cpp : Defines the entry point for the console application.
//

#include "stdafx.h"
#include <stdio.h>

/*
*  stereo_match.cpp
*  calibration
*
*  Created by Victor  Eruhimov on 1/18/10.
*  Copyright 2010 Argus Corp. All rights reserved.
*
*/

#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/contrib/contrib.hpp"



using namespace cv;

static void print_help()
{
	printf("\nDemo stereo matching converting L and R images into disparity and point clouds\n");
	printf("\nUsage: stereo_match <left_image> <right_image> [--algorithm=bm|sgbm|hh|var] [--blocksize=<block_size>]\n"
		"[--max-disparity=<max_disparity>] [--scale=scale_factor>] [-i <intrinsic_filename>] [-e <extrinsic_filename>]\n"
		"[--no-display] [-o <disparity_image>] [-p <point_cloud_file>]\n");
}

static void saveXYZ(const char* filename, const Mat& mat)
{
	const double max_z = 1.0e4;
	FILE* fp = fopen(filename, "wt");
	for (int y = 0; y < mat.rows; y++)
	{
		for (int x = 0; x < mat.cols; x++)
		{
			Vec3f point = mat.at<Vec3f>(y, x);
			if (fabs(point[2] - max_z) < FLT_EPSILON || fabs(point[2]) > max_z) continue;
			fprintf(fp, "%f %f %f\n", point[0], point[1], point[2]);
		}
	}
	fclose(fp);
}

int _tmain(int argc, _TCHAR* argv[])
{
	const char* algorithm_opt = "--algorithm=";
	const char* maxdisp_opt = "--max-disparity=";
	const char* blocksize_opt = "--blocksize=";
	const char* nodisplay_opt = "--no-display";
	const char* scale_opt = "--scale=";

	//if (argc < 3)
	//{
	//	print_help();
	//	return 0;
	//}
	const char* img1_filename = 0;
	const char* img2_filename = 0;
	const char* intrinsic_filename = 0;
	const char* extrinsic_filename = 0;
	const char* disparity_filename = 0;
	const char* point_cloud_filename = 0;

	enum { STEREO_BM = 0, STEREO_SGBM = 1, STEREO_HH = 2, STEREO_VAR = 3 };
	int alg = STEREO_SGBM;
	int SADWindowSize = 0, numberOfDisparities = 0;
	bool no_display = false;
	float scale = 1.f;

	StereoBM bm;
	StereoSGBM sgbm;
	StereoVar var;

	//------------------------------

	/*img1_filename = "tsukuba_l.png";
	img2_filename = "tsukuba_r.png";*/

	img1_filename = "01.jpg";
	img2_filename = "02.jpg";

	int color_mode = alg == STEREO_BM ? 0 : -1;
	Mat img1 = imread(img1_filename, color_mode);
	Mat img2 = imread(img2_filename, color_mode);


	Size img_size = img1.size();

	Rect roi1, roi2;
	Mat Q;

	numberOfDisparities = numberOfDisparities > 0 ? numberOfDisparities : ((img_size.width / 8) + 15) & -16;

	bm.state->roi1 = roi1;
	bm.state->roi2 = roi2;
	bm.state->preFilterCap = 31;
	bm.state->SADWindowSize = SADWindowSize > 0 ? SADWindowSize : 9;
	bm.state->minDisparity = 0;
	bm.state->numberOfDisparities = numberOfDisparities;
	bm.state->textureThreshold = 10;
	bm.state->uniquenessRatio = 15;
	bm.state->speckleWindowSize = 100;
	bm.state->speckleRange = 32;
	bm.state->disp12MaxDiff = 1;

	sgbm.preFilterCap = 63;
	sgbm.SADWindowSize = SADWindowSize > 0 ? SADWindowSize : 3;

	int cn = img1.channels();

	sgbm.P1 = 8 * cn*sgbm.SADWindowSize*sgbm.SADWindowSize;
	sgbm.P2 = 32 * cn*sgbm.SADWindowSize*sgbm.SADWindowSize;
	sgbm.minDisparity = 0;
	sgbm.numberOfDisparities = numberOfDisparities;
	sgbm.uniquenessRatio = 10;
	sgbm.speckleWindowSize = bm.state->speckleWindowSize;
	sgbm.speckleRange = bm.state->speckleRange;
	sgbm.disp12MaxDiff = 1;
	sgbm.fullDP = alg == STEREO_HH;

	var.levels = 3;                                 // ignored with USE_AUTO_PARAMS
	var.pyrScale = 0.5;                             // ignored with USE_AUTO_PARAMS
	var.nIt = 25;
	var.minDisp = -numberOfDisparities;
	var.maxDisp = 0;
	var.poly_n = 3;
	var.poly_sigma = 0.0;
	var.fi = 15.0f;
	var.lambda = 0.03f;
	var.penalization = var.PENALIZATION_TICHONOV;   // ignored with USE_AUTO_PARAMS
	var.cycle = var.CYCLE_V;                        // ignored with USE_AUTO_PARAMS
	var.flags = var.USE_SMART_ID | var.USE_AUTO_PARAMS | var.USE_INITIAL_DISPARITY | var.USE_MEDIAN_FILTERING;

	Mat disp, disp8;
	//Mat img1p, img2p, dispp;
	//copyMakeBorder(img1, img1p, 0, 0, numberOfDisparities, 0, IPL_BORDER_REPLICATE);
	//copyMakeBorder(img2, img2p, 0, 0, numberOfDisparities, 0, IPL_BORDER_REPLICATE);

	int64 t = getTickCount();
	if (alg == STEREO_BM)
		bm(img1, img2, disp);
	else if (alg == STEREO_VAR) {
		var(img1, img2, disp);
	}
	else if (alg == STEREO_SGBM || alg == STEREO_HH)
		sgbm(img1, img2, disp);//------

	t = getTickCount() - t;
	printf("Time elapsed: %fms\n", t * 1000 / getTickFrequency());

	//disp = dispp.colRange(numberOfDisparities, img1p.cols);
	if (alg != STEREO_VAR)
		disp.convertTo(disp8, CV_8U, 255 / (numberOfDisparities*16.));
	else
		disp.convertTo(disp8, CV_8U);
	if (!no_display)
	{
		namedWindow("left", 1);
		imshow("left", img1);

		namedWindow("right", 1);
		imshow("right", img2);

		namedWindow("disparity", 0);
		imshow("disparity", disp8);

		imwrite("result.bmp", disp8);
		printf("press any key to continue...");
		fflush(stdout);
		waitKey();
		printf("\n");
	}


	return 0;
}


The three matcher classes used in the source:

    StereoBM bm;
    StereoSGBM sgbm;
    StereoVar var;

enum { STEREO_BM = 0, STEREO_SGBM = 1, STEREO_HH = 2, STEREO_VAR = 3 };
//int alg = STEREO_SGBM;//sxp-2018
int alg = STEREO_VAR;//sxp-2018
int SADWindowSize = 0, numberOfDisparities = 0;
bool no_display = false;
float scale = 1.f;

StereoBM bm;
StereoSGBM sgbm;
StereoVar var;

(Note: the author is not yet familiar with STEREO_HH here. The Var algorithm is introduced later; GC is covered first.)

BM algorithm

SGBM algorithm, based on the paper: Stereo Processing by Semiglobal Matching and Mutual Information

GC algorithm, reference paper: Realistic CG Stereo Image Dataset with Ground Truth Disparity Maps

A summary of the code for the three stereo matching algorithms covered by OpenCV, and their respective strengths and weaknesses:

First, let's look at the BM algorithm.

The code for this algorithm:

 

CvStereoBMState *BMState = cvCreateStereoBMState();
int SADWindowSize = 15;
BMState->SADWindowSize = SADWindowSize > 0 ? SADWindowSize : 9;
BMState->minDisparity = 0;
BMState->numberOfDisparities = 32;
BMState->textureThreshold = 10;
BMState->uniquenessRatio = 15;
BMState->speckleWindowSize = 100;
BMState->speckleRange = 32;
BMState->disp12MaxDiff = 1;
cvFindStereoCorrespondenceBM( left, right, left_disp_, BMState );
cvNormalize( left_disp_, left_vdisp, 0, 256, CV_MINMAX );

 

Here, minDisparity is the first parameter controlling the matching search: it specifies where the search starts. numberOfDisparities is the maximum number of disparities to search, and uniquenessRatio sets the margin (in percent) by which the best match must beat the next-best candidate before it is accepted. These three parameters are the most important and can be tuned experimentally.

This method is the fastest: matching a 320×240 grayscale pair takes about 31 ms. The resulting disparity map is shown below.

The second method is SGBM, a newer algorithm in OpenCV:

 

cv::StereoSGBM sgbm;
sgbm.preFilterCap = 63;
int SADWindowSize = 11;
int cn = 1;
sgbm.SADWindowSize = SADWindowSize > 0 ? SADWindowSize : 3;
sgbm.P1 = 4*cn*sgbm.SADWindowSize*sgbm.SADWindowSize;
sgbm.P2 = 32*cn*sgbm.SADWindowSize*sgbm.SADWindowSize;
sgbm.minDisparity = 0;
sgbm.numberOfDisparities = 32;
sgbm.uniquenessRatio = 10;
sgbm.speckleWindowSize = 100;
sgbm.speckleRange = 32;
sgbm.disp12MaxDiff = 1;

sgbm(left , right , left_disp_);
sgbm(right, left  , right_disp_);
 

 

The parameters are set as in the BM method. It is fairly fast: a 320×240 grayscale pair matches in about 78 ms. The disparity result is shown in the figure below.

The third is the GC method:

 

CvStereoGCState* state = cvCreateStereoGCState( 16, 2 );
left_disp_  = cvCreateMat( left->height,  left->width,  CV_32F );
right_disp_ = cvCreateMat( right->height, right->width, CV_32F );
cvFindStereoCorrespondenceGC( left, right, left_disp_, right_disp_, state, 0 );
cvReleaseStereoGCState( &state );
 

 

This method is extremely slow, but the results are excellent.

For the theory behind each method, see the referenced papers.




Analyzing each stereo matching algorithm separately

StereoBM/StereoSGBM/StereoVar, the SAD block matching algorithm, and the GC algorithm: OpenCV implementations



The simplest SAD block matching algorithm

//The simplest SAD block matching algorithm
//Stereo Match By SAD
#include <opencv2/opencv.hpp>
#include <vector>
#include <algorithm>
#include <iostream>  
#include <windows.h>  
#include <string>  

using namespace std;
using namespace cv;

DWORD t1;  
DWORD t2;  

void timebegin()  
{  
	t1 = GetTickCount();  
}  

void timeend(string str)  
{  
	t2 = GetTickCount();  
	cout << str << " is "<< (t2 - t1)/1000 << "s" << endl;  
}  


float sadvalue(const Mat &src1, const Mat &src2)
{
	Mat matdiff = cv::abs(src1 - src2);
	return (float)cv::sum(matdiff)[0];  // sum of absolute differences over the window
}

int GetMinSadIndex(std::vector<float> &sad)
{
	float minsad = sad[0];
	int index = 0;
	int len = (int)sad.size();
	for (int i = 1; i < len; ++i)
	{
		if (sad[i] < minsad)
		{
			minsad = sad[i];
			index = i;
		}
	}
	return index;  // index of the minimum SAD, i.e. the winning disparity
}

void MatDataNormal(const Mat &src, Mat &dst)
{
	normalize(src, dst, 255, 0, NORM_MINMAX );
	dst.convertTo(dst, CV_8UC1);
}


void GetPointDepthRight(Mat &disparity, Mat &leftimg, Mat &rightimg, 
	const int MaxDisparity, const  int winsize)
{
	int row = leftimg.rows;
	int col = leftimg.cols;
	if (leftimg.channels() == 3 && rightimg.channels() == 3)
	{
		cvtColor(leftimg, leftimg, CV_BGR2GRAY);
		cvtColor(rightimg, rightimg, CV_BGR2GRAY);
	}

	//Mat disparity = Mat ::zeros(row,col, CV_32S);
	int w = winsize;
	int rowrange = row - w;
	int colrange = col - w - MaxDisparity;

	for (int i = w; i < rowrange; ++i)
	{
		int *ptr = disparity.ptr<int>(i);
		for (int j = w; j < colrange; ++j)
		{
			//Rect rightrect;
			Mat rightwin = rightimg(Range(i - w,i + w + 1),Range(j - w,j + w + 1)); 
			std::vector<float> sad(MaxDisparity);
			for (int d = j; d < j + MaxDisparity; ++d)
			{
				//Rect leftrect;
				Mat leftwin = leftimg(Range(i - w,i + w + 1),Range(d - w,d + w + 1));
				sad[d - j] = sadvalue(leftwin, rightwin);
			}
			*(ptr + j) = GetMinSadIndex(sad);
		}
	}
}

void GetPointDepthLeft(Mat &disparity, Mat &leftimg, Mat &rightimg, 
	const int MaxDisparity, const  int winsize)
{
	int row = leftimg.rows;
	int col = leftimg.cols;
	if (leftimg.channels() == 3 && rightimg.channels() == 3)
	{
		cvtColor(leftimg, leftimg, CV_BGR2GRAY);
		cvtColor(rightimg, rightimg, CV_BGR2GRAY);
	}

	//Mat disparity = Mat ::zeros(row,col, CV_32S);
	int w = winsize;
	int rowrange = row - w;
	int colrange = col - w;

	for (int i = w; i < rowrange; ++i)
	{
		int *ptr = disparity.ptr<int>(i);
		for (int j = MaxDisparity + w; j < colrange; ++j)
		{
			//Rect leftrect;
			Mat leftwin = leftimg(Range(i - w,i + w + 1),Range(j - w,j + w + 1)); 
			std::vector<float> sad(MaxDisparity);
			for (int d = j; d >  j -  MaxDisparity; --d)
			{
				//Rect rightrect;
				Mat rightwin = rightimg(Range(i - w,i + w + 1),Range(d - w,d + w + 1));
				sad[j - d] = sadvalue(leftwin, rightwin);
			}
			*(ptr + j) = GetMinSadIndex(sad);
		}
	}
}

// Left-Right Consistency (LRC) check
void CrossCheckDiaparity(const Mat &leftdisp, const Mat &rightdisp, Mat &lastdisp, 
	const int MaxDisparity, const int winsize)
{
	int row = leftdisp.rows;
	int col = rightdisp.cols;
	int w = winsize;
	int diffthreshold = 2;
	for (int i = w; i < row - w; ++i)
	{
		const int *ptrleft = leftdisp.ptr<int>(i);
		const int *ptrright = rightdisp.ptr<int>(i);
		int *ptrdisp = lastdisp.ptr<int>(i);
		for (int j = MaxDisparity + w; j < col - MaxDisparity - w; ++j)
		{
			int leftvalue = *(ptrleft + j);
			int rightvalue = *(ptrright + j - leftvalue );
			int diff = abs(leftvalue - rightvalue);
			if (diff > diffthreshold)
			{
				*(ptrdisp + j) = 0;
			}else
			{
				*(ptrdisp + j) = leftvalue;
			}
		}
	}

}


int main()
{
	Mat leftimg = imread("left.png",0);   
	Mat rightimg = imread("right.png",0); 

	if (leftimg.channels() == 3 && rightimg.channels() == 3)
	{
		cvtColor(leftimg, leftimg, CV_BGR2GRAY);
		cvtColor(rightimg, rightimg, CV_BGR2GRAY);
	}

	float scale = 1;
	int row = leftimg.rows * scale;
	int col = leftimg.cols * scale;
	resize(leftimg, leftimg, Size( col, row));
	resize(rightimg,rightimg, Size(col, row));
	Mat depthleft = Mat ::zeros(row,col, CV_32S);
	Mat depthright = Mat ::zeros(row,col, CV_32S);
	Mat lastdisp = Mat ::zeros(row,col, CV_32S);
	int MaxDisparity = 60 * scale;
	int winsize = 31*scale;

	timebegin();
	GetPointDepthLeft(depthleft, leftimg, rightimg, MaxDisparity,  winsize);
	GetPointDepthRight(depthright, leftimg, rightimg, MaxDisparity,  winsize);
	CrossCheckDiaparity(depthleft,depthright, lastdisp, MaxDisparity, winsize);
	timeend("time ");

	MatDataNormal(depthleft,depthleft);
	MatDataNormal(depthright, depthright);
	MatDataNormal(lastdisp, lastdisp);
	namedWindow("left", 0);
	namedWindow("right", 0);
	namedWindow("depthleft", 0);
	namedWindow("depthright", 0);
	namedWindow("lastdisp",0);
	imshow("left", leftimg);
	imshow("right", rightimg);
	imshow("depthleft", depthleft);
	imshow("depthright", depthright);
	imshow("lastdisp",lastdisp);

	string strsave = "result_";
	imwrite(strsave +"depthleft.jpg", depthleft);
	imwrite(strsave +"depthright.jpg", depthright);
	imwrite(strsave +"lastdisp.jpg",lastdisp);
	waitKey(0);
	return 0;
}



GC algorithm: best quality, slowest

#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/legacy/legacy.hpp>  // cvCreateStereoGCState / cvFindStereoCorrespondenceGC live here in OpenCV 2.4
using namespace std;
using namespace cv;

// GC algorithm -- the author notes header problems here: the GC API requires the legacy module.
// GC: best quality, slowest.
int main()
{

	//IplImage * img1 = cvLoadImage("left.png",0);
	//IplImage * img2 = cvLoadImage("right.png",0);
	//IplImage * img1 = cvLoadImage("tsukuba_l.png",0);
	//IplImage * img2 = cvLoadImage("tsukuba_r.png",0);
	IplImage * img1 = cvLoadImage("left.png",0);
	IplImage * img2 = cvLoadImage("right.png",0);
	CvStereoGCState* GCState=cvCreateStereoGCState(64,3);
	assert(GCState);
	cout<<"start matching using GC"<<endl;
	CvMat* gcdispleft=cvCreateMat(img1->height,img1->width,CV_16S);
	CvMat* gcdispright=cvCreateMat(img2->height,img2->width,CV_16S);
	CvMat* gcvdisp=cvCreateMat(img1->height,img1->width,CV_8U);
	int64 t=getTickCount();
	cvFindStereoCorrespondenceGC(img1,img2,gcdispleft,gcdispright,GCState);
	t=getTickCount()-t;
	cout<<"Time elapsed:"<<t*1000/getTickFrequency()<<endl;
	//cvNormalize(gcdispleft,gcvdisp,0,255,CV_MINMAX);
	//cvSaveImage("GC_left_disparity.png",gcvdisp);
	cvNormalize(gcdispright,gcvdisp,0,255,CV_MINMAX);
	cvSaveImage("GC_right_disparity.png",gcvdisp);


	cvNamedWindow("GC_disparity",0);
	cvShowImage("GC_disparity",gcvdisp);
	cvWaitKey(0);
	cvReleaseMat(&gcdispleft);
	cvReleaseMat(&gcdispright);
	cvReleaseMat(&gcvdisp);
	return 0;
}



BM algorithm: very fast, average quality

//BM algorithm: very fast, average quality
#include <highgui.h>
#include <cv.h>
#include <cxcore.h>
#include <iostream>
using namespace std;
using namespace cv;
int main()
{

	//IplImage * img1 = cvLoadImage("left.png",0);
	//IplImage * img2 = cvLoadImage("right.png",0);
	//IplImage * img1 = cvLoadImage("tsukuba_l.png",0);
	//IplImage * img2 = cvLoadImage("tsukuba_r.png",0);
	IplImage * img1 = cvLoadImage("left.png",0);
	IplImage * img2 = cvLoadImage("right.png",0);
	CvStereoBMState* BMState=cvCreateStereoBMState();
	assert(BMState);
	BMState->preFilterSize=9;
	BMState->preFilterCap=31;
	BMState->SADWindowSize=15;
	BMState->minDisparity=0;
	BMState->numberOfDisparities=64;
	BMState->textureThreshold=10;
	BMState->uniquenessRatio=15;
	BMState->speckleWindowSize=100;
	BMState->speckleRange=32;
	BMState->disp12MaxDiff=1;

	CvMat* disp=cvCreateMat(img1->height,img1->width,CV_16S);
	CvMat* vdisp=cvCreateMat(img1->height,img1->width,CV_8U);
	int64 t=getTickCount();
	cvFindStereoCorrespondenceBM(img1,img2,disp,BMState);
	t=getTickCount()-t;
	cout<<"Time elapsed:"<<t*1000/getTickFrequency()<<endl;
	cvSave("disp.xml",disp);
	cvNormalize(disp,vdisp,0,255,CV_MINMAX);
	namedWindow("left", 1);
	cvShowImage("left", img1);
	namedWindow("right", 1);
	cvShowImage("right", img2);
	//cvNamedWindow("BM_disparity",0);
	namedWindow("BM_disparity", 1);
	cvShowImage("BM_disparity",vdisp);
	cvWaitKey(0);
	//cvSaveImage("cones\\BM_disparity.png",vdisp);
	cvReleaseMat(&disp);
	cvReleaseMat(&vdisp);
	cvDestroyWindow("BM_disparity");
	return 0;
}



SGBM is a semi-global matching algorithm: its stereo matching quality is clearly better than that of local matching algorithms, but its complexity is also far higher.

#include <highgui.h>
#include <cv.h>
#include <cxcore.h>
#include <iostream>
using namespace std;
using namespace cv;

//StereoSGBM method
//SGBM is a semi-global matching algorithm: its stereo matching quality is clearly better than local methods, but its complexity is also far higher. It mainly follows the paper "Stereo Processing by Semiglobal Matching and Mutual Information".
//
//	Note that OpenCV's SGBM implementation does not use the mutual-information matching cost of the original paper; it uses a block-matching cost instead.
//Reference: http://www.opencv.org.cn/forum.php?mod=viewthread&tid=23854


int main()
{

	IplImage * img1 = cvLoadImage("left.png",0);
	IplImage * img2 = cvLoadImage("right.png",0);
	//IplImage * img1 = cvLoadImage("tsukuba_l.png",0);
	//IplImage * img2 = cvLoadImage("tsukuba_r.png",0);
	cv::StereoSGBM sgbm;
	int SADWindowSize = 9;
	sgbm.preFilterCap = 63;
	sgbm.SADWindowSize = SADWindowSize > 0 ? SADWindowSize : 3;
	int cn = img1->nChannels;
	int numberOfDisparities=64;
	sgbm.P1 = 8*cn*sgbm.SADWindowSize*sgbm.SADWindowSize;
	sgbm.P2 = 32*cn*sgbm.SADWindowSize*sgbm.SADWindowSize;
	sgbm.minDisparity = 0;
	sgbm.numberOfDisparities = numberOfDisparities;
	sgbm.uniquenessRatio = 10;
	sgbm.speckleWindowSize = 100;
	sgbm.speckleRange = 32;
	sgbm.disp12MaxDiff = 1;
	Mat disp, disp8;
	int64 t = getTickCount();
	sgbm((Mat)img1, (Mat)img2, disp);
	t = getTickCount() - t;
	cout<<"Time elapsed:"<<t*1000/getTickFrequency()<<endl;
	disp.convertTo(disp8, CV_8U, 255/(numberOfDisparities*16.));

	namedWindow("left", 1);
	cvShowImage("left", img1);
	namedWindow("right", 1);
	cvShowImage("right", img2);
	namedWindow("disparity", 1);
	imshow("disparity", disp8);
	waitKey();
	imwrite("sgbm_disparity.png", disp8);   
	cvDestroyAllWindows();
	return 0;
}

Var algorithm

The Var-related settings and the display code, excerpted from the sample at the top of this post:

var.levels = 3;                                 // ignored with USE_AUTO_PARAMS
	var.pyrScale = 0.5;                             // ignored with USE_AUTO_PARAMS
	var.nIt = 25;
	var.minDisp = -numberOfDisparities;
	var.maxDisp = 0;
	var.poly_n = 3;
	var.poly_sigma = 0.0;
	var.fi = 15.0f;
	var.lambda = 0.03f;
	var.penalization = var.PENALIZATION_TICHONOV;   // ignored with USE_AUTO_PARAMS
	var.cycle = var.CYCLE_V;                        // ignored with USE_AUTO_PARAMS
	var.flags = var.USE_SMART_ID | var.USE_AUTO_PARAMS | var.USE_INITIAL_DISPARITY | var.USE_MEDIAN_FILTERING;
//disp = dispp.colRange(numberOfDisparities, img1p.cols);
	if (alg != STEREO_VAR)
		disp.convertTo(disp8, CV_8U, 255 / (numberOfDisparities*16.));
	else
		disp.convertTo(disp8, CV_8U);
	if (!no_display)
	{
		namedWindow("left", 1);
		imshow("left", img1);

		namedWindow("right", 1);
		imshow("right", img2);

		namedWindow("disparity", 0);
		imshow("disparity", disp8);

		imwrite("result.bmp", disp8);
		printf("press any key to continue...");
		fflush(stdout);
		waitKey();
		printf("\n");
	}

Stereo Correspondence: the StereoVar class

class StereoVar
{
    StereoVar();
    StereoVar(    int levels, double pyrScale,
                                    int nIt, int minDisp, int maxDisp,
                                    int poly_n, double poly_sigma, float fi,
                                    float lambda, int penalization, int cycle,
                                    int flags);
    virtual ~StereoVar();

    virtual void operator()(InputArray left, InputArray right, OutputArray disp);

    int        levels;
    double    pyrScale;
    int        nIt;
    int        minDisp;
    int        maxDisp;
    int        poly_n;
    double    poly_sigma;
    float    fi;
    float    lambda;
    int        penalization;
    int        cycle;
    int        flags;

    ...
};

The class implements the modified S. G. Kosov algorithm [Publication] that differs from the original one as follows:

  • The automatic initialization of method’s parameters is added.
  • The method of Smart Iteration Distribution (SID) is implemented.
  • The support of Multi-Level Adaptation Technique (MLAT) is not included.
  • The method of dynamic adaptation of method’s parameters is not included.


StereoVar::StereoVar

C++: StereoVar::StereoVar()

C++: StereoVar::StereoVar(int levels, double pyrScale, int nIt, int minDisp, int maxDisp, int poly_n, double poly_sigma, float fi, float lambda, int penalization, int cycle, int flags)

Parameters:

  • levels – The number of pyramid layers, including the initial image. levels = 1 means that no extra layers are created and only the original images are used. This parameter is ignored if the flag USE_AUTO_PARAMS is set.

  • pyrScale – Specifies the image scale (< 1) used to build the pyramid for each image. pyrScale = 0.5 means the classical pyramid, where each next layer is twice smaller than the previous one. (This parameter is ignored if the flag USE_AUTO_PARAMS is set.)

  • nIt – The number of iterations the algorithm does at each pyramid level. (If the flag USE_SMART_ID is set, the iterations are redistributed in such a way that more of them are done on the coarser levels.)

  • minDisp – The minimum possible disparity value. It could be negative if the left and right input images change places.

  • maxDisp – The maximum possible disparity value.

  • poly_n – Size of the pixel neighborhood used to find the polynomial expansion at each pixel. Larger values mean the image will be approximated with smoother surfaces, yielding a more robust algorithm and a more blurred motion field. Typically, poly_n = 3, 5 or 7.

  • poly_sigma – Standard deviation of the Gaussian used to smooth the derivatives that serve as a basis for the polynomial expansion. For poly_n = 5 you can set poly_sigma = 1.1; for poly_n = 7 a good value is poly_sigma = 1.5.

  • fi – The smoothness parameter, that is, the weight coefficient of the smoothness term.

  • lambda – The threshold parameter for edge-preserving smoothness. (This parameter is ignored if PENALIZATION_CHARBONNIER or PENALIZATION_PERONA_MALIK is used.)

  • penalization – Possible values: PENALIZATION_TICHONOV for linear smoothness; PENALIZATION_CHARBONNIER for non-linear edge-preserving smoothness; PENALIZATION_PERONA_MALIK for non-linear edge-enhancing smoothness. (This parameter is ignored if the flag USE_AUTO_PARAMS is set.)

  • cycle – Type of the multigrid cycle. Possible values: CYCLE_O for null-cycles and CYCLE_V for v-cycles. (This parameter is ignored if the flag USE_AUTO_PARAMS is set.)

  • flags – Operation flags; can be a combination of the following:

USE_INITIAL_DISPARITY: use the input flow as the initial flow approximation.

USE_EQUALIZE_HIST: use histogram equalization in the pre-processing phase.

USE_SMART_ID: use the smart iteration distribution (SID).

USE_AUTO_PARAMS: allow the method to initialize the main parameters.

USE_MEDIAN_FILTERING: use median filtering of the solution in the post-processing phase.

The first constructor initializes StereoVar with all the default parameters, so at minimum you only have to set StereoVar::maxDisp and/or StereoVar::minDisp. The second constructor lets you set each parameter to a custom value.

StereoVar::operator()

C++: void StereoVar::operator()(const Mat& left, const Mat& right, Mat& disp)

Computes the disparity for a rectified stereo pair using the variational algorithm.

Parameters:

  • left – Left 8-bit single-channel or 3-channel image.

  • right – Right image of the same size and the same type as the left one.

  • disp – Output disparity map. It is an 8-bit signed single-channel image of the same size as the input images.

The method executes the variational algorithm on a rectified stereo pair. See the stereo_match.cpp OpenCV sample for how to prepare the images and call the method.

Note: the method is not constant, so you should not use the same StereoVar instance from different threads simultaneously.

Comparison chart of the stereo vision algorithms in OpenCV




Questions about STEREO_HH

It is not entirely clear what algorithm "HH" refers to. Judging from the sample code above (sgbm.fullDP = alg == STEREO_HH), STEREO_HH simply runs StereoSGBM in its full two-pass dynamic-programming mode; "HH" is commonly read as the initials of H. Hirschmüller, the author of semi-global matching.
For reference, see the paper at http://www.doc88.com/p-7327678137325.html

To be continued…


Reposted from blog.csdn.net/kyjl888/article/details/79250481