Notes on OpenCV functions and classes

1. detectAndCompute

This is the feature detection and description method of cv::Feature2D, most commonly used through ORB.
detectAndCompute(cv::InputArray image, cv::InputArray mask, std::vector<cv::KeyPoint>& keypoints, cv::OutputArray descriptors)
The first parameter image is the image in which features are detected.
The second parameter is an optional mask restricting the region where features are detected; pass cv::noArray() if no mask is needed.
The third parameter keypoints receives the detected feature points (their pixel coordinates, size, angle, and so on).
The fourth parameter descriptors receives the feature descriptors as a cv::Mat, in which each row is the feature vector of one keypoint.
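
A minimal sketch of how this is typically called through ORB; the image path and ORB settings here are only placeholder assumptions:

#include <opencv2/opencv.hpp>

int main() {
    // load a grayscale image; "left.png" is only a placeholder path
    cv::Mat img = cv::imread("left.png", cv::IMREAD_GRAYSCALE);
    cv::Ptr<cv::ORB> orb = cv::ORB::create();      // default settings, up to 500 keypoints
    std::vector<cv::KeyPoint> kps;                  // detected feature points
    cv::Mat desc;                                   // one descriptor row per keypoint
    orb->detectAndCompute(img, cv::noArray(), kps, desc);
    return 0;
}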

2. cv::DescriptorMatcher

A class used for matching feature descriptors; it is usually used as follows:

cv::Ptr<cv::DescriptorMatcher> matcher = cv::DescriptorMatcher::create("BruteForce-Hamming");
std::vector<std::vector<cv::DMatch>> matches;   // k best matches for each query descriptor
// desclr holds the descriptor matrices of the left and right image
matcher->knnMatch(desclr.front(), desclr.back(), matches, 2);

knnMatch(cv::InputArray queryDescriptors, cv::InputArray trainDescriptors, std::vector<std::vector<cv::DMatch>>& matches, int k)
This function finds the best k matches in the training set for each descriptor in the query set (k-nearest-neighbor matching):
the first parameter is the query descriptor set,
the second parameter is the training descriptor set,
the third parameter receives the matches, each stored as a cv::DMatch holding the indices and the distance between the matched descriptors,
the fourth parameter k is the number of nearest neighbors to return.
The matches found this way usually need to be filtered with a ratio test between the first and the second nearest neighbor: if the best distance is much smaller than the second-best distance, the match is considered good.
knnMatch finds the top k best matches, while match only finds the single best match.
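
A short sketch of the ratio test described above; descL and descR stand for the descriptor matrices of the left and right image (assumed to come from detectAndCompute in section 1):

cv::Ptr<cv::DescriptorMatcher> matcher =
        cv::DescriptorMatcher::create("BruteForce-Hamming");
std::vector<std::vector<cv::DMatch>> knn;
matcher->knnMatch(descL, descR, knn, 2);            // 2 nearest neighbors per query descriptor

std::vector<cv::DMatch> good;
for (const auto& m : knn) {
    // keep a match only if the best distance is clearly smaller than the second best
    if (m.size() == 2 && m[0].distance < 0.7f * m[1].distance)
        good.push_back(m[0]);
}

The threshold 0.7 is a common choice for the ratio test, not a value fixed by OpenCV.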

3. cv::DMatch

A class used to store a single descriptor match; it holds four members, listed below (see the sketch after this list). The knnMatch method of cv::DescriptorMatcher (see section 2) outputs a variable of type std::vector<std::vector<cv::DMatch>> (say matches); matches[i][j].queryIdx then refers back to the i-th feature point of the query (left) image, and matches[i][j].trainIdx gives the j-th most similar feature point in the train (right) image, where j runs from 0 to k-1 with k being the value passed to knnMatch.

  1. queryIdx: index of the descriptor in the query set
  2. trainIdx: index of the descriptor in the training set
  3. imgIdx: index of the training image (when matching against several images)
  4. distance: distance between the two matched descriptors
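
A small sketch of how these indices are typically used, turning the filtered matches from section 2 into pixel coordinates (kpsL, kpsR and good are assumed to come from the earlier sections):

std::vector<cv::Point2f> ptsL, ptsR;
for (const cv::DMatch& m : good) {
    ptsL.push_back(kpsL[m.queryIdx].pt);   // queryIdx indexes the query (left) keypoints
    ptsR.push_back(kpsR[m.trainIdx].pt);   // trainIdx indexes the train (right) keypoints
}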

4. cv::Mat

cv::Mat::row(i): the Mat returned by row(i) is only a header that points at the data of the original matrix, so no data is copied. The = assignment operator in OpenCV performs a shallow copy; only copyTo() (and clone()) perform a deep copy.
How to assign a row vector to a row of a matrix:

        // lptsx/lptsy and rptsx/rptsy are assumed to hold the x/y coordinates of
        // matched points in the left and right image
        cv::Mat data_matrix = cv::Mat(10, 8, CV_32F);
        for (int i = 0; i < data_matrix.rows; i++) {
            float lxi = lptsx[i];
            float lyi = lptsy[i];
            float rxi = rptsx[i];
            float ryi = rptsy[i];
            float tmp[8] = {lxi*rxi, lxi*ryi, lxi, lyi*rxi, lyi*ryi, lyi, rxi, ryi};
            // copyTo writes into the memory of row i; operator= would only rebind the header
            cv::Mat(1, 8, CV_32F, tmp).copyTo(data_matrix.row(i));
        }

A fast and direct way to initialize cv::Mat

cv::Mat K = (cv::Mat_<double>(3, 3) << 520.9, 0, 325.1, 0, 521.0, 249.7, 0, 0, 1);

5. recoverPose

Used to recover the camera's rotation and translation from corresponding points in two images. The function has several overloads; the most commonly used one is:

int cv::recoverPose(	InputArray 	E,						// the computed essential matrix
			InputArray 	points1,				// array of N 2D points from the first image
			InputArray 	points2,				// the corresponding points from the second image
			InputArray 	cameraMatrix,			// camera intrinsic matrix
			OutputArray 	R,					// recovered rotation from the first camera to the second
			OutputArray 	t,					// recovered translation (only up to scale)
			InputOutputArray 	mask = noArray()	// inlier mask for points1/points2
)
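
A sketch of typical usage, assuming ptsL/ptsR are the matched 2D points from section 3 and K is the camera intrinsic matrix (for example the one initialized in section 4):

cv::Mat E, R, t, inlierMask;
E = cv::findEssentialMat(ptsL, ptsR, K, cv::RANSAC, 0.999, 1.0, inlierMask);
int nInliers = cv::recoverPose(E, ptsL, ptsR, K, R, t, inlierMask);
// R and t describe the motion from the first camera to the second;
// t is only recovered up to scale, and the return value is the number of inliers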

6. drawMatches

Function to draw matching keypoints in two images
void cv::drawMatches (
InputArray img1,
const std::vector< KeyPoint > & keypoints1,
InputArray img2,
const std::vector< KeyPoint > & keypoints2,
const std::vector< DMatch > & matches1to2,
InputOutputArray outImg,
const Scalar & matchColor = Scalar::all(-1),
const Scalar & singlePointColor = Scalar::all(-1),
const std::vector< char > & matchesMask = std::vector< char >(),
DrawMatchesFlags flags = DrawMatchesFlags::DEFAULT
)
img1: the left image, cv::Mat
keypoints1: keypoints of the left image, std::vector<cv::KeyPoint>
img2: the right image, cv::Mat
keypoints2: keypoints of the right image, std::vector<cv::KeyPoint>
matches1to2: matches from the first image to the second, std::vector<cv::DMatch>
outImg: the output image
matchColor: the color of the match lines
singlePointColor: the color of keypoints without a match
flags: drawing flags, e.g. whether to draw the unmatched keypoints
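
A short sketch of drawing the filtered matches from section 2 (imgL/imgR, kpsL/kpsR and good are assumed from the earlier sections):

cv::Mat vis;
cv::drawMatches(imgL, kpsL, imgR, kpsR, good, vis);
cv::imshow("matches", vis);
cv::waitKey(0);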

7. stereoRectify

The function used for stereo rectification of a binocular camera; it works by constructing two virtual cameras and remapping points in three-dimensional space onto them.

void cv::stereoRectify	(	
			InputArray 	cameraMatrix1,		// intrinsic matrix of the first camera, a 3x3 cv::Mat
			InputArray 	distCoeffs1,		// distortion parameters of the first camera; if there are none, pass cv::Matx<double,1,5>(0,0,0,0,0)
			InputArray 	cameraMatrix2,		// intrinsic matrix of the second camera
			InputArray 	distCoeffs2,		// distortion parameters of the second camera
			Size 	imageSize,				// size of the images used for stereo calibration
			InputArray 	R,					// rotation from the first camera to the second camera
			InputArray 	T,					// translation from the first camera to the second camera
			OutputArray 	R1,				// rectification rotation that maps the first camera onto the first virtual camera
			OutputArray 	R2,				// same as above, for the second camera
			OutputArray 	P1,				// 3x4 projection matrix of the first virtual camera
			OutputArray 	P2,				// 3x4 projection matrix of the second virtual camera
			OutputArray 	Q,				// 4x4 disparity-to-depth mapping matrix
			int 	flags = CALIB_ZERO_DISPARITY,
			double 	alpha = -1,
			Size 	newImageSize = Size(),
			Rect * 	validPixROI1 = 0,
			Rect * 	validPixROI2 = 0 
)
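
A sketch of a typical call, assuming K1/D1 and K2/D2 are the two cameras' intrinsics and distortion coefficients, R/T the rotation and translation from the first to the second camera, and imgSize the calibration image size:

cv::Mat R1, R2, P1, P2, Q;
cv::stereoRectify(K1, D1, K2, D2, imgSize, R, T,
                  R1, R2, P1, P2, Q,
                  cv::CALIB_ZERO_DISPARITY,
                  0 /* alpha = 0: keep only valid pixels */,
                  imgSize);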

8. remap()

remap copies pixels from src into dst according to coordinate maps: dst(x, y) = src(map_x(x, y), map_y(x, y)), where the source x and y coordinates are given by map1 and map2 respectively. When map1 has two channels it already contains both coordinates and map2 is not used. dst has the same size as map1 and the same element type as src; due to implementation limits the image size must not exceed the maximum value of a short int (32767) in either dimension.

void cv::remap	(	
	InputArray 	src,		// source image
	OutputArray 	dst,	// destination image
	InputArray 	map1,		// map1(x,y): the source x coordinate for the destination pixel at (x,y)
	InputArray 	map2,		// map2(x,y): the source y coordinate for the destination pixel at (x,y)
	int 	interpolation,	// interpolation method, since the mapped coordinates may be fractional
	int 	borderMode = BORDER_CONSTANT,
	const Scalar & 	borderValue = Scalar() 
)
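
A sketch of applying the rectification from section 7 to the left image: initUndistortRectifyMap builds map1/map2 from K1, D1, R1, P1 (assumed from the previous section), and remap then warps the image:

cv::Mat map1, map2, rectifiedL;
cv::initUndistortRectifyMap(K1, D1, R1, P1, imgSize, CV_32FC1, map1, map2);
cv::remap(imgL, rectifiedL, map1, map2, cv::INTER_LINEAR);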

9. FileStorage class

The class used in OpenCV to save data to disk (for example as a YAML or XML file). The general usage is as follows (adapted from a blog post):

// 1. create our writer
cv::FileStorage fs("test.yml", cv::FileStorage::WRITE);

// 2. save an int
int imageWidth = 5;
int imageHeight = 10;
fs << "imageWidth" << imageWidth;
fs << "imageHeight" << imageHeight;

// 3. write a Mat
cv::Mat m1 = cv::Mat::eye(3, 3, CV_8U);
cv::Mat m2 = cv::Mat::ones(3, 3, CV_8U);
cv::Mat resultMat = (m1 + 1).mul(m1 + 2);
fs << "resultMat" << resultMat;

// 4. write multiple variables
cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) << 1000, 0, 320, 0, 1000, 240, 0, 0, 1);
cv::Mat distCoeffs = (cv::Mat_<double>(5, 1) << 0.1, 0.01, -0.001, 0, 0);
fs << "cameraMatrix" << cameraMatrix << "distCoeffs" << distCoeffs;

// 5. save the local time
time_t rawtime; time(&rawtime); // #include <time.h>
fs << "calibrationDate" << asctime(localtime(&rawtime));

// 6. close the opened file
fs.release();

The resulting test.yml file stores each of the values above under its key name.
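
A short sketch of reading the values back from test.yml, using the same keys as above:

cv::FileStorage fsr("test.yml", cv::FileStorage::READ);
int w, h;
cv::Mat camMat, dist;
std::string date;
fsr["imageWidth"] >> w;
fsr["imageHeight"] >> h;
fsr["cameraMatrix"] >> camMat;
fsr["distCoeffs"] >> dist;
fsr["calibrationDate"] >> date;
fsr.release();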


Origin blog.csdn.net/u013238941/article/details/127306276