SVD decomposition based on OpenCV to solve the transformation matrix

1. Coordinate transformation relationship

In the field of machine vision, conversion between coordinate systems is essential. The essence of spatial coordinate transformation is to use two sets of coordinates of common points to derive the transformation relationship between the two coordinate systems: R (the rotation matrix) and T (the translation vector).


2. Algorithm principle

In fact, the point cloud registration process is the process of solving the rotation matrix R and the translation vector T. The objective function is written as:

$$F(R, T) = \sum_{i=1}^{n} \left\| Q_i - (R P_i + T) \right\|^2$$

where n is the number of matching point pairs. Assume the least-squares solution is R' and T'. By the least-squares principle, the centroid of the transformed points {R' P_i + T'} coincides with the centroid of Q, where the centroid of Q is:

$$\bar{q} = \frac{1}{n} \sum_{i=1}^{n} Q_i$$

and the centroid of P is:

$$\bar{p} = \frac{1}{n} \sum_{i=1}^{n} P_i$$

Then let:

$$q_i = Q_i - \bar{q}, \qquad p_i = P_i - \bar{p}$$

At this point, the objective function can be rewritten as:

$$F = \sum_{i=1}^{n} \left\| q_i - R p_i \right\|^2$$

Expanding it:

$$F = \sum_{i=1}^{n} \left( q_i^{T} q_i + p_i^{T} p_i - 2\, q_i^{T} R p_i \right)$$

so minimizing F is equivalent to maximizing:

$$\sum_{i=1}^{n} q_i^{T} R p_i = \operatorname{tr}\!\left( R \sum_{i=1}^{n} p_i q_i^{T} \right) = \operatorname{tr}(R H)$$

In the above formula, H is a third-order square matrix:

$$H = \sum_{i=1}^{n} p_i q_i^{T}$$
Decompose the H matrix with SVD:

$$H = U \Lambda V^{T}$$

Let $X = V U^{T}$; then:

$$X H = V U^{T} U \Lambda V^{T} = V \Lambda V^{T}$$

It can be seen that XH is a symmetric positive semi-definite matrix, so for any third-order orthogonal matrix B, tr(XH) ≥ tr(BXH). Therefore X attains the maximum trace among all third-order orthogonal matrices, and when the determinant of X equals 1 (i.e., X is a proper rotation rather than a reflection), the rotation matrix is R = X. The translation vector is then:

$$T = \bar{q} - R\,\bar{p}$$

3. Code implementation

#include <opencv2/core.hpp>

// Output of the registration: row-major 3x3 rotation matrix and translation vector
struct TRigidTrans3D
{
	double matR[9];
	double X;
	double Y;
	double Z;
};

// Solve the coordinate transform with SVD. Input: coordinates of the common
// points in the two coordinate systems; output: rotation matrix and translation vector.
void GetRigidTrans3D(cv::Point3f* srcPoints, cv::Point3f* dstPoints, int pointsNum, TRigidTrans3D& transform)
{
	// Compute the centroids of both point sets
	double srcSumX = 0.0, srcSumY = 0.0, srcSumZ = 0.0;
	double dstSumX = 0.0, dstSumY = 0.0, dstSumZ = 0.0;
	for (int i = 0; i < pointsNum; ++i)
	{
		srcSumX += srcPoints[i].x;
		srcSumY += srcPoints[i].y;
		srcSumZ += srcPoints[i].z;
		dstSumX += dstPoints[i].x;
		dstSumY += dstPoints[i].y;
		dstSumZ += dstPoints[i].z;
	}
	cv::Point3f centerSrc, centerDst;
	centerSrc.x = float(srcSumX / pointsNum);
	centerSrc.y = float(srcSumY / pointsNum);
	centerSrc.z = float(srcSumZ / pointsNum);
	centerDst.x = float(dstSumX / pointsNum);
	centerDst.y = float(dstSumY / pointsNum);
	centerDst.z = float(dstSumZ / pointsNum);

	// Build 3 x n matrices of the centered (demeaned) points
	cv::Mat srcMat(3, pointsNum, CV_32FC1);
	cv::Mat dstMat(3, pointsNum, CV_32FC1);
	float* srcDat = (float*)(srcMat.data);
	float* dstDat = (float*)(dstMat.data);
	for (int i = 0; i < pointsNum; ++i)
	{
		srcDat[i] = srcPoints[i].x - centerSrc.x;
		srcDat[pointsNum + i] = srcPoints[i].y - centerSrc.y;
		srcDat[pointsNum * 2 + i] = srcPoints[i].z - centerSrc.z;
		dstDat[i] = dstPoints[i].x - centerDst.x;
		dstDat[pointsNum + i] = dstPoints[i].y - centerDst.y;
		dstDat[pointsNum * 2 + i] = dstPoints[i].z - centerDst.z;
	}

	// SVD of H = src * dst^T; note that cv::SVDecomp returns V transposed in matV
	cv::Mat matS = srcMat * dstMat.t();
	cv::Mat matU, matW, matV;
	cv::SVDecomp(matS, matW, matU, matV);

	// diag(1, 1, det) guards against a reflection, forcing det(R) = +1
	double det = cv::determinant(matU * matV);
	float datM[] = { 1, 0, 0, 0, 1, 0, 0, 0, float(det) };
	cv::Mat matM(3, 3, CV_32FC1, datM);

	// R = V * diag(1, 1, det) * U^T
	cv::Mat matR = matV.t() * matM * matU.t();
	for (int i = 0; i < 9; ++i)
		transform.matR[i] = matR.at<float>(i / 3, i % 3);

	// T = centroid(dst) - R * centroid(src)
	const double* r = transform.matR;
	transform.X = centerDst.x - (centerSrc.x * r[0] + centerSrc.y * r[1] + centerSrc.z * r[2]);
	transform.Y = centerDst.y - (centerSrc.x * r[3] + centerSrc.y * r[4] + centerSrc.z * r[5]);
	transform.Z = centerDst.z - (centerSrc.x * r[6] + centerSrc.y * r[7] + centerSrc.z * r[8]);
}


Origin blog.csdn.net/zhoufm260613/article/details/125948705