VINS-Fusion camera-IMU extrinsic rotation calibration

VINS-Fusion is an optimization-based multi-sensor state estimator. One of its major features is real-time camera-IMU extrinsic calibration (the rotation between the camera and the IMU). The calibration computes the camera rotation between two frames and the IMU rotation from pre-integration over the same interval, builds a least-squares problem from these pairs, and solves for the camera-IMU extrinsic rotation matrix via SVD. The procedure is divided into the following parts:

1. Compute R_{camera} from the matched points between two camera frames


1. Obtain the essential matrix E from epipolar geometry

Call cv::findFundamentalMat to obtain the essential matrix E. Note that the inputs to findFundamentalMat here are the normalized (intrinsics-removed) coordinates of the matched points in the two frames, so the returned "fundamental" matrix is in fact the essential matrix E.

cv::Mat E = cv::findFundamentalMat(ll, rr);

Here ll and rr are the sets of normalized coordinates of the matched points in the previous and current frames.
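For illustration, a minimal sketch of this setup (the helper name and the intrinsics handling are assumptions of this sketch, not the VINS source): pixel coordinates can first be converted to normalized coordinates with cv::undistortPoints, so that the returned matrix is E rather than F.

#include <opencv2/opencv.hpp>
#include <vector>

// Hypothetical helper: with normalized inputs, findFundamentalMat returns E.
cv::Mat essentialFromMatches(const std::vector<cv::Point2f> &px_l,
                             const std::vector<cv::Point2f> &px_r,
                             const cv::Mat &K)  // 3x3 intrinsics, assumed known
{
    std::vector<cv::Point2f> ll, rr;
    cv::undistortPoints(px_l, ll, K, cv::noArray());  // pixel -> normalized
    cv::undistortPoints(px_r, rr, K, cv::noArray());
    return cv::findFundamentalMat(ll, rr);
}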


2. Use SVD to decompose E into four candidate solutions (R, t)

In the source code, the essential matrix E is decomposed by the hand-written function decomposeE rather than an OpenCV routine. Inside, cv::SVD is called to decompose E into R1, R2, t1, and t2, which combine into four candidate (R, t) solutions.

decomposeE(E, R1, R2, t1, t2);
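decomposeE is essentially the textbook SVD decomposition of the essential matrix; a sketch of what it does (close to, but not guaranteed identical to, the VINS source):

#include <opencv2/opencv.hpp>

void decomposeE(cv::Mat E, cv::Mat_<double> &R1, cv::Mat_<double> &R2,
                cv::Mat_<double> &t1, cv::Mat_<double> &t2)
{
    cv::SVD svd(E, cv::SVD::MODIFY_A);
    // W is a 90-degree rotation about z; W and its transpose generate the
    // two rotation candidates
    cv::Matx33d W(0, -1, 0,
                  1,  0, 0,
                  0,  0, 1);
    cv::Matx33d Wt(0, 1, 0,
                  -1, 0, 0,
                   0, 0, 1);
    R1 = svd.u * cv::Mat(W) * svd.vt;   // first rotation candidate
    R2 = svd.u * cv::Mat(Wt) * svd.vt;  // second rotation candidate
    t1 = svd.u.col(2);                  // translation, known only up to sign
    t2 = -svd.u.col(2);
}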


3. Pick the correct solution by the triangulated-depth test and obtain R_{camera}

Set the first frame's projection matrix to the identity pose and the second frame's to the candidate R and t obtained above, then call cv::triangulatePoints to compute the landmark point cloud. For each candidate, count how many landmarks have positive depth; the candidate with the largest count is the final solution.

 cv::triangulatePoints(P, P1, l, r, pointcloud);

In other words, triangulation recovers the landmark points (the point cloud) from the matched points and the relative pose between the two frames.
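A sketch of this depth (cheirality) test, assuming normalized-coordinate inputs (close in spirit to the corresponding helper in the VINS source; the exact signature here is an assumption):

#include <opencv2/opencv.hpp>
#include <vector>

int countInFront(const std::vector<cv::Point2f> &l,
                 const std::vector<cv::Point2f> &r,
                 const cv::Mat_<double> &R, const cv::Mat_<double> &t)
{
    // first camera at the identity pose, second at the candidate [R | t]
    cv::Matx34f P(1, 0, 0, 0,
                  0, 1, 0, 0,
                  0, 0, 1, 0);
    cv::Matx34f P1(R(0, 0), R(0, 1), R(0, 2), t(0),
                   R(1, 0), R(1, 1), R(1, 2), t(1),
                   R(2, 0), R(2, 1), R(2, 2), t(2));
    cv::Mat pointcloud;
    cv::triangulatePoints(P, P1, l, r, pointcloud);  // 4xN homogeneous landmarks

    int front_count = 0;
    for (int i = 0; i < pointcloud.cols; ++i)
    {
        cv::Mat X = pointcloud.col(i) / pointcloud.at<float>(3, i);  // dehomogenize
        cv::Mat p1 = cv::Mat(P) * X;   // landmark in camera-1 coordinates
        cv::Mat p2 = cv::Mat(P1) * X;  // landmark in camera-2 coordinates
        if (p1.at<float>(2) > 0 && p2.at<float>(2) > 0)  // positive depth in both
            ++front_count;
    }
    return front_count;  // the candidate with the largest count wins
}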


2. Compute R_{imu} from IMU pre-integration between the two camera frames

The rotation delta_q_imu obtained by pre-integrating the IMU measurements between the two frames directly gives R_{imu}.
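For reference, a minimal sketch (assumed here, not the VINS pre-integration class) of how gyroscope readings accumulate into delta_q_imu between two frames:

#include <Eigen/Dense>
#include <vector>

// Accumulate gyro readings (bias already removed, an assumption of this sketch)
// into the relative rotation between two camera frames.
Eigen::Quaterniond integrateGyro(const std::vector<Eigen::Vector3d> &gyr,
                                 const std::vector<double> &dt)
{
    Eigen::Quaterniond delta_q = Eigen::Quaterniond::Identity();
    for (size_t k = 0; k < gyr.size(); ++k)
    {
        Eigen::Vector3d half = 0.5 * gyr[k] * dt[k];  // small-angle approximation
        delta_q = delta_q * Eigen::Quaterniond(1.0, half.x(), half.y(), half.z());
        delta_q.normalize();
    }
    return delta_q;  // this rotation yields R_{imu}
}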


3. Construct the least-squares problem and obtain the extrinsic rotation R_{camera \to imu}


The least-squares problem is solved with SVD, yielding R_{camera \to imu}. The steps:

1. Relate the camera rotation R_{c_k \to c_{k+1}} between two frames to the IMU rotation:

R_{c_k \to c_{k+1}} = R_{c_k \to i_k} * R_{i_k \to i_{k+1}} * R_{i_{k+1} \to c_{k+1}}
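In code form this chain is a one-liner; a sketch using the current extrinsic estimate ric and the pre-integrated rotation delta_q_imu (the names follow the VINS source, but treat this as a sketch rather than a verbatim quote):

// predict the camera rotation between two frames from the IMU rotation,
// conjugating by the current camera-IMU extrinsic estimate ric
Eigen::Matrix3d Rc_g = ric.inverse() * delta_q_imu.toRotationMatrix() * ric;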

2. Compute the angular distance between R_{camera} and R_{c_k \to c_{k+1}}

Compute the robust kernel weight huber from the angular distance angular_distance between R_{camera} and R_{c_k \to c_{k+1}}:

huber = \begin{cases} 1, & angular < 5 \\ 5/angular, & \text{otherwise} \end{cases}

Here angular is the angle difference between the camera rotation and the IMU-predicted rotation, which should theoretically be 0. The threshold in the source code is set to 5°; if the rotation error exceeds 5°, the term is multiplied by a coefficient less than 1 to reduce its weight.

 double huber = angular_distance > 5.0 ? 5.0 / angular_distance : 1.0;

3. Construct the least-squares problem

Preliminary knowledge:

P\otimes Q = [P]_{L} * Q = [Q]_{R}*P

where:

[P]_{L} = \begin{bmatrix} p_{w} & -p_{x} & -p_{y} & -p_{z}\\ p_{x} & p_{w} & -p_{z} & p_{y}\\ p_{y} & p_{z} & p_{w} & -p_{x}\\ p_{z} & -p_{y} & p_{x} & p_{w} \end{bmatrix} = p_{w} I + \begin{bmatrix} 0 & -p_{v}^{T}\\ p_{v} & [p_{v}]_{\times} \end{bmatrix}

[Q]_{R} = \begin{bmatrix} q_{w} & -q_{x} & -q_{y} & -q_{z}\\ q_{x} & q_{w} & q_{z} & -q_{y}\\ q_{y} & -q_{z} & q_{w} & q_{x}\\ q_{z} & q_{y} & -q_{x} & q_{w} \end{bmatrix} = q_{w} I + \begin{bmatrix} 0 & -q_{v}^{T}\\ q_{v} & -[q_{v}]_{\times} \end{bmatrix}
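These matrices are easy to verify numerically. A small self-contained check (a sketch, not VINS code; note that Eigen stores quaternion coefficients in (x, y, z, w) order, so the matrix below is the (x, y, z, w) permutation of the (w, x, y, z) form above, and it is also the layout the calibration code uses):

#include <Eigen/Dense>
#include <iostream>
using namespace Eigen;

// Left-product matrix [P]_L, with components ordered (x, y, z, w)
Matrix4d leftMat(const Quaterniond &p)
{
    Matrix3d skew;
    skew <<      0, -p.z(),  p.y(),
             p.z(),      0, -p.x(),
            -p.y(),  p.x(),      0;
    Matrix4d L;
    L.block<3, 3>(0, 0) = p.w() * Matrix3d::Identity() + skew;
    L.block<3, 1>(0, 3) = p.vec();
    L.block<1, 3>(3, 0) = -p.vec().transpose();
    L(3, 3) = p.w();
    return L;
}

int main()
{
    Quaterniond P = Quaterniond::UnitRandom(), Q = Quaterniond::UnitRandom();
    Vector4d lhs = (P * Q).coeffs();         // Hamilton product P ⊗ Q
    Vector4d rhs = leftMat(P) * Q.coeffs();  // matrix form [P]_L * Q
    std::cout << (lhs - rhs).norm() << std::endl;  // prints ~0
}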

According to:

q_{b_{k}b_{k+1}}\otimes q_{bc} = q_{bc}\otimes q_{c_{k}c_{k+1}}

Expanding both sides with the identities above (the left side becomes [q_{b_{k}b_{k+1}}]_{L} * q_{bc}, the right side [q_{c_{k}c_{k+1}}]_{R} * q_{bc}) and subtracting yields:

([q_{b_{k}b_{k+1}}]_{L} - [q_{c_{k}c_{k+1}}]_{R}) \, q_{bc} = 0

Stacking this 4x4 constraint over many frame pairs gives a least-squares problem of the form Ax = 0.

        
Matrix4d L, R;  // quaternion product matrices, components ordered (x, y, z, w)

// L built from the camera rotation between the two frames
double w = Quaterniond(Rc[i]).w();
Vector3d q = Quaterniond(Rc[i]).vec();
L.block<3, 3>(0, 0) = w * Matrix3d::Identity() + Utility::skewSymmetric(q);
L.block<3, 1>(0, 3) = q;
L.block<1, 3>(3, 0) = -q.transpose();
L(3, 3) = w;

// R built from the IMU pre-integrated rotation between the two frames
Quaterniond R_ij(Rimu[i]);
w = R_ij.w();
q = R_ij.vec();
R.block<3, 3>(0, 0) = w * Matrix3d::Identity() - Utility::skewSymmetric(q);
R.block<3, 1>(0, 3) = q;
R.block<1, 3>(3, 0) = -q.transpose();
R(3, 3) = w;

// stack the weighted 4x4 block for this frame pair into A
A.block<4, 4>((i - 1) * 4, 0) = huber * (L - R);

Note that the code builds L from the camera rotation and R from the IMU rotation, with the roles swapped relative to the derivation above; the stacked system therefore solves for the inverse of q_{bc}, which is why the result is inverted in the final step below.

4. Solve by SVD to obtain R_{camera \to imu}

Preliminary knowledge: solving Ax = 0 with SVD.

x = V*y, \quad y = [0, 0, \ldots, 0, 1]^{T}

That is, x is the last column of the V matrix; equivalently, x is the unit eigenvector of A^{T}A corresponding to its smallest eigenvalue.
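To see why, write A = U\Sigma V^{T} with the singular values in decreasing order and substitute y = V^{T}x (so that \|y\| = \|x\| = 1):

\|Ax\|^{2} = \|U\Sigma V^{T}x\|^{2} = \|\Sigma y\|^{2} = \sum_{i}\sigma_{i}^{2}y_{i}^{2}

This is minimized by putting all the weight on the smallest singular value, i.e. y = [0, \ldots, 0, 1]^{T}.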

Reference: Singular value decomposition (SVD) method for solving least-squares problems

Finally, we obtain R_{camera \to imu}:

JacobiSVD<MatrixXd> svd(A, ComputeFullU | ComputeFullV);
Matrix<double, 4, 1> x = svd.matrixV().col(3);  // last column of V
Quaterniond estimated_R(x);  // Eigen reads the 4-vector as (x, y, z, w)
// invert: the stacked system solved for the inverse quaternion,
// so invert here to get ric = R_{camera \to imu}
ric = estimated_R.toRotationMatrix().inverse();
//cout << svd.singularValues().transpose() << endl;
//cout << ric << endl;
Vector3d ric_cov;
ric_cov = svd.singularValues().tail<3>();  // three smallest singular values
// accept only when the window is full and the problem is well conditioned
// (the second-smallest singular value must be clearly non-zero)
if (frame_count >= WINDOW_SIZE && ric_cov(1) > 0.25)
{
	calib_ric_result = ric;
	return true;
}
else
	return false;

Origin: blog.csdn.net/u010196944/article/details/127546030