Vins-Fusion initialization - triangulation (triangulate)

Following the previous blog post Vins-Fusion initialization pose - 3D-2D: PNP solves the current frame pose, this article continues with the triangulation step in Vins-Fusion stereo (binocular) initialization.

1. Triangulation (triangulate)

Triangulation recovers the 3D position of a feature point from its projections in multiple camera views.

2. Triangulation procedure

1. Traverse the feature points of the current frame. If a feature point's depth is already positive, it has been triangulated, so skip it.

// already triangulated, skip
if (it_per_id.estimated_depth > 0)
	continue;

2. Stereo triangulation of the first observation frame

Triangulation recovers the 3D position of a feature point from its projections in multiple camera views; it requires the camera poses and the projected positions (coordinates on the normalized camera plane) to be known.
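As a brief restatement of the projection model used below (writing $X_w$ for the unknown world point, $\lambda$ for its depth in a given camera, and $T_{cw} = [R \mid t]$ for that camera's pose), a point observed at normalized coordinates $(x, y)$ satisfies

$$
\lambda \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = R X_w + t = [R \mid t] \begin{bmatrix} X_w \\ 1 \end{bmatrix}.
$$

Each observation therefore ties the world point to a known pose and known normalized coordinates; triangulation inverts this relation.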

(1) Left and right camera poses

Pose of the left camera at the starting observation frame start_frame

// pose of the starting observation frame, Tcw
int imu_i = it_per_id.start_frame;
Eigen::Matrix<double, 3, 4> leftPose;
Eigen::Vector3d t0 = Ps[imu_i] + Rs[imu_i] * tic[0];
Eigen::Matrix3d R0 = Rs[imu_i] * ric[0];
leftPose.leftCols<3>() = R0.transpose();
leftPose.rightCols<1>() = -R0.transpose() * t0;

Pose of the right camera at the starting observation frame start_frame

// pose of the starting observation frame, Tcw
Eigen::Matrix<double, 3, 4> rightPose;
Eigen::Vector3d t1 = Ps[imu_i] + Rs[imu_i] * tic[1];
Eigen::Matrix3d R1 = Rs[imu_i] * ric[1];
rightPose.leftCols<3>() = R1.transpose();
rightPose.rightCols<1>() = -R1.transpose() * t1;
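Here Ps[imu_i] and Rs[imu_i] hold the IMU (body) pose in the world frame, and tic[0]/ric[0] (respectively tic[1]/ric[1]) the camera-to-IMU extrinsics, so the code above first composes the camera-to-world pose and then inverts it to obtain the Tcw stored in leftPose/rightPose. A sketch of the relation being used:

$$
R_{wc} = R_{wb} R_{bc}, \qquad t_{wc} = R_{wb} t_{bc} + t_{wb},
$$

$$
T_{cw} = T_{wc}^{-1}: \qquad R_{cw} = R_{wc}^{\top}, \qquad t_{cw} = -R_{wc}^{\top} t_{wc}.
$$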
(2) Normalized camera-plane coordinates for the left and right cameras

// take the corresponding normalized camera-plane points of the left and right cameras
Eigen::Vector2d point0, point1;
Eigen::Vector3d point3d;
point0 = it_per_id.feature_per_frame[0].point.head(2);
point1 = it_per_id.feature_per_frame[0].pointRight.head(2);

Note: it_per_id.feature_per_frame[0].point.head(2) takes the first two components of the left camera's normalized camera-plane point, i.e., x and y.

(3) Triangulation

With the poses and the corresponding projections, the landmark (the 3D coordinates of the feature point) can be triangulated:

triangulatePoint(leftPose, rightPose, point0, point1, point3d);

// Function prototype:
/**
 * Triangulate a point via SVD
 * @param Pose0     pose Tcw
 * @param Pose1     pose Tcw
 * @param point0    point on the normalized camera plane
 * @param point1    point on the normalized camera plane
 * @param point_3d  output: triangulated point in the world frame
*/
void FeatureManager::triangulatePoint(Eigen::Matrix<double, 3, 4> &Pose0, Eigen::Matrix<double, 3, 4> &Pose1,
                        Eigen::Vector2d &point0, Eigen::Vector2d &point1, Eigen::Vector3d &point_3d)
{
    Eigen::Matrix4d design_matrix = Eigen::Matrix4d::Zero();
    design_matrix.row(0) = point0[0] * Pose0.row(2) - Pose0.row(0);
    design_matrix.row(1) = point0[1] * Pose0.row(2) - Pose0.row(1);
    design_matrix.row(2) = point1[0] * Pose1.row(2) - Pose1.row(0);
    design_matrix.row(3) = point1[1] * Pose1.row(2) - Pose1.row(1);
    Eigen::Vector4d triangulated_point;
    triangulated_point =
              design_matrix.jacobiSvd(Eigen::ComputeFullV).matrixV().rightCols<1>();
    point_3d(0) = triangulated_point(0) / triangulated_point(3);
    point_3d(1) = triangulated_point(1) / triangulated_point(3);
    point_3d(2) = triangulated_point(2) / triangulated_point(3);
}

This part of the code is analyzed as follows.
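A minimal sketch of the derivation behind design_matrix, following the projection model above: write $P = T_{cw}$ for the 3x4 pose of one camera, $P_1, P_2, P_3$ for its rows, and $X = (X_w^{\top}, 1)^{\top}$ for the homogeneous world point. The observation $(x, y)$ on the normalized plane gives

$$
\lambda \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = P X .
$$

Eliminating the unknown depth $\lambda$ using the third row yields two linear equations per camera:

$$
x (P_3 X) - P_1 X = 0, \qquad y (P_3 X) - P_2 X = 0 .
$$

Stacking the two equations from the left camera and the two from the right camera gives the 4x4 design_matrix and the homogeneous system $A X = 0$, which is solved by SVD; the result is then de-homogenized by dividing by its fourth component (the triangulated_point(3) lines above).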
Question: in the derivation the design_matrix rows come out as "1 - 2", but the code writes "2 - 1"; I have not figured out why yet. (Note: since the system is Ax = 0, flipping the sign of a row, or reordering rows, does not change the solution.)
PS: for the theoretical basis of using Eigen's SVD in the code to solve the least-squares problem Ax = 0, see the article Singular value decomposition (SVD) method to solve the least squares problem; leave a comment if you have any questions.
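For reference, the key fact being used: the homogeneous least-squares problem

$$
\min_{\lVert X \rVert = 1} \lVert A X \rVert^{2}
$$

is solved by the right singular vector of $A$ associated with its smallest singular value, i.e. the last column of $V$ in the SVD $A = U \Sigma V^{\top}$; this is exactly what .matrixV().rightCols<1>() returns.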

3. Triangulation for the non-first-frame (multi-frame) case

After the first-frame stereo triangulation, for the remaining feature points we traverse each observation frame of the feature, build a least-squares system against the first observation frame, and solve it by SVD. The process is similar to the first-frame case:

int imu_i = it_per_id.start_frame, imu_j = imu_i - 1;

Eigen::MatrixXd svd_A(2 * it_per_id.feature_per_frame.size(), 4);
int svd_idx = 0;

Eigen::Matrix<double, 3, 4> P0;
// pose of the first observation frame, Twc
Eigen::Vector3d t0 = Ps[imu_i] + Rs[imu_i] * tic[0];
Eigen::Matrix3d R0 = Rs[imu_i] * ric[0];
P0.leftCols<3>() = Eigen::Matrix3d::Identity();
P0.rightCols<1>() = Eigen::Vector3d::Zero();

// traverse the observation frames of this feature point, build a least-squares system against the first observation frame, and triangulate by SVD
for (auto &it_per_frame : it_per_id.feature_per_frame)
{
	imu_j++;
	// pose of the current observation frame, Twc
	Eigen::Vector3d t1 = Ps[imu_j] + Rs[imu_j] * tic[0];
	Eigen::Matrix3d R1 = Rs[imu_j] * ric[0];
	// relative pose between this frame and the first observation frame, Tij
	Eigen::Vector3d t = R0.transpose() * (t1 - t0);
	Eigen::Matrix3d R = R0.transpose() * R1;
	// Tji
	Eigen::Matrix<double, 3, 4> P;
	P.leftCols<3>() = R.transpose();
	P.rightCols<1>() = -R.transpose() * t;
	Eigen::Vector3d f = it_per_frame.point.normalized();
	svd_A.row(svd_idx++) = f[0] * P.row(2) - f[2] * P.row(0);
	svd_A.row(svd_idx++) = f[1] * P.row(2) - f[2] * P.row(1);

	if (imu_i == imu_j)
		continue;
}
// todo
ROS_ASSERT(svd_idx == svd_A.rows());
Eigen::Vector4d svd_V = Eigen::JacobiSVD<Eigen::MatrixXd>(svd_A, Eigen::ComputeThinV).matrixV().rightCols<1>();
double svd_method = svd_V[2] / svd_V[3];
//it_per_id->estimated_depth = -b / A;
//it_per_id->estimated_depth = svd_V[2] / svd_V[3];

it_per_id.estimated_depth = svd_method;
//it_per_id->estimated_depth = INIT_DEPTH;

if (it_per_id.estimated_depth < 0.1)
{
	it_per_id.estimated_depth = INIT_DEPTH;
}
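A note on the final steps, as they appear in the code above: P0 is set to the identity pose [I | 0], so the whole least-squares system is expressed in the camera frame of the first observation. The homogeneous solution $v = (v_1, v_2, v_3, v_4)^{\top}$ therefore de-homogenizes to a point in that frame, and the depth stored in estimated_depth is

$$
\text{depth} = \frac{v_3}{v_4},
$$

i.e. svd_V[2] / svd_V[3]. Note also that it_per_frame.point.normalized() is a unit bearing vector rather than a $z = 1$ point, which is why each row carries the extra factor f[2] compared with the first-frame case. Finally, if the recovered depth is implausibly small (below 0.1), it is reset to the default INIT_DEPTH.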

This completes the triangulation of the 3D positions of feature points (landmarks) during Vins-Fusion stereo initialization.

References:

(1) https://blog.csdn.net/u010196944/article/details/127569002?spm=1001.2014.3001.5501
(2) https://blog.csdn.net/u010196944/article/details/127514369?spm=1001.2014.3001.5501

Original post: blog.csdn.net/u010196944/article/details/127576632