The normalization transform for solving the fundamental matrix and homography matrix in SLAM

When using the direct linear transformation (DLT) algorithm to solve for the fundamental matrix F or the homography matrix H, the data must first be normalized. The normalization transform removes the influence of the arbitrary choice of origin and scale of the image coordinate system.

The normalization consists of a translation and a scaling of the image coordinates. It must be applied before running DLT, and the result is then corrected (denormalized) to obtain the fundamental matrix F or the homography matrix H in the original coordinate system.

The following takes the homography matrix as an example to introduce the normalized DLT process. The normalization can be summarized as follows (the transform is written out right after this list):
(1) Translate the points so that their centroid lies at the origin.
(2) Scale the points (ORB-SLAM2 scales so that the first-order absolute moments in x and y become 1).
(3) Apply the above transformation to the feature points in both images, each image with its own transform.
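
Written out, steps (1) and (2) amount to a single 3x3 transform on homogeneous pixel coordinates. In LaTeX notation, with centroid $(\bar{x},\bar{y})$ and scale factors $s_x$, $s_y$ taken from the mean absolute deviations (the same matrix is assembled at the end of the ORB-SLAM2 code below):

\[
T =
\begin{bmatrix}
s_x & 0   & -s_x\bar{x} \\
0   & s_y & -s_y\bar{y} \\
0   & 0   & 1
\end{bmatrix},
\qquad
s_x = \Bigl(\tfrac{1}{N}\sum_{i=1}^{N}\lvert x_i-\bar{x}\rvert\Bigr)^{-1},
\quad
s_y = \Bigl(\tfrac{1}{N}\sum_{i=1}^{N}\lvert y_i-\bar{y}\rvert\Bigr)^{-1},
\qquad
\hat{\mathbf{x}}_i = T\,\mathbf{x}_i .
\]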

After normalization, the fundamental matrix F or the homography matrix H can be solved; once the solution is obtained, a denormalization step must be applied.
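
For the homography, the denormalization works as follows (this is exactly the line H21i = T2inv*Hn*T1 in the second code snippet below). If $T_1$ and $T_2$ are the normalization matrices of the two images, so that $\hat{\mathbf{x}}_1 = T_1\mathbf{x}_1$ and $\hat{\mathbf{x}}_2 = T_2\mathbf{x}_2$, and $\hat{H}_{21}$ is the homography estimated from the normalized points, then

\[
\hat{\mathbf{x}}_2 \sim \hat{H}_{21}\,\hat{\mathbf{x}}_1
\quad\Longrightarrow\quad
H_{21} = T_2^{-1}\,\hat{H}_{21}\,T_1 .
\]

For the fundamental matrix the analogous correction is $F = T_2^{\top}\hat{F}\,T_1$, since the epipolar constraint $\mathbf{x}_2^{\top}F\,\mathbf{x}_1 = 0$ is bilinear in the two points.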

The following normalized-DLT code from ORB-SLAM2, used when solving the homography matrix H, implements the above process:

void Initializer::Normalize(const vector<cv::KeyPoint> &vKeys, vector<cv::Point2f> &vNormalizedPoints, cv::Mat &T)
{
    /// The normalization here makes the first-order absolute moments of the points
    /// in the x and y directions equal to 1. The steps are as follows:

    const int N = vKeys.size();   // total number of points
    vNormalizedPoints.resize(N);  // normalized points

    float meanX = 0;  // mean of the x coordinates
    float meanY = 0;  // mean of the y coordinates
    for(int i=0; i<N; i++)
    {
        meanX += vKeys[i].pt.x;   // sum of the x coordinates
        meanY += vKeys[i].pt.y;   // sum of the y coordinates
    }
    meanX = meanX/N;  // mean of the x coordinates
    meanY = meanY/N;  // mean of the y coordinates

    // Accumulate how far the feature points deviate from the coordinate means
    float meanDevX = 0;  // absolute moment in x
    float meanDevY = 0;  // absolute moment in y

    // Subtract the centroid from all vKeys points so that the mean x and y coordinates become 0
    for(int i=0; i<N; i++)
    {
        vNormalizedPoints[i].x = vKeys[i].pt.x - meanX;  // centered coordinates
        vNormalizedPoints[i].y = vKeys[i].pt.y - meanY;
        // accumulate how far each feature point deviates from the coordinate means
        meanDevX += fabs(vNormalizedPoints[i].x);  // total absolute deviation
        meanDevY += fabs(vNormalizedPoints[i].y);
    }
    // Average deviation per point from the coordinate means;
    // its reciprocal is used as the scale factor
    meanDevX = meanDevX/N;  // mean absolute deviation
    meanDevY = meanDevY/N;

    float sX = 1.0/meanDevX;
    float sY = 1.0/meanDevY;

    // Scale the x and y coordinates so that their first-order absolute moments become 1.
    // The first-order absolute moment is the average absolute distance of a variable
    // from its mean; this is where the normalization actually happens.
    for(int i=0; i<N; i++)
    {
        // normalized point coordinates: simply scale the centered coordinates further
        vNormalizedPoints[i].x = vNormalizedPoints[i].x * sX;  // centered coordinate * reciprocal of the absolute moment
        vNormalizedPoints[i].y = vNormalizedPoints[i].y * sY;
    }
    /// Build the normalization matrix
    // |sX  0  -meanX*sX|
    // |0   sY -meanY*sY|
    // |0   0       1   |
    // normalization matrix * point coordinates = normalized coordinates
    // point coordinates = inverse of the normalization matrix * normalized coordinates
    T = cv::Mat::eye(3,3,CV_32F);
    T.at<float>(0,0) = sX;
    T.at<float>(1,1) = sY;
    T.at<float>(0,2) = -meanX*sX;
    T.at<float>(1,2) = -meanY*sY;
}
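
As a quick sanity check (not part of ORB-SLAM2: the helper CheckNormalization below is a hypothetical function of my own, assuming only OpenCV), applying the returned matrix T to a raw keypoint in homogeneous coordinates should reproduce the corresponding entry of vNormalizedPoints:

// Hypothetical sanity check for Normalize(): T * (x, y, 1)^T should match the
// normalized coordinates produced above. Assumes OpenCV (cv::Mat, cv::KeyPoint).
#include <opencv2/core.hpp>
#include <vector>
#include <cmath>
#include <cassert>

static void CheckNormalization(const std::vector<cv::KeyPoint> &vKeys,
                               const std::vector<cv::Point2f> &vNormalizedPoints,
                               const cv::Mat &T)
{
    for(size_t i = 0; i < vKeys.size(); i++)
    {
        // homogeneous pixel coordinates of the i-th raw keypoint
        cv::Mat p = (cv::Mat_<float>(3,1) << vKeys[i].pt.x, vKeys[i].pt.y, 1.f);
        cv::Mat q = T * p;  // normalized homogeneous coordinates (last entry stays 1)
        assert(std::fabs(q.at<float>(0) - vNormalizedPoints[i].x) < 1e-4f);
        assert(std::fabs(q.at<float>(1) - vNormalizedPoints[i].y) < 1e-4f);
    }
}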

Next comes the process of solving the homography matrix H from the normalized points:

    // feature point coordinates of the reference frame and the current frame after normalization
    vector<cv::Point2f> vPn1, vPn2;  // 2D-2D point pairs
    // normalization matrix of each frame
    // this normalization mainly serves to fix the scale of the scene during monocular
    // initialization; for the principle see "SLAM十四讲" (14 Lectures on Visual SLAM), p. 152
    cv::Mat T1, T2;  // normalization matrices
    Normalize(mvKeys1,vPn1, T1);  // normalized coordinates = centered coordinates * reciprocal of the absolute moment
    Normalize(mvKeys2,vPn2, T2);
    // this inverse is used later to recover the original scale
    cv::Mat T2inv = T2.inv();  // inverse of the normalization matrix
    cv::Mat H21i, H12i;  // homographies between the original (unnormalized) point pairs
    /// The random selection of 8 corresponding pairs is omitted here; vPn1i and vPn2i
    /// are the corresponding point pairs chosen in the current iteration.
    cv::Mat Hn = ComputeH21(vPn1i,vPn2i);  // compute the homography of the normalized point pairs
    // denormalize to recover the original mean and scale
    H21i = T2inv*Hn*T1;  // homography of the original points, p1 ---> p2
    H12i = H21i.inv();   // homography of the original points, p2 ---> p1
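
For reference, here is a sketch of the DLT core that ComputeH21 performs on the normalized correspondences, written from the standard algorithm: each correspondence contributes two rows to a 2N x 9 system A h = 0, whose solution is the right singular vector with the smallest singular value. The name ComputeH21_DLT is mine and the body is an illustration under these assumptions, not a verbatim copy of the ORB-SLAM2 function:

// Sketch of the DLT solve on normalized point pairs (standard algorithm);
// ComputeH21_DLT is a hypothetical stand-in for ORB-SLAM2's ComputeH21.
#include <opencv2/core.hpp>
#include <vector>

static cv::Mat ComputeH21_DLT(const std::vector<cv::Point2f> &vP1,
                              const std::vector<cv::Point2f> &vP2)
{
    const int N = vP1.size();
    cv::Mat A(2*N, 9, CV_32F);  // two linear equations in h per correspondence

    for(int i = 0; i < N; i++)
    {
        const float u1 = vP1[i].x, v1 = vP1[i].y;  // normalized point in image 1
        const float u2 = vP2[i].x, v2 = vP2[i].y;  // normalized point in image 2

        A.at<float>(2*i,0) = 0.f;    A.at<float>(2*i,1) = 0.f;    A.at<float>(2*i,2) = 0.f;
        A.at<float>(2*i,3) = -u1;    A.at<float>(2*i,4) = -v1;    A.at<float>(2*i,5) = -1.f;
        A.at<float>(2*i,6) = v2*u1;  A.at<float>(2*i,7) = v2*v1;  A.at<float>(2*i,8) = v2;

        A.at<float>(2*i+1,0) = u1;     A.at<float>(2*i+1,1) = v1;     A.at<float>(2*i+1,2) = 1.f;
        A.at<float>(2*i+1,3) = 0.f;    A.at<float>(2*i+1,4) = 0.f;    A.at<float>(2*i+1,5) = 0.f;
        A.at<float>(2*i+1,6) = -u2*u1; A.at<float>(2*i+1,7) = -u2*v1; A.at<float>(2*i+1,8) = -u2;
    }

    // Solve A h = 0: h is the right singular vector of A with the smallest singular value
    cv::Mat u, w, vt;
    cv::SVDecomp(A, w, u, vt, cv::SVD::MODIFY_A | cv::SVD::FULL_UV);

    return vt.row(8).reshape(0, 3);  // reshape the 9-vector h into the 3x3 matrix Hn
}

The returned matrix plays the role of Hn above and is then denormalized with H21i = T2inv*Hn*T1.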

Original post: blog.csdn.net/qq_33898609/article/details/107460140