A study of the difference between OpenCV's initUndistortRectifyMap() function and the undistortion formula in Fourteen Lectures on Visual SLAM


Recently, while using OpenCV to undistort fisheye camera images, I noticed a problem: the parameters used for pinhole-model undistortion differ slightly from the distortion coefficients described in the book Fourteen Lectures on Visual SLAM.

1. The undistortion formula in Fourteen Lectures on Visual SLAM

First, the method from Fourteen Lectures on Visual SLAM. There, the pinhole model's distortion coefficients are [k1, k2, p1, p2], and undistortion uses the following formula:

x_distorted = x (1 + k1 r^2 + k2 r^4) + 2 p1 x y + p2 (r^2 + 2 x^2)
y_distorted = y (1 + k1 r^2 + k2 r^4) + p1 (r^2 + 2 y^2) + 2 p2 x y

where (x, y) are normalized camera-plane coordinates and r^2 = x^2 + y^2.
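As a sanity check, this 4-parameter model can be written out directly. The sketch below operates on normalized coordinates and is not OpenCV's implementation; any coefficient values used with it are hypothetical:

```cpp
#include <cassert>
#include <cmath>

struct Point2 { double x, y; };

// Apply the 4-parameter pinhole distortion model (k1, k2 radial;
// p1, p2 tangential) to a normalized image coordinate (x, y).
Point2 distort4(double x, double y,
                double k1, double k2, double p1, double p2) {
    double r2 = x * x + y * y;                       // r^2 = x^2 + y^2
    double radial = 1.0 + k1 * r2 + k2 * r2 * r2;    // 1 + k1 r^2 + k2 r^4
    Point2 d;
    d.x = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x);
    d.y = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y;
    return d;
}
```

With all coefficients zero the point is unchanged, which is a quick way to confirm the signs and term placement.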

2. The undistortion formula in OpenCV

In OpenCV, the initUndistortRectifyMap() function computes a mapping table between the original image and the rectified image; the remap() function then applies that mapping to the whole image to remove the distortion.

 cv::fisheye::initUndistortRectifyMap(K, D, cv::Mat(), K, imageSize, CV_16SC2, map1, map2);
 cv::remap(raw_image, undistortImg, map1, map2, cv::INTER_LINEAR, cv::BORDER_CONSTANT);
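The mapping semantics of remap() can be sketched without OpenCV. The simplified version below uses nearest-neighbor sampling, whereas cv::remap() interpolates (e.g. INTER_LINEAR); the function name and array layout are illustrative only:

```cpp
#include <cassert>

// Minimal sketch of remap() semantics with dense float maps:
// dst(i, j) = src(mapy(i, j), mapx(i, j)).
// Nearest-neighbor only; OpenCV additionally interpolates.
void remap_nearest(const float* src, int sw, int sh,
                   const float* mapx, const float* mapy,
                   float* dst, int dw, int dh) {
    for (int i = 0; i < dh; i++) {
        for (int j = 0; j < dw; j++) {
            int u = (int)(mapx[i * dw + j] + 0.5f);  // source column
            int v = (int)(mapy[i * dw + j] + 0.5f);  // source row
            // Out-of-range pixels become 0, like BORDER_CONSTANT.
            dst[i * dw + j] = (u >= 0 && u < sw && v >= 0 && v < sh)
                              ? src[v * sw + u] : 0.0f;
        }
    }
}
```

The key point is that the map is a *backward* mapping: it is indexed by destination pixels and stores source locations, which is exactly what initUndistortRectifyMap() produces.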

For a concrete implementation, see the article "De-distortion processing of fisheye camera images".

The initUndistortRectifyMap() function is declared as follows:

void cv::initUndistortRectifyMap
(       InputArray  cameraMatrix,     // original camera intrinsic matrix
        InputArray  distCoeffs,       // original camera distortion parameters
        InputArray  R,                // optional rectification transform
        InputArray  newCameraMatrix,  // new camera intrinsic matrix
        Size        size,             // size of the undistorted image
        int         m1type,           // type of the first output map (map1), CV_32FC1 or CV_16SC2
        OutputArray map1,             // first output map
        OutputArray map2              // second output map
)

Interestingly, the number of distortion parameters here is flexible: it can be 4 parameters (k1, k2, p1, p2), 5 parameters (k1, k2, p1, p2, k3), or 8 parameters (k1, k2, p1, p2, k3, k4, k5, k6).

Later, I looked up the distortion formula used inside the initUndistortRectifyMap() function. It is:
x' = x * (1 + k1 r^2 + k2 r^4 + k3 r^6)/(1 + k4 r^2 + k5 r^4 + k6 r^6) + 2 p1 x y + p2 (r^2 + 2 x^2) + s1 r^2 + s2 r^4
y' = y * (1 + k1 r^2 + k2 r^4 + k3 r^6)/(1 + k4 r^2 + k5 r^4 + k6 r^6) + p1 (r^2 + 2 y^2) + 2 p2 x y + s3 r^2 + s4 r^4

The core of the derivation is a backward mapping: each pixel (u, v) of the undistorted output image is first back-projected to a normalized coordinate via (newCameraMatrix * R)^-1, the distortion model above is applied to that normalized coordinate, and the distorted coordinate is then projected back into the original image with cameraMatrix. The source pixel found there is what remap() copies to (u, v).
When k3, k4, k5, k6 and s1, s2, s3, s4 are all zero, this formula is identical to the one in Fourteen Lectures on Visual SLAM; in other words, the book's undistortion formula is a simplified special case of this one.
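That reduction is easy to verify numerically. Below is a small sketch of the rational radial factor as computed in the source code, checked against the plain polynomial form:

```cpp
#include <cassert>
#include <cmath>

// Radial scale factor used by initUndistortRectifyMap():
// (1 + k1 r^2 + k2 r^4 + k3 r^6) / (1 + k4 r^2 + k5 r^4 + k6 r^6),
// evaluated with Horner's scheme exactly as in the source code.
double kr_rational(double r2, double k1, double k2, double k3,
                   double k4, double k5, double k6) {
    return (1 + ((k3 * r2 + k2) * r2 + k1) * r2)
         / (1 + ((k6 * r2 + k5) * r2 + k4) * r2);
}
```

With k3 = k4 = k5 = k6 = 0 the denominator is 1 and the factor collapses to 1 + k1 r^2 + k2 r^4, the radial term of the book's formula.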

3. The difference between 4 parameters and 8 parameters

As mentioned above, the undistortion parameters of initUndistortRectifyMap() can be 4 parameters (k1, k2, p1, p2), 5 parameters (k1, k2, p1, p2, k3), or 8 parameters (k1, k2, p1, p2, k3, k4, k5, k6).

For ordinary wide-angle camera images, radial and tangential distortion are generally small, so k1, k2, p1, p2 alone are enough to remove the distortion; this corresponds to the undistortion formula in Fourteen Lectures on Visual SLAM.

Fisheye cameras, however, generally exhibit large radial distortion, so the higher-order radial distortion coefficients k3, k4, k5, k6 are needed. As for why the radial term takes the rational form (1 + k1 r^2 + k2 r^4 + k3 r^6)/(1 + k4 r^2 + k5 r^4 + k6 r^6), I have not yet found the design rationale for this formula; presumably it was chosen to better model strong radial distortion.

Depending on the calibration tool and camera model, the fisheye distortion coefficients you obtain may come in various forms; what matters is that all of them can be used with OpenCV's undistortion functions. Sometimes calibration yields the full 8 parameters k1, k2, p1, p2, k3, k4, k5, k6. In that case you must pass all 8 parameters when calling the OpenCV function; using only k1, k2, p1, p2 will give a wrong result.

4. initUndistortRectifyMap() function source code

void cv::initUndistortRectifyMap( InputArray _cameraMatrix, InputArray _distCoeffs,
                              InputArray _matR, InputArray _newCameraMatrix,
                              Size size, int m1type, OutputArray _map1, OutputArray _map2 )
{
    // camera intrinsic matrix and distortion coefficients
    Mat cameraMatrix = _cameraMatrix.getMat(), distCoeffs = _distCoeffs.getMat();
    // rectification rotation and new camera matrix
    Mat matR = _matR.getMat(), newCameraMatrix = _newCameraMatrix.getMat();
 
    if( m1type <= 0 )
        m1type = CV_16SC2;
    CV_Assert( m1type == CV_16SC2 || m1type == CV_32FC1 || m1type == CV_32FC2 );
    _map1.create( size, m1type );
    Mat map1 = _map1.getMat(), map2;
    if( m1type != CV_32FC2 )
    {
        _map2.create( size, m1type == CV_16SC2 ? CV_16UC1 : CV_32FC1 );
        map2 = _map2.getMat();
    }
    else
        _map2.release();
 
    Mat_<double> R = Mat_<double>::eye(3, 3);
    // A is the original camera intrinsic matrix
    Mat_<double> A = Mat_<double>(cameraMatrix), Ar;
 
    // Ar is the new camera intrinsic matrix
    if( newCameraMatrix.data )
        Ar = Mat_<double>(newCameraMatrix);
    else
        Ar = getDefaultNewCameraMatrix( A, size, true );
    // R is the rectification rotation matrix
    if( matR.data )
        R = Mat_<double>(matR);
 
    // distCoeffs holds the distortion coefficients (all zero if none given)
    if( distCoeffs.data )
        distCoeffs = Mat_<double>(distCoeffs);
    else
    {
        distCoeffs.create(8, 1, CV_64F);
        distCoeffs = 0.;
    }
 
    CV_Assert( A.size() == Size(3,3) && A.size() == R.size() );
    CV_Assert( Ar.size() == Size(3,3) || Ar.size() == Size(4, 3));
 
    // iR maps output pixels back to normalized (pre-distortion) coordinates
    Mat_<double> iR = (Ar.colRange(0,3)*R).inv(DECOMP_LU);
    // ir is a raw pointer into iR
    const double* ir = &iR(0,0);
    // original intrinsics: (u0, v0) principal point, (fx, fy) focal lengths
    double u0 = A(0, 2),  v0 = A(1, 2);
    double fx = A(0, 0),  fy = A(1, 1);
 
    CV_Assert( distCoeffs.size() == Size(1, 4) || distCoeffs.size() == Size(4, 1) ||
               distCoeffs.size() == Size(1, 5) || distCoeffs.size() == Size(5, 1) ||
               distCoeffs.size() == Size(1, 8) || distCoeffs.size() == Size(8, 1));
 
    if( distCoeffs.rows != 1 && !distCoeffs.isContinuous() )
        distCoeffs = distCoeffs.t();
    
    // read the distortion coefficients; missing higher-order ones default to 0
    double k1 = ((double*)distCoeffs.data)[0];
    double k2 = ((double*)distCoeffs.data)[1];
    double p1 = ((double*)distCoeffs.data)[2];
    double p2 = ((double*)distCoeffs.data)[3];
    double k3 = distCoeffs.cols + distCoeffs.rows - 1 >= 5 ? ((double*)distCoeffs.data)[4] : 0.;
    double k4 = distCoeffs.cols + distCoeffs.rows - 1 >= 8 ? ((double*)distCoeffs.data)[5] : 0.;
    double k5 = distCoeffs.cols + distCoeffs.rows - 1 >= 8 ? ((double*)distCoeffs.data)[6] : 0.;
    double k6 = distCoeffs.cols + distCoeffs.rows - 1 >= 8 ? ((double*)distCoeffs.data)[7] : 0.;
    // loop over the rows of the output image
    for( int i = 0; i < size.height; i++ )
    {
        // row pointers into map1 and map2
        float* m1f = (float*)(map1.data + map1.step*i);
        float* m2f = (float*)(map2.data + map2.step*i);
        short* m1 = (short*)m1f;
        ushort* m2 = (ushort*)m2f;
        // back-project pixel (j, i) through iR; updated incrementally along the row
        double _x = i*ir[1] + ir[2];
        double _y = i*ir[4] + ir[5];
        double _w = i*ir[7] + ir[8];
        // loop over the columns of the output image
        for( int j = 0; j < size.width; j++, _x += ir[0], _y += ir[3], _w += ir[6] )
        {
            // normalize the homogeneous coordinate
            double w = 1./_w, x = _x*w, y = _y*w;
            double x2 = x*x, y2 = y*y;
            double r2 = x2 + y2, _2xy = 2*x*y;
            // rational radial factor, tangential terms, then project with the original intrinsics
            double kr = (1 + ((k3*r2 + k2)*r2 + k1)*r2)/(1 + ((k6*r2 + k5)*r2 + k4)*r2);
            double u = fx*(x*kr + p1*_2xy + p2*(r2 + 2*x2)) + u0;
            double v = fy*(y*kr + p1*(r2 + 2*y2) + p2*_2xy) + v0;
            if( m1type == CV_16SC2 )
            {
                // fixed-point maps: integer parts in map1, fractional bits packed into map2
                int iu = saturate_cast<int>(u*INTER_TAB_SIZE);
                int iv = saturate_cast<int>(v*INTER_TAB_SIZE);
                m1[j*2] = (short)(iu >> INTER_BITS);
                m1[j*2+1] = (short)(iv >> INTER_BITS);
                m2[j] = (ushort)((iv & (INTER_TAB_SIZE-1))*INTER_TAB_SIZE + (iu & (INTER_TAB_SIZE-1)));
            }
            else if( m1type == CV_32FC1 )
            {
                m1f[j] = (float)u;
                m2f[j] = (float)v;
            }
            else
            {
                m1f[j*2] = (float)u;
                m1f[j*2+1] = (float)v;
            }
        }
    }
}
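The arithmetic in the inner loop can be isolated into a small, self-contained function for study; the intrinsics and coefficient values used in any test are hypothetical:

```cpp
#include <cassert>
#include <cmath>

// Compute the source-image pixel (u, v) for a normalized coordinate (x, y),
// following the same arithmetic as the inner loop above: rational radial
// factor k[0..5] = k1..k6, tangential p1/p2, then projection with fx, fy, u0, v0.
void project_distorted(double x, double y,
                       double fx, double fy, double u0, double v0,
                       const double k[6], double p1, double p2,
                       double* u, double* v) {
    double x2 = x * x, y2 = y * y;
    double r2 = x2 + y2, _2xy = 2 * x * y;
    double kr = (1 + ((k[2] * r2 + k[1]) * r2 + k[0]) * r2)
              / (1 + ((k[5] * r2 + k[4]) * r2 + k[3]) * r2);
    *u = fx * (x * kr + p1 * _2xy + p2 * (r2 + 2 * x2)) + u0;
    *v = fy * (y * kr + p1 * (r2 + 2 * y2) + p2 * _2xy) + v0;
}
```

With zero distortion this reduces to the plain pinhole projection u = fx * x + u0, v = fy * y + v0, which gives a quick correctness check.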

Origin blog.csdn.net/guanjing_dream/article/details/133736524