OpenCV in Practice (23) - Camera Calibration

0. Preface

We have seen how a camera captures a 3D scene by projecting light onto a 2D sensor plane, producing an image that accurately represents the scene as viewed from a particular viewpoint at the instant the image was captured. However, the image formation process discards all information about the depth of the scene elements it represents. To recover the 3D structure of a scene and the 3D pose of the camera, we need to calibrate the camera parameters. In this section, we introduce how to perform camera calibration; to prepare for this, we first briefly review the principles of image formation.

1. Principles of digital imaging

Recalling the image formation process introduced in Image Projection Relations, we learned the principles of the pinhole camera model. Specifically, the model relates a 3D scene point at position (X, Y, Z) to its position (x, y) in the camera image:

pinhole camera model
To gain more insight into the coordinate transformations, we add a reference frame at the center of projection, with the y-axis pointing down, so that the coordinate system is compatible with the convention of placing the image origin at the top-left corner of the image. Finally, we identify a special point on the image plane: the line that passes through the focal point and is normal to the image plane crosses the image plane at the pixel position (u_0, v_0), called the principal point. We might assume that the principal point lies at the center of the image plane, but in practice it may be off-center by a few pixels, depending on how precisely the camera was manufactured.
When learning how to estimate projective relations in images, we saw that the essential parameters of a camera under the pinhole model are its focal length and the size of the image plane, and that a 3D point (X, Y, Z) is projected onto the image plane at (fX/Z, fY/Z). Also, since we are dealing with digital images, the number of pixels on the image plane (its resolution) is another important characteristic of a camera.
To convert this coordinate into pixels, we need to divide the 2D image coordinates by the pixel width (p_x) and pixel height (p_y). Dividing the focal length, given in physical units (usually millimeters), by p_x yields the focal length expressed in (horizontal) pixels, which we denote f_x; similarly, f_y = f / p_y is the focal length expressed in vertical pixel units. The complete projection equations are therefore as follows:
x = f_x \frac{X}{Z} + u_0, \quad y = f_y \frac{Y}{Z} + v_0
where (u_0, v_0) is the principal point, added to the result in order to move the origin to the top-left corner of the image. Note that the physical size of a pixel can be obtained by dividing the size of the image sensor (usually given in millimeters) by the number of pixels (horizontally or vertically). On modern sensors, pixels are generally square, that is, they have the same horizontal and vertical size.
We can rewrite the above equations in matrix form to obtain the complete projection equation in its most general form:
S \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_1 & r_2 & r_3 & t_1 \\ r_4 & r_5 & r_6 & t_2 \\ r_7 & r_8 & r_9 & t_3 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}

2. Camera Calibration

Camera calibration is the process of estimating a camera's different parameters. One could use the specification data provided by the camera manufacturer, but for some tasks (such as 3D reconstruction) these specifications are not accurate enough. Camera calibration works by showing a known pattern to the camera and analyzing the resulting images; an optimization process then determines the parameter values that best explain the observations. This is a complex process, but it is made simple by OpenCV's calibration functions.
To calibrate a camera, it must be shown a set of scene points whose 3D positions are known. Then, we observe where these points project onto the image. With a sufficient number of 3D points and their associated 2D image points, accurate camera parameters can be inferred from the projection equation; obviously, to obtain accurate results we should observe as many points as possible. One way to do this would be to take a single picture of a scene containing many known 3D points, but in practice this is rarely feasible. A more convenient approach is to take multiple images of a smaller set of 3D points from different viewpoints; besides computing the camera's intrinsic parameters, this approach must also compute the position of each camera viewpoint, but it is much more practical.
OpenCV recommends using a checkerboard pattern to generate the set of 3D scene points. The checkerboard pattern creates points at the corners of each square, and since the pattern is flat, we can assume that the board lies at Z = 0, with the x and y axes aligned with the grid. The calibration process then consists of showing the camera the checkerboard pattern from different viewpoints. The following is an example of a calibration image with 6x4 inner corners:

Calibration image example

OpenCV contains functions that automatically detect the corners of this checkerboard pattern.

2.1 Perform camera calibration

In this section, we will use OpenCV's algorithms to detect checkerboard points and compute the camera calibration.

(1) Given an image and the size of the chessboard used (that is, the number of horizontal and vertical inner corners), the cv::findChessboardCorners function returns the positions of the chessboard corners in the image. If the function fails to detect the pattern, it returns false:

// output vector of image points
std::vector<cv::Point2f> imageCorners;
// number of inner corners on the chessboard
cv::Size boardSize(7, 5);
// get the chessboard corners
bool found = cv::findChessboardCorners(image, boardSize, imageCorners);

(2) The output parameter imageCorners contains the pixel coordinates of the detected inner corners of the pattern; additional parameters can be used to fine-tune the algorithm. The cv::drawChessboardCorners function draws the detected corners on the checkerboard image and connects them sequentially with lines:

// draw the chessboard corners
cv::drawChessboardCorners(image, 
            boardSize, imageCorners, 
            found); // whether the corners were detected

The resulting image is as follows:

Corner detection result

The lines connecting the points show the order of the points in the vector of detected image points. To perform the calibration, we now need to specify the corresponding 3D points.

(3) These points can be specified in any unit of your choice (for example, centimeters or inches); however, the simplest option is to assume that each square represents one unit. In that case, assuming the depth of the board is Z = 0, the coordinates of the first point are (0, 0, 0), the next point is (1, 0, 0), and so on until the last point, (7, 5, 0). There are 48 points in total in this pattern.

(4) To obtain more points, more images of the same calibration pattern must be shown from different viewpoints. To do this, either move the pattern in front of the camera or move the camera around the pattern; from a mathematical point of view, the two are completely equivalent. OpenCV's calibration functions assume that the reference frame is fixed to the calibration pattern, and they compute the rotation and translation of the camera relative to that reference frame. We encapsulate the calibration process in a CameraCalibrator class, whose attributes are as follows:

class CameraCalibrator {
    private:
        // input points: these points are in the world coordinate system
        // (each square is one unit)
        std::vector<std::vector<cv::Point3f> > objectPoints;
        // pixel positions of the image points
        std::vector<std::vector<cv::Point2f> > imagePoints;
        // output matrices
        cv::Mat cameraMatrix;
        cv::Mat distCoeffs;
        // flag specifying the calibration options
        int flag;

(5) The scene-point and image-point input vectors are in fact std::vectors of vectors: each element is the vector of points of one view. We add calibration points by specifying a vector of checkerboard image filenames as input to the addChessboardPoints class method:

// open the chessboard images and extract the corner points
int CameraCalibrator::addChessboardPoints(
            const std::vector<std::string>& filelist,   // list of chessboard image filenames
            cv::Size& boardSize,                        // board size
            std::string windowName) {

(6) In the addChessboardPoints function, we first initialize the vectors and set the 3D scene points of the chessboard:

// points on the chessboard
std::vector<cv::Point2f> imageCorners;
std::vector<cv::Point3f> objectCorners;
// 3D scene points:
// initialize the chessboard corners in the chessboard reference frame;
// the corners are at 3D location (X, Y, Z) = (i, j, 0)
for (int i=0; i<boardSize.height; i++) {
    for (int j=0; j<boardSize.width; j++) {
        objectCorners.push_back(cv::Point3f(i, j, 0.0f));
    }
}

(7) Then, we read each image in the input list and find the chessboard corners using the cv::findChessboardCorners function:

// 2D image points -- chessboard images
cv::Mat image;
int successes = 0;
// for all viewpoints
for (int i=0; i<filelist.size(); i++) {
    image = cv::imread(filelist[i], 0);
    // chessboard corners
    bool found = cv::findChessboardCorners(image,   // chessboard image
                    boardSize,                      // board size
                    imageCorners);                  // list of detected corners

(8) In addition, to obtain a more accurate location of the image points, the cv::cornerSubPix function can be used to refine the image points to sub-pixel accuracy. The termination criteria specified by a cv::TermCriteria object define the maximum number of iterations and the minimum accuracy in sub-pixel coordinates; whichever is reached first stops the corner-refinement process:

        // get sub-pixel accuracy on the corners
        if (found) {
            cv::cornerSubPix(image, imageCorners,
                    cv::Size(5, 5),         // half-size of the search window
                    cv::Size(-1, -1),
                    cv::TermCriteria(cv::TermCriteria::MAX_ITER+cv::TermCriteria::EPS, 
                            30,             // maximum number of iterations
                            0.1));          // minimum accuracy
            if (imageCorners.size()==boardSize.area()) {
                // add image and scene points from this view
                addPoints(imageCorners, objectCorners);
                successes++;
            }
        }
        if (windowName.length()>0 && imageCorners.size()==boardSize.area()) {
            // draw the corners
            cv::drawChessboardCorners(image, boardSize, imageCorners, found);
            cv::imshow(windowName, image);
            cv::waitKey(100);
        }
    }
    return successes;
}

(9) After a set of chessboard corners has been successfully detected, the addPoints method adds these points to the vectors of image and scene points. Once a sufficient number of chessboard images have been processed (and, consequently, a large number of 3D scene points and 2D image points are available), the computation of the calibration parameters can be initiated:

// camera calibration
double CameraCalibrator::calibrate(const cv::Size imageSize) {
    // initialization
    mustInitUndistort= true;
    // output rotations and translations
    std::vector<cv::Mat> rvecs, tvecs;
    // camera calibration
    return calibrateCamera(objectPoints,    // the 3D points
                    imagePoints,            // the image points
                    imageSize,              // image size
                    cameraMatrix,           // output camera matrix
                    distCoeffs,             // output distortion matrix
                    rvecs, tvecs,           // Rs, Ts 
                    flag);                  // set options
                    // ,CV_CALIB_USE_INTRINSIC_GUESS);
}

In practice, 10 to 20 checkerboard images are enough, but they must be taken at different depths and from different viewpoints. Two important outputs of the function are the camera matrix and the distortion parameters. To interpret the calibration results, we need to recall the projection equation, which describes how a 3D point is transformed into a 2D point by sequentially applying two matrices. The first matrix contains all the camera-specific parameters, called the intrinsic parameters of the camera; this 3x3 matrix is one of the output matrices returned by the cv::calibrateCamera function. Another function, cv::calibrationMatrixValues, can be used to explicitly return the intrinsic parameter values encoded in the calibration matrix. The second matrix expresses the input points in camera-centered coordinates; it consists of a rotation component (represented by the 3x3 matrix entries r_1 to r_9) and a translation component (represented by t_1, t_2, and t_3 in the 3x1 vector). In our calibration example, the reference frame is placed on the checkerboard; therefore, a rigid transformation must be computed for each view. These values are given in the output parameter list of the cv::calibrateCamera function. The rotation and translation components are often called the extrinsic parameters of the calibration, and they are different for each view.
For a given camera/lens system, the intrinsic parameters remain constant. The calibration results provided by cv::calibrateCamera are obtained through an optimization process that seeks the intrinsic and extrinsic parameters minimizing the difference between the predicted image point positions, computed by projecting the 3D scene points, and the actual image point positions observed on the images. The sum of these differences over all the points specified during calibration is called the reprojection error.
The intrinsic parameters obtained for our test camera from the calibration are f_x = 409.272 pixels, f_y = 408.706 pixels, u_0 = 237.248 pixels, and v_0 = 171.287 pixels. The size of the calibration images is 536x356 pixels. From these results, we can see that the principal point is close to the center of the image, but offset by a few pixels. Checking the manufacturer's specifications for the camera used to take the calibration images, the sensor size is 23.5mm x 15.7mm, which gives a pixel size of 0.0438mm. The estimated focal length is expressed in pixels, so multiplying it by the pixel size gives an estimated focal length of approximately 17.9mm, which is consistent with the focal length of the lens actually used.
Next, we consider the distortion parameters. So far, we have said that under the pinhole camera model the effect of the lens can be ignored, but this holds only if the lens used to capture the image introduces no optical distortion. For lower-quality lenses, or lenses with a very short focal length, this effect can no longer be ignored. You may have noticed that the checkerboard pattern displayed in our example image is clearly distorted, and the distortion becomes more pronounced farther from the center of the image; this deformation is called radial distortion.
These distortions can be compensated for by introducing an appropriate distortion model. The idea is to represent the distortions induced by a lens with a set of mathematical equations; applying these equations in reverse then removes the distortion visible on the image. The transformation parameters that correct the distortion can be obtained, together with the other camera parameters, during the calibration phase. Once this is done, images from the newly calibrated camera can be undistorted. Therefore, we add an additional method to the CameraCalibrator class:

// remove distortion from an image (after calibration)
cv::Mat CameraCalibrator::remap(const cv::Mat &image, cv::Size &outputSize) {
    cv::Mat undistorted;
    if (outputSize.height == -1)
        outputSize = image.size();
    if (mustInitUndistort) {
        // called once per calibration
        cv::initUndistortRectifyMap(
                cameraMatrix,       // computed camera matrix
                distCoeffs,         // computed distortion matrix
                cv::Mat(),          // optional rectification (none here)
                cv::Mat(),          // camera matrix for the undistorted image
                outputSize,         // size of the undistorted image
                CV_32FC1,           // type of the output maps
                map1, map2);        // the x and y mapping functions
        mustInitUndistort= false;
    }
    // apply the mapping functions
    cv::remap(image, undistorted, map1, map2, 
                cv::INTER_LINEAR);  // interpolation type
    return undistorted;
}

Running this code gives the following results:

Code running results
As shown in the image above, once the distortion is removed, we obtain a regular perspective image.
To correct the distortion, OpenCV uses a polynomial function that is applied to the image points in order to move them to their undistorted positions. By default, 5 coefficients are used; a model with 8 coefficients is also available. Once these coefficients are obtained, the cv::initUndistortRectifyMap function can compute two cv::Mat mapping functions (one for the x coordinate and one for the y coordinate) that give, for each point of the undistorted output image, its position on the distorted input image. The cv::remap function then remaps all the points of the input image to the new image. Because the mapping is a nonlinear transformation, some pixels of the input image may fall outside the boundaries of the output image; the size of the output image can be enlarged to reduce this pixel loss. More options are available when performing camera calibration, as we explain next.

2.2 Calibration using known camera parameters

When estimates of the camera's intrinsic parameters are already known, they can be input to the cv::calibrateCamera function, where they will be used as initial values in the optimization process. To do so, simply add the CALIB_USE_INTRINSIC_GUESS flag and supply these values in the calibration matrix parameter. It is also possible to impose a fixed value for the principal point (CALIB_FIX_PRINCIPAL_POINT), which can often be assumed to be the central pixel, and to impose a fixed ratio between the focal lengths f_x and f_y (CALIB_FIX_ASPECT_RATIO), for example by assuming that the pixels are square.

2.3 Calibration using a circular grid

OpenCV also offers a way to calibrate a camera using a grid of solid circles as an alternative to the checkerboard pattern. In this case, the centers of the circles are used as calibration points. The corresponding function is very similar to the one we used to locate the chessboard corners:

cv::Size boardSize(7, 7);
std::vector<cv::Point2f> centers;
bool found = cv::findCirclesGrid(image, boardSize, centers);

3. Complete code

The complete code of the header file (CameraCalibrator.h) is as follows:

#if !defined CAMERACALIBRATOR_H
#define CAMERACALIBRATOR_H

#include <vector>
#include <iostream>

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/highgui/highgui.hpp>

class CameraCalibrator {
    private:
        // input points: these points are in the world coordinate system
        // (each square is one unit)
        std::vector<std::vector<cv::Point3f> > objectPoints;
        // pixel positions of the image points
        std::vector<std::vector<cv::Point2f> > imagePoints;
        // output matrices
        cv::Mat cameraMatrix;
        cv::Mat distCoeffs;
        // flag specifying the calibration options
        int flag;
        // used to correct image distortion
        cv::Mat map1, map2;
        bool mustInitUndistort;
    public:
        CameraCalibrator() : flag(0), mustInitUndistort(true) {}
        // open the chessboard images and extract corner points
        int addChessboardPoints(const std::vector<std::string>& filelist, cv::Size& boardSize, std::string windowName="");
        // add scene points and corresponding image points
        void addPoints(const std::vector<cv::Point2f>& imageCorners, const std::vector<cv::Point3f>& objectCorners);
        // camera calibration
        double calibrate(const cv::Size imageSize);
        // set the calibration flags
        void setCalibrationFlag(bool radial8CoeffEnabled=false, bool tangentialParamEnabled=false);
        // remove distortion from an image (after calibration)
        cv::Mat remap(const cv::Mat& image, cv::Size& outputSize);
        cv::Mat getCameraMatrix() { return cameraMatrix; }
        cv::Mat getDistCoeffs() { return distCoeffs; }
};

#endif

The complete code of the main function file (fastCorners.cpp) is as follows:

#include <iostream>
#include <iomanip>
#include <sstream>
#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/features2d.hpp>

#include "CameraCalibrator.h"

// open the chessboard images and extract the corner points
int CameraCalibrator::addChessboardPoints(
            const std::vector<std::string>& filelist,   // list of chessboard image filenames
            cv::Size& boardSize,                        // board size
            std::string windowName) {
    // points on the chessboard
    std::vector<cv::Point2f> imageCorners;
    std::vector<cv::Point3f> objectCorners;
    // 3D scene points:
    // initialize the chessboard corners in the chessboard reference frame;
    // the corners are at 3D location (X, Y, Z) = (i, j, 0)
    for (int i=0; i<boardSize.height; i++) {
        for (int j=0; j<boardSize.width; j++) {
            objectCorners.push_back(cv::Point3f(i, j, 0.0f));
        }
    }
    // 2D image points -- chessboard images
    cv::Mat image;
    int successes = 0;
    // for all viewpoints
    for (int i=0; i<filelist.size(); i++) {
        image = cv::imread(filelist[i], 0);
        // chessboard corners
        bool found = cv::findChessboardCorners(image,   // chessboard image
                        boardSize,                      // board size
                        imageCorners);                  // list of detected corners
        // get sub-pixel accuracy on the corners
        if (found) {
            cv::cornerSubPix(image, imageCorners,
                    cv::Size(5, 5),         // half-size of the search window
                    cv::Size(-1, -1),
                    cv::TermCriteria(cv::TermCriteria::MAX_ITER+cv::TermCriteria::EPS, 
                            30,             // maximum number of iterations
                            0.1));          // minimum accuracy
            if (imageCorners.size()==boardSize.area()) {
                // add image and scene points from this view
                addPoints(imageCorners, objectCorners);
                successes++;
            }
        }
        if (windowName.length()>0 && imageCorners.size()==boardSize.area()) {
            // draw the corners
            cv::drawChessboardCorners(image, boardSize, imageCorners, found);
            cv::imshow(windowName, image);
            cv::waitKey(100);
        }
    }
    return successes;
}

// add scene points and corresponding image points
void CameraCalibrator::addPoints(const std::vector<cv::Point2f>& imageCorners, const std::vector<cv::Point3f>& objectCorners) {
    // 2D image points
    imagePoints.push_back(imageCorners);          
    // corresponding 3D scene points
    objectPoints.push_back(objectCorners);
}

// camera calibration
double CameraCalibrator::calibrate(const cv::Size imageSize) {
    // initialization
    mustInitUndistort= true;
    // output rotations and translations
    std::vector<cv::Mat> rvecs, tvecs;
    // camera calibration
    return calibrateCamera(objectPoints,    // the 3D points
                    imagePoints,            // the image points
                    imageSize,              // image size
                    cameraMatrix,           // output camera matrix
                    distCoeffs,             // output distortion matrix
                    rvecs, tvecs,           // Rs, Ts 
                    flag);                  // set options
                    // ,CV_CALIB_USE_INTRINSIC_GUESS);
}

// remove distortion from an image (after calibration)
cv::Mat CameraCalibrator::remap(const cv::Mat &image, cv::Size &outputSize) {
    cv::Mat undistorted;
    if (outputSize.height == -1)
        outputSize = image.size();
    if (mustInitUndistort) {
        // called once per calibration
        cv::initUndistortRectifyMap(
                cameraMatrix,       // computed camera matrix
                distCoeffs,         // computed distortion matrix
                cv::Mat(),          // optional rectification (none here)
                cv::Mat(),          // camera matrix for the undistorted image
                outputSize,         // size of the undistorted image
                CV_32FC1,           // type of the output maps
                map1, map2);        // the x and y mapping functions
        mustInitUndistort= false;
    }
    // apply the mapping functions
    cv::remap(image, undistorted, map1, map2, 
                cv::INTER_LINEAR);  // interpolation type
    return undistorted;
}

// set the calibration options
void CameraCalibrator::setCalibrationFlag(bool radial8CoeffEnabled, bool tangentialParamEnabled) {
    // set the flags used by the cv::calibrateCamera() function
    flag = 0;
    if (!tangentialParamEnabled) flag += cv::CALIB_ZERO_TANGENT_DIST;
    if (radial8CoeffEnabled) flag += cv::CALIB_RATIONAL_MODEL;
}

int main() {
    cv::Mat image;
    std::vector<std::string> filelist;
    // generate the list of chessboard image filenames
    for (int i=1; i<=27; i++) {
        std::stringstream str;
        str << "chessboards/chessboard" << std::setw(2) << std::setfill('0') << i << ".jpg";
        std::cout << str.str() << std::endl;
        filelist.push_back(str.str());
        image= cv::imread(str.str(),0);
    }
    // create the calibrator object
    CameraCalibrator cameraCalibrator;
    // add the chessboard corners
    cv::Size boardSize(7,5);
    cameraCalibrator.addChessboardPoints(
            filelist,                       // list of chessboard image files
            boardSize, "Detected points");  // board size
    // calibrate the camera
    cameraCalibrator.setCalibrationFlag(true, true);
    cameraCalibrator.calibrate(image.size());
    // remove distortion from an image
    image = cv::imread(filelist[14],0);
    cv::Size newSize(static_cast<int>(image.cols*1.5), static_cast<int>(image.rows*1.5));
    cv::Mat uImage= cameraCalibrator.remap(image, newSize);
    // camera matrix
    cv::Mat cameraMatrix= cameraCalibrator.getCameraMatrix();
    std::cout << " Camera intrinsic: " << cameraMatrix.rows << "x" << cameraMatrix.cols << std::endl;
    std::cout << cameraMatrix.at<double>(0,0) << " " << cameraMatrix.at<double>(0,1) << " " << cameraMatrix.at<double>(0,2) << std::endl;
    std::cout << cameraMatrix.at<double>(1,0) << " " << cameraMatrix.at<double>(1,1) << " " << cameraMatrix.at<double>(1,2) << std::endl;
    std::cout << cameraMatrix.at<double>(2,0) << " " << cameraMatrix.at<double>(2,1) << " " << cameraMatrix.at<double>(2,2) << std::endl;
    cv::namedWindow("Original Image");
    cv::imshow("Original Image", image);
    cv::namedWindow("Undistorted Image");
    cv::imshow("Undistorted Image", uImage);
    // store the computed matrices
    cv::FileStorage fs("calib.xml", cv::FileStorage::WRITE);
    fs << "Intrinsic" << cameraMatrix;
    fs << "Distortion" << cameraCalibrator.getDistCoeffs();
    cv::waitKey();
    return 0;
}

Summary

Camera calibration is a key step in recovering the 3D structure of a scene and the 3D pose of the camera. In this section, we introduced the basic principles of camera calibration and implemented a complete camera calibration workflow based on OpenCV's calibration functions.

Series links

OpenCV in Practice (1) - OpenCV and Image Processing Fundamentals
OpenCV in Practice (2) - OpenCV Core Data Structures
OpenCV in Practice (3) - Image Regions of Interest
OpenCV in Practice (4) - Pixel Operations
OpenCV in Practice (5) - Image Operations in Detail
OpenCV in Practice (6) - OpenCV Strategy Design Pattern
OpenCV in Practice (7) - OpenCV Color Space Conversion
OpenCV in Practice (8) - Histograms in Detail
OpenCV in Practice (9) - Image Detection Based on Histogram Backprojection
OpenCV in Practice (10) - Integral Images in Detail
OpenCV in Practice (11) - Morphological Transformations in Detail
OpenCV in Practice (12) - Image Filtering in Detail
OpenCV in Practice (13) - High-Pass Filters and Their Applications
OpenCV in Practice (14) - Image Line Extraction
OpenCV in Practice (15) - Contour Detection in Detail
OpenCV in Practice (16) - Corner Detection in Detail
OpenCV in Practice (17) - FAST Feature Point Detection
OpenCV in Practice (18) - Feature Matching
OpenCV in Practice (19) - Feature Descriptors
OpenCV in Practice (20) - Image Projection Relations
OpenCV in Practice (21) - Image Matching Based on Random Sample Consensus
OpenCV in Practice (22) - Homography and Its Applications

Origin blog.csdn.net/LOVEmy134611/article/details/130665079