Monocular Camera Calibration Based on Concentric Circles

   Most blog posts about camera calibration focus on theory and offer very little code, so I decided to write this one. It does not derive the relevant formulas; I assume you already understand terms such as the three major coordinate systems (world, camera, and image) and the intrinsic and extrinsic parameters.

One: The role of camera calibration

        (1): Solve for the intrinsic and extrinsic parameters.

        (2): Correct lens distortion.

Two: Calibration process

       (1): Prepare several calibration images (at least four).

       (2): Preprocess the images to remove irrelevant contour information.

       (3): Extract the corner information (the center of each concentric circle serves as a corner point).

       (4): Call the camera calibration function.

       (5): Compute the reprojection error.

2.1: Calibration images

The target images used in the experiment are shown below. Camera calibration generally requires at least four images; the ones used in this experiment are shared at the end of the article for convenience.

[Figure: the concentric-circle calibration target]

2.2: Image preprocessing

         The figure below shows the input target image after Canny edge detection. There are many distracting edge contours on the image; we are only interested in the 11*9 concentric-circle contours, so the image must be preprocessed first. The details of that preprocessing will be covered in a separate blog post, so only a brief introduction is given here.

[Figure: Canny edge map of the target image, showing many distracting contours]

2.3: Extracting the circle centers

           After preprocessing, use OpenCV's fitEllipse ellipse-fitting function to obtain the center coordinates of the concentric circles, and draw them on the original image. The center coordinates extracted in this calibration experiment are stored in a text file, shared via the Baidu Cloud link at the end of the article.

[Figure: extracted circle centers drawn on the original image]

2.4: Calling the calibration function

        In fact, the camera calibration function is already packaged in OpenCV; we are merely porters who call it and need to understand the meaning of its parameters. Use the calibrateCamera function to compute the required intrinsic and extrinsic parameters.

double cv::calibrateCamera(
    InputArrayOfArrays  objectPoints,
    InputArrayOfArrays  imagePoints,
    Size                imageSize,
    InputOutputArray    cameraMatrix,
    InputOutputArray    distCoeffs,
    OutputArrayOfArrays rvecs,
    OutputArrayOfArrays tvecs,
    int                 flags = 0,
    TermCriteria        criteria = TermCriteria(TermCriteria::COUNT + TermCriteria::EPS, 30, DBL_EPSILON)
)

2.5: Reprojection error

 The reprojection error is mainly used to evaluate the accuracy of the computed intrinsic and extrinsic parameters: the smaller the error, the more reliably they can be used as input for the next step.

the 1 image of average error: 0.0143862pix

the 2 image of average error: 0.0155335pix

the 3 image of average error: 0.0138766pix

the 4 image of average error: 0.0140222pix

the 5 image of average error: 0.0144393pix

the 6 image of average error: 0.0135796pix

the 7 image of average error: 0.0147223pix

the 8 image of average error: 0.0143567pix

These are the average reprojection errors for the eight images in this experiment; the smaller the error, the better the calibration.

Three: Code

#include "widget.h"
#include <QApplication>
#include <cstdlib>
#include <iostream>
#include <fstream>
#include <vector>
#include <opencv2/opencv.hpp>  // pulls in core, imgproc, calib3d and highgui

using namespace cv;
using namespace std;

// Parse one "x y" line of the centre file into two doubles.
void convert_float(const char name[], double temp[2])
{
    char *end = nullptr;
    temp[0] = strtod(name, &end);   // x coordinate, up to the first space
    temp[1] = strtod(end, nullptr); // y coordinate, after the space
}


int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    char filename[100];
    string infilename = "H:/Image/cc/center4.txt";  // change this path for your own machine
    cv::Size imageSize;
    imageSize.width=1280;
    imageSize.height=1024;
    // number of corners per row and per column on the target
    cv::Size boardSize=cv::Size(11,9);

    // buffer for the corners detected in one image
    std::vector<Point2f> imagePointsBuf;
    // all detected corners, grouped by image
    std::vector<std::vector<Point2f>> imagePointsSeq;
    ifstream fin(infilename);

    if(fin.is_open())
    {
        // a line starting with '#' marks the end of one image's corner list
        while(fin.getline(filename, sizeof(filename)))
        {
            if(filename[0]=='#')
            {
               imagePointsSeq.push_back(imagePointsBuf);
               imagePointsBuf.clear();
               continue;
            }
            Point2f temp_coordinate; double temp[2];
            convert_float(filename,temp);
            temp_coordinate.x=temp[0];
            temp_coordinate.y=temp[1];
            imagePointsBuf.push_back(temp_coordinate);
        }
    }

    // draw the extracted centres back onto the original images as a visual check
    for(int i=0;i<(int)imagePointsSeq.size();i++)
    {
        string imagePath ="H:/Image/cc/"+to_string(i+1)+".bmp";
        Mat image=imread(imagePath);
        vector<Point2f> temp=imagePointsSeq[i];
        for(int j=0;j<(int)temp.size();j++)
        {
            circle(image,temp[j],2,Scalar(0,0,255),2,8);
        }
        imshow(to_string(i+1),image);
    }
    waitKey(1); // give HighGUI a chance to render the windows

    // world coordinates of the target corners, one set per image
    vector<vector<Point3f>> objectPoints;
    // camera intrinsic matrix M = [fx γ u0; 0 fy v0; 0 0 1]
    Mat cameraMatrix=cv::Mat(3,3,CV_64F,Scalar::all(0));
    // the five distortion coefficients: k1 k2 p1 p2 k3
    Mat distCoeffs=Mat(1,5,CV_64F,Scalar::all(0));
    // rotation vector of each image
    vector<Mat> rvecsMat;
    // translation vector of each image
    vector<Mat> tvecsMat;

    // initialise the world coordinates of the target corners,
    // assuming the board lies in the Z=0 plane with a 30 mm grid spacing
    int i,j,t;
    for(t=0;t<8;t++)  // only eight images are used for calibration
    {
        vector<Point3f> tempPointSet;
        for(i=0;i<boardSize.height;i++)    // rows: 9
        {
            for(j=0;j<boardSize.width;j++) // columns: 11
            {
                Point3f realPoint;  // 11*9 corners per image
                realPoint.x=i*30.0;
                realPoint.y=j*30.0;
                realPoint.z=0;

                tempPointSet.push_back(realPoint);
            }
        }
        objectPoints.push_back(tempPointSet);
    }


    // run the calibration
    calibrateCamera(objectPoints,imagePointsSeq,imageSize,cameraMatrix,distCoeffs,rvecsMat,tvecsMat);

    cout<<cameraMatrix<<endl; // print the camera intrinsic matrix

    // print each image's rotation vector and translation vector
    for(int i=0;i<8;i++)
    {
        cout<<i+1<<" picture"<<endl;
        cout<<rvecsMat[i]<<endl;
        cout<<tvecsMat[i]<<endl;
        cout<<endl;
    }

    cout<<"calibration is over"<<endl;
    cout<<"start to  estimate result "<<endl;

    // sum of the average errors over all images
    double totalErr=0.0;
    // average error of one image
    double err=0.0;
    // reprojected points recomputed from the calibration result
    vector<Point2f> imagePoints2;

    for(i=0;i<8;i++)
    {
        vector<Point3f> tempPointSet=objectPoints[i];  // world coordinates of this image's corners
        // reproject the 3-D corners through the estimated intrinsic and extrinsic
        // parameters to get new pixel coordinates imagePoints2; i.e. use the
        // calibration result to compute each corner's pixel position
        projectPoints(tempPointSet,rvecsMat[i],tvecsMat[i],cameraMatrix,distCoeffs,imagePoints2);

        // compare the reprojected points against the originally detected ones
        vector<Point2f> tempImagePoint=imagePointsSeq[i];

        Mat tempImagePointMat=Mat(1,tempImagePoint.size(),CV_32FC2); // detected 2-D points
        Mat imagePoints2Mat=Mat(1,imagePoints2.size(),CV_32FC2);     // reprojected 2-D points

        for(j=0;j<(int)tempImagePoint.size();j++)
        {
            imagePoints2Mat.at<cv::Vec2f>(0,j)=cv::Vec2f(imagePoints2[j].x,imagePoints2[j].y);
            tempImagePointMat.at<cv::Vec2f>(0,j)=cv::Vec2f(tempImagePoint[j].x,tempImagePoint[j].y);
        }
        // average error: L2 norm over all corners divided by their count (11*9 = 99)
        err=norm(imagePoints2Mat,tempImagePointMat,NORM_L2);
        err/=(double)tempImagePoint.size();
        totalErr+=err;
        cout<<"the "<<i+1<<" image of average error: "<<err<<"pix"<<endl;
    }
    cout<<"total average error: "<<totalErr/8<<"pix"<<endl;

    return a.exec();
}

Most of the code is commented, so you can follow it step by step. Of course, this is only basic monocular camera calibration, suitable for getting started; there is still a long way to go to improve the accuracy.

If you spot any problems, please point them out.

Also attached are the target images and the file center4.txt, which holds the extracted ellipse centers.

Link: https://pan.baidu.com/s/1XcZzLUWPHSNQ_f6bxABEnw 
Extraction code: 6tg8

Origin blog.csdn.net/qq_42027706/article/details/121696702