OpenCV Basics (28): Camera Calibration Using OpenCV (Python and C++)

Cameras are an integral part of many fields, including robotics, surveillance, space exploration, social media, industrial automation, and even the entertainment industry. For many applications, the parameters of a camera must be known in order to effectively use it as a vision sensor.

In this article, you will learn about the steps involved in camera calibration and what they mean. We also share C++ and Python code and sample images of the checkerboard pattern.

1. What is camera calibration

The process of estimating camera parameters is called camera calibration.

This means that we have all the information (parameters or coefficients) about the camera needed to determine the exact relationship between a 3D point in the real world and its corresponding 2D projection (pixel) in the image captured by that calibrated camera.

Usually this means recovering two kinds of parameters:

  • Intrinsic parameters of the camera/lens system, e.g. the focal length, the optical center, and the radial distortion coefficients of the lens.
  • Extrinsic parameters: This refers to the orientation (rotation and translation) of the camera relative to some world coordinate system.

In the image below, the lens parameters estimated by the geometric calibration are used to remove the distortion of the image.
[Figure: an image before and after removing lens distortion with the estimated parameters]

2. Use OpenCV for camera calibration

To find the projection of a 3D point on the image plane, we first need to transform the point from the world coordinate system to the camera coordinate system using the extrinsic parameters (rotation and translation).

Next, using the camera's intrinsic parameters, we project the point onto the image plane.

The equation that relates a 3D point (X_w, Y_w, Z_w) in world coordinates to its projection (u, v) in image coordinates is:

s [u, v, 1]^T = P [X_w, Y_w, Z_w, 1]^T

where P is a 3×4 projection matrix consisting of two parts: the 3×3 intrinsic matrix K, which contains the intrinsic parameters, and the extrinsic matrix [R|t], which combines the 3×3 rotation matrix R with the 3×1 translation vector t:

P = K [R|t]

The intrinsic matrix K is upper triangular:

K = [ f_x   γ    c_x ]
    [  0   f_y   c_y ]
    [  0    0     1  ]

  • f_x, f_y are the x and y focal lengths (yes, they are usually the same).
  • c_x, c_y are the x and y coordinates of the optical center in the image plane. Using the center of the image is usually a good enough approximation.
  • γ is the skew between the axes. It is usually 0.
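
To make the projection concrete, below is a minimal NumPy sketch (the values of K, R, and t are made up purely for illustration) that maps a world point to pixel coordinates using the relations above.

import numpy as np

# Illustrative (made-up) intrinsics: fx = fy = 800, optical center at (320, 240), zero skew
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Illustrative (made-up) extrinsics: identity rotation, camera translated 4 units along Z
R = np.eye(3)
t = np.array([[0.0], [0.0], [4.0]])

# 3x4 projection matrix P = K [R | t]
P = K @ np.hstack((R, t))

# A world point in homogeneous coordinates (X_w, Y_w, Z_w, 1)
X_w = np.array([[1.0], [0.5], [0.0], [1.0]])

# Project: s * [u, v, 1]^T = P * [X_w, Y_w, Z_w, 1]^T, then divide by s
x = P @ X_w
u, v = x[0, 0] / x[2, 0], x[1, 0] / x[2, 0]
print(u, v)   # pixel coordinates of the projected point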

3. The goal of camera calibration

The goal of camera calibration is to find the 3×3 intrinsic matrix K, the 3×3 rotation matrix R, and the 3×1 translation vector t from a set of known 3D points (X_w, Y_w, Z_w) and their corresponding image coordinates (u, v). A camera is said to be calibrated once we have obtained the values of the intrinsic and extrinsic parameters.

In summary, a camera calibration algorithm has the following inputs and outputs:

  • Input: A set of images containing points with known 2D image coordinates and 3D world coordinates.
  • Output: 3×3 camera intrinsic matrix, rotation and translation for each image.

Note: In OpenCV, the camera intrinsic matrix has no skew parameter, so the matrix takes the form:

K = [ f_x   0    c_x ]
    [  0   f_y   c_y ]
    [  0    0     1  ]

4. Different types of camera calibration methods

  • Calibration patterns: When we have full control over the imaging process, the best way to perform calibration is to capture multiple images of an object or pattern of known dimensions from different viewpoints. The checkerboard-based method we will study in this post falls into this category. We can also use a circular pattern of known dimensions instead of a checkerboard pattern.
  • Geometric cues: Sometimes we have other geometric cues in the scene, such as straight lines and vanishing points that can be used for calibration.
  • Deep learning-based methods: When we have little control over the imaging setup (e.g., we only have a scene image), it is still possible to use deep learning-based methods to obtain calibration information for the camera.

5. Camera calibration steps


5.1 Defining real-world coordinates with a checkerboard pattern

[Figure: checkerboard pattern attached to a wall, defining the world coordinate system]
World coordinate system: Our world coordinates are fixed by a checkerboard pattern attached to the wall of a room. Our 3D points are the corners of the squares on the checkerboard. Any corner of the checkerboard can be chosen as the origin of the world coordinate system. The X and Y axes lie along the wall, and the Z axis is perpendicular to it. Therefore, all points on the checkerboard lie on the XY plane (i.e. Z = 0).

In the calibration process, we compute the camera parameters from a set of known 3D points (X_w, Y_w, Z_w) and their corresponding pixel locations (u, v) in the image.

For the 3D points, we photograph a checkerboard pattern of known dimensions in many different orientations. The world coordinate system is attached to the checkerboard, and since all corner points lie on a plane, we set the Z_w coordinate of every point to 0. Because the points are equally spaced on the checkerboard, the (X_w, Y_w) coordinates of each 3D point are easily defined by taking one corner as the reference (0, 0) and defining the remaining corners relative to it.
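
As a minimal sketch (assuming a board with 6×9 interior corners and a square size of 1 unit; use the real square size, e.g. in millimetres, if metric results are needed), the world coordinates can be generated as a simple grid with Z_w = 0, which mirrors what the full calibration code later in this article does.

import numpy as np

rows, cols = 6, 9     # interior corners per column and per row (assumed board size)
square_size = 1.0     # size of one square; replace with the real size if known

# (X_w, Y_w, 0) for every interior corner, with one corner chosen as the origin (0, 0, 0)
objp = np.zeros((rows * cols, 3), np.float32)
objp[:, :2] = np.mgrid[0:rows, 0:cols].T.reshape(-1, 2) * square_size
print(objp[:3])       # first few corner coordinates, all on the Z = 0 plane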

Why are checkerboard patterns used so widely in calibration?
Checkerboard patterns are distinct and easy to detect in an image. Not only that, the corners of the squares on the checkerboard are ideal for localization because they have sharp gradients in two directions. In addition, these corners lie at the intersections of the checkerboard lines. All of these properties are used to robustly locate the corners of the squares in a checkerboard pattern.

5.2 Capture multiple images of the chessboard from different angles

[Figure: checkerboard images captured from many different viewpoints]
Images like these are used for camera calibration.

Next, we keep the checkerboard stationary and acquire multiple images of the checkerboard by moving the camera.

Alternatively, we can also keep the camera fixed and capture the checkerboard pattern in different orientations. The two situations are mathematically similar.

5.3 Find the two-dimensional coordinates of the chessboard

We now have multiple chessboard images. We also know the 3D position of the points on the board in world coordinates. The last thing we need is the 2D pixel locations of these checkerboard corners in the image.

5.3.1 Finding the chessboard corners

OpenCV provides a built-in function called findChessboardCorners that finds the checkerboard and returns the coordinates of its corners. Let's see its usage in the code block below.

C++

bool findChessboardCorners(InputArray image, Size patternSize, OutputArray corners, int flags = CALIB_CB_ADAPTIVE_THRESH + CALIB_CB_NORMALIZE_IMAGE )

Python

retval, corners = cv2.findChessboardCorners(image, patternSize, flags)
  • image: The source checkerboard view. It must be an 8-bit grayscale or color image.
  • patternSize: Number of interior corners per chessboard row and column (patternSize = cvSize(points_per_row, points_per_column) = cvSize(columns, rows)).
  • corners: Output array of detected corners.
  • flags: Various operation flags. You only need to worry about these when things are not going your way. Use the default.

The output is true or false, depending on whether a checkerboard was detected.
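
A minimal usage sketch is shown below (the file name board.jpg and the 6×9 pattern size are placeholders; adjust them to your own setup).

import cv2

img = cv2.imread('board.jpg')                 # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Interior corners per row and column -- must match the physical board
pattern_size = (6, 9)

ret, corners = cv2.findChessboardCorners(
    gray, pattern_size,
    flags=cv2.CALIB_CB_ADAPTIVE_THRESH + cv2.CALIB_CB_NORMALIZE_IMAGE)

if ret:
    print('Found {} corners'.format(len(corners)))   # corners has shape (N, 1, 2)
else:
    print('Checkerboard not found')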

5.3.2 Refinement of checkerboard corners

Calibration accuracy is what matters here. For good results, it is important to obtain the corner locations with sub-pixel accuracy. OpenCV's cornerSubPix function takes the original image and the initial corner locations, and finds the best corner location within a small neighborhood of each initial location. The algorithm is iterative in nature, so we need to specify termination criteria (such as the number of iterations and/or the accuracy). A short usage sketch follows the parameter list below.
C++

void cornerSubPix(InputArray image, InputOutputArray corners, Size winSize, Size zeroZone, TermCriteria criteria)

Python

cv2.cornerSubPix(image, corners, winSize, zeroZone, criteria)
  • image: Input image.
  • corners: Initial coordinates of the input corners; the refined coordinates are returned in the same array.
  • winSize: Half of the side length of the search window.
  • zeroZone: Half of the size of the dead zone in the middle of the search zone over which the summation is not done. It is sometimes used to avoid possible singularities of the autocorrelation matrix. A value of (-1,-1) indicates that there is no such zone.
  • criteria: Criteria for terminating the iterative corner refinement process. The refinement stops after criteria.maxCount iterations or when the corner position moves by less than criteria.epsilon in an iteration.
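
A minimal sketch of the refinement step is shown below (the file name board.jpg is a placeholder; the 11×11 search window and the termination values of 30 iterations / 0.001 px are typical choices, not requirements).

import cv2

img = cv2.imread('board.jpg')                 # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, corners = cv2.findChessboardCorners(gray, (6, 9))

if ret:
    # Stop after 30 iterations or when a corner moves less than 0.001 px between iterations
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
    print(corners[0])   # refined sub-pixel location of the first corner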

5.4 Camera Calibration

The final step of calibration is to pass the 3D points in world coordinates and their 2D positions in all images to the calibrateCamera method. The implementation is based on a paper by Zhengyou Zhang. The math is a bit involved and requires a background in linear algebra.

Let's look at the syntax of calibrateCamera:

C++

double calibrateCamera(InputArrayOfArrays objectPoints, InputArrayOfArrays imagePoints, Size imageSize, InputOutputArray cameraMatrix, InputOutputArray distCoeffs, OutputArrayOfArrays rvecs, OutputArrayOfArrays tvecs)

Python

retval, cameraMatrix, distCoeffs, rvecs, tvecs = cv2.calibrateCamera(objectPoints, imagePoints, imageSize, cameraMatrix, distCoeffs)
  • objectPoints: Vector of 3D point vectors. The outer vector contains as many elements as the number of views.
  • imagePoints: Vector of vectors of 2D image points, one vector per view.
  • imageSize: Size of the image.
  • cameraMatrix: The intrinsic camera matrix.
  • distCoeffs: Lens distortion coefficients. These coefficients will be explained in a future article.
  • rvecs: 3×1 rotation vectors, one for each view. The direction of a vector specifies the axis of rotation, and its magnitude specifies the rotation angle.
  • tvecs: 3×1 translation vectors, one for each view.

6. Camera calibration complete code

The code for camera calibration using Python and C++ is shared below.

6.1 Python code for camera calibration

Please read through the code comments, which explain what each step does.

#!/usr/bin/env python

import cv2
import numpy as np
import os
import glob

# Define the dimensions of the checkerboard (number of interior corners per column and row)
CHECKERBOARD = (6,9)
# Termination criteria for corner refinement: 30 iterations or a movement of less than 0.001 px
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

# Vector to store the 3D point vectors for each checkerboard image
objpoints = []
# Vector to store the 2D point vectors for each checkerboard image
imgpoints = []


# Define the world coordinates of the 3D points
objp = np.zeros((1, CHECKERBOARD[0] * CHECKERBOARD[1], 3), np.float32)
objp[0,:,:2] = np.mgrid[0:CHECKERBOARD[0], 0:CHECKERBOARD[1]].T.reshape(-1, 2)
prev_img_shape = None

# Extract the paths of the individual images stored in the given directory
images = glob.glob('./images/*.jpg')
for fname in images:
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
    # Find the chessboard corners
    # If the desired number of corners is found in the image, ret = true
    ret, corners = cv2.findChessboardCorners(gray, CHECKERBOARD, cv2.CALIB_CB_ADAPTIVE_THRESH + cv2.CALIB_CB_FAST_CHECK + cv2.CALIB_CB_NORMALIZE_IMAGE)
    
    """
    如果检测到所需数量的角, 我们细化像素坐标并可视化
    """
    if ret == True:
        objpoints.append(objp)
        # Refine the pixel coordinates for the given 2D points
        corners2 = cv2.cornerSubPix(gray, corners, (11,11),(-1,-1), criteria)
        
        imgpoints.append(corners2)

        # Draw and display the corners
        img = cv2.drawChessboardCorners(img, CHECKERBOARD, corners2, ret)
    
    cv2.imshow('img',img)
    cv2.waitKey(0)

cv2.destroyAllWindows()

h,w = img.shape[:2]

"""
通过传递已知 3D 点 (objpoints) 的值 和检测到的角点(imgpoints)对应的像素坐标 实现相机标定
"""
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)

print("Camera matrix : \n")
print(mtx)
print("dist : \n")
print(dist)
print("rvecs : \n")
print(rvecs)
print("tvecs : \n")
print(tvecs)
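
As an optional follow-up to the script above (not part of the original tutorial), the sketch below reuses its variables to check the calibration quality via the mean reprojection error and to undistort one of the images, as described in section 1.

# Continuing from the script above (reuses objpoints, imgpoints, mtx, dist, rvecs, tvecs, img, h, w)

# Mean reprojection error: the average pixel distance between the detected corners and
# the corners reprojected with the estimated parameters. Values well below 1 px are a good sign.
mean_error = 0
for i in range(len(objpoints)):
    projected, _ = cv2.projectPoints(objpoints[i], rvecs[i], tvecs[i], mtx, dist)
    mean_error += cv2.norm(imgpoints[i], projected, cv2.NORM_L2) / len(projected)
print("Mean reprojection error: {}".format(mean_error / len(objpoints)))

# Undistort the last image using the estimated intrinsics and distortion coefficients
newcameramtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))
undistorted = cv2.undistort(img, mtx, dist, None, newcameramtx)
cv2.imwrite('undistorted.jpg', undistorted)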

6.2 C++ code for camera calibration

#include <opencv2/opencv.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <stdio.h>
#include <iostream>

// Define the dimensions of the checkerboard (number of interior corners per column and row)
int CHECKERBOARD[2]{6, 9};

int main()
{
  // Vector to store the vectors of 3D points for each checkerboard image
  std::vector<std::vector<cv::Point3f> > objpoints;

  // Vector to store the vectors of 2D points for each checkerboard image
  std::vector<std::vector<cv::Point2f> > imgpoints;

  // Define the world coordinates of the 3D points
  std::vector<cv::Point3f> objp;
  for(int i{0}; i < CHECKERBOARD[1]; i++)
  {
    for(int j{0}; j < CHECKERBOARD[0]; j++)
      objp.push_back(cv::Point3f(j, i, 0));
  }

  // Extract the paths of the individual images stored in the given directory
  std::vector<cv::String> images;
  // Path of the folder containing the checkerboard images
  std::string path = "./images/*.jpg";

  cv::glob(path, images);

  cv::Mat frame, gray;
  // Vector to store the pixel coordinates of the detected checkerboard corners
  std::vector<cv::Point2f> corner_pts;
  bool success;

  // Loop over all the images in the directory
  for(size_t i{0}; i < images.size(); i++)
  {
    frame = cv::imread(images[i]);
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

    // Find the chessboard corners
    // If the desired number of corners is found in the image, success = true
    success = cv::findChessboardCorners(gray, cv::Size(CHECKERBOARD[0], CHECKERBOARD[1]), corner_pts, cv::CALIB_CB_ADAPTIVE_THRESH | cv::CALIB_CB_FAST_CHECK | cv::CALIB_CB_NORMALIZE_IMAGE);

    /*
     If the desired number of corners is detected,
     we refine the pixel coordinates and display them on the checkerboard image
    */
    if(success)
    {
      cv::TermCriteria criteria(cv::TermCriteria::EPS | cv::TermCriteria::MAX_ITER, 30, 0.001);

      // Refine the pixel coordinates for the given 2D points
      cv::cornerSubPix(gray, corner_pts, cv::Size(11,11), cv::Size(-1,-1), criteria);

      // Display the detected corners on the checkerboard
      cv::drawChessboardCorners(frame, cv::Size(CHECKERBOARD[0], CHECKERBOARD[1]), corner_pts, success);

      objpoints.push_back(objp);
      imgpoints.push_back(corner_pts);
    }

    cv::imshow("Image", frame);
    cv::waitKey(0);
  }

  cv::destroyAllWindows();

  cv::Mat cameraMatrix, distCoeffs, R, T;

  /*
   Perform camera calibration by passing the values of the known 3D points (objpoints)
   and the pixel coordinates of the corresponding detected corners (imgpoints)
  */
  cv::calibrateCamera(objpoints, imgpoints, cv::Size(gray.cols, gray.rows), cameraMatrix, distCoeffs, R, T);

  std::cout << "cameraMatrix : " << cameraMatrix << std::endl;
  std::cout << "distCoeffs : " << distCoeffs << std::endl;
  std::cout << "Rotation vector : " << R << std::endl;
  std::cout << "Translation vector : " << T << std::endl;

  return 0;
}

References

https://learnopencv.com/camera-calibration-using-opencv/
