Understand camera calibration in a five-minute read

Camera calibration is simply the process of solving for the transformation relationships (parameters) between four coordinate systems: world coordinate system -> camera coordinate system -> image (physical) coordinate system -> pixel coordinate system.

Why Camera Calibration?

The conclusion first: calibration serves to establish a geometric model of camera imaging and to correct lens distortion.

Camera Imaging Geometry Model

The primary task of computer vision is to recover information about objects in the real three-dimensional world from captured images. Establishing a geometric model of how objects map from the three-dimensional world onto the camera's imaging plane is therefore essential.

Correct Lens Distortion

Due to limitations in lens manufacturing, the captured image exhibits various forms of distortion. To remove the distortion (i.e., to make the captured image consistent with the theoretically mapped one), distortion coefficients are estimated and used to correct this aberration. (Although it is theoretically possible to design a distortion-free lens, in practice manufacturing constraints mean that most lens distortion still has to be corrected at the algorithm level.)

Four main coordinate systems

World coordinate system (3D): describes the position of the target in the real world; coordinates (Xw, Yw, Zw).
Camera coordinate system (3D): links the world coordinate system and the image coordinate system; the camera's optical axis is the Z axis; coordinates (Xc, Yc, Zc).
Image (physical) coordinate system (2D): obtained from the camera coordinate system by perspective projection; its unit is mm, and the intersection of the optical axis with the image plane is the origin; coordinates (x, y).
Pixel coordinate system (2D): the image coordinate system's physical units (mm) converted to pixels; the origin is at the top-left corner of the image; coordinates (u, v).
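The last conversion above (image coordinates in mm to pixel coordinates) is a simple scaling and shift. A minimal sketch, where the pixel pitch dx, dy and principal point (cx, cy) are illustrative values, not taken from the article:

```python
# Image (physical, mm) -> pixel coordinates.
# dx, dy: physical size of one pixel in mm (illustrative values);
# (cx, cy): pixel coordinates of the principal point (illustrative).
def image_to_pixel(x_mm, y_mm, dx=0.005, dy=0.005, cx=320.0, cy=240.0):
    u = x_mm / dx + cx
    v = y_mm / dy + cy
    return u, v

print(image_to_pixel(0.5, -0.25))  # (420.0, 190.0)
```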

Basic process of coordinate transformation

[Figure: basic process of coordinate transformation]

The result of camera calibration
Coordinate transformation matrix:

Zc · [u, v, 1]^T = K · [R | T] · [Xw, Yw, Zw, 1]^T

where K is the internal (intrinsic) parameter matrix, [R | T] the external (extrinsic) parameter matrix, and Zc the depth of the point in the camera coordinate system.

1. External (extrinsic) parameter matrix: points in the world coordinate system are converted to their corresponding positions in the camera coordinate system by a rotation plus a translation (a rigid transformation);

[Figure: rigid transformation]

The main extrinsic parameters to calibrate: R (the rotation matrix) and T (the translation vector).
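The rigid transformation Pc = R · Pw + T can be sketched in NumPy. The rotation (90 degrees about the Z axis) and translation values here are illustrative, not from the article:

```python
import numpy as np

# Rigid (rotation + translation) transform of a world point into the
# camera frame: Pc = R @ Pw + T. R rotates 90 degrees about the Z axis
# (illustrative); T shifts the scene 10 units along the camera Z axis.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
T = np.array([0.0, 0.0, 10.0])

Pw = np.array([1.0, 0.0, 0.0])   # a point on the world X axis
Pc = R @ Pw + T
print(Pc)                        # world X axis maps to camera Y axis, shifted in depth
```

Note that a rigid transformation preserves distances and angles; only the pose changes, which is why R must be an orthogonal matrix with determinant 1.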

2. Internal (intrinsic) parameter matrix: points in the camera coordinate system are mapped by perspective projection onto the image (physical) coordinate system, and then discretized into the corresponding pixels.

[Figure: perspective transformation]

The main intrinsic parameters to calibrate: fx, fy (the focal length expressed in pixel units, i.e., focal length divided by the physical pixel size), cx, cy (the pixel coordinates of the image coordinate system's origin, the principal point), and s (the skew factor between the pixel axes, usually taken as 0).
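The intrinsic projection step (camera coordinates to pixels) can be sketched as follows. All numeric values of K are illustrative, not from the article:

```python
import numpy as np

# Camera coordinates -> pixel coordinates via the intrinsic matrix K.
# fx, fy: focal length in pixel units; (cx, cy): principal point;
# the skew factor s is taken as 0 (typical for modern sensors).
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

Pc = np.array([0.5, -0.3, 2.0])     # a point in camera coordinates (Xc, Yc, Zc)
uvw = K @ Pc                        # homogeneous pixel coordinates
u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]   # perspective divide by depth Zc
print(u, v)
```

The division by the third homogeneous component is the perspective projection itself: points farther from the camera (larger Zc) land closer to the principal point.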

3. Lens distortion: deviations in lens manufacturing precision (radial distortion, k1 k2 k3), in the assembly process (tangential distortion, p1 p2), and linear distortion (the "near large, far small" perspective effect) cause the original image to be distorted (i.e., why an image point is not at its theoretical position but at its actual one). Simply put, distortion is the "gap" between the ideal image coordinate system and the actual image coordinate system.

Lens Distortion Correction

Common distortion types

[Figure: common distortion types]

Distortion correction formula:

x_corrected = x · (1 + k1·r² + k2·r⁴ + k3·r⁶) + 2·p1·x·y + p2·(r² + 2x²)
y_corrected = y · (1 + k1·r² + k2·r⁴ + k3·r⁶) + p1·(r² + 2y²) + 2·p2·x·y

where r² = x² + y² and (x, y) are normalized image coordinates.

As the formula shows, a polynomial is generally used to fit the distortion and correct it. Tangential distortion has very little influence and is usually ignored, so in general only k1, k2, k3 need to be estimated (sometimes k3 is dropped as well; the correction obtained with k1 and k2 alone already meets engineering needs).

Common camera calibration methods

Objective function without considering distortion:

min Σ_{i=1..n} Σ_{j=1..m} || m_ij − m̂(K, R_i, T_i, M_j) ||²

Objective function considering distortion:

min Σ_{i=1..n} Σ_{j=1..m} || m_ij − m̂(K, k1, k2, R_i, T_i, M_j) ||²

Here m_ij is the observed image point j in image i, M_j is the corresponding point on the calibration target, and m̂ is its projection under the current parameters.

Here n is the number of images and m is the number of points per image. Minimizing this directly with a nonlinear optimizer easily falls into a local optimum (all the more easily when the initial value is poor). The main contribution of Zhang's calibration method is to solve for a fairly accurate initial value in closed form (including the distortion coefficients, if distortion is considered), and then fine-tune the objective with nonlinear optimization.
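The objective function above can be sketched directly in NumPy. All numeric values and the helper name `reprojection_error` are illustrative, not from Zhang's paper:

```python
import numpy as np

def reprojection_error(obs, world_pts, K, Rs, ts):
    """Sum of squared distances between observed pixels and projected
    model points: sum_i sum_j || m_ij - project(K, R_i, t_i, M_j) ||^2."""
    total = 0.0
    for m_i, R, t in zip(obs, Rs, ts):
        Pc = (R @ world_pts.T + t).T            # world -> camera (rigid)
        proj = (K @ Pc.T).T                     # camera -> homogeneous pixels
        proj = proj[:, :2] / proj[:, 2:3]       # perspective divide
        total += np.sum((m_i - proj) ** 2)
    return total

# Sanity check: with the true parameters the objective is (numerically) zero
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
M = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 0.]])
R, t = np.eye(3), np.array([[0.], [0.], [5.]])
Pc = (R @ M.T + t).T
hom = (K @ Pc.T).T
obs = [hom[:, :2] / hom[:, 2:3]]
print(reprojection_error(obs, M, K, [R], [t]))
```

In practice an optimizer (e.g. Levenberg-Marquardt) minimizes this function over K, the distortion coefficients, and all the per-image poses, starting from Zhang's closed-form initial value.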

For more detail, see the author's paper: Flexible Camera Calibration By Viewing a Plane From Unknown Orientations.
Link: https://pan.baidu.com/s/1XlDrL-p5e7mUOlj__412Rw Extraction code: 0t7y

Summary

Camera calibration can be performed with OpenCV's cv2.calibrateCamera() function. This article has only introduced the concepts and outlined the process of camera calibration; for the detailed procedure, follow the needs of your actual project.


Origin: blog.csdn.net/weixin_41006390/article/details/105477966