Overview | Camera Calibration Methods

This article was written by Cai, a member of the public account Computer Vision Life. Because of formatting limitations, some formulas may not display correctly here; reading the original post is recommended: Overview | Camera Calibration Methods.

In image measurement and machine vision applications, the geometric relationship between a point on the surface of a three-dimensional object and its corresponding point in the image is determined by the geometric model of camera imaging; the parameters of this geometric model are the camera parameters. In most cases these parameters (the intrinsic parameters, the extrinsic parameters, and the distortion parameters) can only be obtained through experiment and computation, and the process of solving for them is called camera calibration. Whether in image measurement or in machine vision applications, camera calibration is a critical step: the stability of the calibration algorithm and the accuracy of its results directly affect the accuracy of everything the camera produces afterwards. Good calibration is therefore a prerequisite for all subsequent work, and improving calibration accuracy is an active focus of research.

The main purpose of calibration is to solve two problems:

a. determining the transformation between a three-dimensional point in the world coordinate system and the corresponding pixel on the image plane (the intrinsic and extrinsic parameters);

b. determining the distortion introduced by the camera's imaging system, so that images can be corrected.

Pinhole camera model

The process by which a camera maps a point in the three-dimensional world coordinate system (unit: meters) onto the two-dimensional image plane (unit: pixels) can be described by a geometric model. The simplest such model is the pinhole camera model, whose structure is shown below:

Figure 1

Camera calibration involves four coordinate systems, namely:

Pixel coordinate system: introduced to describe the coordinates of the projection of an object point in the digital image (photograph) after imaging; it is the coordinate system in which we actually read information from the camera. Unit: pixels.

Imaging plane coordinate system: introduced to describe the projection relationship by which an object point is projected from the camera coordinate system onto the image plane during imaging; it makes the further transition to pixel coordinates more convenient. Unit: meters.

Camera coordinate system: a coordinate system attached to the camera, introduced to describe the position of an object from the camera's point of view; it serves as the intermediate link between the world coordinate system and the image/pixel coordinate systems. Unit: meters.

World coordinate system: a user-defined coordinate system for the three-dimensional world, introduced to describe the position of objects in the real world. Unit: meters.

Below, we derive in detail the mapping from the world coordinate system to pixel coordinates.

From the world coordinate system to the camera coordinate system

The transformation from the world coordinate system to the camera coordinate system is a rigid-body transformation: a three-dimensional point in the world coordinate system is only rotated (by R) and translated (by t), where R and t are the camera's extrinsic parameters. The transformation is written out below.
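
Using \(P_w = [X_w, Y_w, Z_w]^T\) for the point in the world coordinate system and \(P_c = [X_c, Y_c, Z_c]^T\) for the same point in the camera coordinate system (notation introduced here for convenience), the standard rigid-body form of this transform is:

\[
P_c = R\,P_w + t,
\qquad
\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix}
=
\begin{bmatrix} R & t \\ \mathbf{0}^T & 1 \end{bmatrix}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
\]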

From the camera coordinate system to the imaging plane coordinate system

This conversion is a transformation from three-dimensional coordinates to two-dimensional coordinates, i.e. a perspective projection (central projection, in which an object is projected onto a projection surface from a single center so that the result matches what we see with our own eyes: near objects appear large and far objects appear small).

The imaging process is shown below: the plane of the pinhole (camera coordinate system) lies between the plane of the object point and the image plane (image coordinate system, the plane of the imaging board), and the image that is formed is an inverted real image.

Figure 2

However, to make the mathematical description easier, we swap the positions of the camera coordinate system and the image coordinate system into the arrangement shown in the figure below (this has no real physical meaning; it only simplifies the calculation):

Figure 3

Now suppose a point M in the camera coordinate system is imaged (without distortion) at a point P in the image coordinate system. By the principle of similar triangles, with f the focal length, we obtain after rearranging the relations below.
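
With \(M = [X_c, Y_c, Z_c]^T\) in camera coordinates and \(P = [x, y]^T\) on the imaging plane, the standard similar-triangles (perspective projection) relations are:

\[
\frac{x}{X_c} = \frac{y}{Y_c} = \frac{f}{Z_c}
\quad\Longrightarrow\quad
x = f\,\frac{X_c}{Z_c},
\qquad
y = f\,\frac{Y_c}{Z_c}
\]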

From the imaging plane coordinate system to the pixel coordinate system

Figure 4

As shown above, the imaging plane coordinate system and the pixel coordinate system differ by a scaling and a translation. Expressed in terms of fx and fy, the mapping takes the form written out after the following list, where

  • α, β are scaling factors in pixels per meter;
  • fx, fy are the focal lengths in the x and y directions, in pixels;
  • (cx, cy) is the principal point, the center of the image, in pixels.
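
Using these symbols, the standard form of the mapping from imaging-plane coordinates \((x, y)\) to pixel coordinates \((u, v)\) is:

\[
u = \alpha x + c_x, \qquad v = \beta y + c_y
\]

and, substituting the projection relations \(x = f X_c / Z_c\), \(y = f Y_c / Z_c\) with \(f_x = \alpha f\) and \(f_y = \beta f\):

\[
u = f_x\,\frac{X_c}{Z_c} + c_x, \qquad v = f_y\,\frac{Y_c}{Z_c} + c_y
\]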

Combining the steps above and moving Zc to the left-hand side, the transformation from the camera coordinate system to pixel coordinates takes its final matrix form. The relationship between a three-dimensional point M = [X, Y, Z]^T in the world coordinate system and its two-dimensional pixel m = [u, v]^T is therefore the one written out below, where s is a scale factor, A is the camera intrinsic matrix, [R t] is the camera extrinsic matrix, and m̃ and M̃ are the homogeneous coordinates of m and M, respectively.
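
Written out explicitly with the symbols defined above, the standard form of this projection is:

\[
s\,\tilde{m} = A\,[R\ \ t]\,\tilde{M},
\qquad
A = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}
\]

that is, with \(s = Z_c\):

\[
Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
=
\begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} R & t \end{bmatrix}
\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}
\]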

Distortion model

When going from the camera coordinate system to the image coordinate system, we used a perspective-projection transform. A physical camera, however, projects the image onto the plane through a lens, and variations in lens manufacturing precision and in the assembly process introduce distortion, so the resulting image is distorted. We therefore need to take image distortion into account.

Lens distortion is divided into radial distortion and tangential distortion, as well as thin-prism distortion and other types, but the effects other than radial and tangential distortion are not significant, so here we only consider radial and tangential distortion.

Radial distortion

As the name suggests, radial distortion is distortion distributed along the radial direction of the lens. It arises because light rays bend more far from the center of the lens than near the center, and it is more pronounced in inexpensive ordinary lenses. Radial distortion mainly includes two kinds: barrel distortion and pincushion distortion. Schematic illustrations of pincushion and barrel distortion are shown below:

Figure 5

In practice, radial distortion is described approximately by the first few terms of a Taylor series expansion around r = 0. The relationship between the coordinates before and after radial-distortion correction is written out below.
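
With \((x, y)\) the ideal (undistorted) coordinates, \((x_{distorted}, y_{distorted})\) the distorted coordinates, and \(r^2 = x^2 + y^2\), the usual three-term radial model is:

\[
x_{distorted} = x\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6),
\qquad
y_{distorted} = y\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)
\]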

Tangential distortion

Tangential distortion is produced when the lens itself is not parallel to the camera sensor plane (image plane); this is usually caused by mounting deviations when the lens is glued onto the lens module. Two additional parameters, p1 and p2, are used to describe it, as written out below.

Therefore, a total of five distortion parameters (k1, k2, k3, p1, p2) are needed to describe the lens distortion.
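
Using the same notation as the radial model (\(r^2 = x^2 + y^2\)), the standard tangential-distortion terms are:

\[
x_{distorted} = x + 2 p_1 x y + p_2\,(r^2 + 2 x^2),
\qquad
y_{distorted} = y + p_1\,(r^2 + 2 y^2) + 2 p_2 x y
\]

In practice the radial and tangential terms are applied together; OpenCV, for example, packs them into the distortion vector (k1, k2, p1, p2, k3).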

To sum up, camera calibration is simply the process of determining the camera's intrinsic parameters, extrinsic parameters, and distortion parameters.
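
As a small usage sketch (the intrinsic values, distortion coefficients, and file names below are placeholders, not values from this article), the calibrated parameters can be applied with OpenCV to correct a distorted image:

```python
import cv2
import numpy as np

# placeholder intrinsic matrix A and distortion coefficients (k1, k2, p1, p2, k3)
A = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.array([-0.30, 0.12, 0.001, 0.001, 0.0])

img = cv2.imread("distorted.png")          # placeholder file name
undistorted = cv2.undistort(img, A, dist)  # remove radial + tangential distortion
cv2.imwrite("undistorted.png", undistorted)
```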

The above covers the calibration of a single camera. What about calibration for a multi-camera system, or for an RGB-D camera?

Stereo calibration

For a multi-camera system or an RGB-D camera, in addition to calibrating each camera individually, we also need to solve for the transformation between the sensors, so that the data acquired at the same moment can be "aligned". Taking binocular vision as an example, the coordinate systems of the left and right cameras are shown below:

Figure 6

To compute the rotation matrix R and translation vector T between the two cameras, we first calibrate each camera separately to obtain its own R and t with respect to the same calibration target, and then combine them using the formula:
\[ R = R_r \, R_l^{-1}, \qquad T = T_r - R \, T_l \]
where the subscripts l and r denote the left and right cameras.
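
As a minimal numerical sketch of this formula (the function name and the example poses are hypothetical), assuming each camera's rotation and translation with respect to the same calibration target are already known:

```python
import numpy as np

def stereo_extrinsics(R_l, t_l, R_r, t_r):
    """Compose per-camera extrinsics (w.r.t. the same target frame) into the
    left-to-right transform: X_r = R @ X_l + T."""
    R = R_r @ R_l.T      # R_l is orthonormal, so R_l^{-1} = R_l^T
    T = t_r - R @ t_l
    return R, T

# hypothetical example: left camera at the target origin, right camera
# 0.1 m to its +x side (world-to-camera translation is therefore -0.1 in x)
R_l, t_l = np.eye(3), np.zeros(3)
R_r, t_r = np.eye(3), np.array([-0.1, 0.0, 0.0])
R, T = stereo_extrinsics(R_l, t_l, R_r, t_r)
print(R, T)
```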

Stereo matching

From the image captured by a single camera we can only compute two-dimensional coordinates. With two cameras whose relative pose is known, if we can find the two-dimensional coordinates of a point in three-dimensional space in both the left and the right images, and we know that these two image points correspond to the same physical point, then its three-dimensional coordinates can be computed. The technique of verifying that two image points are projections of the same point is called stereo matching. There are many stereo matching algorithms, with local matching methods being the most commonly used, but among existing algorithms none can achieve a 100% match rate. The less distinctive the point to be matched, the lower the matching accuracy.
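
As one illustration of a local matching method (a sketch only; the file names and matcher parameters here are assumptions), OpenCV's block matcher computes a disparity map from a rectified stereo pair:

```python
import cv2

# load a rectified grayscale stereo pair (placeholder file names)
left = cv2.imread("left_rectified.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rectified.png", cv2.IMREAD_GRAYSCALE)

# simple local block matching: numDisparities must be a multiple of 16
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype("float32") / 16.0  # fixed-point -> pixels

# with calibrated intrinsics, depth along Z is roughly f * baseline / disparity
```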

Overview of existing calibration methods

The main camera calibration methods are: the traditional camera calibration method, the active-vision camera calibration method, and the camera self-calibration method.

| Calibration method | Advantages | Shortcomings | Common methods |
| --- | --- | --- | --- |
| Traditional camera calibration | Usable with any camera model; high accuracy | Requires a calibration object; complex algorithm | Tsai two-step method; Zhang's calibration method |
| Active-vision camera calibration | No calibration object required; simple algorithm; high robustness | High cost; expensive equipment | Actively controlling the camera to perform specific motions |
| Camera self-calibration | Flexible; online calibration | Low accuracy; poor robustness | Stratified step-by-step calibration; methods based on the Kruppa equations |


  1. The Tsai two-step method first determines the camera parameters linearly, then takes the distortion factor into account to obtain initial parameter values, and finally obtains the camera parameters through nonlinear optimization. The Tsai two-step method is fast, but it only considers radial distortion; when a camera is severely distorted, the method is not applicable.
  2. Zhang's calibration method uses a planar checkerboard calibration board. Images of the board are captured in different poses, the pixel coordinates of the corners are extracted, initial values of the intrinsic and extrinsic parameters are computed from the homographies, the distortion coefficients are estimated by nonlinear least squares, and finally all parameters are refined by maximum-likelihood estimation. The method is simple to carry out and accurate enough for most applications (a minimal OpenCV sketch follows this list).
  3. Camera calibration methods based on active vision actively control the camera to perform specific motions: a dedicated control platform moves the camera while multiple sets of images are captured, and the intrinsic and extrinsic camera parameters are solved from the image information and the known displacements. This kind of calibration requires a precisely controllable platform, so the cost is high.
  4. The stratified step-by-step calibration method first performs projective reconstruction on an image sequence, then affine calibration on that basis, and finally Euclidean calibration, obtaining the intrinsic and extrinsic camera parameters by nonlinear optimization. Because the initial parameter values are rough, convergence of the optimization is not guaranteed.
  5. Self-calibration methods based on the Kruppa equations calibrate the camera from constraint equations on a quadratic curve determined by the intrinsic matrix, using at least three pairs of images. The length of the image sequence affects the stability of the calibration algorithm, and the method cannot guarantee the plane at infinity in projective space.
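
For Zhang's method in particular, OpenCV exposes the whole pipeline; the following is a minimal sketch in which the checkerboard dimensions, square size, and image path are assumed values, not values taken from this article:

```python
import glob
import cv2
import numpy as np

# assumed checkerboard: 9x6 inner corners, 25 mm squares
pattern_size = (9, 6)
square_size = 0.025

# 3D corner positions on the board plane (Z = 0), in meters
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_size

obj_points, img_points = [], []
for fname in glob.glob("calib_images/*.png"):   # placeholder path
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        # refine corner locations to sub-pixel accuracy
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# returns RMS reprojection error, intrinsic matrix A, distortion (k1,k2,p1,p2,k3),
# and per-view extrinsics (rvecs, tvecs)
rms, A, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
print("Intrinsics:\n", A, "\nDistortion:", dist.ravel())
```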

The above concerns single-camera calibration; for camera-to-camera and camera-to-range-sensor calibration, the built-in functions of OpenCV or the Matlab toolboxes can be used. Reference [1], however, proposes a toolbox with a web interface for automatic camera-to-camera and camera-to-range-sensor calibration. The system can recover the intrinsic and extrinsic camera parameters as well as the transformation between the cameras and the range sensor within one minute. In addition, it proposes a checkerboard corner detection method based on corner growing, which clearly outperforms OpenCV's corner detector, which requires the corner grid size to be specified. For details, see "Automatic Camera and Range Sensor Calibration using a Single Shot" [1].

References:

1. Geiger A, Moosmann F, Car Ö, et al. Automatic camera and range sensor calibration using a single shot[C]//Robotics and Automation (ICRA), 2012 IEEE International Conference on. IEEE, 2012: 3936-3943.

2. Pinhole camera projection model and distortion model

3. Computer Vision Life | Camera calibration

4. Learning OpenCV 3 (Chinese edition), Adrian Kaehler & Gary Bradski

5. Camera calibration for binocular vision


