Transformation of four major coordinate systems and camera calibration

The camera imaging process involves a chain of coordinate transformations.

These involve the world coordinates (Xw, Yw, Zw), the camera coordinates (Xc, Yc, Zc), the image physical coordinates (x, y), and the pixel coordinates (u, v).
(figure: the four coordinate systems involved in camera imaging)

One: Four different types of coordinate systems

A point on a three-dimensional object is converted into two-dimensional coordinates on the photo, and the conversion passes through four coordinate systems.

1. World coordinate system

The world coordinate system is a special coordinate system that establishes the frame of reference needed to describe all other coordinate systems. We can use the world coordinate system to describe the position of any other coordinate system, but the world coordinate system itself cannot be described in terms of a larger, external coordinate system. In a non-technical sense, the world coordinate system is simply the largest coordinate system we care about, not literally the whole world.
It is expressed as (Xw, Yw, Zw).

2. Camera coordinate system

Taking the geometric center (optical center) of the camera lens as the origin, the camera coordinate system satisfies the right-hand rule and is represented by (Xc, Yc, Zc); the optical axis of the camera is the Z axis of the coordinate system, the X axis is horizontal, and the Y axis is vertical.

3. Image physical coordinate system

Taking the center of the CCD imaging plane as the origin, coordinates are represented by (x, y). The unit of the image coordinate system is generally the millimeter. The origin of the coordinates is the intersection of the camera optical axis and the imaging plane (in general, this intersection is close to the exact center of the image).
(figure: the image physical coordinate system on the imaging plane)

CCD stands for Charge-Coupled Device, also known as a CCD image sensor.
A CCD is a semiconductor device that converts an optical image into a digital signal. The tiny photosensitive elements embedded in a CCD are called pixels.
The more pixels a CCD contains, the higher the resolution of the image it provides.

4. Image pixel coordinate system

In fact, when we refer to an image, we usually refer to the pixel coordinate system of the image. The origin of the pixel coordinate system is the upper left corner, and the unit is pixel.
(figure: the pixel coordinate system with its origin at the upper-left corner)

We transform the image coordinate system, whose origin is O1, into the pixel coordinate system, whose origin is O0. Reasons for doing so:

  • In the image coordinate system the unit is mm, which makes it inconvenient to measure the image directly; with a uniform pixel standard it is much easier to measure the quality of the image.
  • The image coordinate system has four quadrants, so coordinates can be negative as well as positive; after converting to the pixel coordinate system all coordinates are non-negative integers, which greatly simplifies subsequent operations and calculations.

Two: Coordinate transformation

1. World coordinates → camera coordinates (rigid transformation)

(figure: the rigid-body transformation from world coordinates to camera coordinates)
(Xc, Yc, Zc) represents the camera coordinates; (Xw, Yw, Zw) represents the world coordinates; R represents the orthogonal unit rotation matrix, and t represents the three-dimensional translation vector.
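
Since the formula image above is not reproduced here, the standard rigid-body transformation it illustrates can be written, in homogeneous coordinates, as:

```latex
\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}
= R \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + t
\qquad\Longleftrightarrow\qquad
\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix}
= \begin{bmatrix} R & t \\ \mathbf{0}^{\mathsf T} & 1 \end{bmatrix}
  \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
```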
According to the rotation angles, the rotation matrices about the three coordinate axes can be obtained separately, and the overall rotation matrix is their product: R = Rx·Ry·Rz. For reference, the formulas of the three rotation matrices, which are easy to forget, are recorded below.
Rotate θ degrees around X:
(figure: rotation matrix about the X axis)
Rotate θ degrees around Y:
(figure: rotation matrix about the Y axis)
Rotate θ degrees around Z:
(figure: rotation matrix about the Z axis)
As shown below (rotation by θ):
(figure: illustration of a rotation by angle θ)
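
The formula images above are not reproduced; the standard elemental rotation matrices they show (for right-handed axes and counter-clockwise rotation, the most common convention) are:

```latex
R_x(\theta)=\begin{bmatrix}1&0&0\\ 0&\cos\theta&-\sin\theta\\ 0&\sin\theta&\cos\theta\end{bmatrix},\quad
R_y(\theta)=\begin{bmatrix}\cos\theta&0&\sin\theta\\ 0&1&0\\ -\sin\theta&0&\cos\theta\end{bmatrix},\quad
R_z(\theta)=\begin{bmatrix}\cos\theta&-\sin\theta&0\\ \sin\theta&\cos\theta&0\\ 0&0&1\end{bmatrix}
```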

2. Camera coordinates → image coordinate system (central projection)

The perspective relationship between the camera coordinate system and the image coordinate system is calculated using similar triangles.
(figure: the perspective projection derived from similar triangles)
The matrix multiplication written in the form of homogeneous coordinates is:
(figure: the projection written as a homogeneous matrix product)
where f represents the focal length, that is, the offset between the camera coordinate system and the image coordinate system along the Z axis. At this point, the unit of the projection point p is still mm rather than pixels, which is inconvenient for subsequent calculations.
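
The missing figures above show the similar-triangle relation and its homogeneous form; a reconstruction of the standard pinhole projection is:

```latex
x = f\,\frac{X_c}{Z_c},\qquad y = f\,\frac{Y_c}{Z_c}
\qquad\Longleftrightarrow\qquad
Z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
= \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
  \begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix}
```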

3. Image coordinate system → pixel coordinate system (discretization)

The origin of the pixel coordinate system is the upper-left corner, and its unit is the pixel. Both the pixel coordinate system and the image coordinate system lie on the imaging plane; they differ only in their origins and units. The origin of the image coordinate system is the intersection of the camera optical axis and the imaging plane, usually the midpoint of the imaging plane (the principal point). The image coordinate system uses the physical unit mm, while the pixel coordinate system uses pixels, and we usually describe a pixel by its row and column indices. The conversion between the two is given below, where dx and dy denote how many mm one column and one row correspond to, i.e. 1 pixel = dx mm in the x direction (and dy mm in the y direction).

(figure: the image-to-pixel coordinate conversion matrix)
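
The conversion shown in the missing figure is, in its standard form (with (u0, v0) the pixel coordinates of the principal point):

```latex
u = \frac{x}{dx} + u_0,\qquad v = \frac{y}{dy} + v_0
\qquad\Longleftrightarrow\qquad
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= \begin{bmatrix} 1/dx & 0 & u_0 \\ 0 & 1/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix}
  \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
```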

4. The final coordinate system conversion formula

By chaining the conversions between the four coordinate systems above, a point can be taken from the world coordinate system all the way to the pixel coordinate system, as shown in the figure below; this is where the familiar intrinsic and extrinsic parameter matrices appear.
(figure: the complete conversion chain from world coordinates to pixel coordinates, with the intrinsic and extrinsic parameter matrices)
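
Chaining the three steps gives the standard projection equation that the missing figures show, with the intrinsic matrix K and the extrinsic matrix [R t]:

```latex
Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= \underbrace{\begin{bmatrix} f_x & s & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}}_{K\ (\text{intrinsic})}
  \underbrace{\begin{bmatrix} R & t \end{bmatrix}}_{\text{extrinsic}}
  \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix},
\qquad f_x = \frac{f}{dx},\quad f_y = \frac{f}{dy}
```

Here s is the skew term (often taken as 0), giving the 5 intrinsic unknowns fx, fy, u0, v0, s mentioned below.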

Intrinsic and extrinsic parameters of the camera

  • External parameters: The rotation and translation of the camera are the external (extrinsic) parameters, which describe the motion of the camera in a static scene, or the rigid motion of a moving object when the camera is fixed. Therefore, in image stitching or 3D reconstruction, the external parameters are needed to compute the relative motion between several images so that they can be registered in the same coordinate system.

  • Internal parameters: The internal parameter (intrinsic) matrix is given below. It should be noted that real lenses also exhibit radial and tangential distortion, and these distortion coefficients belong to the camera's internal parameters as well. From the previous steps, the mapping between the pixel coordinate system and the world coordinate system is known, so:
    (figure: the intrinsic matrix and the pixel-to-world mapping)

Here fx = f/dx, fy = f/dy, and f is the focal length of the camera. [R t] is the extrinsic parameter matrix, and the matrix K is the intrinsic parameter matrix, which contains 5 unknowns. When calibrating, if the object sits at different positions relative to the camera, the camera coordinates (i.e. the extrinsic parameters) differ for each position. A simple intuition: when the object is close to the camera its image is large, while the physical size each pixel represents is small. Therefore, the extrinsic parameters must be calibrated for every position.
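
The radial and tangential distortion mentioned above is not written out in the original post; it is commonly modeled (e.g. the Brown-Conrady model used by OpenCV and MATLAB) on normalized image coordinates (x, y) as:

```latex
x_{\text{dist}} = x\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2 p_1 x y + p_2 (r^2 + 2x^2) \\
y_{\text{dist}} = y\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1 (r^2 + 2y^2) + 2 p_2 x y,
\qquad r^2 = x^2 + y^2
```

where k1, k2, k3 are the radial distortion coefficients and p1, p2 the tangential ones; these are estimated during calibration together with K.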

Three: Camera calibration

Experimental steps:

1. Print a checkerboard pattern on a piece of A4 paper (with known black-and-white square spacing) and stick it onto a flat plate.

2. Take several pictures of the checkerboard (usually 10-20 pictures).

3. Detect the feature points in the pictures (Harris features).

4. Use the analytical (closed-form) estimation method to compute the 5 intrinsic parameters and the 6 extrinsic parameters.

5. Following the maximum likelihood estimation strategy, design the optimization objective and refine the parameters (a code sketch of these steps is given below).
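
The original post carries out these steps with the MATLAB Camera Calibrator app, as walked through below. As a rough cross-reference, here is a minimal sketch of the same pipeline in Python with OpenCV; the `calib/` folder, the 9×6 inner-corner grid, and the 25 mm square size are assumptions for illustration, not values from the original post.

```python
# Minimal checkerboard calibration sketch with OpenCV (assumed setup, adjust to your board).
import glob
import cv2
import numpy as np

pattern = (9, 6)      # inner corners per row and column (assumed)
square_mm = 25.0      # checkerboard square size in mm (assumed)

# 3D corner positions in the board's own frame (the board lies in the Z = 0 plane)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_mm

obj_points, img_points = [], []
for path in glob.glob("calib/*.jpg"):          # hypothetical image folder
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        # Refine corner locations to sub-pixel accuracy (step 3)
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Closed-form estimate followed by maximum-likelihood refinement (steps 4-5)
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error (pixels):", rms)
print("intrinsic matrix K:\n", K)
print("distortion coefficients [k1 k2 p1 p2 k3]:", dist.ravel())
```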

1. Open MATLAB. The interface is shown in Figure 1 below.

(Figure 1: the MATLAB main interface)

2. Click the "APPS" tab at the top to enter the following interface; the part highlighted in red contains the camera calibration apps (the upper one is for monocular camera calibration, and the lower one is for binocular/stereo camera calibration).

(figure: the APPS tab with the camera calibration apps highlighted)

3. Click the monocular camera calibration app to enter the following calibration interface, click the **"Down Triangle"** in the highlighted part, select "From file", then choose a folder and select the photos.

4. After selecting the photos, the following interface appears; enter the size of the checkerboard squares (in mm). My setting is 25 mm.

(figure: setting the checkerboard square size)

5. Click **“Calibrate”** to start the calibration.

(figure: the calibration running after clicking Calibrate)

6. Check the calibration error. The part highlighted in red in Figure 6 is the calibration (reprojection) error; if the mean error is less than 0.3 pixels, the camera parameters are usable. (The error of each photo can be read from the calibration error chart; if the error of a particular photo is too large, it can be deleted from the picture list on the left and the calibration re-run.)

(Figure 6: the calibration reprojection errors)

7. Click the part highlighted in red in Figure 7 to save the parameters. The dialog box shown in Figure 8 will appear; click OK.

(Figures 7 and 8: saving the parameters and the confirmation dialog)
Result:
Among them, “RadialDistortion” holds the radial distortion coefficients of the camera, and “IntrinsicMatrix” is the intrinsic parameter matrix.
(figures: the exported calibration results, showing RadialDistortion and IntrinsicMatrix)

Origin blog.csdn.net/m0_46406029/article/details/128361670