[Multisensor Fusion] Xuecheng's Multisensor Fusion Study Notes (3) - Mapping LIDAR 3D Point Clouds to the Camera Image (Part 1)

Mapping LIDAR 3D Point Clouds to the Camera Image (Part 1) - Principle Analysis

Homogeneous coordinates

To map a LIDAR 3D point cloud onto a two-dimensional camera image, we can use the projection formula derived from the pinhole camera model in the first article of this series:
[Figure: pinhole camera projection formula]
Besides the parameters of the camera itself that make up the projection geometry, we also need to know the relative position of the camera and the LIDAR in a common reference coordinate system. Going from the LIDAR coordinate system to the camera coordinate system requires a translation and a rotation, and these operations have to be applied to every 3D point. Our goal is therefore to represent the mapping with a simplified notation: by using linear transformations, a 3D point can be represented as a vector, and operations such as translation, rotation, scaling, and perspective projection can be expressed as matrix-vector multiplications. With this approach, the only remaining problem with the projection equations we want to obtain is that they involve a division by the depth Z, which makes them non-linear and prevents us from writing them in the more convenient matrix-vector form.

One way to avoid this problem is to switch from the original Euclidean coordinate system to homogeneous coordinates. Moving back and forth between the two coordinate systems is a non-linear operation, but once we are in homogeneous coordinates, the projection relationship given above becomes linear and can therefore be expressed as a simple matrix-vector multiplication. The switch between the two coordinate systems is shown in the figure below.

[Figure: conversion between Euclidean and homogeneous coordinates]
In an n-dimensional Euclidean coordinate system, a point can be represented by an n-dimensional vector. By simply appending the number 1 as an additional component, it can be converted into (n+1)-dimensional homogeneous coordinates. This transformation can be applied to image coordinates as well as to scene coordinates.

To convert homogeneous coordinates back to Euclidean coordinates, we simply drop the last component of the vector and divide the remaining n coordinates by it, as shown in the figure above. As mentioned before, this is a non-linear operation, and once we are back in Euclidean space we lose the ability to keep the individual parameters cleanly separated into individual matrix elements (i.e. the linear matrix representation of the coordinate mapping). Next, we will look at these matrix components.
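To make the two conversions concrete, here is a minimal NumPy sketch (my own illustration, not code from the original post; the function names are made up):

```python
import numpy as np

def to_homogeneous(p):
    """Append a 1 to an n-dimensional Euclidean point -> (n+1)-dimensional homogeneous point."""
    return np.append(p, 1.0)

def to_euclidean(p_h):
    """Divide by the last component and drop it -> back to Euclidean coordinates."""
    return p_h[:-1] / p_h[-1]

# A 3D scene point and its round trip through homogeneous coordinates
p = np.array([2.0, -1.0, 10.0])
p_h = to_homogeneous(p)       # [ 2. -1. 10.  1.]
print(to_euclidean(p_h))      # [ 2. -1. 10.]
```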

Intrinsic Parameters

Below, we express the projection equations in matrix-vector form:
[Figure: projection equation written in matrix-vector form with the intrinsic matrix]
As can be seen, the camera's **intrinsic parameters** are collected into a single matrix, which describes the general properties of our pinhole camera model in a very compact way. More complex camera properties, such as skewness or shear, can easily be added to it.

The following website provides animations that vividly demonstrate how the various intrinsic parameters affect the appearance of objects on the camera's image plane: http://ksimek.github.io/2013/08/13/intrinsic/ . A schematic from the site's animation is shown below.

[Figure: schematic from the intrinsic-parameter animation]
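As a small illustration of how the intrinsic matrix is used (a sketch of my own, with made-up focal length and principal point rather than values from the post):

```python
import numpy as np

# Assumed example intrinsics: focal lengths and principal point in pixels
K = np.array([[718.0,   0.0, 607.0],
              [  0.0, 718.0, 185.0],
              [  0.0,   0.0,   1.0]])

# A 3D point in the camera coordinate system (X right, Y down, Z forward, in metres)
P_cam = np.array([1.5, 0.2, 20.0])

# Projection: homogeneous image point ~ K * P_cam, then divide by the depth component
p_h = K @ P_cam
u, v = p_h[:2] / p_h[2]
print(u, v)   # pixel coordinates on the image plane
```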

Extrinsic Parameters

The mapping between a point P in three-dimensional space and a point P' on the two-dimensional image plane has so far been described in the coordinate system centered at the pinhole camera. But what if the information about the 3D point is given in another coordinate system, for example the vehicle coordinate system that is common in many automotive applications? As shown below, the vehicle coordinate system has its origin on the ground below the midpoint of the rear axle, with the x-axis pointing in the driving direction. Besides the naming convention for the axes, the figure also shows the names commonly used for rotations about the X, Y, and Z axes, namely "roll", "pitch", and "yaw".
[Figure: vehicle coordinate system with roll, pitch, and yaw conventions]
We assume that the vehicle is equipped with a LIDAR and a camera, both of which have been calibrated in the vehicle coordinate system. To project a point measured in the LIDAR coordinate system into the camera coordinate system, we need to add an extra operation to the mapping that relates points in the vehicle coordinate system to the camera coordinate system, and vice versa. Typically, this mapping can be broken down into three parts: translation, rotation, and scaling. Let us look at them in turn:
Translation: as illustrated in the figure below, a translation describes the linear movement of a point $\vec{P}$ to a new position $\vec{P'}$, which can be achieved by adding a translation vector $\vec{t}$ to $\vec{P}$.

[Figure: translation expressed as a matrix-vector multiplication in homogeneous coordinates]
In homogeneous coordinates, this can be achieved by augmenting an identity matrix I with the translation vector $\vec{t}$ as an additional column, where the dimension N of I equals the number of elements in $\vec{P}$ and $\vec{t}$. The translation thereby becomes a simple matrix-vector multiplication, as shown in the figure.
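A minimal sketch of this translation matrix (my own illustration, with made-up numbers):

```python
import numpy as np

t = np.array([0.5, -0.2, 1.0])        # example translation vector (assumed values)

# 4x4 homogeneous translation matrix: identity with t in the last column
T = np.eye(4)
T[:3, 3] = t

P = np.array([2.0, -1.0, 10.0, 1.0])  # a 3D point in homogeneous coordinates
print(T @ P)                          # [ 2.5 -1.2 11.   1. ] -> P shifted by t
```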

Scaling: the scaling operation can be achieved by multiplying $\vec{P}$ component-wise by a scaling vector $\vec{s}$. In homogeneous coordinates, this can again be represented as a matrix-vector multiplication, as shown below.
[Figure: scaling expressed as a matrix-vector multiplication in homogeneous coordinates]
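The corresponding scaling matrix simply puts the scale factors on the diagonal (again a sketch of my own with made-up values):

```python
import numpy as np

s = np.array([2.0, 2.0, 0.5])         # example scale factors per axis (assumed values)

# 4x4 homogeneous scaling matrix: scale factors on the diagonal, 1 as the last entry
S = np.diag(np.append(s, 1.0))

P = np.array([2.0, -1.0, 10.0, 1.0])
print(S @ P)                          # [ 4. -2.  5.  1.]
```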
Rotation: rotating a point $\vec{P}$ counter-clockwise (the mathematically positive direction) yields the point $\vec{P'}$, which can be obtained from $\vec{P}$ in the way shown in the figure below.
[Figure: 2D rotation of a point expressed with a rotation matrix]
As shown above, the rotation operation can be expressed as multiplication by a rotation matrix R. In three-dimensional space, a point P can be rotated about each of the three axes using the rotation matrices shown in the following figure:
[Figure: rotation matrices about the X, Y, and Z axes]
Note: the independent rotations about the three axes can be combined into a joint rotation matrix R by multiplying them together:
$R = R_z \cdot R_y \cdot R_x$
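A sketch of this composition (my own illustration; the helper functions and example angles are made up, but the matrices are the standard rotations about the three axes):

```python
import numpy as np

def rot_x(a):  # roll
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):  # pitch
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):  # yaw
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Joint rotation R = Rz * Ry * Rx, as in the formula above
roll, pitch, yaw = 0.02, -0.01, 0.5    # example angles in radians (assumed values)
R = rot_z(yaw) @ rot_y(pitch) @ rot_x(roll)
print(R @ np.array([1.0, 0.0, 0.0]))   # the x-axis after the joint rotation
```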

One advantage of homogeneous coordinates is that multiple operations can be combined into one simply by chaining the corresponding matrix-vector multiplications, which makes them a really handy tool!

The matrix combining R and $\vec{t}$ is called the extrinsic matrix, because it models the transformation of points between coordinate systems. Once a point has been transformed from the LIDAR coordinate system into the camera coordinate system, it still needs to be mapped onto the camera's image plane. To achieve this, we apply the intrinsic camera matrix K discussed earlier. In homogeneous coordinates, we simply multiply the result of the previous transformation by the intrinsic matrix.

Putting everything together, the complete coordinate transformation formula needed to map a LIDAR 3D point cloud onto the camera image is shown below:
[Figure: complete mapping from a LIDAR 3D point to a point on the camera image]

Note: in the figure above, the "scaling" operation has been folded into the intrinsic matrix K (where it is absorbed into the focal length) rather than into the extrinsic matrix. The animation on the following page shows the effect of the extrinsic camera parameters on how objects appear on the image plane: http://ksimek.github.io/perspective_camera_toy.html
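Putting the pieces together, here is a compact sketch of the whole chain (my own illustration; the calibration values and the LIDAR-to-camera axis remapping are assumed, in practice they come from the actual calibration):

```python
import numpy as np

# Assumed calibration data for illustration only
K = np.array([[718.0,   0.0, 607.0],      # 3x3 intrinsic matrix
              [  0.0, 718.0, 185.0],
              [  0.0,   0.0,   1.0]])
R = np.array([[0.0, -1.0,  0.0],          # LIDAR axes (x fwd, y left, z up) ->
              [0.0,  0.0, -1.0],          # camera axes (x right, y down, z fwd)
              [1.0,  0.0,  0.0]])
t = np.array([0.0, -0.08, 0.27])          # LIDAR -> camera translation (metres)

Rt = np.hstack([R, t.reshape(3, 1)])      # 3x4 extrinsic matrix [R | t]

P_lidar = np.array([20.0, 1.5, -0.5, 1.0])  # a LIDAR point in homogeneous coordinates

# Complete mapping: image point ~ K * [R | t] * P_lidar
p_h = K @ Rt @ P_lidar
u, v = p_h[:2] / p_h[2]                   # divide by depth to get Euclidean pixel coordinates
print(u, v)                               # pixel position of the LIDAR point in the image
```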


Source: blog.csdn.net/xiaolong361/article/details/104788181