Joint Calibration of Camera and LiDAR (2)

Foreword

The LiDAR-Camera Calibration (LCC) series covers extrinsic calibration between LiDAR and camera. This article surveys related open-source code and software, covering both target-based and targetless methods. Each method's heading notes the year and the language of the open-source code (c: C++, p: Python, m: MATLAB).

Github synchronization update:

GitHub - Deephome/Awesome-LiDAR-Camera-Calibration: A Collection of LiDAR-Camera-Calibration Papers, Toolboxes and Notes: github.com/Deephome/Awesome-LiDAR-Camera-Calibration

1. Target-based methods

These methods generally use a calibration board: a plain rectangular board, a board with a visual pattern added (such as a checkerboard or ArUco markers), or a rectangular board with specific shapes cut out of it.

1.0 CamLaserCalibraTool (2004c)

Based mainly on a 2004 paper from the University of Washington; Megvii provides an open-source implementation and an accompanying blog post. It calibrates a 2D LiDAR with a camera, mainly exploiting point-to-plane and edge constraints. See Megvii's blog post and repository for details.

1.1 LCCT (2005m)

From the CMU Robotics Institute, the earliest known work on 3D laser-camera calibration (2005): a MATLAB GUI toolbox for calibrating the LiDAR-camera extrinsics.

The method uses a calibration board to collect multiple point-cloud/image pairs; the user selects the board's plane region on the range image corresponding to the point cloud, and the extrinsics are solved in two stages.

In the first stage, the differences between the camera-center-to-plane distances and between the plane normals in the two coordinate frames are minimized, solving rotation and translation linearly in turn; in the second stage, the point-to-plane distance is minimized and the solution is refined iteratively.
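A minimal numpy sketch of this style of linear two-step solve, assuming each view yields the board's unit plane normal and plane offset in both frames (the function names and the plane parameterization n . x = d are illustrative, not the toolbox's API):

```python
import numpy as np

def solve_rotation(normals_cam, normals_lidar):
    """Kabsch/SVD: rotation R with n_cam ~ R @ n_lidar for each normal pair."""
    H = normals_lidar.T @ normals_cam          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # fix a possible reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R

def solve_translation(normals_cam, d_cam, d_lidar):
    """Each plane (n . x = d in each frame) gives one linear equation
    n_cam . t = d_cam - d_lidar; needs >= 3 views with non-parallel normals."""
    t, *_ = np.linalg.lstsq(normals_cam, d_cam - d_lidar, rcond=None)
    return t
```

With noise-free normals the rotation is exact; in practice both steps only initialize the iterative point-to-plane refinement of stage two.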

1.2 cam_lidar_calib (2010c)

From the University of Michigan; a ROS/C++ implementation.

It uses a checkerboard and requires at least 3 views. Features are extracted automatically: from the image, the checkerboard's plane normal in the camera frame and its distance from the camera origin; from the point cloud, the points lying on the checkerboard plane.
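The point-cloud side of this, recovering the plane normal and origin distance from the board's points, is a standard SVD plane fit; a generic sketch, not the repository's actual code:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an (N, 3) array of 3D points.
    Returns a unit normal n and offset d with n . x = d on the plane."""
    centroid = points.mean(axis=0)
    # the singular vector with the smallest singular value of the
    # centered points is the direction of least variance: the normal
    _, _, Vt = np.linalg.svd(points - centroid)
    n = Vt[-1]
    d = n @ centroid
    return n, d
```

Real pipelines wrap a fit like this in RANSAC so that points off the board do not bias the plane.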

1.3 lidar_camera_calibration (2017c)

From the IIIT Robotics Research Lab, India; a ROS package (C++) implementing two methods. The first is based on 2D-3D correspondences: a hollow rectangular cardboard target is used, corner pixels are marked manually in the image, line segments are selected manually in the point cloud, 3D corners are obtained by intersecting the fitted lines, and the extrinsics are solved with PnP + RANSAC. Its drawback is that the manual pixel marking introduces large errors.
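The "line intersection" step can be posed as a least-squares problem over the fitted 3D edge lines; a sketch assuming each line is given as a point plus a unit direction (illustrative, not code from the package):

```python
import numpy as np

def lines_intersection(points, dirs):
    """Least-squares 'intersection' of several 3D lines (point p_i, unit
    direction d_i): the point minimizing the summed squared distances to
    all lines. Noisy 3D lines rarely meet exactly, so this is the natural
    estimate for a corner where two fitted edges should cross."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(points, dirs):
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the line
        A += P
        b += P @ p
    return np.linalg.solve(A, b)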

The second method is based on 3D-3D correspondences; the main difference from the first lies in how image features are extracted. Using ArUco markers, the 3D coordinates of the corners in the camera frame can be computed directly, and the extrinsics are then solved with ICP.
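With paired 3D corners on both sides, the extrinsics follow in closed form from the SVD-based alignment step that underlies point-to-point ICP; a generic numpy sketch, not the package's implementation:

```python
import numpy as np

def rigid_transform(src, dst):
    """Closed-form R, t with dst ~ R @ src + t for paired (N, 3) points:
    the Kabsch alignment step at the core of point-to-point ICP."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t
```

Because the ArUco corners give known correspondences, a single closed-form solve suffices; full ICP only adds the correspondence search for the unmatched case.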

1.4 ILCC (2017p)

From Nagoya University, Japan; implemented in Python.

Its 3D corner extraction is quite distinctive. Exploiting the correlation between point-cloud reflectance intensity and the chessboard's color pattern, a chessboard model is fitted (matched) to the segmented point cloud, so that the model's corner positions stand in for the corner positions in the chessboard point cloud.

1.5 plycal (2018c)

From HKUST; a C++ implementation.

A rectangular board serves as the target. First, the LiDAR and camera are time-synchronized and the image is undistorted. The board's corners and edges are extracted fully automatically from the image, and its edge and plane points from the point cloud. The rectangle features are then matched 2D-3D, and the extrinsics are optimized with point-to-line and point-inside-polygon constraints.

1.6 Matlab Lidar Toolbox (2018m)

This target-based method uses a chessboard and can in theory be solved from a single pose. Feature extraction automatically obtains the chessboard's plane and edge information in both the camera and LiDAR frames, and calibration uses line correspondences (direction constraint + point-to-line constraint) and plane correspondences (normal constraint + point-to-plane constraint).
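The point-to-line and point-to-plane residuals behind such constraints are simple to write down; a generic sketch (the names are illustrative, not the toolbox's API):

```python
import numpy as np

def point_to_line_distance(x, p, d):
    """Distance from point x to the line through p with unit direction d."""
    v = x - p
    return np.linalg.norm(v - (v @ d) * d)   # remove the along-line component

def point_to_plane_distance(x, n, dist):
    """Signed distance from point x to the plane n . y = dist (unit n)."""
    return x @ n - dist
```

Calibration then stacks these residuals over all transformed LiDAR features and minimizes them over the extrinsic parameters.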

Only MATLAB's Lidar Toolbox interface is available; the source code cannot be inspected.
Its point-cloud feature extraction for the calibration board is similar to that of plycal (2018c).

1.7 extrinsic_lidar_camera_calibration (2020m)

From the Robotics Institute, University of Michigan; a MATLAB implementation.

The main innovation is the corner-estimation method for the board's point cloud. A reference target of known size is assumed to sit at the LiDAR origin; the goal is for the observed target's point cloud to coincide with the reference target as closely as possible after a transformation H. H is solved by optimization, and the reference board's corners are inverse-transformed to obtain the corner positions in the point cloud.

Previous methods fit the edges first and then intersected the fitted lines, using only the edge points; because of point-cloud range-measurement noise, the four extracted corners may not be consistent with the target's true geometry. This method's corner estimation uses all the points, so the four estimated corners remain consistent with the true target shape.
It is similar in spirit to ILCC (2017p): both parametrically model the board's point cloud by fitting a reference board to obtain the corners. The difference is that ILCC uses point-cloud reflectance intensity, while this method uses only geometric information.

GitHub repository: https://github.com/UMich-BipedLab/extrinsic_lidar_camera_calibration

This repository also implements the point-cloud edge-extraction method from the paper referenced by the Matlab Lidar Toolbox (2018m): 1) fit the plane with RANSAC; 2) find the endpoints (edge points) of each scan line; 3) project the board's point cloud onto the fitted plane; 4) fit each scan line; 5) project the edge points onto the fitted scan lines; 6) fit the edges with RANSAC, removing gross errors among the edge points.

Reference paper: 2020, IEEE Access, "Improvements to Target-Based 3D LiDAR to Camera Calibration".
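Step 3, projecting the board's points onto the fitted plane, can be sketched as follows (generic numpy, not the repository's code):

```python
import numpy as np

def project_to_plane(points, n, d):
    """Project (N, 3) points onto the plane n . x = d (unit normal n),
    e.g. to flatten a noisy calibration-board point cloud before
    fitting scan lines and edges on it."""
    offsets = points @ n - d          # signed distance of each point
    return points - np.outer(offsets, n)
```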

1.8 livox_camera_lidar_calibration(2020c)

The LiDAR-camera calibration tool officially provided by Livox; corresponding points are picked manually in the images and point clouds. GitHub repository: https://github.com/Livox-SDK/livox_camera_lidar_calibration

1.9 ACSC (2020p)

From Beihang University; a Python implementation targeting Livox solid-state LiDAR.

It proposes a temporal-spatial-based geometric feature refinement algorithm that integrates multiple point-cloud frames, and a reflectance-intensity-distribution-based 3D corner estimation method. 2D and 3D corners are extracted automatically, and the extrinsics are then solved with RANSAC-based PnP.

1.10 velo2cam_calibration (2021c)

From the Intelligent Systems Lab (LSI), Universidad Carlos III de Madrid, Leganés; a ROS + C++ implementation. Any pair among a LiDAR, a monocular camera, and a stereo camera can be calibrated. A special calibration board is required:

  • GitHub repository: https://github.com/beltransen/velo2cam_calibration
  • Reference paper: Beltrán, J., Guindel, C., and García, F. (2021). Automatic Extrinsic Calibration Method for LiDAR and Camera Sensor Setups. arXiv:2101.04431 [cs.RO]. Submitted to IEEE Transactions on Intelligent Transportation Systems.

1.11 autoware

The latest version only ships autoware_camera_lidar_calibrator; the direct manual point-selection method is described below.

Using the calibration board gives more accurate results, but the workflow is inconvenient: 1) multiple keyframes must be grabbed manually; 2) the point cloud is displayed with glviewer, which makes adjusting the viewpoint difficult; 3) the plane point cloud must be selected manually.

1.12 LIBCBDETECT (2012 m)

Sub-pixel detection of checkerboard corners, implemented in MATLAB. It works with pinhole, fisheye, and panoramic cameras. From the paper:

Automatic Camera and Range Sensor Calibration Using a Single Shot: www.cvlibs.net/publications/Geiger2012ICRA.pdf

1.13 multiple-cameras-and-3D-LiDARs-extrinsic-calibration

2. Targetless methods

2.1 apollo

A targetless method based on natural scenes: no manual marking is required, but a fairly accurate initial extrinsic estimate is needed.

(A good calibration scene contains street lights, trees, roads, and other structured objects.)

Note: the core code is not open source

2.2 autoware

No target is needed, but corresponding points must be marked manually in the image and point cloud; at least 9 pairs are selected.

2.3 ExtrinsicCalib (2012c)

The image is first converted to grayscale and histogram-equalized; the point cloud is projected into an image according to its reflectance-intensity and normal-vector features; and normalized mutual information measures the correlation between the grayscale image and the point-cloud-generated image. Particle swarm optimization then perturbs the extrinsics until the particles converge at the maximum of the normalized mutual information.

Paper: 2012, Automatic Targetless Extrinsic Calibration of a 3D Lidar and Camera by Maximizing Mutual Information

Limitations: in images, lighting affects pixel brightness and shadows cause problems, while point-cloud intensity differs in nature because the laser is an active sensor. Using multi-view data mitigates the noise caused by lighting and shadows, making the cost function smoother and easier to optimize.
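The normalized mutual information objective at the heart of this approach can be sketched with numpy histograms, assuming two single-channel images of equal shape (a generic illustration, not the paper's code):

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI = (H(A) + H(B)) / H(A, B) for two equally shaped intensity
    images; it peaks when the images are most statistically dependent."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()           # joint distribution over intensity bins
    px = pxy.sum(axis=1)              # marginals
    py = pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())
```

The optimizer (here, particle swarm) evaluates this score on the camera image versus the point-cloud projection rendered under each candidate extrinsic, and keeps the extrinsics that maximize it.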

2.4 CamVox (2020c)

From the Southern University of Science and Technology. The image is converted to grayscale and its edges are extracted; from the point cloud, a reflectance-intensity image and a depth image are generated and edges are extracted from each. The best extrinsics are then solved by ICP-style optimization that aligns the two sets of edges.
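A minimal gradient-magnitude edge detector illustrates the image-side edge extraction; real pipelines typically use a Canny-style detector, so treat this as a simplified stand-in:

```python
import numpy as np

def edge_map(img, threshold=0.5):
    """Boolean edge mask from the gradient magnitude of a 2D intensity
    image; pixels above threshold * max magnitude count as edges."""
    gy, gx = np.gradient(img.astype(float))   # per-axis finite differences
    mag = np.hypot(gx, gy)
    return mag > threshold * mag.max()
```

The same idea applies to the intensity and depth images rendered from the point cloud, after which the two edge sets are aligned.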

2.5 livox_camera_calib (2020c)

From the University of Hong Kong. Edge features are extracted from the point cloud and the image separately, the features are matched, and the best extrinsics are solved by optimization so that the point-cloud edges align with the image edges.

For detailed interpretation of the paper, please refer to:

Scene selection: avoid cylindrical objects, avoid overly rich textures (trees, flowers, etc.), and prefer edges that are evenly distributed and span multiple directions. See the GitHub issue on finding depth-continuous regions.

2.6 mlcc


Origin blog.csdn.net/scott198510/article/details/131177823