Paper reading: Temporal and Spatial Online Integrated Calibration for Camera and LiDAR

Table of contents

Summary

Main contributions

Method

A. Pose evaluation model

B. Line feature extraction

C. Projection and feature filtering

D. Dynamic point cloud target removal

E. Search optimization


Summary

Cameras and LiDAR are widely used together, but little work links time synchronization with extrinsic calibration, both of which matter for data fusion. Temporal and spatial calibration faces the challenges of weak data association and real-time performance. This paper proposes a pose evaluation model and an environment-robust line feature extraction algorithm to improve data association and online real-time estimation. Dynamic target removal seeks a better filtering strategy by considering the correspondence between adjacent frames of moving point clouds during registration. The search optimization strategy aims to provide better parameters while balancing calculation accuracy and efficiency. We evaluated our algorithm on KITTI. In online experiments, our method improved accuracy by 38.5% compared to the soft synchronization method in temporal calibration. In spatial calibration, our method can automatically correct perturbation errors within 0.4 s and achieve an accuracy of 0.3°. This work is useful for the research and application of sensor fusion.

Main contributions

1) Propose a new pose evaluation model to reduce time delay and extrinsic parameter drift in spatio-temporal online calibration.

2) Propose a method to eliminate dynamic point clouds that uses only the correlation between two adjacent point cloud frames, instead of detection boxes derived from prior information.

3) Introduce a search optimization method to improve optimization efficiency, and a new evaluation method to assess the accuracy of calibration results.

Method

A. Pose evaluation model

B. Line feature extraction

Image line feature extraction: mainly grayscale image processing and Canny edge detection; see the paper [Lsd: a line segment detector].
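As a rough illustration of this image-side pipeline, here is a minimal sketch combining grayscale conversion, Canny edges, and OpenCV's LSD detector. The thresholds are placeholders rather than the paper's settings, and it assumes an OpenCV build that ships the LSD implementation.

```python
# Minimal sketch of image line feature extraction (assumed pipeline).
import cv2

def extract_image_lines(bgr_image):
    # The method works on intensity, so convert to grayscale first.
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    # Canny edge map; the thresholds below are illustrative placeholders.
    edges = cv2.Canny(gray, 50, 150)
    # LSD line segment detector, as referenced above.
    lsd = cv2.createLineSegmentDetector()
    lines, _, _, _ = lsd.detect(gray)
    return edges, lines
```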

LiDAR line feature extraction: the point cloud is divided into separate beams (scan lines), and edge feature points are extracted from range discontinuities between consecutive points. To extract sufficient line features with a low-beam LiDAR, local mapping associates three frames together so that more points are available at once. Depending on the strength of the GPS signal and the accuracy requirements of the autonomous driving scene, we propose two local mapping methods: GPS-based and NDT-based. The former has high efficiency but low accuracy, while NDT has low efficiency and high accuracy.
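Below is a minimal sketch of the per-beam edge extraction by range discontinuity, assuming each point carries a ring (beam) index; the jump threshold is an illustrative value, not the paper's.

```python
# Hypothetical per-ring edge extraction using range discontinuity.
import numpy as np

def extract_lidar_edge_points(points, ring_ids, range_jump=0.5):
    """points: (N, 3) xyz; ring_ids: (N,) beam index of each point."""
    edge_points = []
    for ring in np.unique(ring_ids):
        ring_pts = points[ring_ids == ring]        # points on one beam
        ranges = np.linalg.norm(ring_pts, axis=1)  # distance to sensor
        # A large range difference between consecutive points on the same
        # beam marks a depth discontinuity, i.e. a candidate edge point.
        jumps = np.abs(np.diff(ranges)) > range_jump
        edge_points.append(ring_pts[1:][jumps])
    return np.vstack(edge_points) if edge_points else np.empty((0, 3))
```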

C. Projection and feature filtering

The point cloud is projected into the image through a rotation and translation matrix, and outliers are filtered out during feature extraction. After the LiDAR point cloud is converted into image format, an 8-neighborhood convolution kernel filters out outliers in the image to obtain more regular line features. We also filter out ground point clouds, because their lateral line features do not register well.
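For concreteness, here is a minimal projection sketch using an extrinsic rotation R, translation t, and intrinsic matrix K; the names and the simple in-image filtering are assumptions, not the paper's exact notation.

```python
# Illustrative pinhole projection of LiDAR points into the image plane.
import numpy as np

def project_points(points_lidar, R, t, K, image_shape):
    """points_lidar: (N, 3); R: (3, 3); t: (3,); K: (3, 3) intrinsics."""
    pts_cam = points_lidar @ R.T + t      # LiDAR frame -> camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]  # keep points in front of camera
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]           # perspective division
    h, w = image_shape[:2]
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & \
             (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return uv[inside]                     # pixel coordinates inside image
```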

D. Dynamic point cloud target removal

Although static points outnumber dynamic ones, registration based on line features is affected by dynamic points more than registration based on the raw point cloud, so dynamic point cloud filtering is necessary.

Dynamic point cloud filtering currently relies mostly on deep-learning object detection. The raw point cloud and prior information are fed into a detection network, which identifies possible dynamic objects and returns detection boxes; the dynamic points are then extracted from these boxes. However, this approach depends on prior information and a detection network, which cannot meet the real-time requirement of online calibration.

We propose a lightweight, high-accuracy method. A k-d tree is used to find nearest-neighbor points between frames. The nearest-neighbor registration distance should be small for stationary targets and larger for dynamic objects, so we set a threshold to filter out dynamic targets. Because the point cloud at time t is an estimate rather than the true value at time t+e_t, there is an angular error. By the triangle similarity principle, farther dynamic objects require a larger threshold to be filtered out, so we apply a linear factor to correct the threshold. Dynamic objects are filtered out through this linear dynamic threshold and clustered for subsequent analysis.
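A sketch of this filter under stated assumptions: the two consecutive frames are already expressed in a common coordinate frame, SciPy's k-d tree provides the nearest-neighbor queries, and the base threshold and linear factor are illustrative values.

```python
# Sketch of dynamic-point filtering with a range-dependent linear threshold.
import numpy as np
from scipy.spatial import cKDTree

def filter_dynamic_points(prev_frame, curr_frame, base_thresh=0.2, k=0.01):
    """prev_frame, curr_frame: (N, 3) point clouds in a common frame."""
    tree = cKDTree(prev_frame)
    # Distance from each current point to its nearest neighbor in the
    # previous frame; large distances suggest the point has moved.
    nn_dist, _ = tree.query(curr_frame)
    ranges = np.linalg.norm(curr_frame, axis=1)
    # Linear correction: farther points get a looser threshold, absorbing
    # the angular error described above (triangle similarity).
    thresh = base_thresh + k * ranges
    static_mask = nn_dist < thresh
    return curr_frame[static_mask], curr_frame[~static_mask]
```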

E. Search optimization

In the previous stages, the LiDAR line features have been extracted and projected into the image, and their projection ratio has been calculated.

To evaluate the accuracy, four search steps are proposed based on different grayscale rates, as shown in Fig. 5.

For computational efficiency, a search method is used to optimize the cost function. In [Line-based automatic extrinsic calibration of lidar and camera, ICRA], the current function score is compared with the 728 adjacent scores. If the search finds a parameter with a higher score, the current search step stops and a new search starts from the higher-scoring position. The method terminates after reaching the iteration limit or finding the best score. We improve on this: once the fastest-improving direction is found, optimization proceeds faster along that direction. Algorithm 1 demonstrates our approach.
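As a simplified sketch of this greedy neighborhood search: evaluate the 3^d - 1 neighbors of the current parameter vector, move to the best one, and keep stepping along the improving direction, mirroring the idea of Algorithm 1. The step-halving termination is an assumption added to make the sketch self-contained, and `score` stands in for the projection-based cost function.

```python
# Greedy neighborhood search over calibration parameters (simplified).
import itertools
import numpy as np

def grid_search(score, params, step, min_step=1e-4, max_iters=1000):
    best = score(params)
    for _ in range(max_iters):
        # All +/-step offsets per dimension: 3^d combinations minus center.
        offsets = itertools.product((-step, 0.0, step), repeat=len(params))
        candidates = [params + np.array(o) for o in offsets if any(o)]
        scores = [score(c) for c in candidates]
        if max(scores) > best:
            # Jump to the best neighbor, then keep stepping along that
            # direction while the score keeps improving (the speed-up).
            direction = candidates[int(np.argmax(scores))] - params
            while score(params + direction) > best:
                params = params + direction
                best = score(params)
        else:
            step /= 2.0  # no better neighbor: refine the step size
            if step < min_step:
                break
    return params, best
```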
