[Summary] A comparison of SLAM algorithm benchmark results and related datasets

Preface and References

This post mainly collects and summarizes existing benchmarks so that I can quickly look up later which scheme to use. All sources are credited in their section titles and link back to the originals; here I only collect and quote the various benchmark results. If any original author feels this infringes on their work, please contact me and I will fully cooperate with removing the content and links.

If you have other, more recent results to recommend, please leave a message in the comments and I will keep adding them. Since each year's results inevitably make some algorithms no longer state-of-the-art, it is faster to use the table of contents to jump straight to the latest year.

Finally, SLAM is not my main research area. The summaries of each scheme's characteristics mostly come from the linked authors and commenters. If you have different opinions, please go to the relevant issues and links to discuss~ peace and love!

2020 SLAM Comparison

Original GitHub link: https://github.com/Tompson11/SLAM_comparison

This is the first comparison of SLAM schemes I came across. It explains the schemes and their characteristics up to 2020 fairly clearly, but many excellent SLAM schemes have appeared since then~~

Loosely coupled here means the scheme can still run without the IMU (or other sensors); tightly coupled means all sensors need to be online.
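As a minimal illustration (my own sketch, not from the original repository, with made-up weights), the distinction can be caricatured like this:

```python
import numpy as np

# Loosely coupled: each sensor runs its own estimator and only the resulting
# estimates are fused afterwards, so the system degrades gracefully when one
# sensor (e.g. the IMU) drops out.
def fuse_loosely(lidar_xyz, imu_xyz, w_lidar=0.8, w_imu=0.2):
    if imu_xyz is None:                      # IMU offline -> lidar-only still works
        return np.asarray(lidar_xyz, dtype=float)
    return w_lidar * np.asarray(lidar_xyz) + w_imu * np.asarray(imu_xyz)

# Tightly coupled: raw residuals from every sensor enter one joint cost, so
# all sensors need to be online for the problem to be well defined.
def joint_cost(lidar_residuals, imu_residuals):
    r = np.concatenate([np.ravel(lidar_residuals), np.ravel(imu_residuals)])
    return 0.5 * float(r @ r)
```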

The following is an excerpt from the original; please see the original link for more detail.


Schemes and Features

This time I tested 8 open-source schemes from GitHub, which can be divided into:

| Features | Schemes |
| --- | --- |
| Pure lidar | A-LOAM (HKUST version of LOAM), hdl_graph_slam, BLAM |
| Lidar loosely coupled with IMU | LeGO-LOAM, SC-LeGO-LOAM (LeGO-LOAM with a new loop-closure detection method) |
| Lidar tightly coupled with IMU | LINS, LIO-SAM, LIOM |

Summary

1. The pros and cons of each scheme are as follows:

| Scheme | Advantages | Shortcomings |
| --- | --- | --- |
| A-LOAM | 1. Fairly stable when geometric features are rich | 1. Memory usage explodes over time and computation efficiency drops. 2. Obvious drift when geometric features are scarce |
| LeGO-LOAM | 1. Relatively stable when ground points are abundant. 2. Lightweight | 1. Crashes easily when ground points are lacking. 2. The resulting map is relatively sparse |
| LINS | 1. Lightweight | 1. Obvious drift in the z direction. 2. The resulting map is relatively sparse. 3. The current version requires the lidar to be parallel to the xy plane of the IMU body frame and does not accept user-supplied extrinsics |
| LIO-SAM | 1. Has loop detection and closes loops well. 2. Strong stability. 3. The demo looks nicer | 1. When geometric features are rich, it may not be as good as A-LOAM |
| LIOM | 1. Corrects for gravitational acceleration and estimates the initial IMU state | 1. Stability is mediocre: sometimes it performs well, sometimes not, possibly related to the initialization stage. 2. Large memory usage and poor runtime performance |

2. After IMU correction, fusing high-frequency IMU data can indeed improve SLAM performance, especially when geometric features are lacking or the motion is aggressive (see the sketch after this list).

3. LIO-SAM does a good job in both localization and mapping, and is recommended.
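For point 2, the usual mechanism is that the high-rate IMU is integrated between scans to predict the pose that seeds the next scan-to-map registration. A minimal sketch of that prediction step, my own simplification rather than code from any of the packages above:

```python
import numpy as np

def imu_predict(p, v, R, accels, gyros, dt, g=np.array([0.0, 0.0, -9.81])):
    """Propagate position p, velocity v and rotation R through a batch of IMU
    samples (one accel/gyro pair per step of length dt).

    The prediction drifts quickly on its own, but at 100 Hz it is usually a far
    better initial guess for the next scan-matching step than assuming constant
    velocity, especially under aggressive motion or with few geometric features."""
    for a, w in zip(accels, gyros):
        wx, wy, wz = w * dt
        # first-order (small-angle) rotation update; a real implementation
        # would use the exponential map and re-orthonormalize R
        dR = np.array([[1.0, -wz,  wy],
                       [ wz, 1.0, -wx],
                       [-wy,  wx, 1.0]])
        R = R @ dR
        a_world = R @ a + g          # rotate specific force to world, remove gravity
        p = p + v * dt + 0.5 * a_world * dt * dt
        v = v + a_world * dt
    return p, v, R
```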

The 2021 discussion in the repository's issues is also very interesting; I hope you don't miss it. When I first read it I was quite happy to learn from the predecessors' experience hhhh: https://github.com/Tompson11/SLAM_comparison/issues/1

@Gatsby23: Thanks to the author for such a rich set of comparison experiments. I did not expect LINS to achieve such good results. Looking at the LINS code and paper, the tightly coupled lidar-IMU result is only sent to the mapping module as an initial value, and the mapping module itself still performs a scan-to-map ICP. I have tried scan-to-map with different initial values and found little difference in the results, so I did not expect LINS to do so well in practice. There may be two reasons: first, the LINS code corrects roll and pitch using the direction of gravity, which may better correct the body motion itself; second, under some aggressive motion a better initial value is needed for convergence. That is where I did not run enough experiments and did not anticipate it.
However, I recommend the author try two newer algorithms: F-LOAM and FAST-LIO. In particular, FAST-LIO updates based on observations of the current frame against the map, which should in theory achieve better results.
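The roll/pitch correction mentioned above usually boils down to the standard trick of recovering the two gravity-referenced angles from the accelerometer whenever the specific force is dominated by gravity; a minimal sketch (my own, not the actual LINS code):

```python
import numpy as np

def roll_pitch_from_accel(ax, ay, az):
    """Estimate roll and pitch from a (nearly static) accelerometer reading.

    Yaw is unobservable from gravity alone, which is exactly why only roll
    and pitch can be corrected this way."""
    roll = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, np.sqrt(ay * ay + az * az))
    return roll, pitch

# e.g. roll_pitch_from_accel(0.0, 0.0, 9.81) -> (0.0, 0.0)
```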

@chengwei0427: Tight coupling performing better than loose coupling (more constraints, but higher requirements on the sensors) also matches expectations. The performance of lego_loam is surprising, but it too fits its working principle (its scan-to-scan step is very weak, worse than aloam).

@chengwei0427: If I remember correctly, lego_loam puts the ground points into the surf (planar) features so they participate in the optimization, meaning there are still ground constraints. Compared with lego, lio_sam's feature extraction drops the clustering of non-ground points and the removal of small point clusters; the ground is removed so that points close to the ground do not take part in corner feature extraction.
The bias constraint also improves the accuracy of the integration, so the initial value in lio_sam is more accurate; but the IMU does not participate in the optimization and only provides the initial value (the actual test did not use GNSS).
Therefore, apart from the difference in features, the scan_to_map of lio_sam versus lego_loam differs in the initial value. Which has more influence here, the features or the initial value? In actual tests, compared with lio, lego needs more iterations and fails to converge more often.
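For reference, the scan-to-map step both packages rely on stacks point-to-plane (and point-to-line) residuals, so the feature set determines which residuals exist while the initial value determines where the iterations start from. A minimal point-to-plane residual, my own sketch rather than code from either repository:

```python
import numpy as np

def point_to_plane_residual(p_scan, q_map, n_map, R, t):
    """Signed distance of the transformed scan point R*p+t to the local plane
    (q_map, n_map) fitted from its map neighbours; the optimizer iterates on
    (R, t) starting from the initial guess (e.g. the IMU prediction)."""
    return float(n_map @ (R @ p_scan + t - q_map))
```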

@Gatsby23: Hmm, thanks for the reminder, I had somewhat forgotten LeGO-LOAM. If I remember correctly, LeGO-LOAM splits the optimization into two steps, so perhaps optimizing the Euler angles separately does not give good results? Personally I think the features guarantee the lower bound while the initial value raises the upper bound. You could visualize the matched point correspondences to check.

@chengwei0427: I recently tested LIO-Livox and found the results quite good, so I adapted it to a traditional spinning lidar (mainly the feature extraction from lio-sam plus the laser odometry from LIO-Livox, with almost no modification).

The data were collected with an Ouster 64-beam lidar at roughly 0.7~1.2 m/s (LIO-Livox claims to support mapping at speeds above 50 km/h), but the trajectory drifts after running for a while (the 10 Hz lidar and 100 Hz IMU data are both provided by the Ouster, so the hardware clocks are already synchronized; the identity matrix was used for the imu-lidar extrinsics, although there should actually be a small translation).

I'm stuck here now and haven't found where the problem is. I'd like to hear your thoughts.

Possible reasons are:

① The extrinsic accuracy between the lidar and IMU is not good enough. (Since there was no calibration, the identity matrix is used directly, and inside the Ouster the lidar-IMU extrinsics should also be close to identity. Does this small difference have a big impact on the result? After reading the code, it mainly enters the lidar residual part; if the extrinsics are close to identity, it should in theory have little impact. See the small sketch after ②.)

② LIO-Livox's tight coupling is affected by sensor accuracy, especially IMU accuracy. (The Ouster has a built-in 100 Hz IMU with a large bias; LIO-Livox tightly couples over a two-frame sliding window, so how much is the IMU pre-integration result affected by the IMU's accuracy?)
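A quick way to get a feel for ① is to transform a few points with and without a small lidar-IMU translation and look at the offset it injects into the lidar residuals. The numbers below are made up for illustration and are not the actual Ouster extrinsics:

```python
import numpy as np

R_imu_lidar = np.eye(3)                          # rotation assumed identity
t_imu_lidar = np.array([0.006, -0.012, 0.029])   # made-up few-cm translation

p_lidar = np.array([10.0, 2.0, 0.5])             # a point in the lidar frame

p_with_translation = R_imu_lidar @ p_lidar + t_imu_lidar
p_identity_only = p_lidar                        # extrinsic treated as identity

# Every point (and hence every residual) is biased by the same ~|t| offset,
# which is much less harmful than a wrong rotation but is still not zero.
print(np.linalg.norm(p_with_translation - p_identity_only))   # ~0.03 m
```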

PS: I also tested lio-sam with the same bag and the result is quite good. Of course, lio-sam's tight coupling is rather special.

Looking forward to your reply!

@Gatsby23: A few points:

I have read others say that the Ouster's timestamps may not be truly hardware-synchronized; this still needs to be tested. Could you record the data and inspect it offline? If that is not possible, you could consider interpolating to smooth the timestamps and then undistorting the scan (but I'm only speaking on paper here and have not tried it in detail; a rough de-skew sketch follows after these points).
Regarding the extrinsics, I think you can run FAST-LIO and see whether the extrinsic estimate converges in the end; that basically tells you how good the extrinsics are. I ran FAST-LIO directly and found that the translation after convergence was similar to what I had calibrated.
A very important constraint in LIO-Livox is the NHC, which uses the ground to constrain the angles, so it depends on whether the data you recorded are really parallel to the ground; if the jitter is relatively large, it will also have a fairly large impact.
It's best to look at the final trajectory plot; if the trajectory is more than 1 km long, that amount of drift is quite normal.
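On the interpolation/de-skew point in the first item above, a minimal translation-only version of what is usually meant: interpolate the sensor motion over the sweep using each point's timestamp and express all points in the sweep-end frame (my own sketch; real implementations also interpolate the rotation, e.g. with slerp):

```python
import numpy as np

def deskew_translation_only(points, point_times, t_start, t_end, sweep_translation):
    """Remove motion distortion from one lidar sweep, translation only.

    points            : (N, 3) points in the lidar frame at their capture time
    point_times       : (N,) per-point timestamps within [t_start, t_end]
    sweep_translation : (3,) translation of the sensor from t_start to t_end

    Points captured early in the sweep are shifted back by the motion the
    sensor still had to make, so everything ends up in the sweep-end frame."""
    alpha = (t_end - np.asarray(point_times)) / (t_end - t_start)  # 1 at start, 0 at end
    return np.asarray(points) - alpha[:, None] * np.asarray(sweep_translation)
```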

@chengwei0427: LIO-Livox is an optimization-based method, not a filter-based one, somewhat similar to lili-om. You may have read too many papers and mixed them up again.

Here I modified the feature extraction part of LIO-Livox, referring to lio-sam's feature extraction, to support spinning lidars; the optimization part is almost unchanged and still uses LIO-Livox's poseEstimation. So most likely it is not a code problem but a problem with my parameters, especially the lidar-imu extrinsics.

The strange thing is that if I set the initial value in fast-lio according to the lidar-imu extrinsics reported by the Ouster, it drifts a little; if I set the lidar-imu rotation to the identity matrix and the translation to a fixed value, the result is still quite good.

2022 SLAM Application

Original link: https://github.com/engcang/SLAM-application

The results are not stated directly; instead, videos of each algorithm running are provided together with the resulting pcd and odom.txt files.
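Since only raw odom.txt files are provided, one way to turn them into numbers is to compute the ATE against a ground-truth trajectory yourself. A minimal sketch, assuming TUM-format files (timestamp tx ty tz qx qy qz qw) and nearest-timestamp association, without the SE(3) alignment that tools like evo perform; the file names are placeholders:

```python
import numpy as np

def load_tum(path):
    """Load a TUM-format trajectory: rows of 'timestamp tx ty tz qx qy qz qw'."""
    data = np.loadtxt(path)                         # '#' comment lines are skipped
    return data[:, 0], data[:, 1:4]                 # timestamps, positions

def ate_rmse(gt_file, est_file, max_dt=0.02):
    """Translational ATE (RMSE) with nearest-timestamp association."""
    t_gt, p_gt = load_tum(gt_file)
    t_est, p_est = load_tum(est_file)
    errors = []
    for ti, pi in zip(t_est, p_est):
        j = int(np.argmin(np.abs(t_gt - ti)))       # closest ground-truth stamp
        if abs(t_gt[j] - ti) <= max_dt:
            errors.append(np.linalg.norm(pi - p_gt[j]))
    return float(np.sqrt(np.mean(np.square(errors))))

# e.g. print(ate_rmse("ground_truth.txt", "odom.txt"))
```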

The following is an excerpt from the original; please see the original link for more detail.


Results:

2022 HKUST DATASET

IROS 2022 paper: FusionPortable: A Multi-Sensor Campus-Scene Dataset for Evaluation of Localization and Mapping Accuracy on Diverse Platforms

Device sensor configuration in the dataset

This is work from my master's group; although I was not involved, I also learned a lot from it. The paper is being extended into a journal version with more datasets; you can follow it via this webpage: https://ram-lab.com/file/site/multi-sensor-dataset/

The following are the positioning results tested on this dataset, directly taken from the IROS2022 paper:

2022 NTU DATASET

IJRR 2021: NTU VIRAL: A Visual-Inertial-Ranging-Lidar dataset, from an aerial vehicle viewpoint

The reason it is listed under 2022 is that the author is still maintaining this dataset and keeps adding the usage and evaluation of various algorithms on it.

Device sensor configuration in the dataset

Aside: the first author, Tianming, came to our group for his Ph.D.; we used their datasets in class, so I became familiar with them and found that the scripts Tianming wrote are really easy to use.

Algorithm adaptations for this dataset, captured from the webpage on 2022/12/21:

| Method | Repository | Credit |
| --- | --- | --- |
| Open-VINS | https://github.com/brytsknguyen/open_vins | Forked from https://github.com/rpng/open_vins |
| VINS-Fusion | https://github.com/brytsknguyen/VINS-Fusion | Forked from https://github.com/HKUST-Aerial-Robotics/VINS-Fusion |
| VINS-Mono | https://github.com/brytsknguyen/VINS-Mono | Forked from https://github.com/HKUST-Aerial-Robotics/VINS-Mono |
| M-LOAM | https://github.com/brytsknguyen/M-LOAM | Forked from https://github.com/gogojjh/M-LOAM |
| LIO-SAM | https://github.com/brytsknguyen/LIO-SAM | Forked from https://github.com/TixiaoShan/LIO-SAM |
| A-LOAM | https://github.com/brytsknguyen/A-LOAM | Forked from https://github.com/HKUST-Aerial-Robotics/A-LOAM |
| FAST_LIO | https://github.com/Kin-Zhang/FAST_LIO | Kindly provided by Kin-Zhang @ KTH RPL |

This is the result table from the IJRR 2021 paper:

2022 Hilti Challenge

Back when I was at the Hong Kong campus, I watched Lao Hu complain about this dataset every day; there is just so much rotation in the data hhhh. The link goes straight to arXiv, so everyone can read something like a report on which method each submitting team is based on, although most teams did not open-source the code they evaluated.

Equipment:

As for the results, the evaluation method behind the specific scores can be found in the paper.



Original post: blog.csdn.net/qq_39537898/article/details/128140829