Multi-sensor fusion SLAM, Part 16: Introduction and basics of LVI-SAM

Table of contents

1 The role of SLAM in navigation and positioning

2 Introduction to VINS-FUSION

3 A brief introduction to laser SLAM

4 Multimodal Fusion SLAM Algorithm


1 The role of SLAM in navigation and positioning

        As the figure below illustrates, the SLAM algorithm plays a pivotal role in navigation and positioning: it supplies both the robot's pose estimate (localization) and the map of the environment on which path planning and control depend.

[Figure: the position of SLAM within a navigation-and-positioning pipeline, and its typical uses]

2 Introduction to VINS-FUSION

        The middle block of the system diagram (Sect. IV) is measurement preprocessing: visual feature extraction (FAST corners, tracked and matched with the optical flow method) and IMU pre-integration.
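
        As a rough sketch of this front end (not the actual VINS code), the snippet below detects FAST corners with OpenCV and tracks them into the next frame with pyramidal Lucas-Kanade optical flow; the threshold values are arbitrary and chosen only for illustration.

```cpp
// Minimal visual front-end sketch: FAST corner detection in the previous
// frame, then KLT optical-flow tracking into the current frame.
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Point2f> trackFeatures(const cv::Mat& prev_img,
                                       const cv::Mat& cur_img) {
    // 1. Detect FAST corners in the previous image.
    std::vector<cv::KeyPoint> keypoints;
    cv::FAST(prev_img, keypoints, /*threshold=*/20, /*nonmaxSuppression=*/true);

    std::vector<cv::Point2f> prev_pts;
    cv::KeyPoint::convert(keypoints, prev_pts);
    if (prev_pts.empty()) return {};

    // 2. Track the corners into the current image with pyramidal LK flow.
    std::vector<cv::Point2f> cur_pts;
    std::vector<uchar> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(prev_img, cur_img, prev_pts, cur_pts, status, err);

    // 3. Keep only the successfully tracked points.
    std::vector<cv::Point2f> tracked;
    for (size_t i = 0; i < status.size(); ++i)
        if (status[i]) tracked.push_back(cur_pts[i]);
    return tracked;
}
```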

        The initialization stage on the left (Sect. V) is a visual-inertial alignment process. Visual observations over a window of time are used to compute the motion from the feature-matching relations; once enough frames have accumulated (20 to 30), a global bundle adjustment is run over the matching results to obtain a more accurate visual odometry estimate. The visual odometry result is then aligned with the IMU pre-integration result to solve for the camera-IMU extrinsic parameters, the metric scale of the system, and the gravity vector.
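
        The alignment step can be pictured as a linear least-squares problem. The sketch below is heavily simplified: it assumes the per-frame velocities v_k and rotations R_k are already known and folds them into a precomputed right-hand side, whereas the real VINS initializer also solves for the velocities and the gyroscope bias. FramePair and alignVisualInertial are illustrative names, not VINS symbols.

```cpp
// For each consecutive frame pair k, the alignment constraint is roughly
//     s * (p_{k+1} - p_k) - 0.5 * g * dt^2  =  v_k * dt + R_k * alpha_k
// where p are the up-to-scale visual positions and alpha_k is the IMU
// pre-integrated position delta. Stacking all pairs gives A x = b with
// unknowns x = [s, gx, gy, gz].
#include <Eigen/Dense>
#include <vector>

struct FramePair {
    Eigen::Vector3d dp;   // visual position difference p_{k+1} - p_k (no scale)
    Eigen::Vector3d rhs;  // v_k * dt + R_k * alpha_k (known, metric units)
    double dt;            // time between the two frames
};

// Returns x = [s, gx, gy, gz]^T, the scale and the gravity vector.
Eigen::Vector4d alignVisualInertial(const std::vector<FramePair>& pairs) {
    Eigen::MatrixXd A(3 * pairs.size(), 4);
    Eigen::VectorXd b(3 * pairs.size());
    for (size_t k = 0; k < pairs.size(); ++k) {
        A.block<3, 1>(3 * k, 0) = pairs[k].dp;  // column multiplying the scale s
        A.block<3, 3>(3 * k, 1) =
            -0.5 * pairs[k].dt * pairs[k].dt * Eigen::Matrix3d::Identity();
        b.segment<3>(3 * k) = pairs[k].rhs;
    }
    return A.colPivHouseholderQr().solve(b);  // least-squares solution
}
```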

        After initialization, the system enters the sliding-window, tightly coupled optimization stage; loop-closure detection runs at the same time.
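
        Conceptually, the tightly coupled back end stacks every keyframe state in the window into a single nonlinear least-squares problem. The toy Ceres sketch below shows only this skeleton: ImuResidualStub is a placeholder residual on position, not the real IMU factor, and the visual reprojection factors are omitted.

```cpp
// Toy sliding-window back end with Ceres (not the actual VINS code).
#include <ceres/ceres.h>

constexpr int WINDOW_SIZE = 10;
double poses[WINDOW_SIZE][7];  // x, y, z + quaternion per keyframe

// Placeholder residual between consecutive keyframes; the real IMU factor
// compares the state change with the pre-integrated alpha/beta/gamma terms.
struct ImuResidualStub {
    template <typename T>
    bool operator()(const T* pose_i, const T* pose_j, T* r) const {
        for (int k = 0; k < 3; ++k) r[k] = pose_j[k] - pose_i[k];
        return true;
    }
};

void solveWindow() {
    ceres::Problem problem;
    // One IMU-style factor between every pair of consecutive keyframes.
    for (int i = 0; i + 1 < WINDOW_SIZE; ++i)
        problem.AddResidualBlock(
            new ceres::AutoDiffCostFunction<ImuResidualStub, 3, 7, 7>(
                new ImuResidualStub),
            nullptr, poses[i], poses[i + 1]);
    // Visual reprojection factors would be added here in the same way.
    ceres::Solver::Options options;
    options.max_num_iterations = 8;  // bounded for real-time operation
    ceres::Solver::Summary summary;
    ceres::Solve(options, &problem, &summary);
    // Afterwards the oldest keyframe is marginalized and the window slides.
}
```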

3 A brief introduction to laser SLAM

        Representative laser SLAM systems include LOAM, LeGO-LOAM, and LIO-SAM (the laser-inertial subsystem of LVI-SAM is built on LIO-SAM).

        Laser point cloud motion distortion: a lidar acquires a complete frame of point cloud over one full rotation, so the individual points in a frame are measured at different times. This is not a problem while the robot is stationary.

        When the robot is moving, however, the problem appears: the poses at the start and end of the lidar scan no longer coincide, the point cloud is distorted, and objects appear stretched and deformed.
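
        Although not spelled out above, the standard remedy is to de-skew the cloud: give every point a relative timestamp within the sweep and re-express it in a common frame using the motion estimated over that sweep (for example, from IMU integration). A minimal Eigen sketch, assuming the motion within one sweep is approximately linear:

```cpp
// De-skewing sketch: interpolate the pose to each point's capture time and
// transform the point back into the frame at the start of the sweep.
#include <Eigen/Dense>
#include <Eigen/Geometry>
#include <vector>

struct TimedPoint {
    Eigen::Vector3d p;  // point in the lidar frame at its own capture time
    double s;           // relative time in the sweep: 0 = start, 1 = end
};

// q_end / t_end: lidar pose at sweep end, expressed in the sweep-start frame.
void deskew(std::vector<TimedPoint>& cloud,
            const Eigen::Quaterniond& q_end,
            const Eigen::Vector3d& t_end) {
    for (auto& pt : cloud) {
        // Interpolate rotation (slerp) and translation (linear) to time s.
        Eigen::Quaterniond q_s =
            Eigen::Quaterniond::Identity().slerp(pt.s, q_end);
        Eigen::Vector3d t_s = pt.s * t_end;
        // Re-express the point in the sweep-start frame.
        pt.p = q_s * pt.p + t_s;
    }
}
```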

4 Multimodal Fusion SLAM Algorithm

        Introduction to LVI-SAM: It consists of a visual-inertial subsystem and a laser-inertial subsystem.

        Because the lidar directly measures the geometric structure of the environment, its fusion with the IMU can be initialized quickly. The output of the laser-inertial odometry can therefore be used to help the visual SLAM subsystem initialize faster and more accurately. Using the extrinsic parameters, the laser point cloud can be projected into the image to assist depth extraction for the visual features, and finally the laser, visual, and IMU information are fused in a joint factor-graph optimization.
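
        The depth-association step can be sketched as follows. This is a simplification (the LVI-SAM paper projects features and depth points onto a unit sphere and searches with a k-d tree); the parameter names and the 5-pixel gate below are illustrative choices, not values from the paper.

```cpp
// Simplified lidar-assisted depth extraction: project every lidar point into
// the image using the lidar-to-camera extrinsics (R, t) and the pinhole
// intrinsics (fx, fy, cx, cy), then take the depth of the projected point
// nearest to the visual feature.
#include <Eigen/Dense>
#include <cmath>
#include <limits>
#include <vector>

double depthFromLidar(const Eigen::Vector2d& feature_uv,
                      const std::vector<Eigen::Vector3d>& lidar_points,
                      const Eigen::Matrix3d& R, const Eigen::Vector3d& t,
                      double fx, double fy, double cx, double cy) {
    double best_dist = std::numeric_limits<double>::max();
    double depth = -1.0;  // -1 means no lidar point was close enough
    for (const auto& p_l : lidar_points) {
        Eigen::Vector3d p_c = R * p_l + t;       // lidar frame -> camera frame
        if (p_c.z() <= 0.1) continue;            // behind or too close to camera
        double u = fx * p_c.x() / p_c.z() + cx;  // pinhole projection
        double v = fy * p_c.y() / p_c.z() + cy;
        double d = std::hypot(u - feature_uv.x(), v - feature_uv.y());
        if (d < best_dist && d < 5.0) {          // 5 px gate (arbitrary choice)
            best_dist = d;
            depth = p_c.z();
        }
    }
    return depth;
}
```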

Source: blog.csdn.net/qq_41694024/article/details/131161338