Classic literature reading - DAMS-LIO (a lightweight iEKF-based LiDAR-inertial odometry)

0. Introduction

The fusion scheme is the key to multi-sensor fusion, which is currently the most promising approach to state estimation in complex and extreme environments such as underground mines and planetary surfaces. This paper presents a lightweight iEKF-based LiDAR-inertial odometry system with a degeneration-aware and modular sensor-fusion pipeline: only when degeneration is detected does the update step fuse both the LiDAR points and the relative pose from another odometry as measurements. CRLB-based theoretical analysis and simulation experiments verify that this method achieves higher accuracy than fusing either observation alone. Furthermore, the proposed system is evaluated against various state-of-the-art sensor-fusion methods on perception-challenging datasets. The results show that the system maintains high estimation accuracy and real-time performance even in harsh environments and under poor observation conditions.

Although this work has not been open-sourced yet, it is expected to be released after the paper is accepted. For the original content, please refer to "DAMS-LIO: A Degeneration-Aware and Modular Sensor-Fusion LiDAR-inertial Odometry".
[Figure 1]

Figure 1: (a) shows the mapping result of DAMS-LIO on the CERBERUS DARPA Subterranean Challenge dataset, which contains many challenging environments, as shown in (b)-(c), including darkness, long tunnels, and textureless areas. In the top right of (a) we compare our mapping result with the state-of-the-art LiDAR-inertial odometry system LIO-SAM.

1. Article contribution

  1. A lightweight LiDAR-inertial odometry system (DAMS-LIO) based on degeneration-aware and modular sensor fusion is proposed. It achieves robust and accurate state estimation in extreme cases, giving robots a clear advantage when performing complex exploration tasks.

  2. A new sensor-fusion method is proposed that fully fuses the information from LiDAR and other odometry: only when degeneration is detected are the LiDAR points and the relative pose from the other odometry used together as measurements in the update step.

  3. A theoretical analysis based on the CRLB theorem is carried out to quantify the performance and demonstrate the high accuracy of the proposed sensor fusion method.

  4. Extensive experiments on simulated and real datasets verify the robustness and accuracy of our method.

2. System architecture

The definition of each coordinate frame in our system is shown in Fig. 2. Following the typical LIO convention, $\{L\}$ and $\{I\}$ correspond to the LiDAR and IMU frames respectively, while $\{G\}$ is a local vertical reference frame whose origin coincides with the initial IMU position. $\{O\}$ denotes the frame of the other odometry, whose origin is marked as $\{M\}$.
[Figure 2: coordinate frame definitions]

${}^Lp$ represents the position of each point relative to the LiDAR frame. Definitions of some important and commonly used symbols are given in Table I.

[Table I: nomenclature]

A. State vector (this part is fairly standard)

The states estimated in our system include the current IMU state $x_I$, the extrinsic parameters $x_L$ between LiDAR and IMU, and the transformation $x_O$ from the other odometry frame to the IMU frame. At time $t_k$, the state can be written as:

$$x_k = \begin{bmatrix} x_{I,k}^T & x_L^T & x_O^T \end{bmatrix}^T, \qquad x_{I,k} = \begin{bmatrix} {}^G_I\bar{q}_k^T & {}^Gp_{I,k}^T & {}^Gv_{I,k}^T & b_{g,k}^T & b_{a,k}^T & {}^Gg^T \end{bmatrix}^T$$
${}^Gv_I$ is the velocity of the IMU in the global frame, $b_{g,k}$ and $b_{a,k}$ denote the gyroscope and accelerometer biases respectively, $g$ is the gravity vector in the $\{G\}$ frame, and $x_L = \{{}^I_L\bar{q}, {}^Ip_L\}$ and $x_O = \{{}^I_O\bar{q}, {}^Ip_O\}$ denote the transformations from the other odometry frame $\{O\}$ and the LiDAR frame $\{L\}$ to the IMU frame $\{I\}$.
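To make the composition concrete, here is a minimal Python sketch of how such a state vector might be laid out in code (the container, field names, and the [x, y, z, w] quaternion convention are illustrative assumptions, not the paper's implementation):

```python
import numpy as np
from dataclasses import dataclass, field

def _vec3():
    return field(default_factory=lambda: np.zeros(3))

def _quat():
    return field(default_factory=lambda: np.array([0.0, 0.0, 0.0, 1.0]))

@dataclass
class FilterState:
    """Nominal state x_k = [x_I, x_L, x_O]; quaternions stored as [x, y, z, w]."""
    q_GI: np.ndarray = _quat()  # IMU attitude in {G}
    p_GI: np.ndarray = _vec3()  # ^Gp_I: IMU position in {G}
    v_GI: np.ndarray = _vec3()  # ^Gv_I: IMU velocity in {G}
    b_g: np.ndarray = _vec3()   # gyroscope bias
    b_a: np.ndarray = _vec3()   # accelerometer bias
    g_G: np.ndarray = field(default_factory=lambda: np.array([0.0, 0.0, -9.81]))  # gravity in {G}
    q_IL: np.ndarray = _quat()  # x_L: LiDAR -> IMU rotation
    p_IL: np.ndarray = _vec3()  # x_L: LiDAR -> IMU translation ^Ip_L
    q_IO: np.ndarray = _quat()  # x_O: other odometry -> IMU rotation
    p_IO: np.ndarray = _vec3()  # x_O: other odometry -> IMU translation ^Ip_O
```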

B. IMU propagation (the usual IMU propagation scheme)

Since IMU measurements are affected by a bias $b$ and zero-mean Gaussian noise $n$ [15], they can be modeled as:
$$\omega_m(t) = {}^I\omega(t) + b_g(t) + n_g(t), \qquad a_m(t) = {}^I_GR(t)\left({}^Ga_I(t) - {}^Gg\right) + b_a(t) + n_a(t)$$
where $\omega_m(t)$ and $a_m(t)$ are the raw IMU measurements, ${}^I\omega(t)$ is the angular velocity of the IMU in the local frame $\{I\}$, and ${}^Gg$ and ${}^Ga_I(t)$ are the gravity vector and the acceleration of the IMU in the global frame. The IMU kinematics are the same as in [16] and, for brevity, are not repeated here. To propagate the covariance matrix from time $t_k$ to time $t_{k+1}$, we follow the linear discrete-time model of [17]:
$$\tilde{x}_{k+1} = \Phi_k \tilde{x}_k + G_k n_k$$
where $\Phi_k$ is the linearized system state-transition matrix and $n_k = [n_g\ n_{wg}\ n_a\ n_{wa}]$ is the system noise. Except for the quaternions, the error state is defined as $\tilde{x} = x - \hat{x}$, while a quaternion is handled through the relation $q = \hat{q} \otimes \delta q$. As in [16], $\otimes$ denotes quaternion multiplication, and the error quaternion is defined as $\delta q \simeq [\frac{1}{2}\delta\theta^T\ 1]^T$.
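As a small worked example of this error-state convention, the following sketch injects a rotation error $\delta\theta$ into the nominal quaternion via $q = \hat{q} \otimes \delta q$ (Hamilton product written out by hand; the [x, y, z, w] storage order is an assumption):

```python
import numpy as np

def quat_mul(q1, q2):
    """Hamilton product q1 (x) q2 for quaternions stored as [x, y, z, w]."""
    x1, y1, z1, w1 = q1
    x2, y2, z2, w2 = q2
    return np.array([
        w1 * x2 + w2 * x1 + y1 * z2 - z1 * y2,
        w1 * y2 + w2 * y1 + z1 * x2 - x1 * z2,
        w1 * z2 + w2 * z1 + x1 * y2 - y1 * x2,
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
    ])

def inject_rotation_error(q_hat, dtheta):
    """q = q_hat (x) dq with dq ~ [0.5 * dtheta, 1]^T, re-normalized."""
    dq = np.append(0.5 * np.asarray(dtheta), 1.0)
    q = quat_mul(q_hat, dq / np.linalg.norm(dq))
    return q / np.linalg.norm(q)
```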
If the covariance of $n_k$ is defined as $Q_k$, the state covariance can then be propagated from $t_k$ to $t_{k+1}$ as follows:
$$P_{k+1} = \Phi P_k \Phi^T + G Q_k G^T$$
where $\Phi = \mathrm{diag}(\Phi_I, \Phi_O)$ and $G = [G_I^T, G_O^T]^T$. $\Phi_I$ and $G_I$ denote the parts related to all variables other than the extrinsics between the other odometry and the IMU, defined in the same way as in [12]. Furthermore, $\Phi_O = I_{6\times6}$ and $G_O = 0_{6\times12}$.
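A minimal numpy sketch of this block-wise covariance propagation (the block split between the IMU-related part and the constant odometry-extrinsic part follows the equations above; names and sizes are otherwise illustrative):

```python
import numpy as np

def propagate_covariance(P, Phi_I, G_I, Q):
    """Propagate the error-state covariance from t_k to t_{k+1}.

    P     : (N+6, N+6) covariance; the last 6 rows/cols belong to x_O,
            which has no dynamics (Phi_O = I_6x6, G_O = 0_6x12).
    Phi_I : (N, N) linearized transition matrix of the IMU-related block.
    G_I   : (N, 12) noise Jacobian of the IMU-related block.
    Q     : (12, 12) covariance of n_k = [n_g, n_wg, n_a, n_wa].
    """
    N = Phi_I.shape[0]
    Phi = np.eye(N + 6)
    Phi[:N, :N] = Phi_I          # Phi = diag(Phi_I, I_6x6)
    G = np.zeros((N + 6, 12))
    G[:N, :] = G_I               # bottom block G_O = 0_6x12
    return Phi @ P @ Phi.T + G @ Q @ G.T
```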

C. Measurement model (the more important part)

1) LiDAR measurements: Measurements from the LiDAR are modeled as in [12]. After motion compensation for a scan sampled at time $\tau_i$, we define the $j$-th point as ${}^Lp_j$. By back-propagation, the point can be projected to the scan-end time corresponding to the IMU measurement at $t_k$. Meanwhile, since each point should lie on a small planar patch in the map, we can obtain:
$$0 = {}^Gu_j^T\left({}^GR_{I_k}\left({}^IR_L\left({}^Lp_j + {}^Ln_j\right) + {}^Ip_L\right) + {}^Gp_{I_k} - {}^Gq_j\right) \tag{8}$$
where ${}^Gq_j$ is a point on the planar patch and ${}^Gu_j$ is the normal vector of the plane. ${}^IR_L$ and ${}^GR_{I_k}$ are the rotation matrices corresponding to ${}^I_Lq$ and ${}^G_{I_k}q$, and ${}^Ln_j$ is the ranging and beam-directing noise of point ${}^Lp_j$. (8) can also be written in a more compact form:
$$0 = h_j\left(x_k, {}^Ln_j\right)$$
To linearize the measurement model for the update, we make a first-order approximation at $\hat{x}_k$:

$$0 = h_j\left(x_k, {}^Ln_j\right) \simeq h_j\left(\hat{x}_k, 0\right) + H_L\tilde{x}_k + v_l$$

where $v_l \in \mathcal{N}(0, R_l)$ is the Gaussian noise caused by the raw measurement noise ${}^Ln_j$, and $H_L$ is the Jacobian matrix of the residual $r_l$ with respect to the error state $\tilde{x}_k$, given as follows:
[Equation: Jacobian $H_L$ of the point-to-plane residual]
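To make the point-to-plane constraint concrete, here is a small sketch of evaluating the residual in (8) at the current estimate (a hypothetical helper with the noise term set to zero; frame names follow the notation above):

```python
import numpy as np

def point_to_plane_residual(R_GI, p_GI, R_IL, p_IL, p_L, q_G, u_G):
    """Signed distance of a LiDAR point to its matched map plane, eq. (8).

    R_GI, p_GI : IMU pose in {G}
    R_IL, p_IL : LiDAR -> IMU extrinsics
    p_L        : measured point in the LiDAR frame {L}
    q_G, u_G   : a point on the plane and its unit normal, both in {G}
    """
    p_G = R_GI @ (R_IL @ p_L + p_IL) + p_GI  # project the point into {G}
    return float(u_G @ (p_G - q_G))          # zero when the point lies on the plane
```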
2) Other odometry measurements: Since each odometry runs at a different frequency, we perform linear interpolation to obtain the other odometry's estimated poses at times $t_{k-1}$ and $t_k$, and denote the corresponding frames as $O_{k-1}$ and $O_k$. We define the transformation matrix from A to B as ${}^AT_B = [{}^AR_B, {}^Ap_B; 0_{1\times3}, 1]$. Then, according to Fig. 2, we have the following transformation relationship:
$${}^{O_{k-1}}T_{O_k} = \left({}^GT_{I_{k-1}}\,{}^IT_O\right)^{-1}{}^GT_{I_k}\,{}^IT_O \tag{13}$$
where ${}^{O_{k-1}}T_{O_k} = {}^GT_{O_{k-1}}^{-1}\,{}^GT_{O_k}$ is the relative pose measurement computed by the other odometry. Therefore, based on (13), we can easily obtain the measurement model $z_O = \begin{bmatrix} z_r \\ z_p \end{bmatrix}$, where:
[Equations (14)-(15): rotation residual $z_r$ and translation residual $z_p$]
These are the rotation and translation errors calculated according to formula (10), where $v_r \in \mathcal{N}(0, R_r)$ and $v_p \in \mathcal{N}(0, R_p)$ are the corresponding Gaussian noises. Then, from equations (14) and (15), we can obtain the Jacobian matrices of the rotation and translation residuals with respect to the error state:
[Equation: Jacobians of the rotation and translation residuals]
where:

[Equation: supporting terms of the Jacobians]
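A hedged sketch of forming this relative-pose residual from two interpolated odometry poses, following the chain of transformations in (13) (4x4 homogeneous matrices; using the SO(3) logarithm for the rotation error is an assumption about the exact error definition):

```python
import numpy as np

def log_SO3(R):
    """SO(3) logarithm: rotation matrix -> axis-angle vector."""
    c = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(c)
    if theta < 1e-8:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta / (2.0 * np.sin(theta)) * w

def relative_pose_residual(T_meas, T_GI_prev, T_GI_curr, T_IO):
    """Compare the odometry-measured relative pose ^{O_{k-1}}T_{O_k} with the
    one predicted from the filter states via eq. (13)."""
    T_pred = np.linalg.inv(T_GI_prev @ T_IO) @ (T_GI_curr @ T_IO)
    dT = np.linalg.inv(T_pred) @ T_meas
    z_r = log_SO3(dT[:3, :3])  # rotation residual
    z_p = dT[:3, 3]            # translation residual
    return np.concatenate([z_r, z_p])
```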

D. Degeneration-aware update (the more important part)

Similar to [12], we have the following error states:
$$x_k \boxminus \hat{x}_k = \left(\hat{x}_k^\kappa \boxplus \tilde{x}_k^\kappa\right) \boxminus \hat{x}_k = \hat{x}_k^\kappa \boxminus \hat{x}_k + M_\kappa \tilde{x}_k^\kappa$$
$\boxplus/\boxminus$ denote the addition and subtraction operators on the Lie group [18]. $M_\kappa$ is the partial derivative of $(\hat{x}_k^\kappa \boxplus \tilde{x}_k^\kappa) \boxminus \hat{x}_k$ with respect to $\tilde{x}_k^\kappa$, evaluated at zero.
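For intuition, here is a compact sketch of one iterated-EKF update of the kind described here (a generic iEKF loop in the style of [12]; `h_and_H`, `boxplus`, and `boxminus` stand in for the measurement models and manifold operators above, and the sign convention assumes the residual is defined as $r = h(\hat{x}^\kappa, 0)$):

```python
import numpy as np

def iekf_update(x_hat, P, h_and_H, boxplus, boxminus, R, iters=5, tol=1e-6):
    """Generic iterated EKF update on a manifold.

    x_hat   : prior (propagated) state estimate
    P       : prior covariance
    h_and_H : callback returning (residual r, Jacobian H) at the iterate;
              when degeneration is detected, r and H stack the LiDAR and
              relative-pose measurements, otherwise LiDAR only
    R       : measurement noise covariance
    """
    n = P.shape[0]
    x_i = x_hat
    for _ in range(iters):
        r, H = h_and_H(x_i)
        dx0 = boxminus(x_i, x_hat)            # deviation of iterate from prior
        S = H @ P @ H.T + R
        K = np.linalg.solve(S, H @ P).T       # Kalman gain P H^T S^{-1}
        dx = -K @ r - (np.eye(n) - K @ H) @ dx0
        x_i = boxplus(x_i, dx)
        if np.linalg.norm(dx) < tol:
            break
    P_upd = (np.eye(n) - K @ H) @ P           # posterior covariance
    return x_i, P_upd
```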

…For details, please refer to Gu Yueju


Origin blog.csdn.net/lovely_yoshino/article/details/129324305