IMU and visual fusion study notes

Pose estimation from visual information alone is not robust enough in the presence of moving objects, lighting interference, and missing scene texture. Fusing vision with an IMU (VI-SLAM) has therefore gradually become a common multi-sensor fusion approach. The fusion of visual information and IMU data can be divided by fusion method into filter-based and optimization-based, and, according to whether image feature information is added to the state vector, into two types: loose coupling and tight coupling.

The benefit of adding an IMU to a stereo camera is handling high-speed motion: during fast camera movement the images blur and the overlap between consecutive frames is too small, making feature point matching difficult. Conversely, the camera can correct the IMU's drift during slow motion, so the two are complementary. Building on earlier study of vision and the IMU, these notes record the process of learning IMU-visual fusion.


1. The method of IMU and visual fusion

1. IMU

The IMU outputs the angular velocity w and linear acceleration a of the carrier at high frequency (100 Hz or 200 Hz), and from these the carrier velocity V, position P, and rotation R can be computed at the same high rate.

However, the IMU's zero bias and noise are relatively large, so the estimate drifts quickly over long-term use; a high-precision inertial navigation unit reduces this drift error. Because the computation is an integration that runs continuously from the start until the sensor is no longer used, each of V, P, and R carries an error at every moment, and these errors compound through the integration, so the result drifts over long-term use.

2. Camera

The camera captures image information of the scene at 30 Hz or 20 Hz and uses the feature information in the images to solve for the rotation and translation of the carrier.
The camera obtains rich environmental information, and its drift error over long periods is small, but tracking loss easily occurs under fast motion or rotation, and positioning accuracy drops significantly in challenging environments.
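As a toy illustration of this IMU propagation and why its error compounds, here is a minimal dead-reckoning sketch (the function name, Euler integration, and frame conventions are my own assumptions, not any particular library's API):

```python
import numpy as np

def imu_propagate(R, v, p, w, a, dt, g=np.array([0.0, 0.0, -9.81])):
    """One Euler step of IMU dead reckoning.

    R: 3x3 rotation (body to world), v: velocity, p: position,
    w: measured angular velocity, a: measured specific force.
    Any bias or noise in w and a is integrated once into R and v
    and twice into p, which is why the estimate drifts over time."""
    # rotate by the small angle w*dt (Rodrigues' formula)
    theta = w * dt
    angle = np.linalg.norm(theta)
    if angle > 1e-12:
        k = theta / angle
        K = np.array([[0, -k[2], k[1]],
                      [k[2], 0, -k[0]],
                      [-k[1], k[0], 0]])
        dR = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)
    else:
        dR = np.eye(3)
    R_new = R @ dR
    acc_world = R @ a + g                          # world-frame acceleration
    v_new = v + acc_world * dt                     # first integration: velocity
    p_new = p + v * dt + 0.5 * acc_world * dt**2   # second integration: position
    return R_new, v_new, p_new
```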

3. The goals of fusion

The goal of fusion is mutual compensation, with three main aims:

  • Use visual odometry to compensate for the cumulative drift of the IMU, reducing the drift error of inertial navigation
  • For a monocular vision sensor, use the IMU to recover scene depth, alleviating the scale ambiguity of a monocular camera
  • The IMU's output is independent of the environment and unconstrained by environmental changes, so compensating vision with the IMU improves the robustness of visual odometry pose estimation


Note: adding an IMU to vision does not improve vision's positioning accuracy. For example, ORB-SLAM3's pure-vision mode is more accurate than its vision + IMU mode: pure vision is the higher-precision odometry, and the fusion is equivalent to mixing a high-precision source with a low-precision one, so the final multi-modal positioning accuracy is not as high as pure vision. Vision + IMU therefore improves the robustness of the system, not its accuracy.

4. How to integrate the IMU with the visual sensor

Loose coupling means that each sensor computes its own trajectory, and the computed results are then fused.
Tight coupling means that during estimation the IMU state, the visual sensor state, and any other sensor states are placed together for pose estimation; after fusion there is only one trajectory and one error, and there is no need to compute each sensor's trajectory separately.
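As a toy sketch of the difference (a 1D position example; all names and the weighting scheme are assumptions, not any real system's API):

```python
# Loose coupling: each pipeline has already produced its own estimate
# (here a 1D position with a variance); fusion happens afterwards, e.g.
# via an inverse-variance weighted average of the two results.
def loose_fuse(p_vo, var_vo, p_imu, var_imu):
    w_vo, w_imu = 1.0 / var_vo, 1.0 / var_imu
    return (w_vo * p_vo + w_imu * p_imu) / (w_vo + w_imu)

# Tight coupling instead keeps one joint state vector, e.g.
# x = [position, velocity, rotation, IMU biases, feature parameters, ...],
# and updates it directly from raw IMU samples and image features,
# so only one trajectory (and one error) ever exists.
```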
Most current research uses tight coupling. If the system is particularly large, with visual-inertial SLAM being just a small branch among all the navigation modules, loose coupling is used more often, because the failure of any single module must not bring down the whole system.

Most systems seen today are optimization-based, such as VINS and ORB-SLAM, which are tightly coupled optimization methods. Filtering considers only the current state, somewhat like living hand to mouth: to obtain the current state it only needs to reach back to the previous state, propagating step by step. Optimization is a long-horizon process: to obtain the current state, it uses all the state changes from far in the past up to the present.

Previously, optimization-based algorithms had a relatively large computational cost, and filter-based methods were somewhat cheaper. As the field developed, however, filter-based methods came to depend not only on the previous state but also on a sliding-window model, while optimization methods adopted sparse marginalization to cut the computation and reduce the burden, so the real-time performance of optimization-based algorithms is now quite good.
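To make the contrast concrete, here is a minimal 1D sketch (all names and noise values are assumptions): the filter touches only the previous state, while the sliding-window optimizer re-solves jointly for every state in the window.

```python
import numpy as np

def kalman_step(x, P, u, z, q=0.01, r=0.25):
    """One recursive filter update: predict with the motion input u,
    then correct with the measurement z. Only the previous state (x, P)
    is needed -- the filter never revisits older states."""
    x_pred, P_pred = x + u, P + q          # predict
    K = P_pred / (P_pred + r)              # Kalman gain
    return x_pred + K * (z - x_pred), (1.0 - K) * P_pred

def window_solve(us, zs, q=0.01, r=0.25):
    """Sliding-window least squares: jointly re-estimate all states in
    the window from motion constraints x[k+1] - x[k] = u[k] and position
    measurements z[k]. (A real system would marginalize the oldest state
    out of the window rather than simply drop it.)"""
    n = len(zs)
    A, b = [], []
    for k in range(n):                     # measurement residuals
        row = np.zeros(n); row[k] = 1.0
        A.append(row / np.sqrt(r)); b.append(zs[k] / np.sqrt(r))
    for k in range(n - 1):                 # motion residuals
        row = np.zeros(n); row[k] = -1.0; row[k + 1] = 1.0
        A.append(row / np.sqrt(q)); b.append(us[k] / np.sqrt(q))
    x, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return x                               # all window states, refined jointly
```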

2. IMU and vision initialization and parameter estimation method

3. Summary

Camera calibration
Kalibr is the only one of these tools that can calibrate the camera-to-IMU extrinsics (camToImu), which makes it essential for VIO; all the others have substitutes. So, learn the various open-source camera calibration algorithms and record the process of learning camera calibration.
1. Camera calibration
1. Place a known object in the scene
(1) Identify the correspondence between the image and the scene
(2) Calculate the mapping from the scene to the image

In the simplest case, camera calibration can use the following scheme: assume there is a known object in the scene, and that correspondences have been established between some points of the object and points in the image. The task is then to find the camera matrix that maps these three-dimensional points to two-dimensional points on the image plane. If the three-dimensional coordinates are known very accurately, and the correspondence between image points and 3D model points is known, this 3D-to-2D correspondence can be used to calibrate the camera.

Problems:
(1) The geometry must be known very precisely.
(2) The 3D-2D correspondences must be known.
2. Camera parameter estimation: resectioning
Using this 3D-to-2D correspondence to calibrate the camera is called resectioning.
Pictured as an image: the three-dimensional points sit on one side, there is a two-dimensional photo, and we know the correspondence between the marked points. What is required is the camera matrix, a 3×4 matrix with all parameters, both intrinsic and extrinsic, inside it. Intuitively, the photo is placed at the proper position in three-dimensional space so that rays from the camera center pass exactly through these known points in space. The problem to solve is: where should the camera be placed in 3D space so that this geometric relationship holds?

Such a method assumes the coordinates of the three-dimensional points are known very precisely, which is hard to achieve in practice. Long ago, professionally manufactured calibration objects were used to guarantee the positions of the 3D points; once an accurate calibration object was available, the camera matrix could be calibrated.

3. Basic equations
The implementation of the algorithm is actually quite simple: any 3D point, after passing through the camera's projection matrix, is projected to a 2D point in the image.
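A minimal sketch of this resectioning step, using the standard DLT (direct linear transform) formulation (the function name and array layout are my own): each correspondence x_i ≃ P X_i contributes two linear equations in the 12 entries of P, and the smallest right singular vector of the stacked system gives P.

```python
import numpy as np

def resection_dlt(X, x):
    """Estimate the 3x4 camera matrix P from n >= 6 3D-2D correspondences.

    X: (n, 3) array of 3D points, x: (n, 2) array of image points.
    Each pair yields two rows of the linear system A p = 0; the solution
    is the right singular vector with the smallest singular value."""
    n = X.shape[0]
    A = np.zeros((2 * n, 12))
    for i in range(n):
        Xh = np.append(X[i], 1.0)      # homogeneous 3D point
        u, v = x[i]
        A[2 * i, 4:8] = -Xh            # from x_i cross (P X_i) = 0
        A[2 * i, 8:12] = v * Xh
        A[2 * i + 1, 0:4] = Xh
        A[2 * i + 1, 8:12] = -u * Xh
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)        # last singular vector -> P (up to scale)
```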

2. Geometric constraints between two views

3. 3D reconstruction

IMU parameter calibration study notes
Inertial Measurement Unit
1. Parameter calibration
If the IMU measurement data itself has a large error, that is, the input to the system is already erroneous, then no matter how well the upper-layer application's algorithm is built, the output will be wrong.
(1) Internal parameter calibration
Internal calibration is relative to the IMU's own coordinate system: within this coordinate system, the errors that appear in its data should be eliminated as much as possible.
① Relation to the application: before internal parameter calibration, a functional (yield) test is performed first. Calibration can only eliminate sensor error as far as possible; if the error in the data measured by the sensor is too large, correction is meaningless, so first make sure the IMU is working normally.
② The internal parameter calibration process
To calibrate the IMU, the sources of IMU error must be modeled. The error in the IMU measurement process actually arises from many different aspects; modeling targets the obvious errors and builds them into a mathematical model, while error sources we do not know about are ignored, since taking every error into account would be complicated and is not necessary. We mainly care about three sources of error:
• Zero bias
For example, if a scale in daily life does not read zero when unloaded, that offset is the zero bias; the zero bias of an IMU is similar. That is, if the IMU is placed somewhere at rest and the measured angular velocity is non-zero, that deviation is the zero bias.
• Scale deviation
Whether measuring acceleration, angular velocity, or magnetic field, physical quantities are converted into electrical quantities such as voltage, resistance, and current; this conversion is called the scale. The scale differs on each axis: when a force of 1 newton is applied along the x-axis, the transducer on the x-axis may convert it to 1.5 V, while the same 1 newton along the y-axis may convert to 1.8 V. The two voltages differ by a coefficient, and that difference has many causes. The scale can be thought of as a slope, and it has three values, one each for the x, y, and z axes (a sketch of this error model follows below).
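A minimal sketch of the affine error model commonly used for these internal parameters. Note the misalignment matrix M: it is the usual third term in such models and is included here as an assumption, since the text above names three error sources but spells out only two; all parameter values are made up for illustration.

```python
import numpy as np

# Assumed affine IMU error model: a_meas = S @ M @ a_true + b + noise,
# where b is the zero bias, S = diag(sx, sy, sz) the per-axis scale,
# and M a (near-identity) misalignment matrix between the sensing axes.
def correct_accel(a_meas, b, S, M):
    """Apply internal calibration: undo bias, scale, and misalignment
    to recover an estimate of the true acceleration."""
    return np.linalg.inv(S @ M) @ (a_meas - b)

# Example with made-up calibrated parameters from some offline procedure:
b = np.array([0.02, -0.01, 0.05])    # zero bias
S = np.diag([1.5, 1.8, 1.0])         # per-axis scale (cf. the 1.5 V vs 1.8 V example)
M = np.eye(3)                        # assume no misalignment in this toy case
a_true = correct_accel(np.array([1.53, 1.79, 9.86]), b, S, M)
```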
(2) External parameter calibration
In contrast to internal parameter calibration, external parameter calibration assumes the IMU is mounted on a carrier board, which has its own coordinate system; the question is how to transform the values the IMU measures in its own coordinate system into the carrier board's coordinate system. This coordinate transformation is represented by T, and the transformation parameter T is the external parameter. Because the IMU can be installed in different places and the target coordinate system varies, the external parameter is not unique.

There are many kinds of external parameters. If the IMU and a camera are fused, the coordinate transformation between the camera and the IMU is the external parameter; the same holds for lidar and IMU fusion.
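A minimal sketch of what using such an extrinsic T looks like (homogeneous-transform convention assumed; the frame names are illustrative):

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# T_board_imu: the external parameter, mapping quantities expressed in the
# IMU frame into the carrier-board frame. It depends on where the IMU is
# installed and on the chosen target frame, hence it is not unique.
def imu_to_board(p_imu, T_board_imu):
    p = np.append(p_imu, 1.0)          # homogeneous coordinates
    return (T_board_imu @ p)[:3]
```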

Origin blog.csdn.net/Prototype___/article/details/131860531