IMU sensor temperature drift compensation method

0. Overview

Temperature drift is a trend-type error, so its influence can be removed by fitting the data. The essence of temperature compensation is system identification.

When we discussed calibration earlier, we made a similar statement: "the essence of calibration is parameter identification." The two ideas are related but not identical. Parameter identification assumes the error model is already known and estimates the actual value of each error term, whereas system identification has to identify the model before identifying its parameters, because the way a device's bias varies with temperature is not known in advance. We don't know which variables the model should contain or what order it should be, so the problem becomes less straightforward.

As usual in engineering, the more complicated the problem, the more methods the industry comes up with. Let's look at them one by one.

1. Temperature compensation methods

1.1 Polynomial fitting method

The simplest, most brute-force method is to fit a polynomial directly to the "bias versus temperature" curve.

To do this you need a temperature chamber: open the chamber door, put the IMU in, and close the door. Then drive the chamber up to the set temperature and record IMU data throughout the process.

Since the model is unknown, we have to hypothesize several candidate models first and pick the most suitable one by comparing the residual variance of each fit. For example, we can start with a low-order model such as

Bias(T) = k0 + k1·T + k2·T²

When fitting it, you may find that the fitted curve does not match the data well and suspect the order is too low, so you raise it to a third-order model, for example

Bias(T) = k0 + k1·T + k2·T² + k3·T³

In fact, the higher the order, the better the fit appears, but if the order is too high the model clearly overfits; keeping it at third order is usually not a problem.
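As a rough sketch of this order comparison, assuming bias samples and matching temperature readings have already been extracted from one chamber run (the data and variable names below are synthetic, purely for illustration):

```python
import numpy as np

# Synthetic stand-in for one chamber run: temperature readings (deg C)
# and the corresponding gyro bias estimates (deg/s).
temp = np.linspace(-20.0, 60.0, 200)
bias = 0.02 + 1.5e-3 * temp - 2.0e-5 * temp**2 + np.random.normal(0.0, 5e-4, temp.size)

# Fit polynomials of increasing order and compare the residual variance.
for order in (1, 2, 3):
    coeffs = np.polyfit(temp, bias, order)
    residual = bias - np.polyval(coeffs, temp)
    print(f"order {order}: residual variance = {residual.var():.3e}")

# At run time, compensation is just subtracting the fitted bias at the
# measured temperature:
#   corrected = raw_measurement - np.polyval(coeffs, measured_temp)
```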

After the compensation was done, just as I was expecting to put it to use, I found the accuracy was not as good as hoped, so I had to redo the experiment to see whether I had made a mistake the first time. So I opened the chamber door, put the IMU in, closed the door, drove the chamber up to the set temperature, and collected IMU data along the way. This time I was a little impatient, so the temperature rose faster.

Fitting the same model to the new data, I found the agreement was much worse this time. I checked both data sets repeatedly and found no mistakes in either run. So what is the problem?

Since the only difference between the two runs was the speed of the temperature rise, could it be that the bias depends not only on the temperature but also on the rate of temperature change? So try a model along these lines, for example

Bias(T, dT) = k0 + k1·T + k2·T² + k3·T³ + k4·dT

where dT is the temperature change rate.

The agreement does seem better, and both data sets can now be fitted with the same model. Collect a few more data sets, try different heating and cooling rates, and you find that some portions still fit poorly. Is there yet another model to try? Perhaps add further terms, for example cross terms that couple T and dT.
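A minimal least-squares sketch of such a rate-dependent model (the coefficient names, the cross term, and the synthetic data below are illustrative assumptions, not the original formula):

```python
import numpy as np

# Synthetic samples from several runs with different heating rates:
# temperature T (deg C), rate dT (deg C/s), and the observed bias (deg/s).
temp = np.concatenate([np.linspace(-20.0, 60.0, 200)] * 3)
dtemp = np.concatenate([np.full(200, r) for r in (0.01, 0.05, 0.10)])
bias = (0.02 + 1.5e-3 * temp - 2.0e-5 * temp**2 + 0.3 * dtemp
        + np.random.normal(0.0, 5e-4, temp.size))

# Design matrix for a model of the form
#   Bias = k0 + k1*T + k2*T^2 + k3*T^3 + k4*dT + k5*T*dT
A = np.column_stack([np.ones_like(temp), temp, temp**2, temp**3,
                     dtemp, temp * dtemp])
coeffs, *_ = np.linalg.lstsq(A, bias, rcond=None)
print("identified coefficients:", coeffs)
```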

That seemed better still, so I kept opening the chamber door, putting the IMU in, closing the chamber door, and trying new models. After days or even weeks a model finally gets chosen, not because the most accurate one has been found, but because the whole exercise has become tedious; it is close enough, so call it done.

1.2 Piecewise fitting

The reason I tried again and again and was still not satisfied in the end is that "bias versus temperature" curves tend to have odd shapes: a high order overfits, while a low order does not fit well enough. So it is natural to fit the curve in several segments; the whole may be irregular, but it can be broken into several well-behaved pieces.

This is a workable method, and segmentation schemes appear everywhere in the literature. Of course, for a paper, plain segmentation is not compelling enough; it has to sound theoretically sophisticated, so you will often see segmentation dressed up with extra logic. For example, near the junction of two segments, should you trust the model of the left segment or the right one? To handle this, a fuzzy transition can be introduced: a point near the junction belongs to the left model with some weight and to the right model with the remaining weight. A sketch of this idea is given below.
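One possible sketch of piecewise fitting with such a fuzzy junction, assuming a single split point (the split temperature, overlap width, and data below are made up for illustration):

```python
import numpy as np

# Synthetic bias-temperature samples with a kink around 20 deg C.
temp = np.linspace(-20.0, 60.0, 400)
bias = np.where(temp < 20.0, 0.02 + 2e-3 * temp, 0.06 - 1e-3 * (temp - 20.0))
bias = bias + np.random.normal(0.0, 5e-4, temp.size)

# Fit a low-order polynomial on each segment, with a small overlap around
# the junction so the two models can later be blended ("fuzzy" segmentation).
split, overlap = 20.0, 5.0
left = temp <= split + overlap
right = temp >= split - overlap
p_left = np.polyfit(temp[left], bias[left], 2)
p_right = np.polyfit(temp[right], bias[right], 2)

def predict_bias(t):
    """Blend the two segment models with a weight that ramps across the overlap."""
    w = np.clip((t - (split - overlap)) / (2.0 * overlap), 0.0, 1.0)  # 0 = left, 1 = right
    return (1.0 - w) * np.polyval(p_left, t) + w * np.polyval(p_right, t)
```

Near the junction both models contribute, so the compensated output does not jump when the temperature crosses the split point.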

1.3 Fitting based on neural networks

As the hot discipline of the moment, deep learning has spread everywhere, and no field is immune. It completes model identification and parameter identification in a single step, sparing us the trouble of assuming a model.

Although deep learning's intrusion into some fields has been controversial, I don't think using it here is mere meaningless alchemy.

First, neural networks were already being used for IMU temperature compensation before the current deep-learning boom; the complexity of the temperature compensation model creates a genuine need for them.

Second, the fields that resist deep learning are mostly those with extremely clear physical models. For example, when I see deep learning used to solve the inertial navigation pose itself, I find it off-putting, because the inertial navigation error model is about as transparent as a model can be. But for something like the temperature compensation model, which genuinely needs to be identified, letting a neural network go "end to end" does not seem wrong. Of course, you need much more data, otherwise overfitting will be severe.
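As a sketch only, here is what an "end-to-end" fit might look like with a small network, using scikit-learn's MLPRegressor on synthetic (temperature, rate) samples; the network size, data, and hyperparameters are illustrative assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic training data: (temperature, temperature rate) -> bias.
rng = np.random.default_rng(0)
temp = rng.uniform(-20.0, 60.0, 5000)
dtemp = rng.uniform(-0.1, 0.1, 5000)
bias = (0.02 + 1.5e-3 * temp - 2.0e-5 * temp**2 + 0.3 * dtemp
        + rng.normal(0.0, 5e-4, temp.size))

X = np.column_stack([temp, dtemp])
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, bias)

# Run-time compensation: subtract the predicted bias from the raw reading.
predicted_bias = model.predict(np.array([[25.0, 0.02]]))
```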

1.4 Fitting based on support vector machine (SVM)

The main idea of SVM is to map the data into a high-dimensional space and fit it there. Since the "bias versus temperature" curve is hard to fit in two dimensions, map it into a higher-dimensional space, where the data becomes more nearly linear and easier to identify.
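A minimal sketch using scikit-learn's SVR with an RBF kernel, which performs exactly this implicit mapping into a high-dimensional feature space (the kernel choice, hyperparameters, and data below are illustrative assumptions):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic bias-temperature samples.
rng = np.random.default_rng(0)
temp = rng.uniform(-20.0, 60.0, 1000)
bias = 0.02 + 1.5e-3 * temp - 2.0e-5 * temp**2 + rng.normal(0.0, 5e-4, temp.size)

# The RBF kernel implicitly maps the 1-D temperature into a high-dimensional
# space where the regression becomes (close to) linear.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=1e-4))
model.fit(temp.reshape(-1, 1), bias)

predicted_bias = model.predict(np.array([[25.0]]))
```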

2. Summary and reflections

1. Why are temperature-related models so complicated?

Fundamentally, the change in device bias with temperature is caused by thermal deformation of the device's material. The device is not an ideal point mass but a block of material, so it does not have a single temperature; it has a temperature field.

For measurement, however, we can only read the temperature at one point; the actual temperature changes at other points cannot be observed. The temperatures of the different points are correlated through heat conduction, and modeling the temperature field can remove part of the effect, but with a complicated and changing external environment it is simply impossible for one model to predict every point. Worse still, predicting the temperature field is only the first step: the relationship between material deformation and temperature change, and between device bias and material deformation, are themselves extremely complicated.

Faced with this kind of problem, we cannot hope to find the "correct" model; it exists, but we can't get at it. This brings to mind the famous saying "All models are wrong, but some are useful." If you can't find the correct one, find an approximate one, as long as it works.

2. How to choose the approximate one?

With so many methods, how to choose is a question that must be answered.

Choosing a method really means matching each method's strengths and weaknesses to the actual requirements. Polynomial fitting is clearly the most engineering-friendly: it needs the least computation and has the clearest physical meaning, but its drawback is equally obvious, namely that the fit is not that tight. Methods based on neural networks or SVMs can in theory fit better, but they demand far more data and consume much more computation in actual use.

So if the IMU itself is not especially accurate, polynomial fitting is enough. If you want to push further, add segmentation or raise the order a little. If you really have to fight for the last bit of residual, building a neural network is not out of the question; this is not just something for papers, it is used in real projects, but you will have to open and close the chamber door many more times and put in a bit more effort.

3. How to design sample data?

We know that fitting is ultimately a matter of solving equations. When constructing the equations, the coefficients multiplying each unknown must vary sufficiently; only then can the equations be solved, and solved well, in other words, correctly identified.

If the model we build contains a temperature-rate term but we only supply a single heating run when fitting, we are fooling ourselves, because in that data each temperature point corresponds to only one rate of change. Good sample data therefore means raising and lowering the temperature repeatedly, at different heating and cooling rates, so that the data is rich and varied. The small check below illustrates this.
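A quick way to see the problem is to check the column rank of the design matrix; the toy model and rates below are purely illustrative:

```python
import numpy as np

def design_matrix(temp, dtemp):
    """Design matrix for a toy model Bias = k0 + k1*T + k2*dT."""
    return np.column_stack([np.ones_like(temp), temp, dtemp])

temp = np.linspace(-20.0, 60.0, 200)

# One run at a single heating rate: the dT column is a constant multiple of
# the all-ones column, so the model is not identifiable.
A_single = design_matrix(temp, np.full_like(temp, 0.05))
print("single rate, rank:", np.linalg.matrix_rank(A_single))   # 2 of 3

# Several runs at different rates restore full column rank.
A_multi = design_matrix(np.tile(temp, 3),
                        np.repeat([0.01, 0.05, 0.10], temp.size))
print("mixed rates, rank:", np.linalg.matrix_rank(A_multi))    # 3 of 3
```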

Source: blog.csdn.net/scott198510/article/details/129493639