The difference and derivation of the Kalman filter and the extended Kalman filter

1. Derivation of the Kalman filter:

1) First look at the state-space model of the stochastic system (linear). "Linear" means that the state-transition (recursion) equation and the measurement equation are linear in the state:

$$x_k = \Phi_{k,k-1}\,x_{k-1} + \Gamma_{k-1}\,w_{k-1}$$
$$z_k = H_k\,x_k + v_k$$

Here $\Phi_{k,k-1}$ is the state-transition matrix, $\Gamma_{k-1}$ the noise-input matrix, $H_k$ the measurement matrix, and $w_{k-1}$, $v_k$ are zero-mean white noises with covariances $Q_{k-1}$ and $R_k$. For the full parameter definitions, see a standard text.
2) The state estimation error at time k-1 is the true value at time k-1 minus the optimal state estimate at time k-1 (the deviation between the true value and the estimated value):

$$\tilde{x}_{k-1} = x_{k-1} - \hat{x}_{k-1}$$
3) From the optimal state estimate at time k-1 and the system state equation, the state at time k (the current time) can be predicted (the optimal one-step prediction):

$$\hat{x}_{k,k-1} = \Phi_{k,k-1}\,\hat{x}_{k-1}$$
4) The state one-step prediction error is:

$$\tilde{x}_{k,k-1} = x_k - \hat{x}_{k,k-1}$$
5) Substituting 1) and 3) into 4) and using 2) gives:

$$\tilde{x}_{k,k-1} = \Phi_{k,k-1}\,\tilde{x}_{k-1} + \Gamma_{k-1}\,w_{k-1}$$

With the state one-step prediction error in hand, the one-step prediction mean square error matrix follows:

$$P_{k,k-1} = E\big[\tilde{x}_{k,k-1}\tilde{x}_{k,k-1}^T\big] = \Phi_{k,k-1}\,P_{k-1}\,\Phi_{k,k-1}^T + \Gamma_{k-1}\,Q_{k-1}\,\Gamma_{k-1}^T$$

while the state estimation mean square error matrix at time k-1 is:

$$P_{k-1} = E\big[\tilde{x}_{k-1}\tilde{x}_{k-1}^T\big]$$
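As a minimal numerical sketch of this prediction step, here is a hypothetical constant-velocity model (all matrix values below are made up for illustration, not from the text):

```python
import numpy as np

# Hypothetical constant-velocity model: state = [position, velocity], dt = 1
Phi = np.array([[1.0, 1.0],    # state-transition matrix Phi_{k,k-1}
                [0.0, 1.0]])
Gamma = np.array([[0.5],       # noise-input matrix Gamma_{k-1}
                  [1.0]])
Q = np.array([[0.1]])          # process-noise variance Q_{k-1}

x_est = np.array([0.0, 1.0])   # optimal state estimate at time k-1
P_est = np.eye(2)              # estimation MSE matrix P_{k-1}

# One-step prediction of the state and its mean square error matrix
x_pred = Phi @ x_est
P_pred = Phi @ P_est @ Phi.T + Gamma @ Q @ Gamma.T
```

Note how the prediction inflates the uncertainty: `P_pred` is larger than `P_est` because the process noise $\Gamma Q \Gamma^T$ is added on top of the propagated covariance.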
The measurement is handled in the same way:
1) From the state one-step prediction and the measurement equation of the system, the measurement at time k can be predicted:

$$\hat{z}_{k,k-1} = H_k\,\hat{x}_{k,k-1}$$

2) The measurement one-step prediction error is:

$$\tilde{z}_{k,k-1} = z_k - \hat{z}_{k,k-1}$$

3) Substituting the measurement equation and formula 1) into 2) gives the measurement one-step prediction error:

$$\tilde{z}_{k,k-1} = H_k\,\tilde{x}_{k,k-1} + v_k$$

4) From this, the mean square error matrix of the measurement one-step prediction, and the cross mean square error matrix between the state one-step prediction and the measurement one-step prediction, are:

$$E\big[\tilde{z}_{k,k-1}\tilde{z}_{k,k-1}^T\big] = H_k\,P_{k,k-1}\,H_k^T + R_k$$
$$E\big[\tilde{x}_{k,k-1}\tilde{z}_{k,k-1}^T\big] = P_{k,k-1}\,H_k^T$$
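These measurement-side quantities can be sketched numerically with a hypothetical 2-state example and a position-only measurement (all values below are illustrative assumptions):

```python
import numpy as np

H = np.array([[1.0, 0.0]])       # measurement matrix H_k (observe position only)
R = np.array([[0.5]])            # measurement-noise variance R_k

x_pred = np.array([1.0, 1.0])    # state one-step prediction
P_pred = np.array([[2.025, 1.05],
                   [1.05, 1.1]]) # its mean square error matrix
z = np.array([1.2])              # actual measurement z_k

z_pred = H @ x_pred              # measurement one-step prediction
innovation = z - z_pred          # measurement one-step prediction error
S = H @ P_pred @ H.T + R         # its mean square error matrix
cross = P_pred @ H.T             # cross MSE matrix with the state prediction
```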
So far, the state equation alone could be used to estimate the current state via the one-step prediction, but since no information from the measurement equation has been used, the estimation accuracy would be low. Because the measurement one-step prediction error contains information about the state one-step prediction error, the prediction is corrected by the measurement one-step prediction error to obtain the final optimal state estimate:

$$\hat{x}_k = \hat{x}_{k,k-1} + K_k\big(z_k - H_k\,\hat{x}_{k,k-1}\big)$$
5) Simplifying the above formula:

$$\hat{x}_k = (I - K_k H_k)\,\Phi_{k,k-1}\,\hat{x}_{k-1} + K_k\,z_k$$
It can be seen that the current state estimate is a linear combination (a weighted estimate) of the previous state estimate and the current measurement: the prior estimate is corrected by the measurement error to give the posterior estimate.
Next, find the K that minimizes the posterior estimation error (the state estimation error at the current time):
6) The state estimation error at the current time k is:

$$\tilde{x}_k = x_k - \hat{x}_k$$
7) Substituting formula 5) into 6) gives:

$$\tilde{x}_k = (I - K_k H_k)\,\tilde{x}_{k,k-1} - K_k\,v_k$$
8) In the same way, the mean square error matrix of the state estimation at time k (the current time) is:

$$P_k = (I - K_k H_k)\,P_{k,k-1}\,(I - K_k H_k)^T + K_k\,R_k\,K_k^T$$
9) The estimation error is a random vector; "minimizing the error" is defined as minimizing the sum of the mean square errors of its components, that is:

$$\min\; E\big[\tilde{x}_k^T\,\tilde{x}_k\big]$$

which is equivalent to:

$$\min\; \mathrm{tr}(P_k)$$
10) Because a mean square error matrix is necessarily symmetric, formula 8) can be expanded as:

$$P_k = P_{k,k-1} - K_k H_k P_{k,k-1} - P_{k,k-1} H_k^T K_k^T + K_k\big(H_k P_{k,k-1} H_k^T + R_k\big)K_k^T$$
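That formula 8) and its expansion agree can be checked numerically for an arbitrary (not yet optimal) gain K; this is a sketch with random illustrative matrices, not part of the original derivation:

```python
import numpy as np

rng = np.random.default_rng(1)
P_pred = rng.standard_normal((2, 2))
P_pred = P_pred @ P_pred.T            # symmetric positive semidefinite P_{k,k-1}
H = rng.standard_normal((1, 2))       # measurement matrix H_k
R = np.array([[0.5]])                 # measurement-noise variance R_k
K = rng.standard_normal((2, 1))       # arbitrary gain, not yet the optimal one
I = np.eye(2)

# Formula 8): the symmetric ("Joseph"-style) expression
P_a = (I - K @ H) @ P_pred @ (I - K @ H).T + K @ R @ K.T
# Formula 10): its expansion
P_b = (P_pred - K @ H @ P_pred - P_pred @ H.T @ K.T
       + K @ (H @ P_pred @ H.T + R) @ K.T)
```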
11) Taking the trace on both sides of formula 10) gives:

$$\mathrm{tr}(P_k) = \mathrm{tr}(P_{k,k-1}) - 2\,\mathrm{tr}\big(K_k H_k P_{k,k-1}\big) + \mathrm{tr}\Big(K_k\big(H_k P_{k,k-1} H_k^T + R_k\big)K_k^T\Big)$$

The above formula is quadratic in the undetermined parameter matrix K, so tr(P_k) must have an extremum (by its probabilistic meaning, it should be a minimum here).
To find this extremum by differentiation, two identities for the derivative of a trace with respect to a matrix are introduced:

$$\frac{\partial\,\mathrm{tr}(AB)}{\partial A} = B^T$$
$$\frac{\partial\,\mathrm{tr}(ABA^T)}{\partial A} = 2AB \quad (B\ \text{symmetric})$$
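The two trace-derivative identities can be verified against finite differences (a quick sanity check with random matrices; `num_grad` is a helper written for this sketch, not a library function):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 2))
C = rng.standard_normal((3, 3))
C = C + C.T                     # symmetric, as the second identity requires

def num_grad(f, A, eps=1e-6):
    """Central-difference gradient of scalar f with respect to matrix A."""
    G = np.zeros_like(A)
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            Ap = A.copy(); Ap[i, j] += eps
            Am = A.copy(); Am[i, j] -= eps
            G[i, j] = (f(Ap) - f(Am)) / (2 * eps)
    return G

# d tr(AB) / dA = B^T
g1 = num_grad(lambda X: np.trace(X @ B), A)
# d tr(A C A^T) / dA = 2 A C   (C symmetric)
g2 = num_grad(lambda X: np.trace(X @ C @ X.T), A)
```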
Differentiating both sides with respect to K:

$$\frac{\partial\,\mathrm{tr}(P_k)}{\partial K_k} = -2\big(H_k P_{k,k-1}\big)^T + 2\,K_k\big(H_k P_{k,k-1} H_k^T + R_k\big)$$

Setting the derivative to zero at the extremum gives:

$$K_k = P_{k,k-1}\,H_k^T\big(H_k P_{k,k-1} H_k^T + R_k\big)^{-1}$$
That is, when K takes the value above, the current state estimation error is minimized. Substituting this K back into formula 10) gives the mean square error matrix at time k:

$$P_k = (I - K_k H_k)\,P_{k,k-1}$$
So far, the five formulas of the Kalman filter have been derived:

$$\hat{x}_{k,k-1} = \Phi_{k,k-1}\,\hat{x}_{k-1}$$
$$P_{k,k-1} = \Phi_{k,k-1}\,P_{k-1}\,\Phi_{k,k-1}^T + \Gamma_{k-1}\,Q_{k-1}\,\Gamma_{k-1}^T$$
$$K_k = P_{k,k-1}\,H_k^T\big(H_k P_{k,k-1} H_k^T + R_k\big)^{-1}$$
$$\hat{x}_k = \hat{x}_{k,k-1} + K_k\big(z_k - H_k\,\hat{x}_{k,k-1}\big)$$
$$P_k = (I - K_k H_k)\,P_{k,k-1}$$
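The five formulas can be collected into a single recursion step; this is a minimal sketch in NumPy (the function name and the matrix-shaped arguments are this sketch's own conventions):

```python
import numpy as np

def kalman_step(x_est, P_est, z, Phi, Gamma, Q, H, R):
    """One cycle of the five Kalman formulas derived above."""
    # (1) state one-step prediction
    x_pred = Phi @ x_est
    # (2) one-step prediction mean square error matrix
    P_pred = Phi @ P_est @ Phi.T + Gamma @ Q @ Gamma.T
    # (3) filter gain
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    # (4) state estimate: the prediction corrected by the innovation
    x_est = x_pred + K @ (z - H @ x_pred)
    # (5) estimation mean square error matrix
    P_est = (np.eye(len(x_est)) - K @ H) @ P_pred
    return x_est, P_est
```

Called in a loop over incoming measurements `z`, the pair `(x_est, P_est)` carries all the filter's memory from one time step to the next.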

2. Derivation of the extended Kalman filter:

1) First look at the state-space model of the stochastic system (nonlinear):

$$x_k = f(x_{k-1}) + w_{k-1}$$
$$z_k = h(x_k) + v_k$$

The only difference from the Kalman filter is that the extended Kalman filter deals with a nonlinear model, so the state-transition and measurement equations must each be linearized by a first-order Taylor expansion, which yields their Jacobian matrices.
1) Compute the state estimation error at time k-1 in the same way:

$$\tilde{x}_{k-1} = x_{k-1} - \hat{x}_{k-1}$$
2) From the optimal state estimate at time k-1 and the system state equation, the state at time k (the current time) can be predicted (the optimal one-step prediction):

$$\hat{x}_{k,k-1} = f(\hat{x}_{k-1})$$

3) The state one-step prediction error is:

$$\tilde{x}_{k,k-1} = x_k - \hat{x}_{k,k-1} = f(x_{k-1}) - f(\hat{x}_{k-1}) + w_{k-1}$$
In the Kalman derivation this step substitutes the system equation directly, but here the system must be linearized first:

$$f(x_{k-1}) \approx f(\hat{x}_{k-1}) + F_{k-1}\,\tilde{x}_{k-1}, \qquad F_{k-1} = \left.\frac{\partial f}{\partial x}\right|_{\hat{x}_{k-1}}$$

and in the same way for the measurement:

$$h(x_k) \approx h(\hat{x}_{k,k-1}) + H_k\,\tilde{x}_{k,k-1}, \qquad H_k = \left.\frac{\partial h}{\partial x}\right|_{\hat{x}_{k,k-1}}$$
Therefore, the original nonlinear system can be rewritten as a (locally) linear system:

$$\tilde{x}_{k,k-1} \approx F_{k-1}\,\tilde{x}_{k-1} + w_{k-1}$$
$$\tilde{z}_{k,k-1} \approx H_k\,\tilde{x}_{k,k-1} + v_k$$

and the five formulas become:

$$\hat{x}_{k,k-1} = f(\hat{x}_{k-1})$$
$$P_{k,k-1} = F_{k-1}\,P_{k-1}\,F_{k-1}^T + Q_{k-1}$$
$$K_k = P_{k,k-1}\,H_k^T\big(H_k P_{k,k-1} H_k^T + R_k\big)^{-1}$$
$$\hat{x}_k = \hat{x}_{k,k-1} + K_k\big(z_k - h(\hat{x}_{k,k-1})\big)$$
$$P_k = (I - K_k H_k)\,P_{k,k-1}$$
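A minimal sketch of the rewritten five formulas, using a hypothetical scalar nonlinear system invented for illustration (the functions f, h and their derivatives below are assumptions, not from the text):

```python
import numpy as np

# Hypothetical scalar nonlinear system:  x_k = f(x_{k-1}) + w,  z_k = h(x_k) + v
f  = lambda x: np.sin(x) + 0.5 * x
h  = lambda x: x ** 2
Fj = lambda x: np.cos(x) + 0.5   # derivative (1x1 Jacobian) of f
Hj = lambda x: 2.0 * x           # derivative (1x1 Jacobian) of h

def ekf_step(x_est, P_est, z, Q, R):
    """One cycle of the rewritten (extended Kalman) five formulas."""
    # (1) one-step prediction passes through the nonlinear f itself
    x_pred = f(x_est)
    F = Fj(x_est)                # Jacobian evaluated at the last estimate
    # (2) prediction MSE with the Jacobian in place of Phi
    P_pred = F * P_est * F + Q
    H = Hj(x_pred)               # Jacobian evaluated at the prediction
    # (3) filter gain
    K = P_pred * H / (H * P_pred * H + R)
    # (4) correction uses the nonlinear h, not H * x_pred
    x_est = x_pred + K * (z - h(x_pred))
    # (5) estimation MSE
    P_est = (1.0 - K * H) * P_pred
    return x_est, P_est
```

Note the asymmetry: the nonlinear f and h are used to propagate the state and form the innovation, while their Jacobians F and H appear only in the covariance and gain computations.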

Origin blog.csdn.net/qq_37967853/article/details/131790395