Anatomy of the Kalman family from scratch - (05) Kalman filter: formula derivation, easy to understand at the one-dimensional level

Series navigation: this column belongs to my SLAM-from-scratch series. The catalog post, "The Kalman family's anatomy from scratch - (00) Catalog (latest, no blind spots)", is here: https://blog.csdn.net/weixin_43013761/article/details/133846882
 

Solemn declaration: this series of blog posts is exclusively owned by me (WenhaiZhu). Reprinting and plagiarism are prohibited; thank you for reporting violations promptly!
 

I. Introduction

        Through the previous posts in this series we have built up a certain understanding of Kalman filtering, and the whole pipeline was sorted out in the previous post. First of all, Bayesian filtering is only a conceptual framework; it cannot be applied directly. Kalman filtering is one instantiation of it, though not yet a complete one: to actually implement the Kalman filter algorithm, a further modeling step based on the actual application is needed.

        Kalman filtering is derived under the LG (Linear Gaussian) system, so it applies only to LG mathematical models. To apply it to NL (nonlinear) or NG (non-Gaussian) systems, variant algorithms such as the EKF (Extended Kalman Filter) are needed; those will be analyzed in detail in later posts. The previous post summarized the formulas as follows:
$$\color{Green} \tag{01} f_{X_k}^+(x)=\eta_k \cdot f_{X_k \mid Y_k}(x) \cdot f_{X_k}^-(x) =\eta_k \cdot f_{R_{k}}\left[y_{k}-h(x)\right]\cdot \int_{-\infty}^{+\infty} f_{Q_{k}}[x-f(v)]\, f_{X_{k-1}}^{+}(v)\, \mathrm{d} v$$

$$\color{Green} \tag{02} \eta_k=\left[\int_{-\infty}^{+\infty} f_{R_{k}}\left[y_{k}-h(x)\right]\cdot \int_{-\infty}^{+\infty} f_{Q_{k}}[x-f(v)]\, f_{X_{k-1}}^{+}(v)\, \mathrm{d} v\, \mathrm{d} x\right]^{-1}$$

As analyzed before, the main culprits that hinder the implementation of the algorithm are the two infinite integrals above, so the core of what follows is how to avoid the infinite integrals, or how to solve them directly. Now let us see how the Kalman filter avoids the infinite integrals at the source. The main steps are as follows:

$\color{blue}(01)$ The state transition function $f(x)$ and the observation function $h(x)$ are both linear (below, $\check{.}$ denotes a prior quantity and $\hat{.}$ a posterior quantity):

$$\color{Green} \tag{03} \text{State equation:}~~~~~~\check x_{k}=f\hat x_{k-1}+q_k~~~~~~~~~(f \text{ is a one-dimensional constant})\\ \text{Observation equation:}~~~~~~y_k=h\check x_k+r_k~~~~~~~~~~~~~~(h \text{ is a one-dimensional constant})$$

$\color{blue}(02)$ In the above, $q_k \in Q_k \sim N(\mu_{q_k},\sigma^2_{q_k})$ and $r_k \in R_k \sim N(\mu_{r_k},\sigma^2_{r_k})$, i.e. both noises follow normal distributions.

It should be noted that the derivation below starts from a one-dimensional example and is extended to higher dimensions later, because the one-dimensional case does not involve matrices, multivariate Gaussians, covariance matrices, and so on; this is what is meant by seeing the essence through the phenomenon. Although the one-dimensional filter is rarely used in practice, it is still the most appropriate way to understand the lowest-level principles. Because everything is one-dimensional, lowercase letters are used. In addition, the assumption above is an instantiation of a random process: $\check x_{k}$, $\hat x_{k}$, $q_k$, $r_k$ represent specific values of the random process rather than random variables. Furthermore, the derivation of formula (02) relies on the assumptions:

$\color{blue}(03):$ $X_0, Q_1, Q_2, Q_3, \dots, Q_k$ are mutually independent.
$\color{blue}(04):$ $X_1$ and $R_1, R_2, R_3, \dots, R_k$ are mutually independent.

It must be kept in mind that in practical applications these assumptions cannot be ignored; otherwise Bayesian filtering does not apply. Besides the assumptions above, two additional knowledge points are required:

$\color{blue}(05):$ A normal distribution remains a normal distribution after a linear transformation.
$\color{blue}(06):$ The product of two normal densities is still (up to a normalization constant) a normal density (remember the conclusion first; it will be derived later).

The result of multiplying two normal densities is as follows (again, remember the result first; do not get stuck on the derivation):
$$\color{Green} \tag{04} x_1 \in X_1 \sim N(\mu_1,\sigma_1^2)~~~~~~~~~~~~~x_2 \in X_2 \sim N(\mu_2,\sigma_2^2)$$
$$\color{Green} \tag{05} f(x_1)\cdot f(x_2)=N\left(\frac{\sigma_{1}^{2}}{\sigma_{1}^{2}+\sigma_{2}^{2}} \mu_{2}+\frac{\sigma_{2}^{2}}{\sigma_{1}^{2}+\sigma_{2}^{2}} \mu_{1},\ \frac{\sigma_{1}^{2} \sigma_{2}^{2}}{\sigma_{1}^{2}+\sigma_{2}^{2}}\right)$$
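As a sanity check of (04)/(05), here is a minimal numerical sketch (the means and variances below are arbitrary, and SciPy is assumed to be available): multiply two Gaussian densities pointwise on a grid, renormalize, and compare the measured mean/variance with the formula.

```python
import numpy as np
from scipy.stats import norm

mu1, var1 = 2.0, 0.5   # hypothetical N(mu1, sigma1^2)
mu2, var2 = 3.0, 1.5   # hypothetical N(mu2, sigma2^2)

x = np.linspace(-10.0, 15.0, 200_001)
dx = x[1] - x[0]
prod = norm.pdf(x, mu1, np.sqrt(var1)) * norm.pdf(x, mu2, np.sqrt(var2))

prod /= (prod * dx).sum()                       # renormalize to a proper density
mean_num = (x * prod * dx).sum()                # numerical mean of the product
var_num = ((x - mean_num) ** 2 * prod * dx).sum()

mean_formula = (var1 * mu2 + var2 * mu1) / (var1 + var2)   # formula (05), mean
var_formula = var1 * var2 / (var1 + var2)                  # formula (05), variance

print(mean_num, mean_formula)   # both ≈ 2.25
print(var_num, var_formula)     # both ≈ 0.375
```

Note the renormalization step: the pointwise product of two Gaussian pdfs is a scaled Gaussian, which is exactly why the normalization constant $\eta_k$ appears in (01).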

2. Guiding idea

        First of all, let us analyze the situation as a whole. Assume that the posterior probability density of $\hat x_{k-1}$, denoted $f_{k-1}^+$, is normal, $N(\mu_{x_{k-1}},\sigma_{x_{k-1}}^2)$. Then, from the state transition $\check x_{k}=f\hat x_{k-1}+q_k$ together with knowledge point $\color{blue}(05)$ (a normal distribution stays normal under a linear transformation), the prior density $f^-_{X_k}(x)$ at time $k$ is also normal: if the posterior at time $k-1$ is Gaussian, so is the prior at time $k$. Now look again at formula (01):

$$\color{Green} \tag{06} f_{X_k}^+(x)=\eta_k \cdot f_{X_k \mid Y_k}(x) \cdot f_{X_k}^-(x)$$

If $f_{X_k \mid Y_k}(x)$ is also Gaussian, then, since $\eta_k$ is a constant, $f_{X_k}^+(x)$ is Gaussian and the recursion closes. From formula (01):

$$\color{Green} \tag{07} f_{X_k \mid Y_k}(x)=f_{R_{k}}\left[y_{k}-h(x)\right]$$

Because $h(x)$ is a linear function, $y_{k}-h(x)$ merely translates or scales the original normal density, so the result $f_{X_k \mid Y_k}(x)$ is again a normal density.

$\color{red}\text{Core:}$ From the derivation above it can be seen that if $f_{k-1}^+$ is a normal distribution, then $f_{X_k}^-(x)$ and $f_{X_k \mid Y_k}(x)$ are both normal, and $\eta_k$ is just a constant. By knowledge point $\color{blue}(06)$ (the product of two normal densities is still normal), $f_{X_k}^+(x)$ is therefore also normal. Finally, if $x_0$ follows a normal distribution, one can keep deriving forward: $\hat x_1$, $\hat x_2$, $\cdots$, $\hat x_{k-1}$, $\hat x_k$ all follow normal distributions.

One more note: $\eta$ is a constant, and its expression was derived in "Dissection of the Kalman family from scratch - (01) Preliminary knowledge points":

$$\color{Green} \tag{08} \eta=\left[f_{Y_{k}}(y_k)\right]^{-1}=\left[\int_{-\infty}^{+\infty} f_{Y_k \mid X_k}(y_k \mid x)\, f_{X}(x)\, \mathrm d x\right]^{-1}$$

Its essence is the law of total probability for continuous random variables, which is actually fairly easy to understand: once $y_k$ has been observed, $P(y_k)$ is determined (it has already been assumed that $f_{Y_k}$ is a normal density). The total probability of observing $y_k$ is simply the probability of observing $Y_k=y_k$ integrated over all possible states $x$.
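The following small sketch illustrates what $\eta_k$ in (08)/(02) does; the observation model constants, the observed value and the prior parameters are all made up, and grid integration stands in for the infinite integral. Multiplying the unnormalized product "likelihood × prior" by $\eta_k$ yields a density that integrates to 1.

```python
import numpy as np
from scipy.stats import norm

h, sigma_R = 1.2, 0.8              # hypothetical observation model y = h*x + r, var(r) = sigma_R
y_k = 2.0                          # hypothetical observation
prior_mean, prior_var = 1.0, 0.5   # assumed prior f_{X_k}^-(x)

x = np.linspace(-10.0, 10.0, 100_001)
dx = x[1] - x[0]

likelihood = norm.pdf(y_k - h * x, 0.0, np.sqrt(sigma_R))  # f_{R_k}[y_k - h(x)]
prior = norm.pdf(x, prior_mean, np.sqrt(prior_var))        # f_{X_k}^-(x)

unnormalized = likelihood * prior
eta_k = 1.0 / (unnormalized * dx).sum()    # eq. (02)/(08) approximated as a grid sum

posterior = eta_k * unnormalized
print((posterior * dx).sum())              # ≈ 1.0, i.e. a proper density
```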

3. Formula derivation

Before deriving further, first be clear about the known conditions and the goal of the derivation, so as not to lose track of what is being computed:
$$\color{Green} \tag{09} \text{Known:}~\hat x_{k-1}~~~~~~~~~~~~~~~~~~\text{Solve for:}~\hat x_{k}$$

Of course, with only one known condition there is no way to proceed, so the additional LG (linear Gaussian system) assumptions are needed for the inference: $f_{X_{k}}^-(x)$, $f_{X_{k}}^+(x)$, $f_{Q_{k}}(x)$ and $f_{R_{k}}(x)$ are all Gaussian densities of the form

$$\color{Green} \tag{10} f(x)=\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{(x-\mu)^{2}}{2\sigma^{2}}}~~~~~~~~~~~X \sim N\left(\mu, \sigma^{2}\right)$$

and furthermore

$$\color{Green} \tag{11} f_{Q_{k}}(x) \sim N(0,\sigma_{Q_{k-1}})~~~~~~~~~f_{R_{k}}(x) \sim N(0,\sigma_{R_{k}})$$

In the above, $\mu$ denotes the mean and $\sigma$ the standard deviation. In addition, the state transition equation and the observation equation in (03) need to be instantiated. Here we assume:

$$\color{Green} \tag{12} \text{State equation:}~~~~~~\check x_{k}=f\hat x_{k-1}+q_k~~~~~~~~~(f \text{ is a one-dimensional constant})\\ \text{Observation equation:}~~~~~~y_k=h\check x_k+r_k~~~~~~~~~~~~~~(h \text{ is a one-dimensional constant})$$

This assumption is relatively simple: $f$ and $h$ are two parameters that do not change over time. The case where $f$ and $h$ vary with time is not considered here; the derivation would be more involved, although of course the applications would be broader.
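To make the scalar LG system concrete, here is a minimal simulation of the instantiated model (12); the constants `f`, `h` and the noise variances `sigma_Q`, `sigma_R` are made-up values for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

f, h = 0.9, 1.0                   # scalar constants of the state/observation equations
sigma_Q, sigma_R = 0.04, 0.25     # process / observation noise variances (assumed)

K = 50
x_true = np.empty(K + 1)
y = np.empty(K + 1)
x_true[0] = 1.0                   # assumed true initial state

for k in range(1, K + 1):
    q_k = rng.normal(0.0, np.sqrt(sigma_Q))   # q_k ~ N(0, sigma_Q)
    r_k = rng.normal(0.0, np.sqrt(sigma_R))   # r_k ~ N(0, sigma_R)
    x_true[k] = f * x_true[k - 1] + q_k       # state equation
    y[k] = h * x_true[k] + r_k                # observation equation

print(x_true[:5], y[1:5])
```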

1. Prediction step

First, referring to equation (01), the relationship between $f_{X_{k}}^-$ and $f_{X_{k-1}}^+$ can be read off as follows:
$$\color{Green} \tag{11} f_{X_k}^-(x) =\int_{-\infty}^{+\infty} f_{Q_{k}}[x-f(v)]\, f_{X_{k-1}}^{+}(v)\, \mathrm{d} v$$

Here $f_{Q_{k}}(x)$ has mean $\color{red}\mu_{Q_{k}}=0$ and variance $\sigma_{Q_{k}}$, and $f_{X_{k-1}}^+(x)$ has mean $\color{red}\hat x_{k-1}$ and variance $\sigma_{X_{k-1}}^+$. Substituting into the density (10) gives:
$$\color{Green} \tag{12} f_{X_{k-1}}^{+}(v)=\frac{1}{\sigma_{X_{k-1}}^{+}\sqrt{2\pi}}e^{-\frac{(v-\hat x_{k-1})^{2}}{2\sigma_{X_{k-1}}^{+2}}}$$
$$\color{Green} \tag{13} f_{Q_{k-1}}(x-f(v))=\frac{1}{\sigma_{Q_{k-1}}\sqrt{2\pi}}e^{-\frac{(x-fv)^{2}}{2 \sigma_{Q_{k-1}}^{2}}} ~~~~~~~~\text{where: } f(v)=fv$$
Noting that $\color{red}\mu_{Q_{k}}=0$, substitute $f_{X_{k-1}}^{+}(v)$ and $f_{Q_{k-1}}(x-f(v))$ into (11):

$$\color{Green} \tag{14} f_{X_k}^-(x) =\int_{-\infty}^{+\infty}\frac{1}{\sigma_{Q_{k-1}}\sqrt{2\pi}}e^{-\frac{(x-fv)^{2}}{2 \sigma_{Q_{k-1}}^{2}}} \cdot \frac{1}{\sigma_{X_{k-1}}^{+}\sqrt{2\pi}} e^{-\frac{(v-\hat x_{k-1})^{2}}{2 \sigma_{X_{k-1}}^{+2}}} \,\mathrm{d} v$$

Looking at this formula, things are actually fairly clear. As mentioned earlier, the integral of the product of two Gaussian densities can be obtained with software such as Mathematica (a manual derivation will be given later); the variable $v$ is integrated out, leaving only the variable $x$, and the answer is as follows (note: in (15) and in all later formulas, $\sigma^{\pm}_{X_k}$, $\sigma_{Q_{k-1}}$ and $\sigma_{R_k}$ stand for variances):
$$\color{red} \tag{15} f_{X_k}^-(x) \sim N(\check x_{k},\sigma^{-}_{X_{k}})=N\left(f\hat x_{k-1},\ f^2\sigma_{X_{k-1}}^{+}+\sigma_{Q_{k-1}}\right)$$
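A minimal sketch of the prediction step (15), with $\sigma$ denoting variances as in the text; the numbers in the test call are arbitrary and carried over from the simulation above.

```python
def predict(x_post_prev, sigma_post_prev, f, sigma_Q):
    """Prediction step (15): prior mean/variance at time k from the posterior at k-1."""
    x_prior = f * x_post_prev                          # prior mean:     x_check_k = f * x_hat_{k-1}
    sigma_prior = f ** 2 * sigma_post_prev + sigma_Q   # prior variance: f^2 * sigma^+_{k-1} + sigma_Q
    return x_prior, sigma_prior

print(predict(1.0, 0.5, 0.9, 0.04))   # (0.9, 0.445)
```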

2. Update step

Having completed the derivation of the prior probability density $f_{X_k}^-(x)$, the next task is the likelihood probability density, i.e. $f_{X_k \mid Y_k}(x)=f_{R_{k}}\left[y_{k}-h(x)\right]$ in formula (01). Frankly, this one looks almost too simple: directly substitute into the earlier formulas (10) and (11):
$$\color{Green} f_{X_k \mid Y_k}(x) =\frac{1}{ \sigma_{R_k} \sqrt{2 \pi}} e^{-\frac{(y_k-hx)^{2}}{2 \sigma_{R_k}^{2}}}$$

This is obviously a normal density. Now it is used to correct the prior $N(\mu_{X_{k}}^-,\sigma^{-}_{X_{k}})$ obtained above into the posterior $N(\mu_{X_{k}}^+,\sigma^{+}_{X_{k}})$; of course, the correction must not forget to multiply by a normalization constant:
$$\color{Green} \tag{16} \eta_k = \left[\int_{-\infty}^{+\infty} f_{X_k \mid Y_k}(x)\cdot f_{X_k}^-(x)\,\mathrm dx\right]^{-1}$$

The answer is again given directly here (a detailed derivation will follow when time permits); what is given is the posterior, whose mean is $\hat x_k$, i.e. the mean of the posterior density $f_{X_k}^+(x)$:
$$\color{red} \tag{17} X^+_k=(\hat x_{k},\sigma^+_{X_{k}}) \sim N\left(\frac{h \sigma_{X_k}^{-} y_{k}+\sigma_{R_k} \check x_k}{h^{2} \sigma_{X_k}^{-}+\sigma_{R_k}},\ \frac{\sigma_{R_k} \sigma_{X_k}^{-}}{h^{2} \sigma_{X_k}^{-}+\sigma_{R_k}}\right)$$
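A minimal sketch of the update step, written exactly in the "product of two Gaussians" form of (17), before any simplification; the $\sigma$ arguments are variances and the numbers in the test call are arbitrary (they continue the prediction example above).

```python
def update(x_prior, sigma_prior, y_k, h, sigma_R):
    """Update step (17): posterior mean/variance from the prior and the observation y_k."""
    denom = h ** 2 * sigma_prior + sigma_R
    x_post = (h * sigma_prior * y_k + sigma_R * x_prior) / denom      # posterior mean
    sigma_post = sigma_R * sigma_prior / denom                        # posterior variance
    return x_post, sigma_post

print(update(0.9, 0.445, 1.1, 1.0, 0.25))   # ≈ (1.0281, 0.1601)
```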

3. Simplify the formula

Although the answer has essentially been obtained through the derivation above, it is not very friendly for programming, so it needs to be simplified further. Before that, let us first reorganize the formulas above; (15) and (17) are equivalent to the following:
$$\color{Green} \tag{18} \check x_{k}=f\hat x_{k-1}~~~~~~~~~~~~~~~~~~~~~~~~\sigma^{-}_{X_{k}}=f^2\sigma_{X_{k-1}}^{+}+\sigma_{Q_{k-1}}$$
$$\color{Green} \tag{19} \hat x_{k}=\frac{h \sigma_{X_k}^{-} y_{k}+\sigma_{R_k} \check x_k}{h^{2} \sigma_{X_k}^{-}+\sigma_{R_k}}~~~~~~~~~~~~~\sigma^+_{X_{k}}=\frac{\sigma_{R_k} \sigma_{X_k}^{-}}{h^{2} \sigma_{X_k}^{-}+\sigma_{R_k}}$$

From these formulas it is intuitively clear that this is a recursion: if $\hat x_0$ and $\sigma^+_{X_{0}}$ are known, together with the observation $y_k$ at each time step, then:
$$\color{Green} \tag{20} [\hat x_0,\sigma^+_{X_{0}},y_1]→[\hat x_1,\sigma^+_{X_{1}},y_2]→\cdots →[\hat x_k,\sigma^+_{X_{k}}]$$

Now rearrange the first formula in (19):

$$\color{Green} \tag{21} \begin{aligned} \hat x_{k}&=\frac{h \sigma_{X_k}^{-} y_{k}+\sigma_{R_k} \check x_k}{h^{2} \sigma_{X_k}^{-}+\sigma_{R_k}} \\&=\frac{h \sigma_{X_k}^{-} y_{k}}{h^{2} \sigma_{X_k}^{-}+\sigma_{R_k}}+\frac{\check x_k(\sigma_{R_k}+h^{2} \sigma_{X_k}^{-})-\check x_k h^{2} \sigma_{X_k}^{-}}{h^{2} \sigma_{X_k}^{-}+\sigma_{R_k}} \\&=\frac{h \sigma_{X_k}^{-} y_{k}}{h^{2} \sigma_{X_k}^{-}+\sigma_{R_k}}+\check x_k-\frac{h^{2} \sigma_{X_k}^{-}\check x_k}{h^{2} \sigma_{X_k}^{-}+\sigma_{R_k}} \\&=\frac{h \sigma_{X_k}^{-}}{h^{2} \sigma_{X_k}^{-}+\sigma_{R_k}}\left(y_k-h\check x_k\right)+\check x_k \end{aligned}$$

and similarly the second formula in (19):

$$\color{Green} \tag{22} \begin{aligned} \sigma^+_{X_{k}}&=\frac{\sigma_{R_k} \sigma_{X_k}^{-}}{h^{2} \sigma_{X_k}^{-}+\sigma_{R_k}} \\&=\frac{(h^2 \sigma_{X_k}^{-}+\sigma_{R_k})\sigma_{X_k}^{-}-h^2 \sigma_{X_k}^{-}\sigma_{X_k}^{-}}{h^{2} \sigma_{X_k}^{-}+\sigma_{R_k}} \\&=\sigma_{X_k}^{-}-\frac{h^2 \sigma_{X_k}^{-}\sigma_{X_k}^{-}}{h^{2} \sigma_{X_k}^{-}+\sigma_{R_k}}\\&=\left(1-\frac{h \sigma_{X_k}^{-}}{h^{2} \sigma_{X_k}^{-}+\sigma_{R_k}}h\right)\sigma_{X_k}^{-} \end{aligned}$$

It can be seen that both share a common term, namely the Kalman gain:

$$\color{Green} \tag{23} k_k=\frac{h \sigma_{X_k}^{-}}{h^{2} \sigma_{X_k}^{-}+\sigma_{R_k}}$$

Substituting (23) back into (21) and (22) gives:

$$\color{Green} \tag{24} \hat x_{k}=k_k(y_k-h\check x_k)+\check x_k~~~~~~~~~~~~~~~~~~\sigma^+_{X_{k}}=(1-hk_k) \sigma_{X_k}^{-}$$
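A quick numerical check (with arbitrary numbers) that the gain form (23)(24) reproduces the direct form (19): both functions below should print the same posterior mean and variance.

```python
def update_direct(x_prior, sigma_prior, y_k, h, sigma_R):       # eq. (19)
    denom = h ** 2 * sigma_prior + sigma_R
    return ((h * sigma_prior * y_k + sigma_R * x_prior) / denom,
            sigma_R * sigma_prior / denom)

def update_gain(x_prior, sigma_prior, y_k, h, sigma_R):         # eqs. (23)(24)
    k_k = h * sigma_prior / (h ** 2 * sigma_prior + sigma_R)    # Kalman gain
    return (x_prior + k_k * (y_k - h * x_prior),
            (1.0 - h * k_k) * sigma_prior)

print(update_direct(0.9, 0.445, 1.1, 1.0, 0.25))
print(update_gain(0.9, 0.445, 1.1, 1.0, 0.25))   # same numbers, ≈ (1.0281, 0.1601)
```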

4. Summary

Through the above series of derivations, the following five formulas are obtained. These are Kalman's five core formulas: the two formulas in (18) (the expanded form of (15)), together with (23) and (24), organized as follows:
$$\color{red} \tag{25} ①:\check x_{k}= f\hat x_{k-1}~~~~~~~~~~~~~~~②:\sigma^{-}_{X_{k}}=f^2\sigma_{X_{k-1}}^{+}+\sigma_{Q_{k-1}}$$
$$\color{red} \tag{26} ③:k_k=\frac{h \sigma_{X_k}^{-}}{h^{2} \sigma_{X_k}^{-}+\sigma_{R_k}}$$
$$\color{red} \tag{27} ④:\hat x_{k}=k_k(y_k-h\check x_k)+\check x_k~~~~~~~~~~~~~~~~~~⑤:\sigma^+_{X_{k}}=(1-hk_k) \sigma_{X_k}^{-}$$
The five formulas above are clearly recursive: if $\hat x_0$ and $\sigma_{X_{0}}^+$ are known, together with the observation $y_k$ at each time step, then $\hat x_k$ and $\sigma_{X_{k}}^+$ can be derived, as follows:
$$\color{Green} \tag{28} [\hat x_0,\sigma_{X_{0}}^+,y_1]→[\hat x_1,\sigma_{X_{1}}^+,y_2]→\cdots→[\hat x_k,\sigma_{X_{k}}^+]$$
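Putting everything together, here is a self-contained sketch of the full scalar recursion ①–⑤ applied to simulated data; all constants (`f`, `h`, noise variances, the initial posterior) are assumed values for illustration, not part of the original derivation.

```python
import numpy as np

rng = np.random.default_rng(0)
f, h = 0.9, 1.0                      # assumed scalar model constants
sigma_Q, sigma_R = 0.04, 0.25        # assumed variances of q_k and r_k

# Simulate the true state and the observations from model (12).
K = 50
x_true, y = np.empty(K + 1), np.empty(K + 1)
x_true[0] = 1.0
for k in range(1, K + 1):
    x_true[k] = f * x_true[k - 1] + rng.normal(0.0, np.sqrt(sigma_Q))
    y[k] = h * x_true[k] + rng.normal(0.0, np.sqrt(sigma_R))

# Kalman recursion (28): [x̂_0, σ⁺_0, y_1] → [x̂_1, σ⁺_1, y_2] → ... → [x̂_k, σ⁺_k]
x_hat, sigma_post = 0.0, 1.0         # assumed initial posterior mean/variance
for k in range(1, K + 1):
    x_check = f * x_hat                                        # ① prior mean
    sigma_prior = f ** 2 * sigma_post + sigma_Q                # ② prior variance
    k_k = h * sigma_prior / (h ** 2 * sigma_prior + sigma_R)   # ③ Kalman gain
    x_hat = x_check + k_k * (y[k] - h * x_check)               # ④ posterior mean
    sigma_post = (1.0 - h * k_k) * sigma_prior                 # ⑤ posterior variance

print(x_hat, x_true[K], sigma_post)  # estimate, true state, posterior variance
```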
However, the derivation above has one big limitation: it is based on the one-dimensional case, and the one-dimensional Kalman filter is rarely used in real applications. Deriving it this way gives a good understanding of the whole derivation process, but it is too restrictive.

In addition, several points have not yet been explained in detail, such as the product and the integral of two Gaussian densities, and linear transformations of Gaussian distributions. Don't worry — subsequent posts will expand on these with detailed analysis and derivations.
