Kalman filtering principle (11) Interval smoothing: forward filtering, reverse filtering, two-way interval smoothing, RTS smoothing

Relationship between optimal forecasting, estimation, and smoothing:

[Figure: relationship between optimal forecasting, estimation, and smoothing]

Three smoothing methods:

Functional and stochastic models:

$$
\left\{\begin{array}{l}
\boldsymbol{X}_{k}=\boldsymbol{\Phi}_{k / k-1} \boldsymbol{X}_{k-1}+\boldsymbol{\Gamma}_{k-1} \boldsymbol{W}_{k-1} \\
\boldsymbol{Z}_{k}=\boldsymbol{H}_{k} \boldsymbol{X}_{k}+\boldsymbol{V}_{k}
\end{array}\right.
\qquad
\left\{\begin{array}{ll}
\mathrm{E}\left[\boldsymbol{W}_{k}\right]=\mathbf{0}, & \mathrm{E}\left[\boldsymbol{W}_{k} \boldsymbol{W}_{j}^{\mathrm{T}}\right]=\boldsymbol{Q}_{k} \delta_{k j} \\
\mathrm{E}\left[\boldsymbol{V}_{k}\right]=\mathbf{0}, & \mathrm{E}\left[\boldsymbol{V}_{k} \boldsymbol{V}_{j}^{\mathrm{T}}\right]=\boldsymbol{R}_{k} \delta_{k j} \\
\mathrm{E}\left[\boldsymbol{W}_{k} \boldsymbol{V}_{j}^{\mathrm{T}}\right]=\mathbf{0} &
\end{array}\right.
$$
Divide the measurement sequence into two segments:

$$
\bar{\boldsymbol{Z}}_{M}=[\underbrace{\boldsymbol{Z}_{1}\ \boldsymbol{Z}_{2} \cdots \boldsymbol{Z}_{j}}_{\bar{\boldsymbol{Z}}_{1 \cdot j}}\ \underbrace{\boldsymbol{Z}_{j+1}\ \boldsymbol{Z}_{j+2} \cdots \boldsymbol{Z}_{M}}_{\bar{\boldsymbol{Z}}_{j+1 \cdot M}}]^{\mathrm{T}}
$$
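As a concrete toy instance of this model, here is a minimal scalar simulation; the parameter values ($\Phi=0.95$, $Q=0.01$, $R=0.25$, $\Gamma=H=1$) and the segment boundary $j=50$ are illustrative assumptions, not from the post:

```python
import numpy as np

# Scalar instance of the model: x_k = Phi*x_{k-1} + w_{k-1}, z_k = x_k + v_k
# (illustrative parameters; Gamma = H = 1 in this sketch)
rng = np.random.default_rng(0)
Phi, Q, R = 0.95, 0.01, 0.25
M, j = 100, 50

x = np.zeros(M + 1)              # true state; x[0] is the initial state
z = np.zeros(M + 1)              # measurements; z[0] is unused
for k in range(1, M + 1):
    x[k] = Phi * x[k - 1] + rng.normal(0.0, np.sqrt(Q))   # process noise w
    z[k] = x[k] + rng.normal(0.0, np.sqrt(R))             # measurement noise v

# Split the measurement sequence at epoch j into the two segments
z_first, z_second = z[1:j + 1], z[j + 1:]
print(len(z_first), len(z_second))   # 50 50
```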

Consider fixed-point smoothing first: fixed-interval smoothing simply applies it at every point, sliding the fixed point back and forth across the interval.

1. Forward filtering (forward)

Run an ordinary Kalman filter forward over the first segment; every quantity carries the subscript $f$ for "forward":
$$
\left\{\begin{array}{l}
\hat{\boldsymbol{X}}_{f, k / k-1}=\boldsymbol{\Phi}_{k / k-1} \hat{\boldsymbol{X}}_{f, k-1} \\
\boldsymbol{P}_{f, k / k-1}=\boldsymbol{\Phi}_{k / k-1} \boldsymbol{P}_{f, k-1} \boldsymbol{\Phi}_{k / k-1}^{\mathrm{T}}+\boldsymbol{\Gamma}_{k-1} \boldsymbol{Q}_{k-1} \boldsymbol{\Gamma}_{k-1}^{\mathrm{T}} \\
\boldsymbol{K}_{f, k}=\boldsymbol{P}_{f, k / k-1} \boldsymbol{H}_{k}^{\mathrm{T}}\left(\boldsymbol{H}_{k} \boldsymbol{P}_{f, k / k-1} \boldsymbol{H}_{k}^{\mathrm{T}}+\boldsymbol{R}_{k}\right)^{-1} \quad k=1,2, \cdots, j \\
\hat{\boldsymbol{X}}_{f, k}=\hat{\boldsymbol{X}}_{f, k / k-1}+\boldsymbol{K}_{f, k}\left(\boldsymbol{Z}_{k}-\boldsymbol{H}_{k} \hat{\boldsymbol{X}}_{f, k / k-1}\right) \\
\boldsymbol{P}_{f, k}=\left(\boldsymbol{I}-\boldsymbol{K}_{f, k} \boldsymbol{H}_{k}\right) \boldsymbol{P}_{f, k / k-1}
\end{array}\right.
$$
This yields the estimate $\hat{\boldsymbol{X}}_{f, j}, \boldsymbol{P}_{f, j}$ at time $j$.
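A minimal scalar sketch of this forward pass (hypothetical parameters, with $\Gamma = H = 1$):

```python
import numpy as np

def forward_filter(z, Phi=0.95, Q=0.01, R=0.25, x0=0.0, P0=1.0):
    """Ordinary Kalman filter over z[1:]; returns filtered x_f[k], P_f[k]."""
    n = len(z)
    xf, Pf = np.zeros(n), np.zeros(n)
    xf[0], Pf[0] = x0, P0
    for k in range(1, n):
        x_pred = Phi * xf[k - 1]               # X_f,k/k-1 (time update)
        P_pred = Phi * Pf[k - 1] * Phi + Q     # P_f,k/k-1
        K = P_pred / (P_pred + R)              # gain, with H = 1
        xf[k] = x_pred + K * (z[k] - x_pred)   # measurement update
        Pf[k] = (1 - K) * P_pred
    return xf, Pf

# Toy run on constant measurements: the filtered variance settles below R
z = np.concatenate(([0.0], np.ones(50)))
xf, Pf = forward_filter(z)
print(Pf[-1] < 0.25)   # True
```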

2. Reverse filtering (backward)

Pushing from back to front requires rewriting the Kalman filter model: invert the state transition matrix so that the earlier state is predicted from the later one:

$$
\left\{\begin{array}{l}
\boldsymbol{X}_{k}={\color{red}\boldsymbol{\Phi}_{k / k-1}} \boldsymbol{X}_{k-1}+\boldsymbol{\Gamma}_{k-1} \boldsymbol{W}_{k-1} \\
\boldsymbol{Z}_{k}=\boldsymbol{H}_{k} \boldsymbol{X}_{k}+\boldsymbol{V}_{k}
\end{array}\right.
\Longrightarrow
\left\{\begin{array}{l}
\boldsymbol{X}_{k}={\color{red}\boldsymbol{\Phi}_{k+1 / k}^{-1}} \boldsymbol{X}_{k+1}-{\color{red}\boldsymbol{\Phi}_{k+1 / k}^{-1}} \boldsymbol{\Gamma}_{k} \boldsymbol{W}_{k} \\
\boldsymbol{Z}_{k}=\boldsymbol{H}_{k} \boldsymbol{X}_{k}+\boldsymbol{V}_{k}
\end{array}\right.
$$
Let $\boldsymbol{\Phi}_{k/k+1}^{*}=\boldsymbol{\Phi}_{k+1/k}^{-1}$, $\boldsymbol{\Gamma}_{k}^{*}=-\boldsymbol{\Phi}_{k+1/k}^{-1} \boldsymbol{\Gamma}_{k}$, and $\boldsymbol{W}_{k+1}^{*}=\boldsymbol{W}_{k}$ to obtain the new functional model:

$$
\left\{\begin{array}{l}
\boldsymbol{X}_{k}=\boldsymbol{\Phi}_{k / k+1}^{*} \boldsymbol{X}_{k+1}+\boldsymbol{\Gamma}_{k}^{*} \boldsymbol{W}_{k+1}^{*} \\
\boldsymbol{Z}_{k}=\boldsymbol{H}_{k} \boldsymbol{X}_{k}+\boldsymbol{V}_{k}
\end{array}\right.
$$
Apply the Kalman filter to this new model:

$$
\left\{\begin{array}{l}
\hat{\boldsymbol{X}}_{b, k / k+1}=\boldsymbol{\Phi}_{k / k+1}^{*} \hat{\boldsymbol{X}}_{b, k+1} \\
\boldsymbol{P}_{b, k / k+1}=\boldsymbol{\Phi}_{k / k+1}^{*} \boldsymbol{P}_{b, k+1}\left(\boldsymbol{\Phi}_{k / k+1}^{*}\right)^{\mathrm{T}}+\boldsymbol{\Gamma}_{k}^{*} \boldsymbol{Q}_{k} \boldsymbol{\Gamma}_{k}^{* \mathrm{T}} \\
\boldsymbol{K}_{b, k}=\boldsymbol{P}_{b, k / k+1} \boldsymbol{H}_{k}^{\mathrm{T}}\left(\boldsymbol{H}_{k} \boldsymbol{P}_{b, k / k+1} \boldsymbol{H}_{k}^{\mathrm{T}}+\boldsymbol{R}_{k}\right)^{-1} \quad k=M-1, M-2, \cdots, j+1 \\
\hat{\boldsymbol{X}}_{b, k}=\hat{\boldsymbol{X}}_{b, k / k+1}+\boldsymbol{K}_{b, k}\left(\boldsymbol{Z}_{k}-\boldsymbol{H}_{k} \hat{\boldsymbol{X}}_{b, k / k+1}\right) \\
\boldsymbol{P}_{b, k}=\left(\boldsymbol{I}-\boldsymbol{K}_{b, k} \boldsymbol{H}_{k}\right) \boldsymbol{P}_{b, k / k+1}
\end{array}\right.
$$
This yields the one-step backward prediction $\hat{\boldsymbol{X}}_{b, j / j+1}, \boldsymbol{P}_{b, j / j+1}$ at time $j$.
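A matching scalar sketch of the backward pass (same illustrative parameters as before; starting the backward filter at $k=M$ from the last measurement alone, $\hat{X}_{b,M}=Z_M$, $P_{b,M}=R$, is an assumed initialization, since the post does not specify one):

```python
import numpy as np

def backward_filter(z, j, Phi=0.95, Q=0.01, R=0.25):
    """Backward Kalman filter over z[M], ..., z[j+1];
    returns the one-step backward prediction X_b,j/j+1 and P_b,j/j+1."""
    M = len(z) - 1
    Phi_s = 1.0 / Phi                    # Phi* (inverse transition, scalar)
    GQG = Phi_s * Q * Phi_s              # Gamma* Q Gamma*^T with Gamma = 1
    xb, Pb = z[M], R                     # assumed init from z_M alone
    for k in range(M - 1, j, -1):        # k = M-1, ..., j+1
        x_pred = Phi_s * xb              # X_b,k/k+1 (backward time update)
        P_pred = Phi_s * Pb * Phi_s + GQG
        K = P_pred / (P_pred + R)
        xb = x_pred + K * (z[k] - x_pred)
        Pb = (1 - K) * P_pred
    # one more backward time update gives the prediction at epoch j
    return Phi_s * xb, Phi_s * Pb * Phi_s + GQG

xb_j, Pb_j = backward_filter(np.concatenate(([0.0], np.ones(60))), j=30)
print(Pb_j > 0)   # True
```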

3. Fusion of fixed point information at time j (smoothing)

Fuse the forward-filter estimate $\hat{\boldsymbol{X}}_{f, j}, \boldsymbol{P}_{f, j}$ obtained from the first measurement segment with the backward one-step prediction $\hat{\boldsymbol{X}}_{b, j / j+1}, \boldsymbol{P}_{b, j / j+1}$ obtained from the second segment by a weighted average, noting that the two filter results are uncorrelated:
$$
\left\{\begin{array}{l}
\hat{\boldsymbol{X}}_{f, j}=\boldsymbol{X}_{j}+\boldsymbol{\Delta}_{f, j} \\
\hat{\boldsymbol{X}}_{b, j / j+1}=\boldsymbol{X}_{j}+\boldsymbol{\Delta}_{b, j / j+1}
\end{array}\right.
\quad
\left\{\begin{array}{l}
\boldsymbol{\Delta}_{f, j} \sim \mathrm{N}\left(\mathbf{0}, \boldsymbol{P}_{f, j}\right), \\
\boldsymbol{\Delta}_{b, j / j+1} \sim \mathrm{N}\left(\mathbf{0}, \boldsymbol{P}_{b, j / j+1}\right), \\
\operatorname{cov}\left(\boldsymbol{\Delta}_{f, j} \boldsymbol{\Delta}_{b, j / j+1}^{\mathrm{T}}\right)=\mathbf{0}
\end{array}\right.
$$

$$
\Longrightarrow
\left\{\begin{array}{l}
\boldsymbol{P}_{s, j}=\left(\boldsymbol{P}_{f, j}^{-1}+\boldsymbol{P}_{b, j / j+1}^{-1}\right)^{-1} \\
\hat{\boldsymbol{X}}_{s, j}=\boldsymbol{P}_{s, j}\left(\boldsymbol{P}_{f, j}^{-1} \hat{\boldsymbol{X}}_{f, j}+\boldsymbol{P}_{b, j / j+1}^{-1} \hat{\boldsymbol{X}}_{b, j / j+1}\right)
\end{array}\right.
$$
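The fusion step in scalar form, as a minimal sketch (the sample numbers are made up):

```python
def fuse(xf, Pf, xb, Pb):
    """Information-weighted average of two uncorrelated scalar estimates:
    P_s = (Pf^-1 + Pb^-1)^-1,  x_s = P_s * (xf/Pf + xb/Pb)."""
    Ps = 1.0 / (1.0 / Pf + 1.0 / Pb)
    xs = Ps * (xf / Pf + xb / Pb)
    return xs, Ps

# The smaller-variance estimate gets the larger weight, and the fused
# variance is smaller than either input variance.
xs, Ps = fuse(1.0, 0.2, 1.5, 0.3)
print(round(xs, 3), round(Ps, 3))   # 1.2 0.12
```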

4. Interval smoothing based on forward and reverse filtering

  1. Filter from front to back, obtaining and storing $\hat{\boldsymbol{X}}_{f, j}, \boldsymbol{P}_{f, j}\ (j=1,2, \cdots, M)$;
  2. Filter from back to front, obtaining $\hat{\boldsymbol{X}}_{b, j / j+1}, \boldsymbol{P}_{b, j / j+1}$ and fusing to get $\hat{\boldsymbol{X}}_{s, j}, \boldsymbol{P}_{s, j}$;
  3. Repeating this for $j=M-1, M-2, \cdots, 1$ completes the interval smoothing.
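The steps above can be put together for the scalar toy model; this is a sketch under the same assumed parameters, with the backward filter again initialized from $Z_M$ alone (an assumption):

```python
import numpy as np

def two_filter_smooth(z, Phi=0.95, Q=0.01, R=0.25, x0=0.0, P0=1.0):
    """Fixed-interval smoothing via forward + backward filtering + fusion."""
    n = len(z); M = n - 1
    # 1. Forward filter, storing X_f,j and P_f,j
    xf, Pf = np.zeros(n), np.zeros(n)
    xf[0], Pf[0] = x0, P0
    for k in range(1, n):
        xp = Phi * xf[k - 1]
        Pp = Phi * Pf[k - 1] * Phi + Q
        K = Pp / (Pp + R)
        xf[k] = xp + K * (z[k] - xp)
        Pf[k] = (1 - K) * Pp
    # 2. Backward filter, storing one-step predictions X_b,j/j+1, P_b,j/j+1
    Phi_s = 1.0 / Phi
    GQG = Phi_s * Q * Phi_s
    xbp, Pbp = np.zeros(n), np.zeros(n)
    xb, Pb = z[M], R                       # assumed init from z_M alone
    for k in range(M - 1, 0, -1):
        xbp[k] = Phi_s * xb
        Pbp[k] = Phi_s * Pb * Phi_s + GQG
        K = Pbp[k] / (Pbp[k] + R)
        xb = xbp[k] + K * (z[k] - xbp[k])
        Pb = (1 - K) * Pbp[k]
    # 3. Fuse at every interior epoch j = 1, ..., M-1
    xs, Ps = xf.copy(), Pf.copy()
    for jj in range(1, M):
        Ps[jj] = 1.0 / (1.0 / Pf[jj] + 1.0 / Pbp[jj])
        xs[jj] = Ps[jj] * (xf[jj] / Pf[jj] + xbp[jj] / Pbp[jj])
    return xs, Ps, xf, Pf

xs, Ps, xf, Pf = two_filter_smooth(np.concatenate(([0.0], np.ones(60))))
print(Ps[30] < Pf[30])   # True: smoothing beats filtering in the interior
```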

5. RTS interval smoothing algorithm

Proposed by H. Rauch, F. Tung, and C. Striebel in 1965; the derivation is rather involved.

Only a forward filter pass is needed (no separate backward filter): obtain and store $\boldsymbol{\Phi}_{j / j-1}^{\mathrm{T}}, \hat{\boldsymbol{X}}_{f, j / j-1}, \boldsymbol{P}_{f, j / j-1}, \hat{\boldsymbol{X}}_{f, j}, \boldsymbol{P}_{f, j}\ (j=1,2, \cdots, M)$, then run the following RTS recursion through the measurements from back to front:
$$
\left\{\begin{array}{ll}
\boldsymbol{K}_{s, k}=\boldsymbol{P}_{f, k} \boldsymbol{\Phi}_{k+1 / k}^{\mathrm{T}} \boldsymbol{P}_{f, k+1 / k}^{-1} & \text{initial values } \hat{\boldsymbol{X}}_{s, M}=\hat{\boldsymbol{X}}_{f, M},\ \boldsymbol{P}_{s, M}=\boldsymbol{P}_{f, M} \\
\hat{\boldsymbol{X}}_{s, k}=\hat{\boldsymbol{X}}_{f, k}+\boldsymbol{K}_{s, k}\left(\hat{\boldsymbol{X}}_{s, k+1}-\hat{\boldsymbol{X}}_{f, k+1 / k}\right) & k=M-1, M-2, \cdots, 1 \\
\boldsymbol{P}_{s, k}=\boldsymbol{P}_{f, k}+\boldsymbol{K}_{s, k}\left(\boldsymbol{P}_{s, k+1}-\boldsymbol{P}_{f, k+1 / k}\right) \boldsymbol{K}_{s, k}^{\mathrm{T}} &
\end{array}\right.
$$
The computational load is similar to that of two-filter interval smoothing, but the storage requirement is considerably larger.
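A scalar sketch of the RTS recursion (same illustrative model as the earlier sketches; the forward pass stores the one-step predictions the backward sweep needs):

```python
import numpy as np

def rts_smooth(z, Phi=0.95, Q=0.01, R=0.25, x0=0.0, P0=1.0):
    """Forward Kalman pass storing X_f,k/k-1, P_f,k/k-1, then the RTS sweep."""
    n = len(z)
    xf, Pf = np.zeros(n), np.zeros(n)     # filtered estimates
    xp, Pp = np.zeros(n), np.zeros(n)     # stored one-step predictions
    xf[0], Pf[0] = x0, P0
    for k in range(1, n):
        xp[k] = Phi * xf[k - 1]
        Pp[k] = Phi * Pf[k - 1] * Phi + Q
        K = Pp[k] / (Pp[k] + R)
        xf[k] = xp[k] + K * (z[k] - xp[k])
        Pf[k] = (1 - K) * Pp[k]
    # Backward sweep: initial values X_s,M = X_f,M, P_s,M = P_f,M
    xs, Ps = xf.copy(), Pf.copy()
    for k in range(n - 2, 0, -1):         # k = M-1, ..., 1
        Ks = Pf[k] * Phi / Pp[k + 1]      # K_s,k = P_f,k Phi^T P_f,k+1/k^-1
        xs[k] = xf[k] + Ks * (xs[k + 1] - xp[k + 1])
        Ps[k] = Pf[k] + Ks * (Ps[k + 1] - Pp[k + 1]) * Ks
    return xs, Ps, xf, Pf

xs, Ps, xf, Pf = rts_smooth(np.concatenate(([0.0], np.ones(60))))
print(Ps[-1] == Pf[-1], Ps[30] < Pf[30])   # True True
```

Note that no measurement is touched in the backward sweep; everything it needs was stored during the forward pass, which is where the extra storage cost comes from.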

6. Comparison of smoothing accuracy

[Figure: comparison of smoothing accuracy]

  • Smoothability: a state is smoothable only if it is driven by system noise; random-constant states gain nothing from smoothing.
  • The data that must be stored for smoothing can be large, possibly several GB. In engineering practice, bidirectional filtering plus a diagonally weighted average of the P matrices can reduce the storage substantially.

Origin blog.csdn.net/daoge2666/article/details/131117234