Filter fusion (2): Implementing a simple extended Kalman filter for a nonlinear system model in C++

Linear Kalman filtering was implemented previously in: Filter fusion (1): Implementing a simple Kalman filter with linear system and measurement models in C++.
For nonlinear systems, the difference is an additional linearization step, because mapping a Gaussian through a nonlinear function no longer yields a Gaussian distribution. Depending on the linearization method, the result is either the EKF or the UKF; only the EKF is introduced here, and the UKF may come in a future post. For the basic theory, see the article: An easy-to-understand introduction to the extended Kalman filter (EKF) for multi-source sensor fusion.
An intuitive way to draw the line between linear and nonlinear: only addition and scalar multiplication are linear, and everything else is nonlinear. For details, see: The difference between linear and nonlinear.

So how do we linearize a nonlinear function? The method used by the extended Kalman filter is the first-order Taylor expansion:
$$h(x) \approx h(\mu) + \frac{\partial h(\mu)}{\partial x}(x-\mu)$$

We again compare against the bfl library, using the examples it provides. Compared with the linear Kalman filter, the system model now carries heading-angle information, which makes the system equation a nonlinear function.
Update equation in the linear Kalman filter:
$$X_k = X_{k-1} + V_{k-1}\,\Delta t$$
Update equation in the extended Kalman filter:
$$X_k = X_{k-1} + \cos\theta \cdot V_{k-1}\,\Delta t$$
Because the change of heading angle is now taken into account, the system equation becomes nonlinear. The observation model, however, is still a linear model, the same as before.

First, let's take a look at what has to be done in bfl. Two functions must be overridden by you, because only you know the system model here, along with the corresponding Jacobian matrix!

// Returns the expected value of the conditional probability density,
// which is determined by the system equation
ColumnVector NonLinearAnalyticConditionalGaussianMobile::ExpectedValueGet() const
{
  ColumnVector state = ConditionalArgumentGet(0);  // current state (x, y, theta)
  ColumnVector vel   = ConditionalArgumentGet(1);  // input (v, omega)
  state(1) += cos(state(3)) * vel(1);  // x_k = x_{k-1} + cos(theta) * v
  state(2) += sin(state(3)) * vel(1);  // y_k = y_{k-1} + sin(theta) * v
  state(3) += vel(2);                  // theta_k = theta_{k-1} + omega
  return state + AdditiveNoiseMuGet(); // add the mean of the additive noise
}

The code corresponds to the system equations above, with the angle information added. state is the state vector maintained by the EKF, containing $x, y, \theta$; $\theta$ is simply incremented directly. Next comes the Jacobian matrix of the system equation, which is the key to the linearization. Because the time intervals in this example are all 1, $\Delta t = 1$:

$$x_k = x_{k-1} + \cos\theta \cdot V_{k-1}$$
$$y_k = y_{k-1} + \sin\theta \cdot V_{k-1}$$
$$\theta_k = \theta_{k-1} + \Delta\theta$$

The partial derivatives are:

$$\frac{\partial x_k}{\partial x} = 1 \qquad \frac{\partial x_k}{\partial y} = 0 \qquad \frac{\partial x_k}{\partial \theta} = -\sin\theta \cdot V_{k-1}$$
$$\frac{\partial y_k}{\partial x} = 0 \qquad \frac{\partial y_k}{\partial y} = 1 \qquad \frac{\partial y_k}{\partial \theta} = \cos\theta \cdot V_{k-1}$$
$$\frac{\partial \theta_k}{\partial x} = 0 \qquad \frac{\partial \theta_k}{\partial y} = 0 \qquad \frac{\partial \theta_k}{\partial \theta} = 1$$

The code is as follows:

// Take the partial derivative with respect to the i-th conditional argument.
// In this example we only differentiate with respect to the state
// (the first conditional argument)
Matrix NonLinearAnalyticConditionalGaussianMobile::dfGet(unsigned int i) const
{
	if (i == 0)  // derivative with respect to the first conditional argument (the state)
	{
		ColumnVector state = ConditionalArgumentGet(0);
		ColumnVector vel = ConditionalArgumentGet(1);
		Matrix df(3,3);
		df(1,1) = 1;
		df(1,2) = 0;
		df(1,3) = -vel(1) * sin(state(3));
		df(2,1) = 0;
		df(2,2) = 1;
		df(2,3) = vel(1) * cos(state(3));
		df(3,1) = 0;
		df(3,2) = 0;
		df(3,3) = 1;
		return df;
	}
	else ...
}

Everything else is the same as the linear Kalman filter; once these definitions are complete, using the bfl library is no different. So how do you implement this part yourself?


According to the first-order Taylor expansion:
$$h(x) \approx h(\mu) + \frac{\partial h(\mu)}{\partial x}(x-\mu)$$
$$h(x) = \begin{pmatrix} x_k \\ y_k \\ \theta_k \end{pmatrix} = \begin{pmatrix} x_{k-1} + \cos\theta \cdot V_{k-1} \\ y_{k-1} + \sin\theta \cdot V_{k-1} \\ \theta_{k-1} + \Delta\theta \end{pmatrix} \approx \begin{pmatrix} x_0 + \cos\theta_0 \cdot V_0 \\ y_0 + \sin\theta_0 \cdot V_0 \\ \theta_0 + \Delta\theta_0 \end{pmatrix} + \begin{pmatrix} 1 & 0 & -\sin\theta_0 \cdot V_0 \\ 0 & 1 & \cos\theta_0 \cdot V_0 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} x - x_0 \\ y - y_0 \\ \theta - \theta_0 \end{pmatrix}$$

Here the expansion point $\mu$ is taken to be the current state estimate. When the system model is nonlinear, there are two extra steps compared with a linear system:

  1. Taylor-expand at $\mu$ and linearize;
  2. Use the Jacobian matrix $F_j$ of the nonlinear system in place of the original $F$ matrix when propagating the covariance.

The linear expansion has been done above, and the Jacobian matrix $F_j$ is:
$$F_j = \begin{pmatrix} 1 & 0 & -\sin\theta \cdot V \\ 0 & 1 & \cos\theta \cdot V \\ 0 & 0 & 1 \end{pmatrix}$$

Now let's start modifying the code, mainly to compare with the linear system. For the linear system, refer to Filter fusion (1): Implementing a simple linear Kalman filter for sensor fusion in C++.

First, some of the basic parameter dimensions have become three-dimensional, because we have an extra state, $\theta$.

  ColumnVector X_k(3);  // state vector (x, y, theta)
  X_k = prior_Mu;       // prior mean
  Matrix P(3,3);        // prior covariance
  P = prior_Cov;
  Matrix Identity(3,3); // identity matrix, used for data-format conversion
  Identity(1,1) = 1;    Identity(1,2) = 0.0;  Identity(1,3) = 0.0;
  Identity(2,1) = 0.0;  Identity(2,2) = 1;    Identity(2,3) = 0.0;
  Identity(3,1) = 0.0;  Identity(3,2) = 0.0;  Identity(3,3) = 1;

To compare the system update models, the linear-system code is included here as well, so the difference can be seen clearly. The state and the input are no longer simply added: the input is multiplied by a function of the state, which makes the system model nonlinear. The covariance therefore has to be propagated using the Jacobian matrix.

// Linear system (from the previous post, for comparison)
//Matrix A(2,2);
//A(1,1) = 1.0;  A(1,2) = 0.0;  A(2,1) = 0.0;  A(2,2) = 1.0;
//Matrix B(2,2);
//B(1,1) = cos(0.8);  B(1,2) = 0.0;  B(2,1) = sin(0.8);  B(2,2) = 0.0;
// X_k = A*X_k + B*input + sysNoise_Mu;
// P = A*(P*A.transpose()) + sysNoise_Cov*Identity;

// Nonlinear system
// Evaluate the Jacobian at the prior state, before X_k is overwritten,
// matching bfl's dfGet (which differentiates at the previous state)
Matrix df(3,3);
df(1,1)=1; df(1,2)=0; df(1,3)=-input(1)*sin(X_k(3));
df(2,1)=0; df(2,2)=1; df(2,3)=input(1)*cos(X_k(3));
df(3,1)=0; df(3,2)=0; df(3,3)=1;

X_k(1) += cos(X_k(3)) * input(1) + sys_noise_Mu(1);
X_k(2) += sin(X_k(3)) * input(1) + sys_noise_Mu(2);
X_k(3) += input(2) + sys_noise_Mu(3);

P = df*(P*df.transpose()) + sys_noise_Cov*Identity;

The observation update model is still linear in this example, so there is no difference there, and it will not be explained again here. You can compare the final results; they match.

P:[2,2]((0.0499,0),(0,0.000204951))  # covariance from our own update
K:[2,1]((0),(-0.163961))
Z - Hx:[1](0.0327018)
K(Z - Hx):[2](0,-0.00536182)
x + K(Z - Hx):[2](6.86707,7.11033)   # state from our own update

ExpectedValueGet = [2](6.86707,7.11033) measurement = [1](-14.1006)  # state updated by the BFL library
Covariance = [2,2]((0.05,0),(0,0.000204951))  # covariance updated by the BFL library

When the measurement model is also nonlinear, there are two further differences:

  1. $y = z - Hx$ becomes $y = z - h(x)$;
  2. Use the Jacobian matrix $H_j$ of $h(x)$ to replace the $H$ matrix.


Origin blog.csdn.net/qq_39266065/article/details/123827224