Filter fusion (1): Implementing a simple linear Kalman filter (system and measurement model) in C++

In robot localization, the KF or EKF is a commonly used sensor-fusion algorithm. I have previously written up EKF usage in:
How to understand the Kalman Filter and use it for data fusion.
A plain-language introduction to using the Extended Kalman Filter (EKF) for multi-sensor fusion.

Most of those were written in MATLAB, so now that I have some time I will briefly write up the computation in C++ and compare it against bfl.

Simply put, the EKF consists of two steps: prediction and update. The prediction step generally uses a sensor with a relatively high data rate, such as an IMU or odometry. To make the results comparable, the simulated sensor data from the bfl example is used here. First, a quick walk-through of the bfl example:

  for (time_step = 0; time_step < NUM_TIME_STEPS-1; time_step++)
    {
      auto start = std::chrono::steady_clock::now();
      // DO ONE STEP WITH MOBILE ROBOT
      mobile_robot.Move(input);
      // DO ONE MEASUREMENT
      ColumnVector measurement = mobile_robot.Measure();
      // UPDATE FILTER
      // filter.Update(&sys_model,input,&meas_model,measurement);
      filter.Update(&sys_model,input);         // prediction step
      filter.Update(&meas_model,measurement);  // measurement (correction) step
      measurement = mobile_robot.Measure();    // take a fresh measurement for the printout
      x = filter.PostGet()->ExpectedValueGet();
      covarLabelMeas = filter.PostGet()->CovarianceGet();
      cout << " ExpectedValueGet = " << x << " measurement = " << measurement << endl;

    } // estimation loop

Here mobile_robot.Move(input) simulates the robot driving forward (producing the sensor data), and mobile_robot.Measure() returns an observation at that instant.

You can then directly call the models defined for the EKF: filter.Update(&sys_model,input) performs the prediction update and filter.Update(&meas_model,measurement) performs the measurement update. The difference between the two calls is the probability model passed in; the Update function is overloaded in the bfl library.

After the update, filter.PostGet()->ExpectedValueGet() returns the optimal state estimate of the EKF.

Next is an introduction to the covariance settings in the model, which mainly cover: the prior, the system model, and the observations. Put simply: how accurate the initial position is, how much error the prediction step introduces, and how much error the sensor hardware introduces into the observed data.

A priori covariance:

  ColumnVector prior_Mu(2);
  prior_Mu(1) = PRIOR_MU_X;
  prior_Mu(2) = PRIOR_MU_Y;
  SymmetricMatrix prior_Cov(2);
  prior_Cov(1,1) = PRIOR_COV_X;
  prior_Cov(1,2) = 0.0;
  prior_Cov(2,1) = 0.0;
  prior_Cov(2,2) = PRIOR_COV_Y;
  Gaussian prior(prior_Mu,prior_Cov);

System noise:

  // create gaussian
  ColumnVector sysNoise_Mu(2);
  sysNoise_Mu(1) = MU_SYSTEM_NOISE_X;
  sysNoise_Mu(2) = MU_SYSTEM_NOISE_Y;

  SymmetricMatrix sysNoise_Cov(2);
  sysNoise_Cov = 0.0;
  sysNoise_Cov(1,1) = SIGMA_SYSTEM_NOISE_X;
  sysNoise_Cov(1,2) = 0.0;
  sysNoise_Cov(2,1) = 0.0;
  sysNoise_Cov(2,2) = SIGMA_SYSTEM_NOISE_Y;

  Gaussian system_Uncertainty(sysNoise_Mu, sysNoise_Cov);

  // create the model
  LinearAnalyticConditionalGaussian sys_pdf(AB, system_Uncertainty);
  LinearAnalyticSystemModelGaussianUncertainty sys_model(&sys_pdf);

Observation noise:

  // Construct the measurement noise (a scalar in this case)
  ColumnVector measNoise_Mu(1);
  measNoise_Mu(1) = MU_MEAS_NOISE;

  SymmetricMatrix measNoise_Cov(1);
  measNoise_Cov(1,1) = SIGMA_MEAS_NOISE;
  Gaussian measurement_Uncertainty(measNoise_Mu, measNoise_Cov);

  // create the model
  LinearAnalyticConditionalGaussian meas_pdf(H, measurement_Uncertainty);
  LinearAnalyticMeasurementModelGaussianUncertainty meas_model(&meas_pdf);

The last step is constructing the observation matrix, which can be understood simply as the mapping between the observed data and the state that the Kalman filter maintains.

  // create matrix H for linear measurement model
  Matrix H(1,2);
  double wall_ct = 2/(sqrt(pow(RICO_WALL,2.0) + 1));
  H = 0.0;
  H(1,1) = wall_ct * RICO_WALL;
  H(1,2) = 0 - wall_ct;

For example, the observation here is the distance to a wall, while the state maintained in the KF is the coordinate (x, y), so the observation matrix converts the state into a predicted distance. The motion model, meanwhile, is driven by a traveled distance, so that conversion is effectively handled inside the kinematic model. The motion model for this example is as follows:

  // Create the matrices A and B for the linear system model
  Matrix A(2,2);
  A(1,1) = 1.0;
  A(1,2) = 0.0;
  A(2,1) = 0.0;
  A(2,2) = 1.0;
  Matrix B(2,2);
  B(1,1) = cos(0.8);  // 0.69670670934717 
  B(1,2) = 0.0;
  B(2,1) = sin(0.8);  //0.71735609089952
  B(2,2) = 0.0;

The above is the fusion process as done in the bfl library. Now let's implement a KF ourselves, directly from the formulas.


First, the equation of motion stays the same as before, and the following equation is used for the prediction update:

  x_k^- = A_k * x_{k-1} + B_k * u_k + w_k

Besides the state, the covariance of the state must also be propagated; it expresses how much the current state can be trusted. This is the covariance form of the state-transition equation, where Q_k is the system noise:

  P_k^- = A_k * P_{k-1} * A_k^T + Q_k

That completes the prediction step. Next comes the update step, which can also be viewed as the fusion step: two Gaussian distributions are merged. The first part is computing the Kalman gain, i.e. deciding which source to believe more:

  K = P_k^- * H^T * (H * P_k^- * H^T + R_k)^{-1}

Here P_k^- is the confidence in the state currently maintained by the KF and R_k is the confidence in the observation, so the computed K acts as a weight.
With that weight we know whom to trust more: if the observation is trusted more, the state maintained in the KF leans more on the observation.

  x_k = x_k^- + K * (z_k - H * x_k^-)

Here you can also see the role of the observation matrix H: it maps the state into the observation space so the two are comparable.

Since an observation has been incorporated, the corresponding confidence also needs to be updated:

  P_k = (I - K * H) * P_k^-

That completes the update step, and the rest is simply continuous iteration. The final computed result matches the bfl output exactly~

cout << "MAIN: Starting estimation" << endl;

ColumnVector X_k(2); // state (x, y)
X_k = prior_Mu;      // prior mean
Matrix P(2,2);       // prior covariance
P = prior_Cov;

Matrix Identity(2,2); // identity, used to convert SymmetricMatrix to Matrix
Identity(1,1) = 1.0;
Identity(1,2) = 0.0;
Identity(2,1) = 0.0;
Identity(2,2) = 1.0;

for (time_step = 0; time_step < NUM_TIME_STEPS-1; time_step++)
  {
    auto start = std::chrono::steady_clock::now();
    // DO ONE STEP WITH MOBILE ROBOT
    mobile_robot.Move(input);
    // DO ONE MEASUREMENT
    ColumnVector measurement = mobile_robot.Measure();

    cout << " \n" << endl;
    // Prediction
    // 1. propagate the state (the bfl example also adds the noise mean)
    X_k = A*X_k + B*input + sysNoise_Mu;
    // 2. propagate the covariance
    P = A*(P*A.transpose()) + sysNoise_Cov*Identity;

    // Measurement update
    Matrix K, R;
    R = measNoise_Cov;
    // 1. compute the Kalman gain
    K = P*H.transpose() * (H*P*H.transpose() + R).inverse();
    // 2. correct the state
    X_k = X_k + K*(measurement - H*X_k);
    // 3. update the covariance
    P = (Identity - K*H)*P;

    cout << " K:" << K << endl;
    cout << " Z - Hx:" << measurement - H*X_k << endl;
    cout << " K(Z - Hx):" << K*(measurement - H*X_k) << endl;
    cout << " x + K(Z - Hx):" << X_k << endl;

  } // estimation loop

The full code is shown above. A nonlinear (extended) Kalman filter in C++, which takes the steering angle into account, will be covered in a later post.

Origin blog.csdn.net/qq_39266065/article/details/123515576