ROS EKF robot pose estimation package robot_pose_ekf: a detailed explanation

package usage

file structure


  • There is no launch folder; the two launch files sit at the package root
  • There is no config folder; parameters are set in the launch file
  • src folder — C++ source files
  • include folder — header files
  • srv — service definition files
  • CMakeLists.txt — build file
  • package.xml — package manifest

configuration parameters

The robot_pose_ekf package is configured entirely in its launch file; there is no separate yaml file.

The default launch file for the EKF node can be found in the robot_pose_ekf package directory. It contains a number of configurable parameters:

<launch>

<node pkg="robot_pose_ekf" type="robot_pose_ekf" name="robot_pose_ekf">
  <param name="output_frame" value="odom_combined"/>
  <param name="base_footprint_frame" value="base_footprint"/>
  <param name="freq" value="30.0"/>
  <param name="sensor_timeout" value="1.0"/>  
  <param name="odom_used" value="true"/>
  <param name="imu_used" value="true"/>
  <param name="vo_used" value="true"/>

  <remap from="odom" to="pr2_base_odometry/odom" />
</node>

</launch>

The parameter functions are as follows:

  • output_frame : name of the output coordinate frame
  • base_footprint_frame : the robot's base coordinate frame
  • freq : filter update and publishing frequency. Note: a higher frequency only means more pose estimates per unit of time; it does not improve the accuracy of each individual estimate
  • sensor_timeout : when a sensor stops sending data, how long the filter waits for it before continuing without that sensor
  • odom_used : whether wheel odometry input is used
  • imu_used : whether IMU input is used
  • vo_used : whether visual odometry input is used

The robot_pose_ekf node does not require all three sensor sources to be available at all times. Each source provides a pose estimate and a covariance, operating at its own rate and latency. Sources can appear and disappear over time; the node automatically detects and uses whichever sensors are available.

subscribed topics

specific code

The topic subscriptions are set up in odom_estimation_node.cpp:

    // subscribe to odom messages
    if (odom_used_){
      ROS_DEBUG("Odom sensor can be used");
      odom_sub_ = nh.subscribe("odom", 10, &OdomEstimationNode::odomCallback, this);
    }
    else ROS_DEBUG("Odom sensor will NOT be used");

    // subscribe to imu messages
    if (imu_used_){
      ROS_DEBUG("Imu sensor can be used");
      imu_sub_ = nh.subscribe("imu_data", 10, &OdomEstimationNode::imuCallback, this);
    }
    else ROS_DEBUG("Imu sensor will NOT be used");

    // subscribe to vo messages
    if (vo_used_){
      ROS_DEBUG("VO sensor can be used");
      vo_sub_ = nh.subscribe("vo", 10, &OdomEstimationNode::voCallback, this);
    }
    else ROS_DEBUG("VO sensor will NOT be used");

    // subscribe to gps messages
    if (gps_used_){
      ROS_DEBUG("GPS sensor can be used");
      gps_sub_ = nh.subscribe("gps", 10, &OdomEstimationNode::gpsCallback, this);
    }
    else ROS_DEBUG("GPS sensor will NOT be used");

If you need to change a topic name to match your own robot, remap it in the launch file or modify the source code directly.
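For example, a launch file for a robot whose driver publishes odometry and IMU data under its own namespace might remap like this (the `my_robot/...` topic names are made up for illustration; substitute your robot's actual topics):

```xml
<node pkg="robot_pose_ekf" type="robot_pose_ekf" name="robot_pose_ekf">
  <!-- hypothetical topic names; replace with your robot's real topics -->
  <remap from="odom" to="my_robot/odom"/>
  <remap from="imu_data" to="my_robot/imu"/>
</node>
```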

wheel odometry

Source Topic Name: odom
Message Type: nav_msgs/Odometry

2D pose (used by wheel odometry): the 2D pose contains the robot's position and orientation on the ground plane, plus the covariance of that pose. The message used to send the 2D pose actually represents a full 3D pose, but the z, pitch and roll components are simply ignored.

The original message definition looks like this

# This represents an estimate of a position and velocity in free space.  
# The pose in this message should be specified in the coordinate frame given by header.frame_id.
# The twist in this message should be specified in the coordinate frame given by the child_frame_id
Header header
string child_frame_id
geometry_msgs/PoseWithCovariance pose
geometry_msgs/TwistWithCovariance twist

Inertial Navigation Data

Source topic name: imu_data
Message type: sensor_msgs/Imu

3D orientation (used by the IMU): provides the roll, pitch and yaw of the robot base relative to the world coordinate frame. Roll and pitch are absolute angles (the IMU has a gravity reference), while yaw is a relative angle. The covariance matrix specifies the uncertainty of the orientation measurement. robot_pose_ekf will not start on this topic alone; it also needs messages on vo or odom.

The original message definition looks like this

# This is a message to hold data from an IMU (Inertial Measurement Unit)
#
# Accelerations should be in m/s^2 (not in g's), and rotational velocity should be in rad/sec
#
# If the covariance of the measurement is known, it should be filled in (if all you know is the 
# variance of each measurement, e.g. from the datasheet, just put those along the diagonal)
# A covariance matrix of all zeros will be interpreted as "covariance unknown", and to use the
# data a covariance will have to be assumed or gotten from some other source
#
# If you have no estimate for one of the data elements (e.g. your IMU doesn't produce an orientation 
# estimate), please set element 0 of the associated covariance matrix to -1
# If you are interpreting this message, please check for a value of -1 in the first element of each 
# covariance matrix, and disregard the associated estimate.
Header header
geometry_msgs/Quaternion orientation
float64[9] orientation_covariance # Row major about x, y, z axes
geometry_msgs/Vector3 angular_velocity
float64[9] angular_velocity_covariance # Row major about x, y, z axes
geometry_msgs/Vector3 linear_acceleration
float64[9] linear_acceleration_covariance # Row major x, y z 

visual odometry

Source topic name: vo
Message type: nav_msgs/Odometry

3D pose (used by visual odometry): a 3D pose fully represents the robot's position and orientation, together with the covariance of the pose. When a sensor measures only part of the 3D pose (e.g. wheel odometry measures only a 2D pose), a large covariance can be assigned to the parts of the 3D pose that are not actually measured.

The original message definition looks like this

# This represents an estimate of a position and velocity in free space.  
# The pose in this message should be specified in the coordinate frame given by header.frame_id.
# The twist in this message should be specified in the coordinate frame given by the child_frame_id
Header header
string child_frame_id
geometry_msgs/PoseWithCovariance pose
geometry_msgs/TwistWithCovariance twist

published topic

Topic name: robot_pose_ekf/odom_combined
Topic type: geometry_msgs/PoseWithCovarianceStamped

Output of the filter (estimated 3D pose of the robot)

The original message definition looks like this

# This expresses an estimated pose with a reference coordinate frame and timestamp

Header header
PoseWithCovariance pose

Note that this message contains no velocity: there is no twist field.

How robot_pose_ekf works

All sensor sources feeding the filter node have their own reference frame, and each drifts over time, so the absolute poses from different sensors cannot be compared directly. The node therefore uses the relative pose difference from each sensor to update the extended Kalman filter.

As the robot moves, its pose uncertainty grows without bound, so publishing the covariance of the pose itself makes little sense. Instead, each sensor source publishes how its covariance changes over time (for example, the covariance of the velocity). Note that observations of the world (e.g. measuring the distance to a known wall) would reduce the uncertainty of the robot pose; however, that is localization, not odometry.

Suppose the filter was last updated at time t_0. The node performs the next update only after each enabled sensor has delivered a measurement with a timestamp later than t_0. For example, if a message arrives on the odom topic with timestamp t_1 > t_0, and a message arrives on the imu_data topic with timestamp t_2 > t_1 > t_0, the filter is updated to the latest time at which information from all sensors is available, which is t_1. The odom pose at t_1 is given directly, while the IMU pose at t_1 is obtained by linear interpolation between its measurements at t_0 and t_2. The robot pose filter is then updated with the relative odom and IMU poses over the interval t_0 to t_1.

The figure below shows an experiment with the PR2 robot, which starts from the green initial position and finally returns to it. A perfect odometry x-y plot would therefore be a closed loop. The blue line is the wheel odometry input, with the blue dot marking its estimated end position. The red line is the output of robot_pose_ekf, which fuses wheel odometry and IMU information; the red dot marks its estimated end position.
(figure: PR2 odometry x-y plot comparing the wheel-odometry track with the robot_pose_ekf output)


Origin blog.csdn.net/qq_32761549/article/details/131080410