NDT-based LiDAR localization on a known point cloud map

Overview

In an L4 autonomous driving system, the localization module usually fuses GNSS, IMU, wheel odometry (from the vehicle chassis), camera, and lidar odometry measurements with a filtering algorithm (EKF, UKF, etc.) to produce smooth, centimeter-level absolute localization. Within this fusion, lidar registration against a high-precision point cloud map (lidar odometry / map matching) usually carries a large weight because of its high accuracy and good reliability; it is one of the most important components of the localization system and a dependable source of "absolute position" data.

Based on open-source algorithm frameworks and the experience shared by practitioners, this demo implements localization under ROS by combining a mapping algorithm (SC-LEGO-LOAM) with a point cloud registration algorithm (NDT).

Point cloud map

LiDAR localization for autonomous vehicles usually relies on a high-precision point cloud map built offline in advance, for the following reasons:

  • Autonomous driving systems at L4 and above have very high requirements for localization accuracy and stability; the absolute error typically needs to stay within 20 cm;
  • Pure SLAM currently cannot meet the accuracy and reliability requirements of autonomous driving; that is, online simultaneous mapping and localization for autonomous vehicles remains an open problem (loop-closure optimization, global optimization, correction of accumulated error, etc.);
  • The full production pipeline of high-precision map vendors takes substantial compute and labor. They can produce high-quality point cloud and semantic maps, but only offline (time and manpower);
  • Lidar localization against a high-precision map is comparatively simple to implement, and once fused with IMU and wheel odometry its accuracy and reliability largely meet the needs of autonomous vehicle localization.

For these practical reasons, the localization modules of current L4 (and most L3) autonomous driving systems still rely on pre-built high-precision maps. A few vendors perform this registration with cameras (e.g., Mobileye), but most still follow the lidar-registration approach.

The point cloud map is the "map for positioning" that is built in advance for lidar registration.

We built point cloud maps with both LEGO-LOAM and SC-LEGO-LOAM. For maps covering larger areas, SC-LEGO-LOAM's Scan Context loop-closure detection followed by pose-graph optimization clearly produces better loop closures, so the map built with SC-LEGO-LOAM is used for the point cloud matching and localization below.

Place the built map in the map folder. The map_loader node in the project loads the map:

MapLoader::MapLoader(ros::NodeHandle &nh){
    std::string pcd_file_path, map_topic;
    nh.param<std::string>("pcd_path", pcd_file_path, "");
    nh.param<std::string>("map_topic", map_topic, "point_map");

    init_tf_params(nh);

    // latched publisher, so subscribers that start late still receive the map
    pc_map_pub_ = nh.advertise<sensor_msgs::PointCloud2>(map_topic, 10, true);

    file_list_.push_back(pcd_file_path);

    auto pc_msg = CreatePcd();            // load the pcd file(s)
    auto out_msg = TransformMap(pc_msg);  // apply the static map transform

    if (out_msg.width != 0) {
        out_msg.header.frame_id = "map";
        pc_map_pub_.publish(out_msg);
    }
}

In the constructor, the pcd file path and the map topic are read, and the map's transform parameters are initialized (if the map does not need to be transformed, they are all set to 0):

void MapLoader::init_tf_params(ros::NodeHandle &nh){
    nh.param<float>("x", tf_x_, 0.0);
    nh.param<float>("y", tf_y_, 0.0);
    nh.param<float>("z", tf_z_, 0.0);
    nh.param<float>("roll", tf_roll_, 0.0);
    nh.param<float>("pitch", tf_pitch_, 0.0);
    nh.param<float>("yaw", tf_yaw_, 0.0);
    ROS_INFO_STREAM("x: " << tf_x_ << " y: " << tf_y_ << " z: " << tf_z_
                    << " roll: " << tf_roll_ << " pitch: " << tf_pitch_
                    << " yaw: " << tf_yaw_);
}

The function CreatePcd() loads the pcd file, and TransformMap() translates and rotates the map; Eigen and pcl::transformPointCloud() implement the point cloud transformation:

sensor_msgs::PointCloud2 MapLoader::TransformMap(sensor_msgs::PointCloud2 & in){
    pcl::PointCloud<pcl::PointXYZ>::Ptr in_pc(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::fromROSMsg(in, *in_pc);

    pcl::PointCloud<pcl::PointXYZ>::Ptr transformed_pc_ptr(new pcl::PointCloud<pcl::PointXYZ>);

    Eigen::Translation3f tl_m2w(tf_x_, tf_y_, tf_z_);                 // tl: translation
    Eigen::AngleAxisf rot_x_m2w(tf_roll_, Eigen::Vector3f::UnitX());  // rot: rotation
    Eigen::AngleAxisf rot_y_m2w(tf_pitch_, Eigen::Vector3f::UnitY());
    Eigen::AngleAxisf rot_z_m2w(tf_yaw_, Eigen::Vector3f::UnitZ());
    // compose as translation * Rz * Ry * Rx (ZYX / yaw-pitch-roll order)
    Eigen::Matrix4f tf_m2w = (tl_m2w * rot_z_m2w * rot_y_m2w * rot_x_m2w).matrix();

    pcl::transformPointCloud(*in_pc, *transformed_pc_ptr, tf_m2w);

    SaveMap(transformed_pc_ptr);

    sensor_msgs::PointCloud2 output_msg;
    pcl::toROSMsg(*transformed_pc_ptr, output_msg);
    return output_msg;
}
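CreatePcd() itself is not listed in this post. As a reference, here is a minimal single-file sketch of what such a loader could look like, assuming pcl::io::loadPCDFile and a pcl_conversions conversion; the project's actual implementation may differ (for example, it may concatenate every file in file_list_):

#include <ros/ros.h>
#include <pcl/io/pcd_io.h>
#include <pcl_conversions/pcl_conversions.h>
#include <sensor_msgs/PointCloud2.h>

// Hypothetical sketch of CreatePcd(): load the first pcd file in
// file_list_ and convert it to a ROS PointCloud2 message.
sensor_msgs::PointCloud2 MapLoader::CreatePcd()
{
  sensor_msgs::PointCloud2 msg;
  pcl::PCLPointCloud2 cloud;
  if (file_list_.empty() ||
      pcl::io::loadPCDFile(file_list_.front(), cloud) == -1) {
    ROS_ERROR("Failed to load pcd file.");
    return msg;  // empty message (width == 0), so nothing gets published
  }
  pcl_conversions::moveFromPCL(cloud, msg);  // convert to the ROS message type
  return msg;
}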

Input point cloud downsampling

The objective function optimized by NDT measures how well the probability distributions of the input point cloud match those of the target point cloud. The computational cost of this registration is driven mainly by two factors:

  • the density of points in the input point cloud;
  • the error of the initial pose estimate.

The denser the input point cloud, the more computation NDT registration needs; likewise, the worse the initial pose estimate (the further it deviates from the true pose), the more computation is required. If the initial pose is too far off, NDT may fail to converge at all.
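For reference, the score that NDT maximizes for a candidate pose T (following the standard Biber–Straßer formulation) is roughly

score(T) = sum_k exp( −(T·x_k − μ_k)ᵀ Σ_k⁻¹ (T·x_k − μ_k) / 2 )

where the x_k are the input scan points and μ_k, Σ_k are the mean and covariance of the map voxel into which the transformed point T·x_k falls. The cost of evaluating and optimizing this sum grows with the number of input points and with the number of iterations needed to converge, which is exactly what the two factors above capture.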
Autonomous-driving lidar localization has strict real-time requirements: the less time registration takes, the better. We can therefore downsample the input point cloud to speed up NDT registration; in this article a VoxelGrid filter is used to reduce the density of the input cloud. The code is in voxel_grid_filter.cpp of the project; the main part is as follows:

static void scan_callback(const sensor_msgs::PointCloud2::ConstPtr& input)
{
  pcl::PointCloud<pcl::PointXYZ> scan;
  pcl::fromROSMsg(*input, scan);

  if(measurement_range != MAX_MEASUREMENT_RANGE){
    scan = removePointsByRange(scan, 0, measurement_range);
  }

  pcl::PointCloud<pcl::PointXYZ>::Ptr scan_ptr(new pcl::PointCloud<pcl::PointXYZ>(scan));
  pcl::PointCloud<pcl::PointXYZ>::Ptr filtered_scan_ptr(new pcl::PointCloud<pcl::PointXYZ>());

  sensor_msgs::PointCloud2 filtered_msg;

  // if voxel_leaf_size < 0.1, VoxelGrid cannot downsample (a PCL restriction)
  if (voxel_leaf_size >= 0.1)
  {
    // downsample the lidar scan using a VoxelGrid filter
    pcl::VoxelGrid<pcl::PointXYZ> voxel_grid_filter;
    voxel_grid_filter.setLeafSize(voxel_leaf_size, voxel_leaf_size, voxel_leaf_size);
    voxel_grid_filter.setInputCloud(scan_ptr);
    voxel_grid_filter.filter(*filtered_scan_ptr);
    pcl::toROSMsg(*filtered_scan_ptr, filtered_msg);
  }
  else
  {
    pcl::toROSMsg(*scan_ptr, filtered_msg);
  }

  filtered_msg.header = input->header;
  std::cout << "frame_id: " << filtered_msg.header.frame_id << std::endl;
  filtered_points_pub.publish(filtered_msg);
}
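removePointsByRange() is a project helper whose definition is not shown above. A plausible sketch, under the assumption that it filters by planar distance from the sensor (the actual project implementation may differ):

#include <cmath>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Hypothetical sketch: keep only points whose distance in the sensor's
// x-y plane lies within [min_range, max_range].
static pcl::PointCloud<pcl::PointXYZ> removePointsByRange(
    const pcl::PointCloud<pcl::PointXYZ>& scan, double min_range, double max_range)
{
  pcl::PointCloud<pcl::PointXYZ> out;
  out.header = scan.header;
  for (const auto& p : scan.points) {
    const double d = std::hypot(p.x, p.y);  // planar range of this point
    if (d >= min_range && d <= max_range) {
      out.push_back(p);  // push_back keeps width/height consistent
    }
  }
  return out;
}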

After a scan arrives, the point cloud is first cropped: only points within the MAX_MEASUREMENT_RANGE distance are kept for localization (MAX_MEASUREMENT_RANGE = 120 meters in this article). The main parameter of the VoxelGrid downsampling is voxel_leaf_size, the side length (in meters) of the sampling cube; only one point is kept per cube. It can be configured in the points_downsample.launch file:

<arg name="leaf_size" default="3.0" />

As shown above, this article uses a leaf size of 3 meters. Choose this parameter according to the point density of the lidar you actually use: although we pursue real-time registration, we do not want to sacrifice too much localization accuracy, so the amount of downsampling must balance real-time performance against accuracy. As a rule of thumb, a leaf size of 1-2 m is appropriate for a 16-beam lidar, and 2-3 m for lidars with 32 beams or more.
The downsampled point cloud is published on the /filtered_points topic for the subsequent NDT registration and localization.

Using NDT to provide high-precision positioning for autonomous vehicles

This part of the code is mainly implemented in ndt.cpp:

Initial pose acquisition.
Every method that registers against a pre-built map needs an initial pose. In industrial practice this initial pose is usually obtained from GNSS; in this article we simplify that step and specify it manually in Rviz. The initial pose set in Rviz is published on the /initialpose topic. In the NdtLocalizer constructor, a subscriber listens on that topic:

initial_pose_sub_ = nh_.subscribe("initialpose", 100, &NdtLocalizer::callback_init_pose, this);

When an initial pose (geometry_msgs::PoseWithCovarianceStamped) arrives, the callback callback_init_pose runs:

void NdtLocalizer::callback_init_pose(
  const geometry_msgs::PoseWithCovarianceStamped::ConstPtr & initial_pose_msg_ptr)
{
  if (initial_pose_msg_ptr->header.frame_id == map_frame_) {
    initial_pose_cov_msg_ = *initial_pose_msg_ptr;
  } else {
    // get TF from pose_frame to map_frame
    geometry_msgs::TransformStamped::Ptr TF_pose_to_map_ptr(new geometry_msgs::TransformStamped);
    get_transform(map_frame_, initial_pose_msg_ptr->header.frame_id, TF_pose_to_map_ptr);

    // transform pose_frame to map_frame
    geometry_msgs::PoseWithCovarianceStamped::Ptr mapTF_initial_pose_msg_ptr(
      new geometry_msgs::PoseWithCovarianceStamped);
    tf2::doTransform(*initial_pose_msg_ptr, *mapTF_initial_pose_msg_ptr, *TF_pose_to_map_ptr);
    // mapTF_initial_pose_msg_ptr->header.stamp = initial_pose_msg_ptr->header.stamp;
    initial_pose_cov_msg_ = *mapTF_initial_pose_msg_ptr;
  }
  // if the initial pose is clicked again, re-initialize
  init_pose = false;
}
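For a quick test without clicking in Rviz, an initial pose in the map frame can also be published by hand; the pose values below are placeholders:

rostopic pub -1 /initialpose geometry_msgs/PoseWithCovarianceStamped \
  '{header: {frame_id: map}, pose: {pose: {position: {x: 0.0, y: 0.0, z: 0.0}, orientation: {w: 1.0}}}}'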

In NDT registration we are mainly concerned with the transformations among four coordinate frames:

  • World coordinate system (frame_id = world)
  • Map coordinate system (frame_id = map)
  • Vehicle base coordinate system (frame_id = base_link)
  • Lidar coordinate system (frame_id = ouster in this article; it will differ depending on the lidar you use)

In this project, static_tf.launch publishes the two fixed transforms, world to map and ouster to base_link:

<node pkg="tf2_ros" type="static_transform_publisher" name="localizer_to_base_link" args="0 0 1.753999 0.03 0.01 0.01 base_link rslidar"/>
<node pkg="tf2_ros" type="static_transform_publisher" name="world_to_map" args="0 0 0 0 0 0 map world" />

Note: here we assume map and world share the same coordinate system to simplify the problem. In a real autonomous driving system, you would use the Universal Transverse Mercator (UTM) projection of the WGS84 latitude/longitude to obtain the translation between the map and world frames and the rotation in the East-North-Up (ENU) frame.
localizer_to_base_link is the transform between the lidar and base_link, i.e., part of the lidar extrinsics, and is likewise a static transform.
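To make the map-to-world note above concrete, here is a minimal local-tangent-plane approximation of the ENU offset between two WGS84 coordinates. This is illustrative only; a production system would use a proper UTM projection, and the names below are hypothetical:

#include <cmath>

struct Enu { double east, north; };

// Approximate ENU offset of (lat, lon) relative to a reference origin
// (lat0, lon0), all in degrees. Valid only over small areas.
Enu wgs84ToLocalEnu(double lat, double lon, double lat0, double lon0)
{
  constexpr double kEarthRadius = 6378137.0;  // WGS84 semi-major axis, meters
  const double kDegToRad = M_PI / 180.0;
  Enu enu;
  enu.north = (lat - lat0) * kDegToRad * kEarthRadius;
  enu.east  = (lon - lon0) * kDegToRad * kEarthRadius * std::cos(lat0 * kDegToRad);
  return enu;
}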

Back to initial pose acquisition. After the pose specified manually in Rviz is received, the coordinate frames are first unified: if the pose is already in the map frame, it is saved for later use; if it is in another frame, it is first converted into the map frame, using get_transform to obtain the transform. That function is defined as follows:

bool NdtLocalizer::get_transform(
  const std::string & target_frame, const std::string & source_frame,
  const geometry_msgs::TransformStamped::Ptr & transform_stamped_ptr)
{
  if (target_frame == source_frame) {
    // same frame: return the identity transform
    transform_stamped_ptr->header.stamp = ros::Time::now();
    transform_stamped_ptr->header.frame_id = target_frame;
    transform_stamped_ptr->child_frame_id = source_frame;
    transform_stamped_ptr->transform.translation.x = 0.0;
    transform_stamped_ptr->transform.translation.y = 0.0;
    transform_stamped_ptr->transform.translation.z = 0.0;
    transform_stamped_ptr->transform.rotation.x = 0.0;
    transform_stamped_ptr->transform.rotation.y = 0.0;
    transform_stamped_ptr->transform.rotation.z = 0.0;
    transform_stamped_ptr->transform.rotation.w = 1.0;
    return true;
  }

  try {
    *transform_stamped_ptr =
      tf2_buffer_.lookupTransform(target_frame, source_frame, ros::Time(0), ros::Duration(1.0));
  } catch (tf2::TransformException & ex) {
    ROS_WARN("%s", ex.what());
    ROS_ERROR("Please publish TF %s to %s", target_frame.c_str(), source_frame.c_str());
    // fall back to the identity transform
    transform_stamped_ptr->header.stamp = ros::Time::now();
    transform_stamped_ptr->header.frame_id = target_frame;
    transform_stamped_ptr->child_frame_id = source_frame;
    transform_stamped_ptr->transform.translation.x = 0.0;
    transform_stamped_ptr->transform.translation.y = 0.0;
    transform_stamped_ptr->transform.translation.z = 0.0;
    transform_stamped_ptr->transform.rotation.x = 0.0;
    transform_stamped_ptr->transform.rotation.y = 0.0;
    transform_stamped_ptr->transform.rotation.z = 0.0;
    transform_stamped_ptr->transform.rotation.w = 1.0;
    return false;
  }
  return true;
}

After the TF to the map frame is obtained, the initial pose is converted into the map frame via tf2::doTransform. (This part of the demo will be extended later.)

Initialize map

The target point cloud for NDT registration is the point cloud map built in advance with SC-LEGO-LOAM. A subscriber listens for the point cloud map message published by the map_loader node and runs the following callback:

map_points_sub_ = nh_.subscribe("points_map", 1, &NdtLocalizer::callback_pointsmap, this);

void NdtLocalizer::callback_pointsmap(
  const sensor_msgs::PointCloud2::ConstPtr & map_points_msg_ptr)
{
  const auto trans_epsilon = ndt_.getTransformationEpsilon();
  const auto step_size = ndt_.getStepSize();
  const auto resolution = ndt_.getResolution();
  const auto max_iterations = ndt_.getMaximumIterations();

  pcl::NormalDistributionsTransform<pcl::PointXYZ, pcl::PointXYZ> ndt_new;

  ndt_new.setTransformationEpsilon(trans_epsilon);
  ndt_new.setStepSize(step_size);
  ndt_new.setResolution(resolution);
  ndt_new.setMaximumIterations(max_iterations);

  pcl::PointCloud<pcl::PointXYZ>::Ptr map_points_ptr(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::fromROSMsg(*map_points_msg_ptr, *map_points_ptr);
  ndt_new.setInputTarget(map_points_ptr);

  // dummy alignment with the identity as initial guess (the result is not used)
  pcl::PointCloud<pcl::PointXYZ>::Ptr output_cloud(new pcl::PointCloud<pcl::PointXYZ>);
  ndt_new.align(*output_cloud, Eigen::Matrix4f::Identity());

  // swap in the new NDT object under the mutex
  ndt_map_mtx_.lock();
  ndt_ = ndt_new;
  ndt_map_mtx_.unlock();
}

The key step is ndt_new.setInputTarget(map_points_ptr): once the point cloud map arrives, it becomes NDT's target cloud. The basic NDT parameters set here are:

  • ndt_new.setTransformationEpsilon(trans_epsilon): the minimum transformation change; the search stops once the change per iteration falls below it;
  • ndt_new.setStepSize(step_size): the maximum step length of the line search;
  • ndt_new.setResolution(resolution): the side length, in meters, of the ND voxels of the target point cloud;
  • ndt_new.setMaximumIterations(max_iterations): the maximum number of Newton iterations.
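For reference, a typical set of values could look like the sketch below; these numbers are illustrative, not this project's actual configuration (which is read from ROS parameters):

#include <pcl/point_types.h>
#include <pcl/registration/ndt.h>

// Illustrative NDT parameters; tune them for your own map and lidar.
void configureNdt(pcl::NormalDistributionsTransform<pcl::PointXYZ, pcl::PointXYZ>& ndt)
{
  ndt.setTransformationEpsilon(0.01);  // stop once the change per iteration drops below 0.01
  ndt.setStepSize(0.1);                // maximum line-search step length, in meters
  ndt.setResolution(2.0);              // 2 m ND voxels for the target (map) cloud
  ndt.setMaximumIterations(30);        // upper bound on Newton iterations
}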

NDT registration positioning

Point cloud registration and positioning is mainly implemented in the following callbacks:

void NdtLocalizer::callback_pointcloud(
  const sensor_msgs::PointCloud2::ConstPtr & sensor_points_sensorTF_msg_ptr)

This callback receives the downsampled point cloud and first parses the PointCloud2 message into pcl's PointCloud structure:

const std::string sensor_frame = sensor_points_sensorTF_msg_ptr->header.frame_id;
const auto sensor_ros_time = sensor_points_sensorTF_msg_ptr->header.stamp;

boost::shared_ptr<pcl::PointCloud<pcl::PointXYZ>> sensor_points_sensorTF_ptr(
  new pcl::PointCloud<pcl::PointXYZ>);
// parse the ROS message into a pcl cloud
pcl::fromROSMsg(*sensor_points_sensorTF_msg_ptr, *sensor_points_sensorTF_ptr);

The point cloud is in the lidar frame, so the data is then transformed into the base_link frame:

geometry_msgs::TransformStamped::Ptr TF_base_to_sensor_ptr(new geometry_msgs::TransformStamped);
get_transform(base_frame_, sensor_frame, TF_base_to_sensor_ptr);

const Eigen::Affine3d base_to_sensor_affine = tf2::transformToEigen(*TF_base_to_sensor_ptr);
const Eigen::Matrix4f base_to_sensor_matrix = base_to_sensor_affine.matrix().cast<float>();

boost::shared_ptr<pcl::PointCloud<pcl::PointXYZ>> sensor_points_baselinkTF_ptr(
  new pcl::PointCloud<pcl::PointXYZ>);
pcl::transformPointCloud(
  *sensor_points_sensorTF_ptr, *sensor_points_baselinkTF_ptr, base_to_sensor_matrix);

Set as input point cloud for NDT:

ndt_.setInputSource(sensor_points_baselinkTF_ptr);

if (ndt_.getInputTarget() == nullptr) {
  ROS_WARN_STREAM_THROTTLE(1, "No MAP!");
  return;
}

Finally, the initial pose estimate for this registration must be set; there are two cases:

Eigen::Matrix4f initial_pose_matrix;
if (!init_pose){
  Eigen::Affine3d initial_pose_affine;
  tf2::fromMsg(initial_pose_cov_msg_.pose.pose, initial_pose_affine);
  initial_pose_matrix = initial_pose_affine.matrix().cast<float>();
  // for the first time, we don't know pre_trans, so just use the initial pose,
  // which means the delta trans for the second time is 0
  pre_trans = initial_pose_matrix;
  init_pose = true;
} else {
  // use the predicted pose as the initial guess (currently only a linear model)
  initial_pose_matrix = pre_trans * delta_trans;
}

For the first registration, the initial pose specified manually in Rviz is used; afterwards, the initial estimate is predicted by a linear (constant-motion) model. The NDT implemented in pcl requires the initial pose estimate as an Eigen::Matrix4f (a standard homogeneous transformation matrix), so on the first registration the Pose message is converted to Eigen::Matrix4f with tf2::fromMsg(). For subsequent registrations, the idea is to combine the last NDT result (transformation matrix pre_trans) with the last relative motion (transformation matrix delta_trans). In terms of transforms: if AB describes the last localization (the transform from base_link to the map origin, i.e., pre_trans) and BC describes the motion from the last localization to the current one (i.e., delta_trans), then the current localization AC is

AC = AB * BC

so the initial pose estimate for the current registration can be expressed as pre_trans * delta_trans.
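A self-contained demonstration of this composition with Eigen (the values are made up):

#include <Eigen/Dense>
#include <iostream>

int main()
{
  // AB: last localization result, base_link -> map (pre_trans)
  Eigen::Matrix4f pre_trans = Eigen::Matrix4f::Identity();
  pre_trans(0, 3) = 10.0f;   // vehicle was 10 m along map x

  // BC: relative motion between the last two results (delta_trans)
  Eigen::Matrix4f delta_trans = Eigen::Matrix4f::Identity();
  delta_trans(0, 3) = 0.5f;  // moved 0.5 m forward between scans

  // AC: predicted initial guess for the next registration
  const Eigen::Matrix4f initial_guess = pre_trans * delta_trans;
  std::cout << "predicted x = " << initial_guess(0, 3) << std::endl;  // 10.5
  return 0;
}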

Then we set this initial estimate and use ndt for registration:

ndt_.align(*output_cloud, initial_pose_matrix);

This yields the localization transform result_pose_matrix (the transform from base_link to map):

const Eigen::Matrix4f result_pose_matrix = ndt_.getFinalTransformation();

Convert it into a Pose message and publish it, along with the corresponding TF, to complete this localization cycle:

// publish
// (result_pose_msg is result_pose_matrix converted to a geometry_msgs::Pose)
geometry_msgs::PoseStamped result_pose_stamped_msg;
result_pose_stamped_msg.header.stamp = sensor_ros_time;
result_pose_stamped_msg.header.frame_id = map_frame_;
result_pose_stamped_msg.pose = result_pose_msg;

if (is_converged) {
  ndt_pose_pub_.publish(result_pose_stamped_msg);
}

// publish tf (map frame to base frame)
publish_tf(map_frame_, base_frame_, result_pose_stamped_msg);
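publish_tf() is a project helper; a plausible sketch using tf2_ros, assuming tf2_broadcaster_ is a tf2_ros::TransformBroadcaster member (the actual project code may differ):

#include <geometry_msgs/TransformStamped.h>
#include <tf2_ros/transform_broadcaster.h>

// Hypothetical sketch: broadcast the pose as a TF from frame_id to child_frame_id.
void NdtLocalizer::publish_tf(
  const std::string & frame_id, const std::string & child_frame_id,
  const geometry_msgs::PoseStamped & pose_msg)
{
  geometry_msgs::TransformStamped t;
  t.header.stamp = pose_msg.header.stamp;
  t.header.frame_id = frame_id;
  t.child_frame_id = child_frame_id;
  t.transform.translation.x = pose_msg.pose.position.x;
  t.transform.translation.y = pose_msg.pose.position.y;
  t.transform.translation.z = pose_msg.pose.position.z;
  t.transform.rotation = pose_msg.pose.orientation;
  tf2_broadcaster_.sendTransform(t);
}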

In addition, we also need to calculate the next delta_trans for initial pose estimation:

// calculate the delta tf from pre_trans to current_trans
delta_trans = pre_trans.inverse() * result_pose_matrix;

pre_trans = result_pose_matrix;

delta_trans is the relative transform (translation and rotation) between the current and previous results. Finally, the current transform is saved as pre_trans for the next initial pose estimate. This completes the NDT registration and localization cycle.

Practice

Open a terminal:

source devel/setup.bash
roslaunch ndt_localizer ndt_localizer.launch

The matching time printed in real time is around 8 ms, which meets the real-time requirement for registration-based localization in autonomous driving; the number of matching iterations and the transformation matrix are also printed in real time.


With rostopic echo /ndt_pose, you can watch the position and orientation output on the topic in real time.
