SLAM project: implementing SLAM back-end optimization from scratch in C++ and visualizing the results in Gazebo

The back end is a very important part of any SLAM system. It generally carries the heaviest computational load in the whole pipeline, so its computation time is usually relatively long. To keep the system real-time, the back end is often run in its own thread, and during back-end optimization, tricks such as optimizing only poses (and not landmarks) are used to keep the problem size under control. This article uses C++ to add a back end to the SLAM system we implemented earlier, so that the front-end pose gets optimized here and we can build a more accurate map.

The following articles walk you step by step through implementing a complete SLAM system (without calling existing libraries or copying open-source frameworks), which can serve as a good resume project.

Front end:

SLAM project: Reproduce the 2D laser odometry from scratch, and use your own radar or gazebo to run it, explaining the principle and code implementation process in detail_Soumes' Blog-CSDN Blog

Back end:

SLAM project: Starting from scratch, C++ implements SLAM back-end optimization and visualizes the results in gazebo_Soumes' blog-CSDN blog

Mapping:

SLAM project: Reproduce a complete set of two-dimensional laser SLAM algorithms from scratch_Soumes' blog-CSDN blog

As usual, let's show the result before we start:

As you can see, after adding the back end, mapping remains very stable even in complex environments, and the robot obtains accurate poses while moving quickly. However, because the back end has been added and multi-threading has not been written yet, the overall speed dropped: one computation now takes 0.09 s. Our lidar runs at 10 Hz, and 0.09 s means the computation still finishes within the period of one frame of lidar data, so there is no data backlog or dropped frames, and real-time operation is preserved. (Of course, this is much slower than the previous 0.02 s, which shows that the back end really does carry a large amount of computation.)

The following is a brief introduction to the principle of the back-end implementation.

The back end implemented here is based on the idea of optimization: an accurate pose is obtained through a precise scan-to-map match, and this is fused with the initial pose from the front end's scan-to-scan match to produce an optimized pose. The overall process can be divided into three steps.

The first step is to create a local map

The first step is to build a local map (submap) from the lidar data. This map serves as the reference map in the scan-to-map match; it is different from the map we publish globally. Its lifetime is very short: it is destroyed once the scan-to-map match completes. Keeping fewer maps saves resources, and since the robot is moving, the local map changes constantly; keeping it around for a long time would hurt the next match at a new location. (If you want to optimize further, consider replacing the local submap with a sliding window, which incorporates more environmental information while still controlling resources, but the implementation is correspondingly harder.)

The code is as follows:

#ifndef SUBMAP_BACKEND
#define SUBMAP_BACKEND
#include <vector>
#include <ros/ros.h>
#include <sensor_msgs/LaserScan.h>
#include <nav_msgs/OccupancyGrid.h>
#include <cmath>
struct Point_sub {
    int x;
    int y;
};


class submap_creater
{
public:
    submap_creater();

    // Bresenham line rasterization between two grid cells
    std::vector<Point_sub> Bresenham_sub(int x0, int y0, int x1, int y1);

    // Take one frame of lidar data, build the local submap, and return it
    nav_msgs::OccupancyGrid CreateSubMap(const sensor_msgs::LaserScan::ConstPtr &scan_msg, double r_x, double r_y, double r_theta);
private:
    // map parameters
    double resolution_sub;
    int map_width_sub;
    int map_height_sub;
    double map_origin_x_sub;
    double map_origin_y_sub;
    double occupied_threshold_sub;
    double free_threshold_sub;
    double angle_robot_sub;
    double x_robot_sub;
    double y_robot_sub;
    int grid_x_sub;
    int grid_y_sub;
    int robot_location_x_sub;
    int robot_location_y_sub;
    double temp_angle_increment_sub;
    double temp_angle_min_sub;
    std::vector<std::vector<int>> grid_map_sub;
    nav_msgs::OccupancyGrid map_msg_sub;
};

#endif 

The .cc file is too long, so I'll just show the .h here (no one reads it anyway =.=). The core function CreateSubMap builds the submap. Since this submap is only used for matching, there is no need to publish it as a map; just return the map message map_msg_sub. This message will later be used as the reference for the scan-to-map match.
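To make the ray-casting inside CreateSubMap concrete, here is a minimal, self-contained sketch of Bresenham rasterization and beam marking. The grid representation and the helper name RasterizeBeam are simplified assumptions for illustration, not the exact .cc code:

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>
#include <vector>

struct PointSub { int x; int y; };

// Bresenham line rasterization: returns every grid cell on the segment
// from (x0, y0) to (x1, y1), endpoints included.
std::vector<PointSub> BresenhamSub(int x0, int y0, int x1, int y1) {
    std::vector<PointSub> cells;
    int dx = std::abs(x1 - x0), dy = std::abs(y1 - y0);
    int sx = (x0 < x1) ? 1 : -1, sy = (y0 < y1) ? 1 : -1;
    int err = dx - dy;
    while (true) {
        cells.push_back({x0, y0});
        if (x0 == x1 && y0 == y1) break;
        int e2 = 2 * err;
        if (e2 > -dy) { err -= dy; x0 += sx; }
        if (e2 <  dx) { err += dx; y0 += sy; }
    }
    return cells;
}

// Cast one beam into the submap: cells along the ray become free (0),
// the endpoint becomes occupied (100). Unknown cells stay -1.
void RasterizeBeam(std::vector<std::vector<int>>& grid,
                   int rx, int ry, int hx, int hy) {
    for (const auto& c : BresenhamSub(rx, ry, hx, hy))
        grid[c.y][c.x] = 0;   // free space along the beam
    grid[hy][hx] = 100;       // obstacle at the hit point
}
```

In the real class, one such beam is cast per valid lidar ray, from the robot's cell to the cell of each range reading.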

The second step is to implement scan-to-map matching

Over the long history of SLAM, countless scan-to-map matching methods have appeared. Since Hector SLAM introduced this style of matching in 2011, later systems such as Cartographer and LOAM, and even visual systems like ORB-SLAM and OpenVINS, have all used scan-to-map matching. Compared with scan-to-scan, this method is more accurate but computationally heavier. Since we use scan-to-map in the back end, accuracy is what we are after, so we no longer deliberately compress the problem size; we only remove invalid points and match directly. Given one frame of lidar data, the robot pose, and the local submap from the previous step, we obtain a very accurate pose.

The code is as follows:

#ifndef MATCHER_BACKEND
#define MATCHER_BACKEND
#include <ros/ros.h>
#include <sensor_msgs/LaserScan.h>
#include <pcl/point_types.h>
#include <pcl/registration/icp.h>
#include <nav_msgs/OccupancyGrid.h>
// Scan matching: produce a refined pose from the scan and the submap
class Backend {
public:
    
    std::vector<double> Backend_match(const sensor_msgs::LaserScan::ConstPtr& scan_msg, nav_msgs::OccupancyGrid map_msg);
    //Backend();
private:

    bool is_map_available;
 
};
#endif 

Through the core function Backend_match, it takes about 0.07 s to compute a pose estimate accurate to... well, quite accurate anyway. Note that several pointers are created during this process. If you are going to work with pointers, you must manage the memory carefully, initializing and releasing them properly; don't just write = nullptr, or you will keep hitting memory errors that are very painful to troubleshoot. A simple and crude alternative is to drop all the ConstPtr types and use plain variables instead of pointers.
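The header above pulls in PCL's ICP, which the actual Backend_match uses against the submap. As a hypothetical, simplified illustration of the scan-to-map idea itself, here is a grid-scoring sketch: a candidate pose is scored by how many transformed scan endpoints land on occupied submap cells, and a search over small pose perturbations keeps the best score. All names and the scoring scheme here are my own assumptions, not the post's implementation:

```cpp
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

// Minimal occupancy grid stand-in (row-major cells, 100 = occupied).
struct Grid {
    int width, height;
    double resolution;          // meters per cell
    std::vector<int> cells;
    int At(int x, int y) const {
        if (x < 0 || y < 0 || x >= width || y >= height) return -1;
        return cells[y * width + x];
    }
};

// Score a candidate pose (px, py, ptheta) by counting how many scan
// endpoints (given in the laser frame, meters) hit occupied cells
// after being transformed into the map frame.
int ScorePose(const Grid& map,
              const std::vector<std::pair<double, double>>& scan,
              double px, double py, double ptheta) {
    int score = 0;
    double c = std::cos(ptheta), s = std::sin(ptheta);
    for (const auto& p : scan) {
        double wx = px + c * p.first - s * p.second;  // laser -> map frame
        double wy = py + s * p.first + c * p.second;
        int gx = static_cast<int>(wx / map.resolution);
        int gy = static_cast<int>(wy / map.resolution);
        if (map.At(gx, gy) == 100) ++score;           // endpoint on obstacle
    }
    return score;
}
```

A correlative matcher would evaluate ScorePose over a small window of (x, y, theta) offsets around the front-end pose and return the argmax; ICP instead iteratively minimizes point-to-point distances, but the goal is the same refined pose.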

The third step is to fuse the front-end scan-to-scan pose with the back-end scan-to-map pose.

This step is critical. Both the front end and the back end produce a pose, and only by fusing the two properly can we obtain the best pose output; badly chosen weights waste data and hurt accuracy. Here interpolation is used to fuse the two poses: spherical interpolation (slerp) for rotation and linear interpolation for translation, both with weight alpha. The point of interpolating is to find a value between the two poses; that value is the fusion result, and where it lands between them depends on alpha.

Alpha is the weight, and it is defined from the timestamps. We take the timestamp timestamp1 of the front-end pose and the timestamp timestamp2 of the back-end pose, and define alpha as a quantity between 0 and 1. The closer the two timestamps are, the closer alpha is to 1, and the more the back-end pose influences the fusion result (if the two poses were published at nearly the same time, they can be treated as computed for the same scene, so the more accurate back-end scan-to-map pose should naturally dominate). If the timestamps differ greatly, alpha is close to 0 and more weight is placed on the front-end pose. A large time gap suggests the back-end pose was published late because of its heavy computation: the front end is fast, so its pose is at least guaranteed to be based on the latest information, while a slow back end may be reporting a pose for an environment that is no longer current. In that case we trust the front-end pose more, since what we want is the pose in the latest environment, to preserve real-time behavior.

The code is as follows:

#ifndef POSE_FUSION_H
#define POSE_FUSION_H

#include <Eigen/Core>
#include <Eigen/Geometry>
#include <vector>
#include <ros/ros.h>
using namespace std;
class PoseFusion {
public:
    struct Pose {
        Eigen::Matrix3d rotation_matrix;
        Eigen::Vector3d translation_vector;
    };

    PoseFusion();

    vector<double> integratePoses(std::vector<double> pose1, std::vector<double> pose2, double time1, double time2);

    Pose double_2_pose(double x, double y, double theta);


private:
    double timestamp1_;
    double timestamp2_;
};

#endif // POSE_FUSION_H

The core function integratePoses performs the pose fusion and outputs a new fused pose. You can use this pose to build the map, and it will be more accurate than using the front-end pose alone.
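A minimal sketch of the fusion rule just described, assuming 2D poses {x, y, theta}. In 2D, slerp degenerates to interpolating the heading along the shortest arc, so the rotation part reduces to an angle lerp with wraparound. The alpha-from-timestamp mapping and the time constant tau are my own illustrative choices, not the post's exact formula:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Fuse front-end pose1 and back-end pose2 (each {x, y, theta}).
// alpha -> 1 when the timestamps are close (trust the back end),
// alpha -> 0 when they diverge (fall back to the fresher front end).
std::vector<double> IntegratePoses(const std::vector<double>& pose1,
                                   const std::vector<double>& pose2,
                                   double time1, double time2) {
    const double kPi = 3.14159265358979323846;
    const double tau = 0.1;  // assumed time scale in seconds (tuning knob)
    double alpha = std::exp(-std::fabs(time1 - time2) / tau);

    // Linear interpolation for translation.
    double x = (1.0 - alpha) * pose1[0] + alpha * pose2[0];
    double y = (1.0 - alpha) * pose1[1] + alpha * pose2[1];

    // 2D "slerp": interpolate heading along the shortest arc, so wrap
    // the angle difference into (-pi, pi] first.
    double dtheta = pose2[2] - pose1[2];
    while (dtheta >  kPi) dtheta -= 2.0 * kPi;
    while (dtheta <= -kPi) dtheta += 2.0 * kPi;
    double theta = pose1[2] + alpha * dtheta;

    return {x, y, theta};
}
```

With identical timestamps alpha is 1 and the back-end pose is returned unchanged; with a large gap the output collapses onto the front-end pose.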

Please note: the back-end matching uses the lidar scan, which lives in the laser coordinate frame, so the resulting pose is also in the laser frame. Pay attention to the coordinate transforms here and make strictly sure that all inputs and outputs are in the same frame; otherwise everything gets mixed up, you no longer know which frame a pose is in, and then you don't know where to build the map. Coordinate frame transformation is always one of the most important things in SLAM.
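As a small illustration of the frame bookkeeping meant here, this hypothetical helper moves a scan point from the laser frame into the robot base frame, given the mounting extrinsic (lx, ly, ltheta), i.e. the pose of the laser in the base frame. The function name and offsets are placeholders, not values from the post:

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Transform a point from the laser frame to the base frame:
// rotate by the mount angle, then translate by the mount offset.
std::array<double, 2> LaserPointToBase(const std::array<double, 2>& p_laser,
                                       double lx, double ly, double ltheta) {
    double c = std::cos(ltheta), s = std::sin(ltheta);
    return {
        lx + c * p_laser[0] - s * p_laser[1],
        ly + s * p_laser[0] + c * p_laser[1]
    };
}
```

The same rotate-then-translate pattern applies to every frame hop in the pipeline; doing it consistently (and only once per hop) is what keeps all poses in a known frame.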

Summary:

By combining the fine scan-to-map match with the coarse scan-to-scan match on the front end, we optimize the front-end pose and complete the back end of our SLAM system. At this point, the SLAM system we wrote ourselves is nearing completion. The final step is to add loop closure detection to the whole system, which I will publish after I have researched it.


Origin blog.csdn.net/weixin_43890835/article/details/132108409