5. How to use ORB-SLAM3 to generate 2D/3D occupancy grid maps for robot/drone navigation, taking octomap as an example

1 Octomap installation and official documentation

        Here we can use the installation method that comes with ROS:

sudo apt install ros-melodic-octomap-msgs ros-melodic-octomap-ros ros-melodic-octomap-rviz-plugins ros-melodic-octomap-server

        If the output looks like the above, the installation succeeded.

        If the installation fails, try switching your package source with the Xiaoyu ROS tool and then installing again.

        The official documentation is linked below; if you are interested, it is worth studying: https://octomap.github.io/octomap/doc/index.html#gettingstarted_sec
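
        To get a feel for what octomap provides before wiring it into ROS, here is a minimal standalone sketch along the lines of the getting-started guide (the file name and coordinate values are mine, not from the original post):

    #include <iostream>
    #include <octomap/octomap.h>

    int main()
    {
        // Create an octree with 5 cm resolution (the same value we will use in the launch file)
        octomap::OcTree tree(0.05);

        // Integrate one occupied and one free measurement
        tree.updateNode(octomap::point3d(1.0f, 0.5f, 0.2f), true);   // occupied
        tree.updateNode(octomap::point3d(2.0f, 0.0f, 0.0f), false);  // free

        // Query a cell
        octomap::OcTreeNode* node = tree.search(octomap::point3d(1.0f, 0.5f, 0.2f));
        if (node)
            std::cout << "occupancy probability: " << node->getOccupancy() << std::endl;

        // Save the tree to disk; the .bt file can be inspected with octovis
        tree.writeBinary("simple_tree.bt");
        return 0;
    }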

2 How to use the map points generated by ORB-SLAM3 to build a navigation-ready occupancy grid map with octomap

2.1 Writing the octomap node launch file

        After installing octomap, we create a launch file. Its structure is essentially fixed; let me explain what each parameter of the file means.

        We create a slam.launch file:

<launch>

    <!-- Octomap Server Node -->
    <node pkg="octomap_server" type="octomap_server_node" name="octomap_server">
        <param name="resolution" value="0.05" />
        <param name="frame_id" type="string" value="/orb_cam_link" />
        <param name="sensor_model/max_range" value="5.0" />
        <remap from="cloud_in" to="/ORB_SLAM3/Point_Clouds" />
        <param name="latch" value="true" />
        <param name="pointcloud_min_z" type="double" value="-1.5" />
        <param name="pointcloud_max_z" type="double" value="10" />
        <param name="occupancy_min_z" type="double" value="0.1" />
        <param name="occupancy_max_z" type="double" value="2" />
        <param name="height_map" type="bool" value="false" />
        <param name="colored_map" value="true" />
    </node>
    <!-- Static transform: parent /orb_cam_link, child /pointCloud -->
    <node pkg="tf" type="static_transform_publisher" name="orb_cam_link" args="0 0 0.15 0 0 0 /orb_cam_link /pointCloud 70" />
    <!-- rviz -->
    <node pkg="rviz" type="rviz" name="rviz" args="-d $(find akm_pnc)/rviz/grid.rviz" />
</launch>

        This launch file defines the startup and configuration of three things: the Octomap Server node, a static TF transform publisher, and the RViz visualization tool.

Let me explain some of the key parts:

1. **Octomap Server Node**:
    - `pkg="octomap_server"` and `type="octomap_server_node"` specify the package and the executable to run.
    - `name="octomap_server"` defines the name of the node.
    - Each `param` tag sets one parameter of the Octomap Server:
        - `resolution` sets the map resolution to 0.05 m.
        - `frame_id` sets the coordinate frame of the map to `/orb_cam_link`.
        - `sensor_model/max_range` limits the sensor model to a maximum range of 5.0 m.
        - `latch` is set to `true`, meaning the map topics are published latched: a subscriber that connects later still immediately receives the last published map.
        - `pointcloud_min_z` / `pointcloud_max_z` limit the height range of input points that are inserted, while `occupancy_min_z` / `occupancy_max_z` limit the height band that is considered occupied in the final map.
        - `colored_map` sets whether the map stores per-voxel color information.

2. **Static TF transform publisher**:
    - The `static_transform_publisher` node publishes a fixed TF transform.
    - `name="orb_cam_link"` defines the name of the publishing node.
    - `args` holds the transform parameters: translation (0, 0, 0.15), rotation (yaw, pitch, roll all 0), the parent frame `/orb_cam_link`, the child frame `/pointCloud`, and the publishing period in milliseconds (70).
 
3. **RViz**:
    - The last node starts RViz and loads the configuration file `grid.rviz` from the `akm_pnc` package.

Overall, this launch file configures the Octomap Server that builds the map (sensor model, map resolution, and so on), publishes the static TF transform it needs, and finally launches RViz to visualize the map and related data.
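
To see how the `<param>` tags reach the node, recall that they end up on the node's private parameter server. The following is not octomap_server's actual source, just a generic sketch of how any ROS node reads such parameters:

    #include <ros/ros.h>
    #include <string>

    int main(int argc, char** argv)
    {
        ros::init(argc, argv, "param_demo");
        ros::NodeHandle private_nh("~");  // "~" = the node's private namespace, where <param> tags land

        double resolution, max_range;
        std::string frame_id;
        private_nh.param("resolution", resolution, 0.05);
        private_nh.param<std::string>("frame_id", frame_id, "map");
        private_nh.param("sensor_model/max_range", max_range, -1.0);

        ROS_INFO("resolution=%.2f frame_id=%s max_range=%.1f",
                 resolution, frame_id.c_str(), max_range);
        return 0;
    }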

        Note: there are two very important parameters here!

        The first is the topic remap: put your own point cloud topic after `to=`:

        <remap from="cloud_in" to="/ORB_SLAM3/Point_Clouds" />

        The second is `frame_id`. The ROS wiki describes it as the static, global frame in which the map will be published; when building the map dynamically, the transform from the sensor data frame to this frame must be available. In other words, the map is published in a fixed global frame, and while the map is being built, the transform between the sensor data and that global frame must be obtainable.

octomap_server - ROS Wiki: http://wiki.ros.org/octomap_server

        <param name="frame_id" type="string" value="/orb_cam_link" />
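
        If octomap_server cannot look up the transform from your cloud's frame to this frame_id, no map will be built. You can check transform availability yourself with a tiny node like this (an illustrative sketch, not part of octomap_server; with our static_transform_publisher running it should report the 0.15 m z offset):

    #include <ros/ros.h>
    #include <tf/transform_listener.h>

    int main(int argc, char** argv)
    {
        ros::init(argc, argv, "tf_check");
        ros::NodeHandle nh;
        tf::TransformListener listener;

        ros::Rate rate(1.0);
        while (nh.ok())
        {
            tf::StampedTransform transform;
            try
            {
                // The same lookup octomap_server must be able to perform:
                // target = the map frame_id, source = the incoming cloud's frame
                listener.lookupTransform("/orb_cam_link", "/pointCloud",
                                         ros::Time(0), transform);
                ROS_INFO("transform available, z offset = %.2f", transform.getOrigin().z());
            }
            catch (tf::TransformException& ex)
            {
                ROS_WARN("%s", ex.what());
            }
            rate.sleep();
        }
        return 0;
    }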

        Let’s take a look at how to modify the ORB-SLAM3 part!

2.2 ORB-SLAM3 publishes the data for the occupancy grid map

2.2.1 Understanding the coordinate frames /orb_cam_link and /odom

        We run the simulation and will command the robot to move forward.

        This is the initial state:

        The current fixed frame is orb_cam_link. We command the simulation to move forward a certain distance.

        We can see that part of the occupancy grid map has been generated. The green line with the trailing tail is our trajectory; its topic is /RGBD/Path.

        But what if we switch the fixed frame to odom? It stays fixed at the starting point.

        Therefore, the Tcw estimated by ORB-SLAM3 is in fact the transform between the odom (world) frame and the orb_cam_link (camera) frame: it maps points from the odom frame into the camera frame.

        The track_point and all_point topics are the currently tracked map points and all map points, i.e. the colored and white parts in the picture above.
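
        To make the relationship concrete, here is a small self-contained sketch (the pose values are arbitrary) of how the camera position in the odom frame falls out of Tcw:

    #include <iostream>
    #include <Eigen/Geometry>

    int main()
    {
        // Suppose this is the Tcw estimated by ORB-SLAM3: it maps points
        // from the odom (world) frame into the camera frame
        Eigen::Isometry3d Tcw = Eigen::Isometry3d::Identity();
        Tcw.rotate(Eigen::AngleAxisd(0.1, Eigen::Vector3d::UnitY()));
        Tcw.pretranslate(Eigen::Vector3d(0.0, 0.0, -1.0));

        // The camera pose in odom is the inverse
        Eigen::Isometry3d Twc = Tcw.inverse();

        // Twc.translation() is the camera position in odom -- exactly the kind
        // of point that gets appended to the /RGBD/Path trajectory
        std::cout << "camera position in odom: "
                  << Twc.translation().transpose() << std::endl;
        return 0;
    }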

2.2.2 Dense mapping code in detail: how to send the dense point cloud to octomap

        <remap from="cloud_in" to="/ORB_SLAM3/Point_Clouds" />

        Here octomap_server receives the point cloud on the /ORB_SLAM3/Point_Clouds topic, so we have to feed it the dense reconstruction as input.

        We add two new publishers to the dense mapping thread:

        pclPoint_pub = n.advertise<sensor_msgs::PointCloud2>("/ORB_SLAM3/Point_Clouds",1000000);
        pclPoint_local_pub = n.advertise<sensor_msgs::PointCloud2>("/ORB_SLAM3/Point_local_Clouds",1000000);

        We accumulate the dense point cloud of all keyframes and hand it to octomap:

    /**
     * @brief Generate a point cloud from a keyframe
     * @param kf     the keyframe
     * @param imRGB  the color image
     * @param imD    the depth image
     * @param pose   the keyframe pose Tcw
     * @return the (camera-frame) point cloud of this keyframe
     */
    pcl::PointCloud< PointCloudMapping::PointT >::Ptr PointCloudMapping::generatePointCloud(KeyFrame *kf, const cv::Mat& imRGB, const cv::Mat& imD, const cv::Mat& pose)
    {
        PointCloud::Ptr current(new PointCloud);
        PointCloud::Ptr loop_points(new PointCloud);
        // Sample every 3rd pixel and back-project it with the pinhole model:
        // z = depth, x = (u - cx) * z / fx, y = (v - cy) * z / fy
        for(int v = 0; v < imRGB.rows; v += 3){
            for(int u = 0; u < imRGB.cols; u += 3){
                float d = imD.ptr<float>(v)[u];
                // Discard invalid or unreliable depths
                if(d < 0.1 || d > 15)
                    continue;
                PointT p;
                p.z = d;
                p.x = (u - mCx) * p.z / mFx;
                p.y = (v - mCy) * p.z / mFy;
                // OpenCV stores pixels as BGR
                p.b = imRGB.ptr<uchar>(v)[u*3];
                p.g = imRGB.ptr<uchar>(v)[u*3+1];
                p.r = imRGB.ptr<uchar>(v)[u*3+2];
                current->points.push_back(p);
                loop_points->points.push_back(p);
            }
        }
        Eigen::Isometry3d T = Converter::toSE3Quat(pose);
        PointCloud::Ptr tmp(new PointCloud);
        // tmp is the cloud transformed into the world frame (Twc = Tcw^-1)
        pcl::transformPointCloud(*current, *tmp, T.inverse().matrix());

        // Statistical outlier removal on the new points
        statistical_filter.setInputCloud(tmp);
        statistical_filter.filter(*current);
        (*mPointCloud) += *current;

        // After adding the new points, voxel-filter the whole map to keep it sparse
        voxel.setInputCloud(mPointCloud);
        voxel.filter(*tmp);
        mPointCloud->swap(*tmp);
        mPointCloud->is_dense = false;
        return loop_points;
    }
    /**
     * @brief Publish and display the point cloud
     */
    void PointCloudMapping::NormalshowPointCloud()
    {
        // 0. What the PointCloude structure contains:
        // typedef pcl::PointXYZRGBA PointT;
        // typedef pcl::PointCloud<PointT> PointCloud;
        // PointCloud::Ptr pcE;   // the cloud itself
        // Eigen::Isometry3d T;   // the keyframe pose
        // int pcID;              // the keyframe ID
        PointCloude pointcloude;
        ros::NodeHandle n;
        pclPoint_pub = n.advertise<sensor_msgs::PointCloud2>("/ORB_SLAM3/Point_Clouds", 1000000);
        pclPoint_local_pub = n.advertise<sensor_msgs::PointCloud2>("/ORB_SLAM3/Point_local_Clouds", 1000000);
        ros::Rate r(5);
        // Runs for the whole lifetime of the thread
        while(true)
        {
            KeyFrame* kf;
            cv::Mat colorImg, depthImg;
            {
                std::unique_lock<std::mutex> locker(mKeyFrameMtx);
                // 1. If there is no keyframe yet (tracking has not started), wait for one to arrive
                while(mvKeyFrames.empty() && !mbShutdown)
                {
                    mKeyFrameUpdatedCond.wait(locker);
                }
                {
                    unique_lock<mutex> lck(keyframeMutex);
                }
                // 2. Rebuild the cloud after a loop closure (the logic here is questionable)
                if(lastKeyframeSize == LoopKfId)
                    updatecloud();
                if (!(mvDepthImgs.size() == mvColorImgs.size() && mvKeyFrames.size() == mvColorImgs.size())) {
                    std::cout << RED << "This should never happen!" << std::endl;
                    continue;
                }

                if (mbShutdown && mvColorImgs.empty() && mvDepthImgs.empty() && mvKeyFrames.empty()) {
                    break;
                }

                // 3. Pop the data we need to process
                kf = mvKeyFrames.front();
                colorImg = mvColorImgs.front();
                depthImg = mvDepthImgs.front();
                mvKeyFrames.pop();
                mvColorImgs.pop();
                mvDepthImgs.pop();
            }

            // Cache the camera intrinsics from the first keyframe
            if (mCx == 0 || mCy == 0 || mFx == 0 || mFy == 0)
            {
                mCx = kf->cx;
                mCy = kf->cy;
                mFx = kf->fx;
                mFy = kf->fy;
            }

            {
                std::unique_lock<std::mutex> locker(mPointCloudMtx);
                // 4. Get the keyframe pose
                cv::Mat mTcw_Mat = kf->GetPoseMat();
                // 5. pcE holds the point cloud generated from this keyframe
                pointcloude.pcE = generatePointCloud(kf, colorImg, depthImg, mTcw_Mat);
                // 6. Store the keyframe ID
                pointcloude.pcID = kf->mnId;
                // 7. Store the keyframe pose
                pointcloude.T = ORB_SLAM3::Converter::toSE3Quat(mTcw_Mat);

                pointcloud.push_back(pointcloude);
                if(pointcloude.pcE->empty())
                    continue;

                // 8. The point cloud of this keyframe only
                pcl_cloud_local_kf = *pointcloude.pcE;
                // 9. The accumulated cloud of all keyframes
                pcl_cloud_kf = *mPointCloud;

                // 10. Rotate into the ROS coordinate convention
                Cloud_transform(pcl_cloud_local_kf, pcl_local_filter);
                Cloud_transform(pcl_cloud_kf, pcl_filter);

                // 11. Convert to ROS PointCloud2 messages
                pcl::toROSMsg(pcl_local_filter, pcl_local_point);
                // This is the cloud that goes to octomap
                pcl::toROSMsg(pcl_filter, pcl_point);

                // 12. Publish: pclPoint_pub is /ORB_SLAM3/Point_Clouds,
                //     pclPoint_local_pub is /ORB_SLAM3/Point_local_Clouds
                pcl_local_point.header.frame_id = "/pointCloud_local";
                pcl_point.header.frame_id = "/pointCloud";
                pclPoint_local_pub.publish(pcl_local_point);
                // Publish to octomap
                pclPoint_pub.publish(pcl_point);
                std::cout << YELLOW << "show point cloud, size=" << mPointCloud->points.size() << std::endl;
                lastKeyframeSize++;
            }
        }
        // On shutdown, save the accumulated cloud to disk
        if(!mPointCloud->empty())
        {
            string save_path = "./VSLAMRGBD.pcd";
            pcl::io::savePCDFile(save_path, *mPointCloud);
            cout << GREEN << "save pcd files to: " << save_path << endl;
        }
        mbFinish = true;
    }
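
        One function above that is not shown is Cloud_transform. In this pipeline its job is to rotate the cloud from the camera optical convention (x right, y down, z forward) into the ROS convention (x forward, y left, z up); a minimal sketch under that assumption (not necessarily the author's exact implementation) could look like this:

    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <pcl/common/transforms.h>

    typedef pcl::PointXYZRGBA PointT;

    void Cloud_transform(pcl::PointCloud<PointT>& source, pcl::PointCloud<PointT>& out)
    {
        Eigen::Matrix4f m;
        m <<  0.f,  0.f, 1.f, 0.f,   // x_ros =  z_cam
             -1.f,  0.f, 0.f, 0.f,   // y_ros = -x_cam
              0.f, -1.f, 0.f, 0.f,   // z_ros = -y_cam
              0.f,  0.f, 0.f, 1.f;
        pcl::transformPointCloud(source, out, m);
    }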

        Adapt this to your own scene and run it (the same applies to lidar), and the occupancy grid map is built:

        Then we call map_saver to save the 2D projected map that octomap_server publishes on /projected_map:

rosrun map_server map_saver map:=/projected_map

        The saved navigation map (map.pgm plus map.yaml) appears in the current working directory!
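
        The projected 2D map that map_saver reads is an ordinary nav_msgs/OccupancyGrid, so any planner can consume it directly. As a quick illustration (my own sketch, not part of the post's code), a node that inspects the live map could look like this:

    #include <ros/ros.h>
    #include <nav_msgs/OccupancyGrid.h>

    // Print basic statistics about the 2D map octomap_server publishes on /projected_map
    void mapCallback(const nav_msgs::OccupancyGrid::ConstPtr& map)
    {
        // data[] holds one cell per entry: 0 = free, 100 = occupied, -1 = unknown
        int occupied = 0;
        for (size_t i = 0; i < map->data.size(); ++i)
            if (map->data[i] == 100) ++occupied;
        ROS_INFO("map: %u x %u cells at %.2f m/cell, %d occupied",
                 map->info.width, map->info.height, map->info.resolution, occupied);
    }

    int main(int argc, char** argv)
    {
        ros::init(argc, argv, "grid_map_reader");
        ros::NodeHandle nh;
        ros::Subscriber sub = nh.subscribe("/projected_map", 1, mapCallback);
        ros::spin();
        return 0;
    }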

Original article: https://blog.csdn.net/qq_41694024/article/details/134692981