Robot Learning Project - Project 3: Map My World

1. Overview

Welcome to Project 3: Map My World! In this project, the RTAB-Map package is used to create a 2D occupancy grid and a 3D octomap of a simulated environment with a robot. RTAB-Map (Real-Time Appearance-Based Mapping) is a popular SLAM solution for developing robots capable of mapping their environment in three dimensions. RTAB-Map has good speed and memory management, and provides custom tools for analyzing the collected data. Just as importantly, the documentation on the ROS Wiki (rtabmap_ros - ROS Wiki) is of very high quality. Learning to use RTAB-Map well lays a solid foundation for mapping and localization work.

For this project, we will use the rtabmap_ros package, a ROS wrapper (API) for interacting with RTAB-Map. Keep this in mind when reading the related documentation.

Project instructions

The project process is as follows:

1) Develop your own package to interface with the rtabmap_ros package.

2) Make the necessary changes to the previous localization project so the robot can work with RTAB-Map. One example is the addition of an RGB-D camera.

3) Make sure all files are in place, all links are connected correctly, names are set correctly, and topic remappings are correct. In addition, generate the appropriate launch files to start the robot and map its surroundings.

4) Once the robot is up, drive it around the room with teleop to generate an appropriate map of the environment.

2. Simulation settings 

Set up the catkin_ws folder and its src folder, then copy the code from the previous project (the simulation environment and the robot) into a renamed package (for example: myrobotb). In other words, create a package for launching the Gazebo world and the robot simulation. Then run catkin_make and source the devel/setup.bash script. Fire up world.launch to verify it works!

roslaunch myrobotb world.launch
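If you are starting from a clean workspace, the sequence looks roughly like this (a sketch; the package name myrobotb and the path to the old project are assumptions to adapt to your own setup):

# Create the workspace and copy in the previous project's package
mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/src
cp -r /path/to/previous_project_package myrobotb   # adjust the source path

# Update the package name in package.xml and CMakeLists.txt to match,
# then build and source the workspace
cd ~/catkin_ws
catkin_make
source devel/setup.bash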

3. RTAB-Map Package


Although ROS provides a large number of packages, integrating one requires an understanding of the package itself and how it connects to your project. The best place to start is the RTAB-Map ROS Wiki page.

According to the documentation (http://wiki.ros.org/rtabmap_ros/Tutorials/SetupOnYourRobot), the recommended robot configuration requires:

1) A 2D laser, providing sensor_msgs/LaserScan messages;

2) Odometry (wheel encoders, IMU, etc.), providing nav_msgs/Odometry messages;

3) A calibrated 3D camera compatible with the openni_launch, openni2_launch, or freenect_launch ROS packages.

But it seems we are missing a 3D camera sensor!
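With the simulation from Section 2 running, you can check what the robot currently publishes (the exact topic names depend on how your previous project was configured):

# List the sensor-related topics
rostopic list

# Inspect a topic's message type, e.g. the laser scan
rostopic info /scan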

4. Sensor upgrade

In the previous project an RGB camera was used; now it's time to upgrade the robot. Specifically, we will use a simulated Kinect camera for RTAB-Map. Upgrade the sensor as follows:

Add an optical camera link

For RGB-D cameras in URDF files, an extra link and an extra joint need to be added to the camera link so that the camera image in Gazebo aligns properly with the robot. Note that the parent link of camera_optical_joint should be set to the original camera link. Add the following joint and link to your robot's .xacro file:

<!-- The fixed optical joint rotates the camera frame into the ROS optical
     convention (z forward, x right, y down) -->
<joint name="camera_optical_joint" type="fixed">
  <origin xyz="0 0 0" rpy="-1.5707 0 -1.5707"/>
  <parent link="camera_link"/>
  <child link="camera_link_optical"/>
</joint>

<link name="camera_link_optical">
</link>

Configure RGB-D camera

To do this, we need to replace the existing camera plugin's shared object file, libgazebo_ros_camera.so, with the Kinect shared object file, libgazebo_ros_openni_kinect.so. At the same time, update <frameName> to the camera_link_optical link you just created.

Beyond that, additional parameters need to be set for the RGB-D camera and matched to the topics published by the corresponding drivers in the real world. An example of the camera code is provided below; replace the existing camera block in your robot's .gazebo file with it:

<!-- RGBD Camera -->
<gazebo reference="camera_link">
  <sensor type="depth" name="camera1">
    <always_on>1</always_on>
    <update_rate>20.0</update_rate>
    <visualize>true</visualize>
    <camera>
      <horizontal_fov>1.047</horizontal_fov>
      <image>
        <width>640</width>
        <height>480</height>
        <format>R8G8B8</format>
      </image>
      <depth_camera>
      </depth_camera>
      <clip>
        <near>0.1</near>
        <far>20</far>
      </clip>
    </camera>
    <plugin name="camera_controller" filename="libgazebo_ros_openni_kinect.so">
      <alwaysOn>true</alwaysOn>
      <updateRate>10.0</updateRate>
      <cameraName>camera</cameraName>
      <frameName>camera_link_optical</frameName>
      <imageTopicName>rgb/image_raw</imageTopicName>
      <depthImageTopicName>depth/image_raw</depthImageTopicName>
      <pointCloudTopicName>depth/points</pointCloudTopicName>
      <cameraInfoTopicName>rgb/camera_info</cameraInfoTopicName>
      <depthImageCameraInfoTopicName>depth/camera_info</depthImageCameraInfoTopicName>
      <pointCloudCutoff>0.4</pointCloudCutoff>
      <hackBaseline>0.07</hackBaseline>
      <distortionK1>0.0</distortionK1>
      <distortionK2>0.0</distortionK2>
      <distortionK3>0.0</distortionK3>
      <distortionT1>0.0</distortionT1>
      <distortionT2>0.0</distortionT2>
      <CxPrime>0.0</CxPrime>
      <Cx>0.0</Cx>
      <Cy>0.0</Cy>
      <focalLength>0.0</focalLength>
    </plugin>
  </sensor>
</gazebo>
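After rebuilding and relaunching the world, it's worth sanity-checking that the new camera topics are being published. Assuming the cameraName and topic settings shown above, the check looks like this:

# List the topics published by the Kinect plugin
rostopic list | grep camera
#   expected topics include:
#   /camera/rgb/image_raw
#   /camera/rgb/camera_info
#   /camera/depth/image_raw
#   /camera/depth/camera_info
#   /camera/depth/points

# Check that the depth image is actually streaming
rostopic hz /camera/depth/image_raw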

5. RTAB-Map Launch file


Add the RTAB-Map launch file: mapping.launch

Our mapping launch file starts the main rtabmap node, which interfaces with all the parts needed to perform SLAM with RTAB-Map. A tag template for the mapping.launch file is provided below. Create mapping.launch in the launch folder.

Read through the code and comments to understand what each part does and why. Feel free to use RTAB-Map's documentation to learn beyond this template. The task here is to remap the correct topics to the ones required by rtabmap:

  • scan
  • rgb/image
  • depth/image
  • rgb/camera_info

In the robot's URDF and .gazebo files, find the actual topics the robot publishes. Once you have the correct values, substitute them into the <arg> tags at the beginning of this launch file; the mapping node can then find all the information needed to perform RTAB-Mapping!

<?xml version="1.0" encoding="UTF-8"?>

<launch>
  <!-- Arguments for launch file with defaults provided -->
  <arg name="database_path"     default="rtabmap.db"/>
  <arg name="rgb_topic"   default="/camera/rgb/image_raw"/>
  <arg name="depth_topic" default="/camera/depth/image_raw"/>
  <arg name="camera_info_topic" default="/camera/rgb/camera_info"/>  


  <!-- Mapping Node -->
  <group ns="rtabmap">
    <node name="rtabmap" pkg="rtabmap_ros" type="rtabmap" output="screen" args="--delete_db_on_start">

      <!-- Basic RTAB-Map Parameters -->
      <param name="database_path"       type="string" value="$(arg database_path)"/>
      <param name="frame_id"            type="string" value="base_footprint"/>
      <param name="odom_frame_id"       type="string" value="odom"/>
      <param name="subscribe_depth"     type="bool"   value="true"/>
      <param name="subscribe_scan"      type="bool"   value="true"/>

      <!-- RTAB-Map Inputs -->
      <remap from="scan" to="/scan"/>
      <remap from="rgb/image" to="$(arg rgb_topic)"/>
      <remap from="depth/image" to="$(arg depth_topic)"/>
      <remap from="rgb/camera_info" to="$(arg camera_info_topic)"/>

      <!-- RTAB-Map Output -->
      <remap from="grid_map" to="/map"/>

      <!-- Rate (Hz) at which new nodes are added to map -->
      <param name="Rtabmap/DetectionRate" type="string" value="1"/>

      <!-- 2D SLAM -->
      <param name="Reg/Force3DoF" type="string" value="true"/>

      <!-- Loop Closure Detection -->
      <!-- 0=SURF 1=SIFT 2=ORB 3=FAST/FREAK 4=FAST/BRIEF 5=GFTT/FREAK 6=GFTT/BRIEF 7=BRISK 8=GFTT/ORB 9=KAZE -->
      <param name="Kp/DetectorStrategy" type="string" value="0"/>

      <!-- Maximum visual words per image (bag-of-words) -->
      <param name="Kp/MaxFeatures" type="string" value="400"/>

      <!-- Used to extract more or less SURF features -->
      <param name="SURF/HessianThreshold" type="string" value="100"/>

      <!-- Loop Closure Constraint -->
      <!-- 0=Visual, 1=ICP (1 requires scan)-->
      <param name="Reg/Strategy" type="string" value="0"/>

      <!-- Minimum visual inliers to accept loop closure -->
      <param name="Vis/MinInliers" type="string" value="15"/>

      <!-- Set to false to avoid saving data when robot is not moving -->
      <param name="Mem/NotLinkedNodesKept" type="string" value="false"/>
    </node>
  </group>
</launch>
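Once the robot and the mapping node are both running, a quick way to confirm that rtabmap is receiving all of its inputs is to inspect the node's connections (run in a separate terminal; the node name follows from the group namespace and node name in the template above):

# Show the rtabmap node's subscriptions and publications
rosnode info /rtabmap/rtabmap

# Confirm the 2D occupancy grid is being published
rostopic hz /map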

For more information, refer to the rtabmap_ros ROS Wiki page: http://wiki.ros.org/rtabmap_ros

6. RTAB-Map real-time visualization


Another tool that can be used is rtabmapviz, an additional node for real-time visualization of feature maps, loop closures, and more. Due to its computational overhead, this tool is not recommended for mapping in simulation. rtabmapviz is ideally suited to deployment on real robots during live mapping, to ensure that the features needed to complete loop closures are being captured.

If you wish to enable it for your mapping sessions, add the code snippet below to your mapping.launch file. It launches the rtabmapviz GUI and gives you real-time feature detection, loop closures, and other relevant information about the mapping process.

<!-- visualization with rtabmapviz -->
    <node pkg="rtabmap_ros" type="rtabmapviz" name="rtabmapviz" args="-d $(find rtabmap_ros)/launch/config/rgbd_gui.ini" output="screen">
        <param name="subscribe_depth"             type="bool" value="true"/>
        <param name="subscribe_scan"              type="bool" value="true"/>
        <param name="frame_id"                    type="string" value="base_footprint"/>

        <remap from="rgb/image"       to="$(arg rgb_topic)"/>
        <remap from="depth/image"     to="$(arg depth_topic)"/>
        <remap from="rgb/camera_info" to="$(arg camera_info_topic)"/>
        <remap from="scan"            to="/scan"/>
    </node>
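A sensible place for this snippet is inside the <group ns="rtabmap"> element of the template above, next to the rtabmap node, so both run in the same namespace; the $(arg ...) values resolve from the arguments declared at the top of the file either way.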

7. ROS Teleop package

In previous labs and projects, the teleop node was used to drive the robot from the keyboard. We need it here as well, so that we can navigate the robot around the environment and perform RTAB-Mapping.

Clone the teleop package into the workspace's src folder and compile! The code can be found here: GitHub - ros-teleop/teleop_twist_keyboard: Generic Keyboard Teleop for ROS (https://github.com/ros-teleop/teleop_twist_keyboard)
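Concretely, the clone-and-build step looks like this (assuming the workspace layout from Section 2):

cd ~/catkin_ws/src
git clone https://github.com/ros-teleop/teleop_twist_keyboard.git
cd ~/catkin_ws
catkin_make
source devel/setup.bash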

8. Mapping: Map My World


Everything is ready. Let's start the ROS nodes:

First, start the Gazebo world and RViz to spawn the robot in the environment:

roslaunch myrobotb world.launch

Then, start the teleop node:

rosrun teleop_twist_keyboard teleop_twist_keyboard.py

Finally, start the mapping node:

roslaunch myrobotb mapping.launch

Navigate the robot around the simulation to create a map of the environment! When you are done, end the node; the map's db file can be found at the location specified in the launch file. If no parameters were modified, it will be in the /root/.ros/ folder.

Best Practices

It's okay to start at a low speed; our goal is to create a great map with the fewest passes possible. Three loop closures are enough to map the entire environment. Loop closures can be maximized by repeating similar paths two or three times, which allows for maximum feature detection and facilitates faster loop closures! When you are done mapping, make sure to copy or move the database before mapping a new environment. Remember: restarting the mapping node deletes the previous database (because of the --delete_db_on_start argument)!

9. Mapping: Database Viewer

rtabmap-databaseViewer is a great tool for exploring a database after it has been generated. It runs independently of ROS and allows a complete analysis of the mapping session: checking loop closures, generating 3D map views, extracting images, inspecting feature-rich regions of the map, and more!

Let's start by opening the mapping database: rtabmap-databaseViewer ~/.ros/rtabmap.db

After opening it, you need to add some windows to better view the relevant information:

  • Say yes when asked whether to use the database parameters
  • View -> Constraint View
  • View -> Graph View

These views are more than enough for our purposes, though many more features are built into the database viewer!

(Screenshot: rtabmap-databaseViewer showing the Graph View, matched image pairs, and the Constraint View.)

Let's talk about what you see in the image above. On the left are the 2D grid map and the robot's path across all update iterations. In the middle are different images from the mapping process; here you can view all the features found by the detection algorithm, shown in yellow. So what is pink? Pink indicates features shared between two images, and this information is used to create neighbor links and loop closures! Finally, the Constraint View is on the right. There you can determine where and how neighbor links and loop closures were created.

The number of loop closures is shown in the lower left corner. The codes stand for: Neighbor, Neighbor Merged, Global Loop Closure, Local Loop Closure by Space, Local Loop Closure by Time, User Loop Closure, and Priority Link.

When it comes time to design your own environment, this tool is a great resource for checking whether the environment is feature-rich enough to generate global loop closures. A good environment has many features that can be matched in order to achieve loop closures.


Source: blog.csdn.net/jeffliu123/article/details/129909256