SLAM Mapping Based on Fusion of Depth Map and 2D LiDAR Information

Foreword

This is part of my undergraduate graduation project.
There are no open-source projects online for this, only theory and papers, so I am recording the process of doing it myself.
One lazy note up front: I started with a zed2 camera and got the fusion working, but then realized the zed2 generates its depth images from the stereo (binocular) principle, so it cannot work in a dark environment. I hurriedly replaced it with a d435i. Only the camera topic changed; the rest of the process is identical, so I did not go back and update the process record below.

Fusion of camera and 2D lidar

ROS's depthimage_to_laserscan package works well for this.
Download and build depthimage_to_laserscan:

mkdir -p ~/depth2laser_ws/src
cd ~/depth2laser_ws/src
git clone https://github.com/ros-perception/depthimage_to_laserscan.git
cd ..
catkin_make

Test depth map conversion

# Run the camera (camera driver required)
roslaunch zed_wrapper zed2.launch

# Run the depth-image conversion
rosrun depthimage_to_laserscan depthimage_to_laserscan image:=/zed2/zed_node/depth/depth_registered

# Open rviz to observe
rosrun rviz rviz

If the LaserScan display still shows nothing after being added, and rviz reports that frame [camera_depth_frame] cannot be found, give rviz a tf:

rosrun tf static_transform_publisher 0 0 0 0 0 0 1 map camera_depth_frame 10

After that, the scan displays normally.
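To avoid retyping that rosrun command every session, the same identity transform can also be declared in a launch file (a sketch; the frame names and 10 ms publish period match the command above):

```xml
<launch>
    <!-- Identity transform (x y z qx qy qz qw) from map to the camera depth
         frame, republished every 10 ms, same as the rosrun command -->
    <node pkg="tf" type="static_transform_publisher" name="map_to_camera_depth"
          args="0 0 0 0 0 0 1 map camera_depth_frame 10" />
</launch>
```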

Then I added a launch file and placed it in the depth2laser_ws/src/depthimage_to_laserscan/launch folder.

Contents of zed2_depthimage_to_laserscan.launch:

<launch>
    <node pkg="depthimage_to_laserscan" type="depthimage_to_laserscan" name="depthimage_to_laserscan" output="screen">
        <remap from="image" to="/camera/depth/image_rect_raw" />

        <!-- Frame id of the output laser scan. For point clouds coming from an
             "optical" frame with Z forward, set this to the corresponding frame
             with X forward and Z up. -->
        <param name="output_frame_id" value="/laser"/>
        <!-- Number of pixel rows used to generate the laser scan. For each column,
             the scan returns the minimum over those pixels, centered vertically
             in the image. -->
        <param name="scan_height" value="220" />
        <!-- Minimum returned range in meters. Anything closer is output as -Inf. -->
        <param name="range_min" value="0.45" />
        <!-- Maximum returned range in meters. Anything farther is output as +Inf. -->
        <param name="range_max" value="2.00" />
    </node>

</launch>
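To make the scan_height / range_min / range_max parameters concrete, here is a rough Python sketch of what the conversion does per image column (this is my own toy illustration, not the package's actual code, and it ignores the per-column angle projection through the camera intrinsics): take the minimum depth over scan_height rows centered vertically, then map values outside [range_min, range_max] to -Inf / +Inf.

```python
def depth_to_ranges(depth, scan_height, range_min, range_max):
    """Toy depth-image -> laser-scan conversion.

    depth: 2D list of depth values in meters (rows x cols).
    Returns one range per image column: the minimum depth over
    scan_height rows centered vertically, with out-of-range values
    mapped to -inf / +inf like depthimage_to_laserscan does.
    """
    rows = len(depth)
    top = max(0, rows // 2 - scan_height // 2)
    bottom = min(rows, top + scan_height)
    ranges = []
    for col in range(len(depth[0])):
        r = min(depth[row][col] for row in range(top, bottom))
        if r < range_min:
            ranges.append(float('-inf'))   # too close
        elif r > range_max:
            ranges.append(float('inf'))    # too far
        else:
            ranges.append(r)
    return ranges
```

With the launch-file values above (range_min=0.45, range_max=2.00), a column whose nearest pixel is 0.2 m becomes -Inf and one at 3 m becomes +Inf.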

After that it can be launched directly:

roslaunch depthimage_to_laserscan zed2_depthimage_to_laserscan.launch

Camera + LiDAR

For the zed2 camera, see my earlier post "ubuntu18.04 ZED2 camera calibration".

# Open the zed2 camera
source ~/zed_ws/devel/setup.bash
roslaunch zed_wrapper zed2.launch
# Open the d435i
source ~/realsense_ws/devel/setup.bash
roslaunch realsense2_camera rs_camera.launch

For running cartographer with the lidar, see my earlier post on running cartographer with the Silan A2.

# Open the lidar
source ~/rplidar/devel/setup.bash
roslaunch rplidar_ros rplidar_lidar.launch

I won't post the fusion code itself. I'm just a rookie undergraduate; although I wrote the fusion myself, it is a simple fusion, with no refinements such as filtering.
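For reference, the kind of "simple fusion" described can be sketched in Python as merging the camera-derived scan with the lidar scan by keeping the nearer valid return per beam. This is a hypothetical illustration, not the project's actual code, and it assumes the two scans have already been resampled onto the same angular grid in a common frame:

```python
import math

def fuse_scans(lidar_ranges, camera_ranges, range_max):
    """Merge two aligned range arrays by keeping the nearer valid return.

    Both inputs must already share the same frame, angle grid and length.
    Non-finite or out-of-range values are treated as 'no return'.
    """
    def valid(r):
        return math.isfinite(r) and 0.0 < r <= range_max

    fused = []
    for a, b in zip(lidar_ranges, camera_ranges):
        if valid(a) and valid(b):
            fused.append(min(a, b))      # seen by both: keep the nearer hit
        elif valid(a):
            fused.append(a)              # only the lidar sees it
        elif valid(b):
            fused.append(b)              # only the camera sees it
        else:
            fused.append(float('inf'))   # neither sensor has a return
    return fused
```

A real node would subscribe to both sensor_msgs/LaserScan topics, transform the camera scan into the lidar frame via tf, and republish the fused scan for cartographer.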

# Run the modified depthimage_to_laserscan
source ~/fusion_camera_lidar/devel/setup.bash
roslaunch depthimage_to_laserscan zed2_depthimage_to_laserscan.launch
# Run cartographer
source ~/carto_ws/install_isolated/setup.bash
roslaunch cartographer_ros demo_revo_lds.launch

One last point to note: to make later code debugging easier, record the /tf and /tf_static topics along with the sensor data when recording a dataset; otherwise the final map may come out offset.
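For example, a recording command along these lines (the bag file name is illustrative; add whatever sensor topics your setup publishes):

```shell
# Record the fused scan plus both tf topics; /tf_static is latched,
# so forgetting it silently loses the sensor extrinsics from the bag
rosbag record /scan /tf /tf_static -O fusion_dataset.bag
```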

Origin blog.csdn.net/qq_41746268/article/details/116355176