Intel RealSense SDK 2.0 is a cross-platform development kit. It includes basic camera tools such as realsense-viewer and also provides rich interfaces for further development, including ROS, Python, MATLAB, Node.js, LabVIEW, OpenCV, PCL, .NET, etc.
The camera used here is the D435i, which provides depth and RGB images and has a built-in IMU.
This walkthrough uses the D435i to build a SLAM point cloud map.
Install the Intel RealSense SDK and view camera data
Follow the official installation guide: Linux/Ubuntu - RealSense SDK 2.0 Build Guide
Once installed, view the camera's depth and RGB images
realsense-viewer
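If the SDK is installed correctly, the bundled rs-enumerate-devices tool should also list the connected D435i and its streams (shown here only as a quick sanity check):
rs-enumerate-devices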
Next, install the ROS driver package for Intel RealSense cameras:
ROS wrapper for Intel® RealSense™ devices
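On Ubuntu with a ROS distribution already set up, one common way to get the wrapper is from the ROS package repositories (package name assumed for ROS Melodic; it can also be built from source as described in the wrapper's README):
sudo apt-get install ros-melodic-realsense2-camera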
Once the installation is complete, the camera's data can be viewed from ROS.
Next, we configure SLAM.
Before starting the camera, we need to edit the rs_camera.launch file in the realsense2_camera package and set the following two arguments:
<arg name="enable_sync" default="true"/>
<arg name="align_depth" default="true"/>
The former synchronizes the different sensor streams (depth, RGB, IMU) so that they share the same timestamp; the latter adds several rostopics, of which the one we care about most is /camera/aligned_depth_to_color/image_raw, whose depth image is aligned with the RGB image.
Once complete, you can start the camera with the following command
roslaunch realsense2_camera rs_camera.launch
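Alternatively, assuming the stock launch file still declares these as <arg> defaults, the same settings can be passed as command-line overrides without editing the file:
roslaunch realsense2_camera rs_camera.launch enable_sync:=true align_depth:=true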
The key topics are /camera/color/image_raw and /camera/aligned_depth_to_color/image_raw, which carry the RGB image and the aligned depth image respectively.
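To confirm that both topics are actually being published, something like the following can be used:
rostopic list | grep image_raw
rostopic hz /camera/aligned_depth_to_color/image_raw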
Then, in Examples/RGB-D, create a new parameter file called RealSense.yaml and edit it as follows:
#--------------------------------------------------------------------------------------------
# Camera calibration and distortion parameters (OpenCV)
Camera.fx: 909.3504638671875
Camera.fy: 908.5739135742188
Camera.cx: 645.5443725585938
Camera.cy: 369.3554992675781
Camera.k1: 0.0
Camera.k2: 0.0
Camera.p1: 0.0
Camera.p2: 0.0
Camera.k3: 0.0
Camera.width: 1280
Camera.height: 720
#Camera frames per second
Camera.fps: 30.0
#IR projector baseline times fx (aprox.)
Camera.bf: 46.01
#Color order of the images (0: BGR, 1: RGB. It is ignored if images are grayscale)
Camera.RGB: 1
#Close/Far threshold. Baseline times.
ThDepth: 40.0
#Depth map values factor: converts depth pixel values to real distance (original unit is mm, converted to m)
DepthMapFactor: 1000.0
#ORB Parameters
#--------------------------------------------------------------------------------------------
#ORB Extractor: Number of features per image
ORBextractor.nFeatures: 1000
#ORB Extractor: Scale factor between levels in the scale pyramid
ORBextractor.scaleFactor: 1.2
#ORB Extractor: Number of levels in the scale pyramid
ORBextractor.nLevels: 8
#ORB Extractor: Fast threshold
#Image is divided in a grid. At each cell FAST are extracted imposing a minimum response.
#Firstly we impose iniThFAST. If no corners are detected we impose a lower value minThFAST
#You can lower these values if your images have low contrast
ORBextractor.iniThFAST: 20
ORBextractor.minThFAST: 7
#--------------------------------------------------------------------------------------------
#Viewer Parameters
#--------------------------------------------------------------------------------------------
Viewer.KeyFrameSize: 0.05
Viewer.KeyFrameLineWidth: 1
Viewer.GraphLineWidth: 0.9
Viewer.PointSize: 2
Viewer.CameraSize: 0.08
Viewer.CameraLineWidth: 3
Viewer.ViewpointX: 0
Viewer.ViewpointY: -0.7
Viewer.ViewpointZ: -1.8
Viewer.ViewpointF: 500
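As a sanity check on Camera.bf: the D435i's stereo IR baseline is roughly 50 mm, so with fx ≈ 909.35 the value is about 0.0506 × 909.35 ≈ 46.01, which matches the entry above.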
This file contains the camera's intrinsic parameters, which can be obtained with the command below; the values may differ from camera to camera.
rostopic echo /camera/color/camera_info
The main section to pay attention to in the yaml is
# Camera calibration and distortion parameters (OpenCV)
Update these values according to the topic [/camera/color/camera_info]; the meaning of each field is documented at
http://docs.ros.org/en/melodic/api/sensor_msgs/html/msg/CameraInfo.html
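Concretely, the CameraInfo fields map onto the yaml entries as follows (K is the 3x3 intrinsic matrix stored row-major, D holds the plumb_bob distortion coefficients):
Camera.fx = K[0]   Camera.fy = K[4]   Camera.cx = K[2]   Camera.cy = K[5]
Camera.k1, Camera.k2, Camera.p1, Camera.p2, Camera.k3 = D[0], D[1], D[2], D[3], D[4]
Camera.width = width   Camera.height = height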
That is, the following parameters:
Camera.fx: 909.3504638671875
Camera.fy: 908.5739135742188
Camera.cx: 645.5443725585938
Camera.cy: 369.3554992675781
Camera.k1: 0.0
Camera.k2: 0.0
Camera.p1: 0.0
Camera.p2: 0.0
Camera.k3: 0.0
Camera.width: 1280
Camera.height: 720
Save and exit when done.
Then modify the file src/pointcloudmapping.cc:
1. Around line 128:
voxel.setLeafSize (0.02f, 0.02f, 0.02f); // Modification 1: adjust the point cloud density
2. Around line 75:
p.y = ( m - kf->cy) * p.z / kf->fy;
// p.y = - ( m - kf->cy) * p.z / kf->fy; // Modification 2: add a minus sign to flip the otherwise upside-down point cloud map, for easier viewing
// [Later tests showed the flipped pcd was unusable, so the vertical flip is not applied]
p.r = color.ptr<uchar>(m)[n*3]; // Modification 3: fix the color channel order
p.g = color.ptr<uchar>(m)[n*3+1];
p.b = color.ptr<uchar>(m)[n*3+2];
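For context, these lines sit inside the per-pixel loop of PointCloudMapping::generatePointCloud(); a rough sketch of the surrounding code (based on the commonly used pointcloudmapping.cc, so exact names and thresholds may differ in your copy) looks like this:
for ( int m=0; m<depth.rows; m+=3 )          // sample every 3rd row/column
{
    for ( int n=0; n<depth.cols; n+=3 )
    {
        float d = depth.ptr<float>(m)[n];     // depth in metres after DepthMapFactor scaling
        if (d < 0.01 || d > 10)               // skip invalid or far-away depths
            continue;
        PointT p;
        p.z = d;
        p.x = ( n - kf->cx) * p.z / kf->fx;   // back-project pixel (n, m) using the intrinsics
        p.y = ( m - kf->cy) * p.z / kf->fy;   // Modification 2 would negate this line
        p.r = color.ptr<uchar>(m)[n*3];       // Modification 3: read the channels as RGB
        p.g = color.ptr<uchar>(m)[n*3+1];
        p.b = color.ptr<uchar>(m)[n*3+2];
        tmp->points.push_back(p);             // accumulate into the keyframe's cloud
    }
}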
Then compile ORB_SLAM2_modified
./build.sh
./build_ros.sh
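Before running build_ros.sh (and later rosrun ORB_SLAM2 ...), the ORB-SLAM2 README requires adding the Examples/ROS folder to ROS_PACKAGE_PATH; with the repository path assumed from the launch files below, that would be:
export ROS_PACKAGE_PATH=${ROS_PACKAGE_PATH}:/home/heying/ORB_SLAM2_modified/Examples/ROS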
Test after compiling
Start the master node
roscore
Start the camera
roslaunch realsense2_camera rs_camera.launch
Start mapping, which hits an error:
rosrun ORB_SLAM2 RGBD Vocabulary/ORBvoc.bin Examples/RGB-D/RealSense.yaml /camera/rgb/image_raw:=/camera/color/image_raw /camera/depth_registered/image_raw:=/camera/aligned_depth_to_color/image_raw
error: (-49:Unknown error code -49) Input file is empty in function 'open'
After some searching, this appears to be a problem with the yaml file: OpenCV 4.x requires the yaml file to declare its version. Add the following line at the top of the yaml file:
%YAML:1.0
Rerun:
rosrun ORB_SLAM2 RGBD Vocabulary/ORBvoc.bin Examples/RGB-D/RealSense.yaml /camera/rgb/image_raw:=/camera/color/image_raw /camera/depth_registered/image_raw:=/camera/aligned_depth_to_color/image_raw
After pressing CTRL+C, the output pcd file is saved.
View the saved pcd file:
pcl_viewer vslam.pcd
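A few pcl_viewer keys that help while inspecting the cloud: h prints the help, + and - change the point size, and r resets the viewpoint.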
At this point, ORB-SLAM2 is running successfully with the Intel RealSense D435i depth camera.
However, because the command to start SLAM is rather long, we can streamline it with roslaunch.
The first step is to launch from an offline bag.
If the rosrun command is not entered in full, the terminal prints its usage:
Usage: rosrun ORB_SLAM2 RGBD path_to_vocabulary path_to_settings
First, create a package named slam:
cd ~/catkin_ws/src
catkin_create_pkg slam std_msgs rospy roscpp
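For roslaunch to find the new package, the workspace usually needs to be built and its setup file sourced (standard catkin workflow, paths assumed):
cd ~/catkin_ws
catkin_make
source devel/setup.bash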
Then create a launch file in this package; I named it rgbd.launch. Edit it as follows:
<launch>
<!-- Define global arguments -->
<arg name="rgb_image" default="/camera/rgb/image_raw"/>
<arg name="path_to_vacabulary" default="/home/heying/ORB_SLAM2_modified/Vocabulary/ORBvoc.bin"/>
<arg name="path_to_settings" default="/home/heying/ORB_SLAM2_modified/Examples/RGB-D/TUM1_ROS.yaml"/>
<!-- Play the offline bag -->
<!-- Note: the bag file path here must be an absolute path -->
<node pkg="rosbag" type="play" name="player" output="screen"
args=" /home/heying/ORB_SLAM2_modified/Examples/datasets/rgbd_dataset_freiburg1_xyz.bag">
<remap from="/camera/rgb/image_color" to="/camera/rgb/image_raw" />
<remap from="/camera/depth/image" to="/camera/depth_registered/image_raw" />
</node>
<!-- Launch ORB-SLAM2 RGBD -->
<node name ="RGBD" pkg="ORB_SLAM2" type="RGBD"
args="$(arg path_to_vacabulary) $(arg path_to_settings)" respawn="true" output="screen">
<!--remap from="/camera/image_raw" to="$(arg rgb_image)"/-->
</node>
</launch>
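The launch file plays the rgbd_dataset_freiburg1_xyz sequence from the TUM RGB-D benchmark; if the bag is not present yet, it can be downloaded from the dataset site (URL assumed current), e.g.:
wget https://vision.in.tum.de/rgbd/dataset/freiburg1/rgbd_dataset_freiburg1_xyz.bag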
Test after completion:
roslaunch slam rgbd.launch
It starts normally.
Live camera
Create a new launch file to start the camera and build a map
touch camera.launch
gedit camera.launch
Then write its contents. The global arguments come first: as before, if the command below is not entered in full, its usage is printed, and the path_to_vocabulary and path_to_settings it reports are the global arguments we need to set.
rosrun ORB_SLAM2 RGBD
<launch>
<!-- Define global arguments -->
<arg name = "path_to_vocabulary" default = "/home/heying/ORB_SLAM2_modified/Vocabulary/ORBvoc.bin"/>
<arg name = "path_to_settings" default = "/home/heying/ORB_SLAM2_modified/Examples/RGB-D/RealSense.yaml"/>
<!--include file="$(find realsense2_camera)/launch/rs_camera.launch"/-->
<!-- Launch ORB-SLAM2 RGBD -->
<node pkg = "ORB_SLAM2" type ="RGBD" name = "RGBD_camera" args="$(arg path_to_vocabulary) $(arg path_to_settings)" respawn="true" output="screen">
<remap from="/camera/rgb/image_raw" to="/camera/color/image_raw" />
<remap from="/camera/depth_registered/image_raw" to="/camera/aligned_depth_to_color/image_raw" />
</node>
</launch>
Test
Start the master node
roscore
Then check the hardware connection of the camera
Start the camera
roslaunch realsense2_camera rs_camera.launch
Start mapping
roslaunch slam camera.launch
It starts normally and works well.
The pcd file is generated after stopping the program with CTRL+C.
View the result:
pcl_viewer vslam.pcd
The result looks good.
This concludes building a SLAM point cloud map with the Intel D435i and streamlining the startup commands with roslaunch.