Part 1: Extract the video from the camera and split it into frames
Foreword
Use ROS to capture video from the camera, split the video into individual frames, and save the RGB images and depth images into two separate folders.
1. Use ROS to record video from the RealSense camera and play it back
1. Start the ROS master
The command is:
roscore
2. Launch the RealSense camera node
The command is:
roslaunch realsense2_camera demo_pointcloud.launch
RViz opens and shows the point-cloud demo interface.
3. List the current topics
The command is:
rostopic list
Explanation:
"/camera/color/image_raw" is the RGB image topic;
"/camera/depth/image_rect_raw" is the depth image topic.
4. Display the RGB and depth images in RViz
4.1 Add Image
Click "Add" in the lower-left corner; in the dialog that appears, select "Image" and click "OK".
4.2 Fill in the RGB topic
Expand the newly added "Image" item on the left and fill in the RGB image topic under "Image Topic"; the RGB image will then be displayed in the lower-left corner.
4.3 Fill in the depth topic
Add another "Image" display as in 4.1, then fill in the depth image topic under "Image Topic" as in 4.2; the depth image will also be displayed in the lower-left corner.
5. Record video
The command format is: rosbag record <RGB image topic> <depth image topic> -O <path/filename>
rosbag record /camera/color/image_raw /camera/depth/image_rect_raw -O /home/midea/video/1
Run this line of code to start recording.
Press "Ctrl+C" to stop recording.
After recording, there will be a "1.bag" file in the specified directory:
Note: the -O in the recording command is a capital O.
Supplement: the difference between uppercase -O and lowercase -o:
- Lowercase -o names the file with the recording start time by default:
rosbag record /camera/color/image_raw /camera/depth/image_rect_raw -o /home/midea/video/
You can see that a bag file named after the recording time is created: _2022-12-23-17-48-10.bag
If you append a custom prefix to the path:
rosbag record /camera/color/image_raw /camera/depth/image_rect_raw -o /home/midea/video/2
The effect is to prepend the custom prefix to the time-based name: 2_2022-12-23-17-54-51.bag. (The ".active" suffix appears while recording is still in progress; it disappears once recording finishes.)
- Uppercase -O sets the file name to exactly the custom name you give:
Code:
rosbag record /camera/color/image_raw /camera/depth/image_rect_raw -O /home/midea/video/3
This produces a file named 3.bag.
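For reference, the time-based names above follow the pattern <prefix>_YYYY-MM-DD-HH-MM-SS.bag. A minimal sketch of how such a name is formed (the prefix and date here are just the ones from the example above; rosbag itself derives the time from the recording start):

```python
from datetime import datetime

# Hypothetical values taken from the example above.
prefix = "2"
start = datetime(2022, 12, 23, 17, 54, 51)
bag_name = prefix + "_" + start.strftime("%Y-%m-%d-%H-%M-%S") + ".bag"
print(bag_name)  # 2_2022-12-23-17-54-51.bag
```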
6. Play the recorded video
Unplug the data cable connected to the camera before playing the video.
Then enter the following command:
rosbag play /home/midea/video/1.bag
You can see the recorded content replayed in the image panels of the RViz interface.
2. Another way to view image topics
rqt_image_view
Running this command opens a viewer for image topics; you can also watch the recorded topics here while the bag is playing.
3. Split the video into frames and store the RGB and depth images separately
The file name is: bag2tum.py
Code:
#!/usr/bin/env python
# coding: utf-8
# Extract images from a bag file into TUM dataset format.
import os

import cv2
import rosbag
import rospy
from cv_bridge import CvBridge, CvBridgeError


class ImageCreator():
    def __init__(self):
        self.bridge = CvBridge()
        with rosbag.Bag(bag_file, 'r') as bag:  # the bag file to read
            for topic, msg, t in bag.read_messages():
                if topic == "/camera/color/image_raw":  # RGB image topic
                    try:
                        cv_image = self.bridge.imgmsg_to_cv2(msg, "bgr8")
                    except CvBridgeError as e:
                        print(e)
                        continue
                    # %.6f keeps six digits after the decimal point;
                    # adjust if you need a different precision.
                    timestr = "%.6f" % msg.header.stamp.to_sec()
                    image_name = timestr + ".png"  # name the image after its timestamp
                    cv2.imwrite(rgb_path + image_name, cv_image)  # save
                    with open(rgbstamp, 'a') as rgb_time_file:
                        rgb_time_file.write(timestr + " rgb/" + image_name + "\n")
                elif topic == "/camera/depth/image_rect_raw":  # depth image topic
                    try:
                        cv_image = self.bridge.imgmsg_to_cv2(msg, "16UC1")
                    except CvBridgeError as e:
                        print(e)
                        continue
                    timestr = "%.6f" % msg.header.stamp.to_sec()
                    image_name = timestr + ".png"
                    cv2.imwrite(depth_path + image_name, cv_image)  # save
                    with open(depthstamp, 'a') as depth_time_file:
                        depth_time_file.write(timestr + " depth/" + image_name + "\n")


if __name__ == '__main__':
    try:
        bag_file = input('Enter the path of the bag file: ')
        new_dataset_path = input('Enter the output dataset path: ')
        rgb_path = new_dataset_path + '/rgb/'
        depth_path = new_dataset_path + '/depth/'
        rgbstamp = new_dataset_path + '/rgb.txt'
        depthstamp = new_dataset_path + '/depth.txt'
        if not os.path.exists(rgb_path):
            os.makedirs(rgb_path)
        if not os.path.exists(depth_path):
            os.makedirs(depth_path)
        image_creator = ImageCreator()
    except rospy.ROSInterruptException:
        pass
If your topics are different, modify the following two lines to use your own RGB and depth image topics:
if topic == "/camera/color/image_raw":  # RGB image topic
elif topic == "/camera/depth/image_rect_raw":  # depth image topic
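As a side note, the script names every image after its message timestamp using the "%.6f" format, so each file name carries six decimal places of the timestamp. A quick illustration with a made-up timestamp value:

```python
stamp = 1671788890.5  # hypothetical ROS timestamp, in seconds
image_name = "%.6f" % stamp + ".png"
print(image_name)  # 1671788890.500000.png
```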
Run this script from a terminal in the folder where it is located:
python bag2tum.py
If this command reports an error, simply change "python" to "python3". The script then prompts for the file path; enter the path of the recorded bag file and press Enter.
Next, fill in the path where the generated dataset should be stored.
After pressing Enter, you can view the generated content in the corresponding directory:
The depth images and RGB images are stored in separate folders. depth.txt and rgb.txt each contain two columns: the first column is the timestamp of the frame, and the second is the relative path of the image.
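If the dataset is later used for RGB-D SLAM in TUM format, the RGB and depth frames usually need to be associated by timestamp, because the two streams are not captured at exactly the same instants. Below is a minimal sketch of such nearest-timestamp matching (the function name and the 0.02-second tolerance are my own choices, not part of the script above; the TUM RGB-D benchmark ships a similar associate.py tool):

```python
def associate(rgb_stamps, depth_stamps, max_diff=0.02):
    """Greedily pair each RGB timestamp with the closest unused
    depth timestamp, keeping only pairs within max_diff seconds."""
    depth = sorted(depth_stamps)
    pairs = []
    for t in sorted(rgb_stamps):
        if not depth:
            break
        best = min(depth, key=lambda d: abs(d - t))
        if abs(best - t) <= max_diff:
            pairs.append((t, best))
            depth.remove(best)
    return pairs

# Two slightly offset streams: the last RGB frame has no close depth match.
print(associate([0.00, 0.10, 0.20], [0.01, 0.11, 0.25]))
```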
Summary
At this point, the steps for extracting RGB images and depth images from the camera are complete.