Intel RealSense D435i: Introduction, installation and use (ROS, Python)

  • 1. Introduction
  • 2. Installation and configuration
  • 3. Test and use
  • 4. ROS interface installation
  • 5. ROS interface usage
  • 6. Python interface installation
  • 7. Using the Python interface
  • 8. Other matters needing attention
  • 9. References

    The laboratory recently bought an Intel RealSense D435i camera for me to study and use for data collection. Since I had never worked with this kind of device before, I spent some time learning about it. This blog mainly introduces the D435i camera from the perspective of use and does not go into theory. As an aside, although the RealSense cameras on Intel’s official website look very big, when you actually hold one in your hand you will find that it is actually quite small and fits in one hand, as shown in the picture below.

    1. Introduction

    Intel RealSense D435i is a consumer-grade depth camera launched by Intel. Its main components are shown in the figure below: an RGB camera, two infrared cameras and an infrared transmitter, plus an IMU unit (this is the difference between the D435i and the D435; the "i" stands for IMU). Simply put, its depth imaging principle is active stereo with infrared, not stereo RGB imaging in the traditional sense, which is worth noting. With the depth map (3D point cloud) and the corresponding RGB image, it is easy to obtain an RGB-D point cloud, so from an output perspective the D435i can be regarded as an RGB-D camera. It can therefore be used with the RGB-D mode in ORB-SLAM. Of course, you can also use only the monocular RGB images and run in monocular SLAM mode, or combine the monocular camera with the IMU and run in monocular-inertial mode. The only mode that does not work is binocular RGB, because the two stereo cameras are infrared (single-channel) rather than RGB. We can, however, obtain the binocular infrared images and use them as input for stereo SLAM, with similar results. So the D435i is a fairly "all-round" sensor that can be used in monocular, monocular + IMU, binocular (infrared), binocular + IMU, and RGB-D modes.

    Some of its technical parameters are briefly listed here:

    • Depth Technology: Active Stereo IR
    • Image sensor technology: 3μm × 3μm pixel size, global shutter
    • Depth field of view (H×V): 86°×57° (±3°)
    • Depth resolution & frame rate: 1280×720, 90FPS (highest)
    • RGB sensor technology: rolling shutter
    • RGB sensor resolution & frame rate: 1920×1080, 30FPS (maximum)
    • RGB sensor FOV (H×V): 69°×42° (±1°)
    • Minimum depth distance (Min-Z): 0.105m
    • Maximum range: about 10m
    • Dimensions (length, width and height): 90mm × 25mm × 25mm

    From the above parameters we can also see some of its characteristics. For example, the resolutions and fields of view of the depth map and the RGB image are different; in other words, only the part of the RGB image that overlaps the depth map has depth information, and the rest has none. The frame rates are also different, so if RGB-D information is needed, time synchronization may be an issue that has to be dealt with. Second, the RGB sensor uses a rolling shutter, so in high-speed motion a rolling-shutter (jelly) effect may occur. Finally, because active infrared ranging is used and the power of the infrared transmitter is limited, the maximum range is only about 10 m, so it is not suitable for large outdoor scenes.

    2. Installation and configuration

    Using the camera on a computer requires RealSense's dedicated driver and SDK. Here we take Ubuntu as an example to introduce the installation and usage process.

    (1) Add Keys
    sudo apt-key adv --keyserver keys.gnupg.net --recv-key C8B3A55A6F3EFCDE || sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-key C8B3A55A6F3EFCDE
    
    (2) Add Repositories

    Ubuntu 16.04:

    sudo add-apt-repository "deb http://realsense-hw-public.s3.amazonaws.com/Debian/apt-repo xenial main" -u
    

    Ubuntu 18.04:

    sudo add-apt-repository "deb http://realsense-hw-public.s3.amazonaws.com/Debian/apt-repo bionic main" -u
    
    (3) Install Libraries
    sudo apt-get install librealsense2-dkms
    sudo apt-get install librealsense2-utils
    
    (4) Install Dev and Debug tools
    sudo apt-get install librealsense2-dev
    sudo apt-get install librealsense2-dbg
    

    It should be noted that librealsense2-dbg is relatively large, about 110 MB, and because it is downloaded from an overseas server it may take a while.

    3. Test and use

    First, connect the D435i to the computer, then enter realsense-viewer in the terminal to start the data visualization interface, as shown in the figure below. Then click the Stereo Module on the left to turn on the depth map display, the RGB Module to display RGB images, and the Motion Module to display IMU-related data, as shown in the figure below. If these streams are displayed normally, the basic configuration of RealSense is successful and data can be transferred to the computer.

    4. ROS interface installation

    Of course, the above is only the basic configuration. Nobody buys a RealSense just to look at data in its built-in Viewer; the point is to use it, for example to run SLAM. RealSense also provides a number of wrappers to make it convenient to access the data programmatically. To compile the ROS interface, you must first have a ROS environment on your computer; if not, you can refer to this blog, and I won't go into details here.

    First, we need to download the source code from GitHub; the repository address is here.

    git clone https://github.com/IntelRealSense/realsense-ros.git
    

    This repository is also relatively large, about 30 MB. With a slow network connection the clone may take a long time or be interrupted midway; in that case you can also download the zip package directly. Then put the cloned (or downloaded) source code folder into the src folder of the Catkin workspace; for example, the author's folder is ~/root/catkin_ws/src.

    Then return to the root directory of the Catkin workspace and compile with the catkin_make command. During compilation you may find that a package is missing; in that case, just install the missing package according to the prompt. For example, when the author compiled it, the missing package was the ddynamic_reconfigure library. After downloading it, place its code in the src folder of the Catkin workspace and run catkin_make again.

    Finally, don't forget to run source ~/.bashrc to update the environment.

    5. ROS interface usage

    In fact, the ROS interface of RealSense can be understood as a node that reads the camera data and publishes it as topics. When we write our own ROS program, we simply subscribe to the topics published by this node, so there is nothing special about using it. Enter the following command to start the RealSense ROS node.

    roslaunch realsense2_camera rs_camera.launch
    

    Under normal circumstances, output ending with "RealSense Node Is Up!" will appear, indicating that the node has started successfully. The node publishes the following topics. We can list the existing topics with rostopic list, or use tools such as RViz or rqt_image_view to subscribe to these topics and display the data.

    /camera/color/camera_info
    /camera/color/image_raw
    /camera/depth/camera_info
    /camera/depth/image_rect_raw
    /camera/extrinsics/depth_to_color
    /camera/extrinsics/depth_to_infra1
    /camera/extrinsics/depth_to_infra2
    /camera/infra1/camera_info
    /camera/infra1/image_rect_raw
    /camera/infra2/camera_info
    /camera/infra2/image_rect_raw
    /camera/gyro/imu_info
    /camera/gyro/sample
    /camera/accel/imu_info
    /camera/accel/sample
    

    Here we use rqt_image_view to subscribe to the image_raw topic, as shown in the figure below. You can see that the data published by RealSense is received through this topic, and subsequent processing can then be carried out, for example in your own node with a simple Python subscriber as sketched below.
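
    The following sketch assumes ROS 1 with the rospy and cv_bridge packages installed and that rs_camera.launch is running with the default topic names; the node name is just an example.

    import rospy
    from cv_bridge import CvBridge
    from sensor_msgs.msg import Image

    bridge = CvBridge()

    def color_callback(msg):
        # Convert the ROS Image message to an OpenCV BGR array for further processing
        frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        rospy.loginfo("Received color frame: %dx%d", frame.shape[1], frame.shape[0])

    rospy.init_node("d435i_color_listener")
    rospy.Subscriber("/camera/color/image_raw", Image, color_callback, queue_size=1)
    rospy.spin()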

    In addition, it should be noted that although the D435i supports binocular infrared image output as mentioned earlier, this option is turned off by default, which means there will be no topics for the binocular infrared images. We need to modify the relevant parameters in the corresponding launch file to obtain the infrared images. The details are as follows.

    First, open the launch folder of the realsense-ros package, find the rs_camera.launch file and open it. As you can see, there are a lot of parameters for us to set. We need to find the three parameters enable_infra, enable_infra1 and enable_infra2 and set them to true. The next time the node is run, the topics for the binocular infrared images will appear, as follows. As before, we can still visualize the images through rqt_image_view, as shown below (left image and right image).

    Compared with a conventional RGB image, the infrared images contain many "star"-like spots. These spots are the projected infrared light. The D435i uses this active infrared pattern to estimate distance, which is somewhat similar to structured light.

    6. Python interface installation

    The Python interface installation is actually very simple: just one pip command.

    pip install pyrealsense2
    
    7. Using the Python interface

    The following content is taken from the introduction page of the PyRealSense2 package.

    import pyrealsense2 as rs

    # Start the default pipeline (streams with default settings)
    pipe = rs.pipeline()
    profile = pipe.start()
    try:
        # Grab 100 framesets and print the profile of every frame they contain
        for i in range(100):
            frames = pipe.wait_for_frames()
            for f in frames:
                print(f.profile)
    finally:
        # Always stop the pipeline to release the device
        pipe.stop()
    

    If it runs successfully and prints the stream profiles as shown below, the installation is successful.
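
    Going one step further, the following sketch (assuming a 640×480 depth stream, which appears in the resolution list later in this post) reads a single depth frame and queries the distance at the image center; get_distance returns the value in meters.

    import pyrealsense2 as rs

    # A minimal sketch: start a depth stream, grab one frameset and query
    # the depth (in meters) at the center pixel.
    pipe = rs.pipeline()
    cfg = rs.config()
    cfg.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
    pipe.start(cfg)
    try:
        frames = pipe.wait_for_frames()
        depth = frames.get_depth_frame()
        if depth:
            dist = depth.get_distance(320, 240)  # meters at pixel (320, 240)
            print("Distance at image center: %.3f m" % dist)
    finally:
        pipe.stop()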

    8. Other matters needing attention
    (1) Fusion of IMU information

    As we saw in the topic list above, the D435i outputs gyroscope and accelerometer information. We can read the two topics /camera/gyro/sample and /camera/accel/sample separately, or merge them into a single topic called /camera/imu, so that one topic contains both gyroscope and accelerometer data. The method is again to modify the configuration options in the launch file: find the unite_imu_method parameter in rs_camera.launch and set its value to copy or linear_interpolation. The next time the node is started and the D435i is running, /camera/imu will appear in the topic list.
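
    A small sketch of reading the merged topic (assuming unite_imu_method has been set as described, so that /camera/imu is being published; the node name is just an example):

    import rospy
    from sensor_msgs.msg import Imu

    def imu_callback(msg):
        # angular_velocity comes from the gyroscope, linear_acceleration from the accelerometer
        rospy.loginfo("gyro: %.3f %.3f %.3f | accel: %.3f %.3f %.3f",
                      msg.angular_velocity.x, msg.angular_velocity.y, msg.angular_velocity.z,
                      msg.linear_acceleration.x, msg.linear_acceleration.y, msg.linear_acceleration.z)

    rospy.init_node("d435i_imu_listener")
    rospy.Subscriber("/camera/imu", Imu, imu_callback, queue_size=200)
    rospy.spin()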

    (2) Depth map alignment problem

    The D435i can output the original depth map (/camera/depth/image_rect_raw) and the depth map aligned with the RGB image (/camera/aligned_depth_to_color/image_raw). These two depth maps are different, so particular attention is needed when collecting data. For example, when collecting RGB-D data, what is needed is the aligned depth; if the original depth is collected by mistake, it will basically be unusable. Outputting the aligned depth map is also very simple: just modify the configuration options in the launch file. Find the align_depth parameter in rs_camera.launch and set its value to true. (Before setting this option, remember to enable enable_depth as well, otherwise there will be no data.)

    It should also be noted that, as mentioned earlier, the depth measurement of the D435i is achieved through an infrared transmitter, so if you want accurate depth data, you must not block the infrared transmitter.
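
    A hedged sketch of receiving the color image together with the aligned depth (assuming align_depth is enabled as above and the default topic names are used; the aligned depth is assumed to be a 16-bit image in millimeters, matching the realsense-ros default encoding):

    import rospy
    import message_filters
    from cv_bridge import CvBridge
    from sensor_msgs.msg import Image

    bridge = CvBridge()

    def rgbd_callback(color_msg, depth_msg):
        color = bridge.imgmsg_to_cv2(color_msg, desired_encoding="bgr8")
        # Aligned depth assumed to be 16UC1 in millimeters (realsense-ros default)
        depth = bridge.imgmsg_to_cv2(depth_msg, desired_encoding="passthrough")
        h, w = depth.shape[:2]
        rospy.loginfo("Depth at image center: %d mm", depth[h // 2, w // 2])

    rospy.init_node("d435i_rgbd_listener")
    color_sub = message_filters.Subscriber("/camera/color/image_raw", Image)
    depth_sub = message_filters.Subscriber("/camera/aligned_depth_to_color/image_raw", Image)
    sync = message_filters.ApproximateTimeSynchronizer([color_sub, depth_sub], queue_size=10, slop=0.05)
    sync.registerCallback(rgbd_callback)
    rospy.spin()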

    (3) Infrared speckle problem

    Because the D435i actively emits infrared speckle, it can obtain relatively accurate depth information. However, these infrared speckles also appear in the pictures taken by the binocular infrared cameras. In larger or brighter scenes the speckles are not very obvious, but in smaller or darker scenes they become very prominent. A significant consequence is that they seriously interfere with the feature point extraction of visual SLAM: because the speckles are bright, distinct, and roughly fixed in position in the image, they are often picked up as feature points, which can cause SLAM to mistakenly conclude during frame-to-frame matching that the camera is not moving, so the collected data cannot be used, as shown below. Therefore, if you want to use the images captured by the infrared cameras for visual SLAM, remember to turn off the infrared transmitter in dark or small scenes. Of course, with the transmitter turned off, the depth information becomes much less accurate, and that data cannot be used in RGB-D mode. For darker or smaller scenes, RGB-D data and infrared binocular data therefore have to be collected separately, otherwise the infrared speckle will seriously interfere with visual matching.

    (4) Infrared transmitter switch

    Turning the infrared transmitter on and off, as mentioned above, is also very simple. After starting the rs_camera node and running the D435i, open a new terminal and enter rosrun rqt_reconfigure rqt_reconfigure to start the reconfigure GUI, as follows. Then select stereo_module on the left, find the emitter_enabled option and turn it off. The effect of the switch is shown in the figure below. If you look carefully, you will find that besides the infrared transmitter there are many other options that can be set, such as exposure and gain, which can be adjusted as needed.
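
    If the emitter needs to be switched programmatically rather than through the GUI, a sketch using the dynamic_reconfigure Python client is given below. The server name /camera/stereo_module and the value 0 for "off" are assumptions based on the options visible in rqt_reconfigure; check rqt_reconfigure for the exact names on your setup.

    import rospy
    import dynamic_reconfigure.client

    rospy.init_node("emitter_switch")
    # Assumed dynamic_reconfigure server exposed by the RealSense node
    client = dynamic_reconfigure.client.Client("/camera/stereo_module", timeout=5)
    # 0 is assumed to mean "emitter off"; set it back to 1 to re-enable the laser
    client.update_configuration({"emitter_enabled": 0})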

    (5) Different resolutions of D435i

    The D435i's RGB, depth, and binocular images support different resolutions and frame rates. One thing to note, however, is that this does not mean the resolution and frame rate can be specified arbitrarily; they must be chosen from several supported modes. So how do we know which resolutions and frame rates the D435i supports? The method is very simple: enter realsense-viewer in the terminal to open the RealSense Viewer and check the settings of the corresponding module, as shown in the figure below. For convenience, the supported resolutions and frame rates are briefly summarized here:

    • Monocular camera (RGB image): Resolution: 320×180, 320×240, 424×240, 640×360, 640×480, 848×480, 960×540, 1280×720, 1920×1080; Frame rate: 6, 15, 30, 60
    • Binocular infrared camera (grayscale image & depth image): Resolution: 256×144, 424×240, 480×270, 640×360, 640×400, 640×480, 848×100, 848×480, 1280×720, 1280×800; Frame rate: 6, 15, 25, 30, 60, 90, 100, 300
    • IMU (Accelerometer & Gyroscope): Accelerometer frame rate: 63, 250; Gyroscope frame rate: 200, 400

    It should be noted that the above resolutions and frame rates cannot be combined arbitrarily. For example, the infrared 640×480 resolution supports at most 90 FPS; if it is set to 100 or 300 FPS, an error will be reported. A safe approach, if you are not sure whether a combination of resolution and frame rate is supported, is to try it in the RealSense Viewer first: if it runs successfully the combination is supported, otherwise change the configuration.
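
    Besides checking in the Viewer, the supported combinations can also be listed programmatically with pyrealsense2. The sketch below simply prints every video stream profile reported by the connected device; it relies only on the standard pyrealsense2 API.

    import pyrealsense2 as rs

    ctx = rs.context()
    for dev in ctx.query_devices():
        print("Device: %s" % dev.get_info(rs.camera_info.name))
        for sensor in dev.query_sensors():
            print("  Sensor: %s" % sensor.get_info(rs.camera_info.name))
            for profile in sensor.get_stream_profiles():
                # Motion (IMU) profiles are skipped; only video profiles are printed
                if profile.is_video_stream_profile():
                    vp = profile.as_video_stream_profile()
                    print("    %s %dx%d @ %d FPS %s" % (
                        profile.stream_name(), vp.width(), vp.height(),
                        profile.fps(), profile.format()))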

    It should also be noted that the lowest frame rate supported by the D435i is 6 Hz. If a lower frame rate is required, further processing is needed; for how to handle this, you can refer to another blog I wrote about ROS bag processing. In addition, although the minimum is 6 Hz, actual tests have found that at 6 Hz the RGB camera is prone to the jelly effect. Therefore, unless there are special needs, it is not recommended to set the frame rate too low; you can use the default frame rate and downsample it afterwards.

    (6) View the default parameters of D435i camera

    The D435i camera ships with default intrinsic parameters set at the factory. Viewing them is very simple: after running the D435i, we can see many topics named camera_info in the topic list. These topics contain the intrinsics and related parameters of the corresponding camera. The figure below shows the parameters of the RGB camera. So if anything is unclear, just check the camera_info topic of the corresponding stream directly.
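
    As a small sketch (assuming the node is running with the default topic names), the intrinsics of the RGB camera can be read once like this:

    import rospy
    from sensor_msgs.msg import CameraInfo

    rospy.init_node("read_camera_info")
    # Block until one CameraInfo message arrives on the color stream
    info = rospy.wait_for_message("/camera/color/camera_info", CameraInfo)
    print("Resolution: %dx%d" % (info.width, info.height))
    print("Intrinsic matrix K: %s" % (list(info.K),))
    print("Distortion coefficients D: %s" % (list(info.D),))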

    (7) Routine process of data collection

    Here are some simple suggestions for data collection with the D435i. First, if you want to collect RGB-D data, remember to record the aligned depth rather than the original output depth. Second, if the acquisition scene is dark or small, many infrared speckles may appear in the binocular infrared images; if you want to use the binocular images for visual SLAM, remember to turn off the infrared transmitter.

    The following takes a process of using D435i to collect monocular RGB, infrared binocular, and IMU data as an example to introduce the general process and steps.

    First, clarify the type of data to be collected, the image size, frame rate, depth alignment and other settings. These are all configured via the corresponding parameters in the rs_camera.launch file.

    Then, type roslaunch realsense2_camera rs_camera.launch in the terminal to start the D435i. If you need to turn off the infrared transmitter after startup, just follow the infrared transmitter switch instructions above.

    Then, use rqt_image_view for visualization. Multiple rqt_image_view nodes can be opened to view the RGB image and the left and right infrared images respectively.

    Finally, open a terminal and start recording with:

    rosbag record /camera/color/image_raw /camera/infra1/image_rect_raw /camera/infra2/image_rect_raw /camera/imu

    After the recording is completed, press Ctrl+C to save and exit.

    The above is the main content of this blog. Later, as learning and understanding deepen, other aspects of the RealSense D435i will be introduced, such as calibration, running actual SLAM, and collecting data.

    9. References
    • [1]https://www.intelrealsense.com/zh-hans
    • [2]https://www.intelrealsense.com/zh-hans/depth-camera-d435i
    • [3]https://www.pianshen.com/article/2177388157
    • [4]https://github.com/IntelRealSense/realsense-ros
    • [5]https://blog.csdn.net/sinat_36502563/article/details/89174282
    • [6]https://pypi.org/project/pyrealsense2/