Troubleshooting RealSense Camera Issues

Extrinsics between different sensor modules

Problem description

While using the RealSense API, I ran into a problem converting between the coordinate systems of the camera's different sensors. At first I obtained the extrinsics from the camera's depth module to its color module through the get_extrinsics_to method, with the following code:

import pyrealsense2 as rs

# Configure and start the depth (640x480) and color (1280x720) streams
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
profile = pipeline.start(config)
stream_profile_color = profile.get_stream(rs.stream.color)
stream_profile_depth = profile.get_stream(rs.stream.depth)
color_intrs = stream_profile_color.as_video_stream_profile().get_intrinsics()
depth_intrs = stream_profile_depth.as_video_stream_profile().get_intrinsics()
# Extrinsics from the depth stream to the color stream
extrinsics = stream_profile_depth.get_extrinsics_to(stream_profile_color)
print("extrins:\n", extrinsics)
pipeline.stop()

Later I found the rs.rs2_transform_point_to_point() function in the API documentation and wanted to verify whether it internally performs the conversion by multiplying the coordinate by the extrinsic transformation matrix, so I used the extrinsics obtained above to check:

import pyrealsense2 as rs
import numpy as np

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
profile = pipeline.start(config)
stream_profile_color = profile.get_stream(rs.stream.color)
stream_profile_depth = profile.get_stream(rs.stream.depth)
color_intrs = stream_profile_color.as_video_stream_profile().get_intrinsics()
depth_intrs = stream_profile_depth.as_video_stream_profile().get_intrinsics()
extrinsics = stream_profile_depth.get_extrinsics_to(stream_profile_color)
print("extrins:\n", extrinsics)
align_to = rs.stream.color
align = rs.align(align_to)
x = 766
y = 486
frames = pipeline.wait_for_frames()
aligned_frames = align.process(frames)
aligned_depth_frames = aligned_frames.get_depth_frame()
dis = aligned_depth_frames.get_distance(x, y)
camera_coordinate = rs.rs2_deproject_pixel_to_point(depth_intrs, [x, y], dis)
camera_coordinate1 = rs.rs2_deproject_pixel_to_point(color_intrs, [x, y], dis)
# generate the transform matrix on my own
rotation_matrix = np.array(extrinsics.rotation).reshape(3, 3)
translation_matrix = np.array(extrinsics.translation).reshape(3, 1)
tf_matrix = np.vstack((np.hstack((rotation_matrix, translation_matrix)), [0, 0, 0, 1]))
print("tf_matrix:", tf_matrix)
# Convert to homogeneous coordinates
camera_coordinate_my_own = np.dot(tf_matrix, np.append(camera_coordinate, [1]).reshape(4, 1))
camera_coordinate_api = rs.rs2_transform_point_to_point(extrinsics, camera_coordinate)
# the coordinate unit is meter
print("my own:\n", camera_coordinate_my_own)
print("api:\n", camera_coordinate_api)
print("distance:", dis)
print("depth intrins:", camera_coordinate)
print("color intrins:", camera_coordinate1)
pipeline.stop()

The output was:

my own:
 [[0.40040609]
 [0.27704751]
 [0.50412326]
 [1.        ]]
api:
 [0.3927474021911621, 0.3071334660053253, 0.49359646439552307]

As you can see, the two results are different, yet the principle seemed correct to me, so I looked into the problem.

Solution

Open a Windows command prompt and navigate to C:\Program Files (x86)\Intel RealSense SDK 2.0\tools.

Make sure a RealSense device is connected to the computer.

Run the rs-enumerate-devices -c command.

This prints the camera's calibration parameters, from which you can look up whatever information you need.

[Screenshot: output of rs-enumerate-devices -c, showing the depth-to-color extrinsics reported by the SDK]

tf_matrix: [[ 9.99965429e-01  7.62356725e-03  3.31247575e-03  3.04106681e-04]
 [-7.54260505e-03  9.99688208e-01 -2.38026995e-02  1.42350988e-02]
 [-3.49290436e-03  2.37768926e-02  9.99711215e-01 -6.95471419e-03]
 [ 0.00000000e+00  0.00000000e+00  0.00000000e+00  1.00000000e+00]]

Comparing the tf_matrix printed by my code with the rotation reported by the SDK tool, the two are transposes of each other. That is exactly where the problem lies: pyrealsense2 stores the rotation in column-major order, and I forgot to transpose it when building the transformation matrix, which is why the results differed.

The fix

The code needs to be modified as follows:

rotation_matrix = np.array(extrinsics.rotation).reshape(3, 3).T  # transpose the column-major rotation
translation_matrix = np.array(extrinsics.translation).reshape(3, 1)
tf_matrix = np.vstack((np.hstack((rotation_matrix, translation_matrix)), [0, 0, 0, 1]))
print("tf_matrix:", tf_matrix)

Running it again gives:

my own:
 [[0.39313513]
 [0.30742287]
 [0.49409104]
 [1.        ]]
api:
 [0.393135130405426, 0.3074228763580322, 0.49409106373786926]

The results now match and the problem is solved. This also shows that rs.rs2_transform_point_to_point() internally performs the conversion by applying the extrinsic matrix.
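
For reference, here is a minimal NumPy sketch of what this amounts to, based purely on the experiment above rather than the library source: reshaping the nine rotation values row-major gives the transpose of the mathematical rotation matrix R, and the transform is then p_color = R · p_depth + t.

import numpy as np

def transform_point(extrinsics, point):
    # Sketch of rs.rs2_transform_point_to_point(): transpose the column-major
    # rotation to get R, then apply p_color = R @ p_depth + t
    R = np.asarray(extrinsics.rotation).reshape(3, 3).T
    t = np.asarray(extrinsics.translation)
    return (R @ np.asarray(point) + t).tolist()

Calling transform_point(extrinsics, camera_coordinate) should reproduce the API result printed above.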

Intrinsics of the depth and RGB images after alignment

Problem description

While using rs.rs2_deproject_pixel_to_point() to convert a pixel into a 3D coordinate, I was unsure how the function's parameters should be used. In rs2_deproject_pixel_to_point(intrin: pyrealsense2.pyrealsense2.intrinsics, pixel: List[float[2]], depth: float), should the intrinsics argument be the intrinsics of the RGB image or of the depth image? I ran the following experiment:

import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
profile = pipeline.start(config)
stream_profile_color = profile.get_stream(rs.stream.color)
stream_profile_depth = profile.get_stream(rs.stream.depth)
color_intrs = stream_profile_color.as_video_stream_profile().get_intrinsics()
depth_intrs = stream_profile_depth.as_video_stream_profile().get_intrinsics()
print("color_intrs:", color_intrs)
print("depth_intrs:", depth_intrs)
align_to = rs.stream.color
align = rs.align(align_to)
x = 668
y = 508
frames = pipeline.wait_for_frames()
aligned_frames = align.process(frames)
aligned_depth_frames = aligned_frames.get_depth_frame()
aligned_color_frames = aligned_frames.get_color_frame()
aligned_color_intrs = aligned_color_frames.profile.as_video_stream_profile().intrinsics
aligned_depth_intrs = aligned_depth_frames.profile.as_video_stream_profile().intrinsics
print("aligned_color_intrs:", aligned_color_intrs)
print("aligned_depth_intrs:", aligned_depth_intrs)
dis = aligned_depth_frames.get_distance(x, y)
camera_coordinate = rs.rs2_deproject_pixel_to_point(aligned_depth_intrs, [x, y], dis)
camera_coordinate1 = rs.rs2_deproject_pixel_to_point(color_intrs, [x, y], dis)
camera_coordinate2 = rs.rs2_deproject_pixel_to_point(aligned_color_intrs, [x, y], dis)
print("aligned_depth_coordinate:", camera_coordinate)
print("color_coordinate::", camera_coordinate1)
print("aligned_color_coordinate::", camera_coordinate2)
pipeline.stop()

The output was:

color_intrs: [ 1280x720  p[633.544 366.202]  f[631.882 631.281]  Inverse Brown Conrady [-0.0545333 0.0667639 -0.0007223 0.00052989 -0.0208721] ]
depth_intrs: [ 640x480  p[316.003 243.892]  f[388.318 388.318]  Brown Conrady [0 0 0 0 0] ]
aligned_color_intrs: [ 1280x720  p[633.544 366.202]  f[631.882 631.281]  Inverse Brown Conrady [-0.0545333 0.0667639 -0.0007223 0.00052989 -0.0208721] ]
aligned_depth_intrs: [ 1280x720  p[633.544 366.202]  f[631.882 631.281]  Inverse Brown Conrady [-0.0545333 0.0667639 -0.0007223 0.00052989 -0.0208721] ]
aligned_depth_coordinate: [0.0614432729780674, 0.2532774806022644, 1.124000072479248]
color_coordinate:: [0.0614432729780674, 0.2532774806022644, 1.124000072479248]
aligned_color_coordinate:: [0.0614432729780674, 0.2532774806022644, 1.124000072479248]

Conclusion

The experiment shows that after the RGB and depth images are aligned, the depth frame's intrinsics are adjusted to match the RGB frame's intrinsics, so passing either the (aligned) depth intrinsics or the RGB intrinsics to rs.rs2_deproject_pixel_to_point() gives the same result.
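
To see why the choice of intrinsics matters at all, here is a minimal pinhole deprojection sketch. It ignores lens distortion (which the real rs.rs2_deproject_pixel_to_point() also handles) and uses only the ppx/ppy/fx/fy fields of the intrinsics object:

def deproject_pixel_to_point(intrin, pixel, depth):
    # Pinhole back-projection, ignoring distortion:
    # X = (u - ppx) / fx * depth, Y = (v - ppy) / fy * depth, Z = depth
    x = (pixel[0] - intrin.ppx) / intrin.fx
    y = (pixel[1] - intrin.ppy) / intrin.fy
    return [depth * x, depth * y, depth]

Since alignment overwrites the depth intrinsics with the color intrinsics, all three calls above feed the same ppx/ppy/fx/fy into this computation, which is why the results coincide.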

Using the API to get the 3D coordinate of an RGB pixel (in the RGB camera's 3D coordinate system)

Refer to the official documentation.

A few points to be clear about:

  • Different RealSense sensor modules use different coordinate systems; in other words, the RGB sensor and the depth sensor do not share a coordinate system.
  • The sensor coordinate systems are expressed in meters.
  • Converting between the coordinate systems of different sensors requires the extrinsics.
  • Without alignment, there is no correspondence between pixels of the depth image and pixels of the RGB image (the depth image does coincide with the infrared image); that is, a pixel taken from the RGB image and the pixel at the same coordinates in the depth image correspond to completely different points in the physical scene.

Two methods

Getting the 3D coordinate after aligning the RGB and depth images

import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
profile = pipeline.start(config)
stream_profile_color = profile.get_stream(rs.stream.color)
stream_profile_depth = profile.get_stream(rs.stream.depth)
color_intrs = stream_profile_color.as_video_stream_profile().get_intrinsics()
depth_intrs = stream_profile_depth.as_video_stream_profile().get_intrinsics()
print("color_intrs:", color_intrs)
print("depth_intrs:", depth_intrs)
align_to = rs.stream.color
align = rs.align(align_to)
x = 668
y = 508
frames = pipeline.wait_for_frames()
aligned_frames = align.process(frames)
aligned_depth_frames = aligned_frames.get_depth_frame()
aligned_color_frames = aligned_frames.get_color_frame()
aligned_color_intrs = aligned_color_frames.profile.as_video_stream_profile().intrinsics
aligned_depth_intrs = aligned_depth_frames.profile.as_video_stream_profile().intrinsics
print("aligned_color_intrs:", aligned_color_intrs)
print("aligned_depth_intrs:", aligned_depth_intrs)
dis = aligned_depth_frames.get_distance(x, y)
camera_coordinate = rs.rs2_deproject_pixel_to_point(aligned_depth_intrs, [x, y], dis)
camera_coordinate1 = rs.rs2_deproject_pixel_to_point(color_intrs, [x, y], dis)
camera_coordinate2 = rs.rs2_deproject_pixel_to_point(aligned_color_intrs, [x, y], dis)
print("aligned_depth_coordinate:", camera_coordinate)
print("color_coordinate::", camera_coordinate1)
print("aligned_color_coordinate::", camera_coordinate2)
pipeline.stop()

The output was:

color_intrs: [ 1280x720  p[633.544 366.202]  f[631.71 631.109]  Inverse Brown Conrady [-0.0545333 0.0667639 -0.0007223 0.00052989 -0.0208721] ]
depth_intrs: [ 640x480  p[316.003 243.892]  f[388.318 388.318]  Brown Conrady [0 0 0 0 0] ]
aligned_color_intrs: [ 1280x720  p[633.544 366.202]  f[631.71 631.109]  Inverse Brown Conrady [-0.0545333 0.0667639 -0.0007223 0.00052989 -0.0208721] ]
aligned_depth_intrs: [ 1280x720  p[633.544 366.202]  f[631.71 631.109]  Inverse Brown Conrady [-0.0545333 0.0667639 -0.0007223 0.00052989 -0.0208721] ]
distance: 1.124000072479248
aligned_depth_coordinate: [0.06146007776260376, 0.2533468008041382, 1.124000072479248]
color_coordinate:: [0.06146007776260376, 0.2533468008041382, 1.124000072479248]
aligned_color_coordinate:: [0.06146007776260376, 0.2533468008041382, 1.124000072479248]

Getting the 3D coordinate directly via the API, skipping alignment

import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
profile = pipeline.start(config)
depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()
depth_min = 0.11  # meter
depth_max = 2.0  # meter

depth_intrin = profile.get_stream(rs.stream.depth).as_video_stream_profile().get_intrinsics()
color_intrin = profile.get_stream(rs.stream.color).as_video_stream_profile().get_intrinsics()

depth_to_color_extrin = profile.get_stream(rs.stream.depth).as_video_stream_profile().get_extrinsics_to(
    profile.get_stream(rs.stream.color))
color_to_depth_extrin = profile.get_stream(rs.stream.color).as_video_stream_profile().get_extrinsics_to(
    profile.get_stream(rs.stream.depth))
frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()
x = 668
y = 508
color_point = [x, y]
depth_point = rs.rs2_project_color_pixel_to_depth_pixel(
    depth_frame.get_data(), depth_scale,
    depth_min, depth_max,
    depth_intrin, color_intrin, depth_to_color_extrin, color_to_depth_extrin, color_point)
print("depth_point:", depth_point)
distance = depth_frame.get_distance(int(depth_point[0]), int(depth_point[1]))
print("distance:", distance)
camera_coordinate_depth = rs.rs2_deproject_pixel_to_point(depth_intrin, depth_point, distance)
print("coordinate in depth:", camera_coordinate_depth)
camera_coordinate_color = rs.rs2_transform_point_to_point(depth_to_color_extrin, camera_coordinate_depth)
print("coordinate in color:", camera_coordinate_color)
pipeline.stop()

The output was:

depth_point: [357.95562744140625, 329.447998046875]
distance: 1.124000072479248
coordinate in depth: [0.12143445014953613, 0.24764372408390045, 1.124000072479248]
coordinate in color: [0.061697326600551605, 0.25312334299087524, 1.1230082511901855]

Comparing the two results:

distance: 1.124000072479248
aligned_depth_coordinate: [0.06146007776260376, 0.2533468008041382, 1.124000072479248]
color_coordinate:: [0.06146007776260376, 0.2533468008041382, 1.124000072479248]
aligned_color_coordinate:: [0.06146007776260376, 0.2533468008041382, 1.124000072479248]
depth_point: [357.95562744140625, 329.447998046875]
distance: 1.124000072479248
coordinate in depth: [0.12143445014953613, 0.24764372408390045, 1.124000072479248]
coordinate in color: [0.061697326600551605, 0.25312334299087524, 1.1230082511901855]

The final coordinates are both 3D coordinates in the RGB camera's coordinate system (in meters), and the two are very close, which shows that both methods are correct. However, the second method requires less computation than the first and is therefore more efficient.

Further verification of the two methods:

import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
profile = pipeline.start(config)
depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()
depth_min = 0.11  # meter
depth_max = 2.0  # meter

depth_intrin = profile.get_stream(rs.stream.depth).as_video_stream_profile().get_intrinsics()
color_intrin = profile.get_stream(rs.stream.color).as_video_stream_profile().get_intrinsics()

depth_to_color_extrin = profile.get_stream(rs.stream.depth).as_video_stream_profile().get_extrinsics_to(
    profile.get_stream(rs.stream.color))
color_to_depth_extrin = profile.get_stream(rs.stream.color).as_video_stream_profile().get_extrinsics_to(
    profile.get_stream(rs.stream.depth))
# align the color and depth streams
align_to = rs.stream.color
align = rs.align(align_to)

frames = pipeline.wait_for_frames()
# get the depth frame after alignment
aligned_frames = align.process(frames)
aligned_depth_frames = aligned_frames.get_depth_frame()
# get the original depth frame
depth_frame = frames.get_depth_frame()
x = 668
y = 508
dis = aligned_depth_frames.get_distance(x, y)
print("distance after alignment:", dis)
color_point = [x, y]
depth_point = rs.rs2_project_color_pixel_to_depth_pixel(
    depth_frame.get_data(), depth_scale,
    depth_min, depth_max,
    depth_intrin, color_intrin, depth_to_color_extrin, color_to_depth_extrin, color_point)
print("depth_point:", depth_point)
distance = depth_frame.get_distance(int(depth_point[0]), int(depth_point[1]))
print("distance without alignment:", distance)
camera_coordinate_depth = rs.rs2_deproject_pixel_to_point(depth_intrin, depth_point, distance)
print("coordinate in depth:", camera_coordinate_depth)
camera_coordinate_color = rs.rs2_transform_point_to_point(depth_to_color_extrin, camera_coordinate_depth)
print("coordinate in color:", camera_coordinate_color)
pipeline.stop()

The output was:

distance after alignment: 1.127000093460083
depth_point: [357.96136474609375, 329.471923828125]
distance without alignment: 1.127000093460083
coordinate in depth: [0.12177521735429764, 0.24837413430213928, 1.127000093460083]
coordinate in color: [0.0620361790060997, 0.25386834144592285, 1.1260048151016235]

The code above computes the distance for the same RGB pixel using both methods, and the two distances are exactly the same. This again demonstrates that both methods are correct.

Getting the depth value after alignment

After alignment, the depth frame can be queried directly with RGB pixel coordinates to get the depth of the corresponding RGB pixel, because the depth frame and the RGB frame have been registered and their pixels correspond one to one. (In effect, the conversion that the unaligned method performs for a single pixel has already been carried out for every pixel.)
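
As a small illustration, here is a sketch that reuses profile, aligned_depth_frames, x and y from the code above and fetches depth_scale from the device; the depth under an RGB pixel can be read either through get_distance() or from the raw depth array:

import numpy as np

depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()
depth_raw = np.asanyarray(aligned_depth_frames.get_data())  # uint16, 720x1280 after aligning to color
depth_in_meters = depth_raw[y, x] * depth_scale             # note the (row, column) indexing
print(depth_in_meters, aligned_depth_frames.get_distance(x, y))  # the two values should match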

Getting the depth value via the API without alignment

Without alignment, the pixel coordinates of the depth frame and the RGB frame do not correspond one to one. To get the depth of an RGB pixel, you first have to convert that RGB pixel into a pixel coordinate in the depth frame, and then query the distance.
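
Putting the pieces together, a hedged helper (a hypothetical function built only from the API calls already used in the code above) that goes from an RGB pixel to a 3D coordinate in the color camera's frame without alignment could look like this:

import pyrealsense2 as rs

def rgb_pixel_to_color_xyz(depth_frame, depth_scale,
                           depth_intrin, color_intrin,
                           depth_to_color_extrin, color_to_depth_extrin,
                           pixel, depth_min=0.11, depth_max=2.0):
    # 1. Find the depth pixel corresponding to the RGB pixel
    depth_pixel = rs.rs2_project_color_pixel_to_depth_pixel(
        depth_frame.get_data(), depth_scale, depth_min, depth_max,
        depth_intrin, color_intrin, depth_to_color_extrin, color_to_depth_extrin,
        [float(pixel[0]), float(pixel[1])])
    # 2. Query its distance and deproject into the depth camera's frame
    distance = depth_frame.get_distance(int(depth_pixel[0]), int(depth_pixel[1]))
    point_in_depth = rs.rs2_deproject_pixel_to_point(depth_intrin, depth_pixel, distance)
    # 3. Transform the point into the color camera's frame
    return rs.rs2_transform_point_to_point(depth_to_color_extrin, point_in_depth)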

Another problem encountered in practice

I assumed that after verifying the two ways of obtaining XYZ coordinates, I could freely choose either one as needed. However, coordinates that were computed correctly on the D455 showed large errors on the L515. I then used rs.rs2_project_point_to_pixel() (the API that converts a 3D coordinate back into pixel coordinates) to check the correctness of the two methods, as follows:
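
For context, rs.rs2_project_point_to_pixel() is the inverse operation of deprojection. A minimal pinhole sketch (again ignoring distortion, which the real function does handle) looks roughly like this:

def project_point_to_pixel(intrin, point):
    # Pinhole projection, ignoring distortion:
    # u = X / Z * fx + ppx, v = Y / Z * fy + ppy
    u = point[0] / point[2] * intrin.fx + intrin.ppx
    v = point[1] / point[2] * intrin.fy + intrin.ppy
    return [u, v]

If a 3D coordinate is correct, projecting it back through the color intrinsics should land close to the pixel we started from, which is exactly the check performed below.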

D455

import pyrealsense2 as rs

# Initialize and configure the streams
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
profile = pipeline.start(config)

# Get the camera intrinsics and the depth scale
depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()
print("depth_scale:",depth_scale)
depth_min = 0.11  # meter
depth_max = 2.0  # meter
depth_intrin = profile.get_stream(rs.stream.depth).as_video_stream_profile().get_intrinsics()
color_intrin = profile.get_stream(rs.stream.color).as_video_stream_profile().get_intrinsics()
# Get the extrinsics between the camera's depth and color modules
depth_to_color_extrin = profile.get_stream(rs.stream.depth).as_video_stream_profile().get_extrinsics_to(
    profile.get_stream(rs.stream.color))
color_to_depth_extrin = profile.get_stream(rs.stream.color).as_video_stream_profile().get_extrinsics_to(
    profile.get_stream(rs.stream.depth))

frames = pipeline.wait_for_frames()
# Get the 3D coordinate via the API
# get the original depth frame
depth_frame = frames.get_depth_frame()
# The color pixel to query
x = 850
y = 450
color_point = [x, y]
depth_point = rs.rs2_project_color_pixel_to_depth_pixel(
    depth_frame.get_data(), depth_scale,
    depth_min, depth_max,
    depth_intrin, color_intrin, depth_to_color_extrin, color_to_depth_extrin, color_point)
print("depth pixel:", depth_point)
distance = depth_frame.get_distance(int(depth_point[0]), int(depth_point[1]))
# Coordinate of the point in the depth module's 3D coordinate system
camera_coordinate_depth = rs.rs2_deproject_pixel_to_point(depth_intrin, depth_point, distance)
depth_point_after_calculation = rs.rs2_project_point_to_pixel(depth_intrin, camera_coordinate_depth)
print("经过计算后的深度坐标为:", depth_point_after_calculation)
# print("coordinate relative to depth sensor", camera_coordinate_depth)
camera_coordinate_color = rs.rs2_transform_point_to_point(depth_to_color_extrin, camera_coordinate_depth)
print("coordinate relative to color sensor using rs.rs2_project_color_pixel_to_depth_pixel():\n",
      camera_coordinate_color)
color_pixel = rs.rs2_project_point_to_pixel(color_intrin, camera_coordinate_color)
print("原始彩色像素坐标为:", [x, y])
print("经过计算后的彩色坐标为:", color_pixel)

align_to = rs.stream.color
align = rs.align(align_to)
aligned_frames = align.process(frames)
aligned_depth_frames = aligned_frames.get_depth_frame()
dis = aligned_depth_frames.get_distance(x, y)
camera_coordinate_align = rs.rs2_deproject_pixel_to_point(color_intrin, [x, y], dis)
print("coordinate relative to the color sensor after alignment:\n", camera_coordinate_align)
color_pixel1 = rs.rs2_project_point_to_pixel(color_intrin, camera_coordinate_align)
print("经过配准后的彩色像素坐标:", color_pixel1)
pipeline.stop()

The output was:

depth_scale: 0.0010000000474974513
depth pixel: [467.0, 293.9060974121094]
recomputed depth pixel: [467.0, 293.9060974121094]
coordinate relative to color sensor using rs.rs2_project_color_pixel_to_depth_pixel():
 [0.4696837365627289, 0.18194164335727692, 1.3616513013839722]
original color pixel: [850, 450]
recomputed color pixel: [850.2799072265625, 449.99932861328125]
coordinate relative to the color sensor after alignment:
 [0.4623022973537445, 0.17931503057479858, 1.3420000076293945]
color pixel recomputed after alignment: [850.0, 450.0]

Note the following in these results.
The pixel coordinate I specified in the color image was:

original color pixel: [850, 450]

Re-projecting the 3D coordinate obtained with the unaligned method back into pixel coordinates gives:

recomputed color pixel: [850.2799072265625, 449.99932861328125]

Re-projecting the 3D coordinate obtained with the aligned method back into pixel coordinates gives:

color pixel recomputed after alignment: [850.0, 450.0]

The pixel coordinates recovered by both methods are approximately equal to the specified coordinate, which means the 3D coordinates obtained by both methods are also approximately equal. On the L515, however, the unaligned method produced a large error in the 3D coordinate.

L515

import pyrealsense2 as rs

# Initialize and configure the streams
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
profile = pipeline.start(config)

# Get the camera intrinsics and the depth scale
depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()
print("depth_scale:",depth_scale)
depth_min = 0.11  # meter
depth_max = 2.0  # meter
depth_intrin = profile.get_stream(rs.stream.depth).as_video_stream_profile().get_intrinsics()
color_intrin = profile.get_stream(rs.stream.color).as_video_stream_profile().get_intrinsics()
# Get the extrinsics between the camera's depth and color modules
depth_to_color_extrin = profile.get_stream(rs.stream.depth).as_video_stream_profile().get_extrinsics_to(
    profile.get_stream(rs.stream.color))
color_to_depth_extrin = profile.get_stream(rs.stream.color).as_video_stream_profile().get_extrinsics_to(
    profile.get_stream(rs.stream.depth))

frames = pipeline.wait_for_frames()
# Get the 3D coordinate via the API
# get the original depth frame
depth_frame = frames.get_depth_frame()
# The color pixel to query
x = 850
y = 450
color_point = [x, y]
depth_point = rs.rs2_project_color_pixel_to_depth_pixel(
    depth_frame.get_data(), depth_scale,
    depth_min, depth_max,
    depth_intrin, color_intrin, depth_to_color_extrin, color_to_depth_extrin, color_point)
print("depth pixel:", depth_point)
distance = depth_frame.get_distance(int(depth_point[0]), int(depth_point[1]))
# Coordinate of the point in the depth module's 3D coordinate system
camera_coordinate_depth = rs.rs2_deproject_pixel_to_point(depth_intrin, depth_point, distance)
depth_point_after_calculation = rs.rs2_project_point_to_pixel(depth_intrin, camera_coordinate_depth)
print("经过计算后的深度坐标为:", depth_point_after_calculation)
# print("coordinate relative to depth sensor", camera_coordinate_depth)
camera_coordinate_color = rs.rs2_transform_point_to_point(depth_to_color_extrin, camera_coordinate_depth)
print("coordinate relative to color sensor using rs.rs2_project_color_pixel_to_depth_pixel():\n",
      camera_coordinate_color)
color_pixel = rs.rs2_project_point_to_pixel(color_intrin, camera_coordinate_color)
print("原始彩色像素坐标为:", [x, y])
print("经过计算后的彩色坐标为:", color_pixel)

align_to = rs.stream.color
align = rs.align(align_to)
aligned_frames = align.process(frames)
aligned_depth_frames = aligned_frames.get_depth_frame()
dis = aligned_depth_frames.get_distance(x, y)
camera_coordinate_align = rs.rs2_deproject_pixel_to_point(color_intrin, [x, y], dis)
print("coordinate relative to the color sensor after alignment:\n", camera_coordinate_align)
color_pixel1 = rs.rs2_project_point_to_pixel(color_intrin, camera_coordinate_align)
print("经过配准后的彩色像素坐标:", color_pixel1)
pipeline.stop()

The output was:

depth_scale: 0.0002500000118743628
depth pixel: [398.83453369140625, 233.0365753173828]
recomputed depth pixel: [398.83453369140625, 233.0365753173828]
coordinate relative to color sensor using rs.rs2_project_color_pixel_to_depth_pixel():
 [0.10922971367835999, 0.002470635809004307, 0.5473655462265015]
original color pixel: [850, 450]
recomputed color pixel: [838.7570190429688, 357.4697265625]
coordinate relative to the color sensor after alignment:
 [0.16353169083595276, 0.0825377032160759, 0.7725000381469727]
color pixel recomputed after alignment: [850.0, 450.0]

Note the following in these results.
The pixel coordinate I specified in the color image was:

original color pixel: [850, 450]

Re-projecting the 3D coordinate obtained with the unaligned method back into pixel coordinates gives:

recomputed color pixel: [838.7570190429688, 357.4697265625]

Re-projecting the 3D coordinate obtained with the aligned method back into pixel coordinates gives:

color pixel recomputed after alignment: [850.0, 450.0]

Here the pixel coordinate recovered by the unaligned method deviates considerably from the specified coordinate, so the 3D coordinate obtained by that method is also far from the correct point. The aligned method, however, remains accurate.

Further analysis:

The two methods are based on the same principle, yet their results differ drastically. So I ran some experiments on the D455, modifying the parameters of rs.rs2_project_color_pixel_to_depth_pixel(). Among its parameters, the only ones that can reasonably be varied are depth_min and depth_max, so those two were modified in the experiments.

D455

depth_min = 0.1 depth_max = 2.0

depth_min = 0.1  # meter
depth_max = 2.0  # meter

With the rest of the code unchanged and only these two values modified, the result was:

original color pixel: [850, 450]
recomputed color pixel: [850.614990234375, 449.99853515625]
color pixel recomputed after alignment: [850.0, 450.0]

depth_min = 0.1 depth_max = 1.0

depth_min = 0.1  # meter
depth_max = 1.0  # meter

With the rest of the code unchanged and only these two values modified, the result was:

original color pixel: [850, 450]
recomputed color pixel: [861.7542114257812, 449.9549255371094]
color pixel recomputed after alignment: [850.0, 450.0]

depth_min = 0.1 depth_max = 1.5

depth_min = 0.1  # meter
depth_max = 1.5  # meter

With the rest of the code unchanged and only these two values modified, the result was:

original color pixel: [850, 450]
recomputed color pixel: [850.7327880859375, 449.99749755859375]
color pixel recomputed after alignment: [850.0, 450.0]

depth_min = 1.0 depth_max = 1.5

depth_min = 1.0  # meter
depth_max = 1.5  # meter

With the rest of the code unchanged and only these two values modified, the result was:

original color pixel: [850, 450]
recomputed color pixel: [849.806396484375, 450.000732421875]
color pixel recomputed after alignment: [850.0, 450.0]

Analysis

From the parameter experiments above, the depth_min and depth_max arguments passed to rs.rs2_project_color_pixel_to_depth_pixel() must match the range of depths actually being measured; if the range is chosen badly, the resulting depth_pixel will be inaccurate.

L515

depth_min = 0.11 depth_max = 2.0

depth_min = 0.11  # meter
depth_max = 2.0 # meter

With the rest of the code unchanged and only these two values modified, the result was:

original color pixel: [850, 450]
recomputed color pixel: [838.7406005859375, 357.3323669433594]
color pixel recomputed after alignment: [850.0, 450.0]

depth_min = 0.1 depth_max = 1.0

depth_min = 0.1  # meter
depth_max = 1.0 # meter

With the rest of the code unchanged and only these two values modified, the result was:

original color pixel: [850, 450]
recomputed color pixel: [837.4801635742188, 346.76116943359375]
color pixel recomputed after alignment: [850.0, 450.0]

depth_min = 0.1 depth_max = 1.5

depth_min = 0.1  # meter
depth_max = 1.5 # meter

With the rest of the code unchanged and only these two values modified, the result was:

original color pixel: [850, 450]
recomputed color pixel: [837.4309692382812, 346.347900390625]
color pixel recomputed after alignment: [850.0, 450.0]

depth_min = 0.5 depth_max = 1.5

depth_min = 0.5  # meter
depth_max = 1.5 # meter

With the rest of the code unchanged and only these two values modified, the result was:

original color pixel: [850, 450]
recomputed color pixel: [848.8622436523438, 440.7563781738281]
color pixel recomputed after alignment: [850.0, 450.0]

depth_min = 0.7 depth_max = 1.5

depth_min = 0.7  # meter
depth_max = 1.5 # meter

With the rest of the code unchanged and only these two values modified, the result was:

original color pixel: [850, 450]
recomputed color pixel: [849.7996215820312, 448.3738098144531]
color pixel recomputed after alignment: [850.0, 450.0]

In summary, depth_max and depth_min have a strong influence on rs.rs2_project_color_pixel_to_depth_pixel(); choosing an interval that matches the actual scene allows the L515 to produce accurate values as well.
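
One practical takeaway, as a sketch with a hypothetical helper (not part of the RealSense API) that reuses the depth_frame, depth_scale, intrinsics and extrinsics variables from the code above: wrap the call so that depth_min and depth_max bracket the expected working distance of the target.

import pyrealsense2 as rs

def color_pixel_to_depth_pixel(depth_frame, depth_scale,
                               depth_intrin, color_intrin,
                               depth_to_color_extrin, color_to_depth_extrin,
                               color_pixel, expected_depth, margin=0.3):
    # Choose a search interval (in meters) around the expected depth of the target
    depth_min = max(0.1, expected_depth - margin)
    depth_max = expected_depth + margin
    return rs.rs2_project_color_pixel_to_depth_pixel(
        depth_frame.get_data(), depth_scale,
        depth_min, depth_max,
        depth_intrin, color_intrin,
        depth_to_color_extrin, color_to_depth_extrin,
        [float(color_pixel[0]), float(color_pixel[1])])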

Conclusion

Verifying how the two methods work shows that the unaligned method requires far less computation than the aligned one, so when you only need the depth of a single RGB pixel, the unaligned method is more efficient. When using rs.rs2_project_color_pixel_to_depth_pixel(), however, the choice of the depth_min and depth_max parameters is critical.


Reposted from blog.csdn.net/m0_46649553/article/details/121249649