Structured light reconstruction with the inverse camera method: detailed explanation + code

Code: reply "Reverse Camera Method" in the background of the "Computer Vision Workshop" public account to download it directly.

Note: The theory in this article mainly comes from References 1 and 2; the code comes from Reference 2, published by Chao Zuo's research group at Nanjing University of Science and Technology.

The inverse camera method, also known as the triangulation stereo model, treats the projector as an "inverse camera": the projected structured light actively marks corresponding points in the field of view, and reconstruction then follows a principle similar to (though not identical to) binocular disparity.

01 Theory

1.1 Monocular calibration

Because the sensor has imaging errors, calibration is required; for this part, please refer to the earlier article that illustrates the monocular camera calibration algorithm.

In addition, the Matlab calibration program also computes the projection matrix P, which is what we need:
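$$P = K \, [R \mid t]$$

where $K$ is the $3\times 3$ intrinsic matrix and $[R \mid t]$ is the $3\times 4$ extrinsic pose; in the code below these correspond to KK, Rc_1, and Tc_1.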

1.3 Reconstruction principle

We regard the projector as an "inverse" camera. Assume that after system calibration we have obtained the projection matrices of the camera and the projector (the notation changes here):
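$$A^c = K^c\,[R^c \mid t^c] = \begin{bmatrix} a^c_{11} & a^c_{12} & a^c_{13} & a^c_{14} \\ a^c_{21} & a^c_{22} & a^c_{23} & a^c_{24} \\ a^c_{31} & a^c_{32} & a^c_{33} & a^c_{34} \end{bmatrix}, \qquad A^p = K^p\,[R^p \mid t^p] = \begin{bmatrix} a^p_{11} & a^p_{12} & a^p_{13} & a^p_{14} \\ a^p_{21} & a^p_{22} & a^p_{23} & a^p_{24} \\ a^p_{31} & a^p_{32} & a^p_{33} & a^p_{34} \end{bmatrix}$$

where the superscripts c and p denote the camera and the projector (Ac and Ap in the code).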

As we said before, going from the world coordinate system to the pixel coordinate system, the following relationship holds:
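$$s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = A \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}$$

where $(u, v)$ are pixel coordinates, $(X_w, Y_w, Z_w)$ are world coordinates, and $s$ is an unknown scale factor that can be eliminated.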

Then, between the world coordinate system and the pixel coordinates of the camera and the projector, the following relationship holds. For a camera pixel $(u^c, v^c)$, eliminating the scale factor $s$ yields two linear equations in $(X_w, Y_w, Z_w)$; the projector contributes a third equation through its column coordinate $u^p$, recovered from the absolute phase. Stacking the three (Eq. (32) in Reference 2):
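$$\begin{bmatrix} a^c_{11} - a^c_{31}u^c & a^c_{12} - a^c_{32}u^c & a^c_{13} - a^c_{33}u^c \\ a^c_{21} - a^c_{31}v^c & a^c_{22} - a^c_{32}v^c & a^c_{23} - a^c_{33}v^c \\ a^p_{11} - a^p_{31}u^p & a^p_{12} - a^p_{32}u^p & a^p_{13} - a^p_{33}u^p \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} = \begin{bmatrix} a^c_{34}u^c - a^c_{14} \\ a^c_{34}v^c - a^c_{24} \\ a^p_{34}u^p - a^p_{14} \end{bmatrix}$$

and the world coordinates of the pixel follow by solving this $3\times 3$ system.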

It should be noted that this system is solved separately for each pixel, which is reflected in the code.

As for which factors the accuracy of this formula depends on: mainly the calibration accuracy of the two projection matrices and the accuracy of the recovered absolute phase.

The advantage of this method is that it is relatively simple, and even when measurement is performed outside the calibration volume, the accuracy remains relatively high.

02 Practice

2.1 Calibration

Calibrating the system parameters means determining:

  1. The imaging parameters of the camera and the projector (including their imaging errors, i.e., lens distortion)

  2. The relative pose between the camera and the projector

(1) Camera calibration

In the code, the first step is to calibrate the camera:

num_x = 11;     % number of circles in the x direction of the calibration board
num_y = 9;      % number of circles in the y direction of the calibration board
dist_circ = 25; % spacing of adjacent circle centers on the calibration board (mm)

disp('Starting camera calibration...');

% pixel coordinates of the circle centers
load('camera_imagePoints.mat'); % load the camera image points of the centers

% world coordinates of the circle centers
worldPoints = generateCheckerboardPoints([num_y+1,num_x+1], dist_circ);

% calibrate the camera
[cameraParams,imagesUsed,estimationErrors] = estimateCameraParameters(imagePoints,worldPoints, 'EstimateTangentialDistortion', true);

figure; showReprojectionErrors(cameraParams); title('Reprojection errors of the camera calibration');
figure; showExtrinsics(cameraParams); title('Extrinsics of the camera calibration');

The reprojection error is 0.08 pixels, which is quite accurate; this is thanks to the use of a high-precision calibration board.

Note: Whether a checkerboard or a circle-grid calibration board is used, the calibration principle is the same: extract the pixel coordinates of the corners/circle centers in the camera image, then calibrate with Zhang's method.
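The code above loads precomputed center coordinates from camera_imagePoints.mat. For completeness, here is a minimal sketch (not part of the original code) of how the centers could be detected, assuming the board images sit in a hypothetical calib/ folder and that detectCircleGridPoints (Computer Vision Toolbox, R2021b+) is available:

% minimal sketch: detect the circle-grid centers in each calibration image
imgFiles = dir('calib/*.png');          % hypothetical folder of board images
numImgs = numel(imgFiles);
imagePoints = zeros(num_x * num_y, 2, numImgs);
for k = 1:numImgs
    I = imread(fullfile(imgFiles(k).folder, imgFiles(k).name));
    % patternDims is [rows, cols] of circles on the board
    imagePoints(:,:,k) = detectCircleGridPoints(I, [num_y, num_x]);
end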


We take the upper-left circle center of the first calibration board pose as the origin of the world coordinate system, and then save the parameters needed for reconstruction. In the code:

% save Rc_1, Tc_1, KK
% pose of the camera relative to the world origin (rotation + translation = extrinsics)
Rc_1 = cameraParams.RotationMatrices(:,:,1);
Rc_1 = Rc_1';   % transpose: convert from MATLAB's row-vector convention to the column-vector convention
Tc_1 = cameraParams.TranslationVectors(1, :);
Tc_1 = Tc_1';
% camera intrinsics
KK = cameraParams.IntrinsicMatrix';
save('CamCalibResult.mat', 'Rc_1', 'Tc_1', 'KK');

(2) Projector calibration

Calibrating the projector is a little more complicated, because the projector cannot directly "see" the circle centers of the calibration board. The solution is actually simple: with the help of the camera, we project phase-shift patterns onto the board and recover the centers' projector coordinates by the phase-shift method.

Specifically: to perform monocular calibration for the projector, two pieces of information must be known. From the projector's point of view:

  1. The world coordinates of the circle centers: known, the same as for the camera

  2. The pixel coordinates of the circle centers in the projector image: unknown, because the projector cannot directly see the calibration board

The way to obtain them is:

  1. Project the phase-shift patterns onto the calibration board with the projector, capture them with the camera, and decode them to obtain the phase map

  2. Take the phase at each circle center and match it against the projector's ideal phase map to obtain the pixel coordinates of the circle centers in the projector image (see the sketch below)
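To make step 2 concrete, here is a minimal sketch (the variable names are assumptions, not from the published code): given the absolute phase map up_calib decoded from the captured patterns, the camera-side centers imagePoints, and vertical fringes whose phase grows linearly across the projector width prj_width:

% minimal sketch: map each circle center to its projector column coordinate
% up_calib: absolute phase map (normalized to [0, 2*pi]) decoded by the phase-shift method
% imagePoints: N x 2 circle centers detected in the camera image
prjPoints = zeros(size(imagePoints));
for i = 1:size(imagePoints, 1)
    % interpolate the phase at the sub-pixel center location
    phi = interp2(up_calib, imagePoints(i, 1), imagePoints(i, 2), 'linear');
    % the ideal projector phase is phi = 2*pi * x_p / prj_width; invert for x_p
    prjPoints(i, 1) = phi / (2 * pi) * prj_width;
end
% a second, horizontal set of fringes yields the row coordinate prjPoints(:, 2) the same way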

After obtaining these two pieces of information, the calibration can be performed directly. The code here assumes that the pixel coordinates of the circle centers under the projector have already been obtained (more on this later); the subsequent operations are the same as for the camera calibration:

%% step2: calibrate the projector
disp('Starting projector calibration...');
% pixel coordinates of the circle centers in the projector image
load('projector_imagePoints.mat'); % load the projector image points of the centers
% world coordinates of the circle centers
worldPoints = generateCheckerboardPoints([num_y+1,num_x+1], dist_circ);
[prjParams,imagesUsed,estimationErrors] = estimateCameraParameters(prjPoints,worldPoints, 'EstimateTangentialDistortion', true);
figure; showReprojectionErrors(prjParams); title('Reprojection errors of the projector calibration');
figure; showExtrinsics(prjParams); title('Extrinsics of the projector calibration');


% save the parameters (note that the camera's variable names are reused)
Rc_1 = prjParams.RotationMatrices(:,:,1);
Rc_1 = Rc_1';
Tc_1 = prjParams.TranslationVectors(1, :);
Tc_1 = Tc_1';
KK = prjParams.IntrinsicMatrix';
save('PrjCalibResult.mat', 'Rc_1', 'Tc_1', 'KK');

2.2 Reconstruction

Test object:

First set the parameters:

%% step1: input parameters
width = 640;      % camera width
height = 480;     % camera height
prj_width = 912;  % projector width

Load calibration parameters:

% camera: projection matrix Ac
load('CamCalibResult.mat');
Kc = KK;                % camera intrinsics
Ac = Kc * [Rc_1, Tc_1]; % camera projection matrix

% projector: projection matrix Ap
load('PrjCalibResult.mat');
Kp = KK;                % projector intrinsics
Ap = Kp * [Rc_1, Tc_1]; % projector projection matrix

It should be noted here that both

Ac = Kc * [Rc_1, Tc_1];
Ap = Kp * [Rc_1, Tc_1];  % Rc_1, Tc_1 now hold the projector's extrinsics, reloaded from PrjCalibResult.mat

compute a projection matrix: the mapping from coordinates in the world coordinate system to the pixel coordinate system.

Now read the phase map of the test image:

% fringe frequency 64, which is also the spacing (one period spans 64 pixels),
% used to compute the absolute phase; the additional frequencies 1 and 8 are
% used for unwrapping the wrapped phase
f = 64;  % fringe frequency (number of pixels per fringe period)
load('up_test_obj.mat');
up_test_obj = up_test_obj / f;  % normalize the phase to [0, 2*pi]
figure; imshow(up_test_obj / (2 * pi)); colorbar; title("Phase map, freq=" + num2str(f));
figure; mesh(up_test_obj); colorbar; title("Phase map, freq=" + num2str(f));
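The absolute phase map up_test_obj here was decoded beforehand (the decoding itself is deferred to a later post; see Question 1 below). For reference only, the standard N-step phase-shift formula for the wrapped phase is:

$$\phi = -\arctan\!\left(\frac{\sum_{n=0}^{N-1} I_n \sin(2\pi n / N)}{\sum_{n=0}^{N-1} I_n \cos(2\pi n / N)}\right)$$

where $I_n$ are the $N$ captured fringe images; multi-frequency unwrapping (here with frequencies 1, 8, 64) then turns the wrapped phase into the absolute phase.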

Compute the projector coordinate of each pixel:

% compute the projector coordinates
x_p = up_test_obj / (2 * pi) * prj_width;
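With the phase normalized to $[0, 2\pi]$ as above, this is simply

$$x^p = \frac{\Phi}{2\pi} \, W_p$$

where $\Phi$ is the absolute phase and $W_p$ is the projector width prj_width.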

Unlike standard binocular disparity reconstruction, no stereo rectification is needed; we reconstruct directly from the linear system derived above:

% 3D reconstruction
Xws = nan(height, width);
Yws = nan(height, width);
Zws = nan(height, width);
for y = 1:height
    for x = 1:width
        if ~isnan(up_test_obj(y, x))
            uc = x - 1;
            vc = y - 1;
            up = x_p(y, x) - 1;
            % Eq. (32) in the reference paper.
            A = [Ac(1,1) - Ac(3,1) * uc, Ac(1,2) - Ac(3,2) * uc, Ac(1,3) - Ac(3,3) * uc;
                 Ac(2,1) - Ac(3,1) * vc, Ac(2,2) - Ac(3,2) * vc, Ac(2,3) - Ac(3,3) * vc;
                 Ap(1,1) - Ap(3,1) * up, Ap(1,2) - Ap(3,2) * up, Ap(1,3) - Ap(3,3) * up];
            b = [Ac(3,4) * uc - Ac(1,4);
                 Ac(3,4) * vc - Ac(2,4);
                 Ap(3,4) * up - Ap(1,4)];
            XYZ_w = A \ b;  % solve the 3x3 system (more stable than inv(A) * b)
            Xws(y, x) = XYZ_w(1);
            Yws(y, x) = XYZ_w(2);
            Zws(y, x) = XYZ_w(3);
        end
    end
end
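The per-pixel loop is easy to read but slow in MATLAB. As an optional alternative (a sketch, not part of the original code; pagemldivide requires R2022a or later), the same per-pixel systems can be built as 3x3 pages and solved in one call:

% vectorized sketch: solve all per-pixel 3x3 systems at once
[xs, ys] = meshgrid(1:width, 1:height);
valid = ~isnan(up_test_obj);
uc = xs(valid) - 1;  vc = ys(valid) - 1;  up = x_p(valid) - 1;
page = @(v) reshape(v, 1, 1, []);  % lay a vector out along the 3rd dimension
n = numel(uc);
A = zeros(3, 3, n);  b = zeros(3, 1, n);
A(1,1,:) = page(Ac(1,1) - Ac(3,1)*uc);  A(1,2,:) = page(Ac(1,2) - Ac(3,2)*uc);  A(1,3,:) = page(Ac(1,3) - Ac(3,3)*uc);
A(2,1,:) = page(Ac(2,1) - Ac(3,1)*vc);  A(2,2,:) = page(Ac(2,2) - Ac(3,2)*vc);  A(2,3,:) = page(Ac(2,3) - Ac(3,3)*vc);
A(3,1,:) = page(Ap(1,1) - Ap(3,1)*up);  A(3,2,:) = page(Ap(1,2) - Ap(3,2)*up);  A(3,3,:) = page(Ap(1,3) - Ap(3,3)*up);
b(1,1,:) = page(Ac(3,4)*uc - Ac(1,4));
b(2,1,:) = page(Ac(3,4)*vc - Ac(2,4));
b(3,1,:) = page(Ap(3,4)*up - Ap(1,4));
XYZ = pagemldivide(A, b);  % 3 x 1 x n solutions
Xws = nan(height, width);  Yws = Xws;  Zws = Xws;
Xws(valid) = squeeze(XYZ(1,1,:));
Yws(valid) = squeeze(XYZ(2,1,:));
Zws(valid) = squeeze(XYZ(3,1,:));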

Point cloud display:

% point cloud display
xyzPoints(:, 1) = Xws(:);
xyzPoints(:, 2) = Yws(:);
xyzPoints(:, 3) = Zws(:);
ptCloud = pointCloud(xyzPoints);
xlimits = [min(Xws(:)), max(Xws(:))];
ylimits = [min(Yws(:)), max(Yws(:))];
zlimits = ptCloud.ZLimits;
player = pcplayer(xlimits, ylimits, zlimits);
xlabel(player.Axes, 'X (mm)');
ylabel(player.Axes, 'Y (mm)');
zlabel(player.Axes, 'Z (mm)');
view(player, ptCloud);
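One practical note: the reconstruction leaves NaN entries at invalid pixels, and removeInvalidPoints(ptCloud) can strip them if a downstream step needs a fully valid point cloud.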

03 Questions

There remain three questions:

  1. How do we project the patterns and recover the absolute phase?

  2. How do we help the projector "see" the calibration board, so that projector calibration can be performed?

  3. How can the accuracy be improved, for example by gamma correction?

I will cover these in later posts ~ welcome to follow the public account!

04 References

  1. Song Zhang, "GPU-assisted high-resolution, real-time 3-D shape measurement," Optics Express, 2006

  2. Shijie Feng, Chao Zuo, et al., "Calibration of fringe projection profilometry: A comparative review," Optics and Lasers in Engineering, 2021
