(Corrected: accuracy is about 1 mm) RealSense D435i depth camera + ArUco + checkerboard + OpenCV hand-eye calibration process record


2023.5 update

Recently I helped someone do a hand-eye calibration, and after calibrating the accuracy reached about 1 mm. The 10 mm error in the original text was therefore probably caused by a problem with the robot arm's own coordinate system at the time. The code has since been rewritten in Python and is given below.

Newcomers can complete the calibration by referring to the code below.
If you have any questions, leave a message and we can discuss them together~

What is required for hand-eye calibration:
1. Several checkerboard photos taken in different robot poses (I used 10 photos).
2. The corresponding robot-arm poses recorded as (x, y, z, rx, ry, rz), where the orientation is given as RPY angles in degrees; save them as an Excel file (a minimal example of the file layout is sketched after this list).
What needs to be modified in the code:
1. The camera intrinsic matrix self.K and the distortion parameters
2. The checkerboard dimensions
3. The paths to the img folder and the Excel file
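
For reference, below is a minimal sketch (my own addition, not from the original post) of how such a pose file can be written with the openpyxl package; the pose numbers are hypothetical placeholders. Each row holds tx, ty, tz in mm in the first three columns and rx, ry, rz in degrees in the last three, with no header row, matching the columns read by the calibration script.

from openpyxl import Workbook

# hypothetical recorded poses: tx, ty, tz (mm), rx, ry, rz (deg)
poses = [
    [508.5, -260.8, 292.8, 179.2, -0.5, 90.1],
    [527.7, -230.4, 300.9, 178.6, 1.2, 88.7],
]

wb = Workbook()
ws = wb.active
for pose in poses:
    ws.append(pose)          # one robot pose per row, no header row
wb.save(r"I:\pose.xlsx")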

import os
import cv2
import xlrd2
from math import *
import numpy as np

# p[321.501 242.543]  f[606.25 605.65]
class Calibration:
    def __init__(self):
        self.K = np.array([[606.25, 0.00000000e+00, 321.501],
                           [0.00000000e+00, 605.65, 242.543],
                           [0.00000000e+00, 0.00000000e+00, 1.00000000e+00]], dtype=np.float64)
        self.distortion = np.array([[0, 0, 0.0, 0.0, 0]])  # k1, k2, p1, p2, k3
        self.target_x_number = 11   # inner corners per row of the checkerboard
        self.target_y_number = 8    # inner corners per column
        self.target_cell_size = 20  # square size in mm

    def angle2rotation(self, x, y, z):
        Rx = np.array([[1, 0, 0], [0, cos(x), -sin(x)], [0, sin(x), cos(x)]])
        Ry = np.array([[cos(y), 0, sin(y)], [0, 1, 0], [-sin(y), 0, cos(y)]])
        Rz = np.array([[cos(z), -sin(z), 0], [sin(z), cos(z), 0], [0, 0, 1]])
        R = Rz @ Ry @ Rx
        return R

    def gripper2base(self, x, y, z, tx, ty, tz):
        thetaX = x / 180 * pi
        thetaY = y / 180 * pi
        thetaZ = z / 180 * pi
        R_gripper2base = self.angle2rotation(thetaX, thetaY, thetaZ)
        T_gripper2base = np.array([[tx], [ty], [tz]])
        Matrix_gripper2base = np.column_stack([R_gripper2base, T_gripper2base])
        Matrix_gripper2base = np.row_stack((Matrix_gripper2base, np.array([0, 0, 0, 1])))
        R_gripper2base = Matrix_gripper2base[:3, :3]
        T_gripper2base = Matrix_gripper2base[:3, 3].reshape((3, 1))
        return R_gripper2base, T_gripper2base

    def target2camera(self, img):
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        ret, corners = cv2.findChessboardCorners(gray, (self.target_x_number, self.target_y_number), None)
        corner_points = np.zeros((2, corners.shape[0]), dtype=np.float64)
        for i in range(corners.shape[0]):
            corner_points[:, i] = corners[i, 0, :]
        object_points = np.zeros((3, self.target_x_number * self.target_y_number), dtype=np.float64)
        count = 0
        for i in range(self.target_y_number):
            for j in range(self.target_x_number):
                object_points[:2, count] = np.array(
                    [(self.target_x_number - j - 1) * self.target_cell_size,
                     (self.target_y_number - i - 1) * self.target_cell_size])
                count += 1
        retval, rvec, tvec = cv2.solvePnP(object_points.T, corner_points.T, self.K, distCoeffs=self.distortion)
        Matrix_target2camera = np.column_stack(((cv2.Rodrigues(rvec))[0], tvec))
        Matrix_target2camera = np.row_stack((Matrix_target2camera, np.array([0, 0, 0, 1])))
        R_target2camera = Matrix_target2camera[:3, :3]
        T_target2camera = Matrix_target2camera[:3, 3].reshape((3, 1))
        return R_target2camera, T_target2camera

    def process(self, img_path, pose_path):
        image_list = []
        for root, dirs, files in os.walk(img_path):
            if files:
                for file in files:
                    image_name = os.path.join(root, file)
                    image_list.append(image_name)
        R_target2camera_list = []
        T_target2camera_list = []
        for img_path in image_list:
            img = cv2.imread(img_path)
            R_target2camera, T_target2camera = self.target2camera(img)
            R_target2camera_list.append(R_target2camera)
            T_target2camera_list.append(T_target2camera)
        R_gripper2base_list = []
        T_gripper2base_list = []
        data = xlrd2.open_workbook(pose_path)
        table = data.sheets()[0]
        for row in range(table.nrows):
            x = table.cell_value(row, 3)
            y = table.cell_value(row, 4)
            z = table.cell_value(row, 5)
            tx = table.cell_value(row, 0)
            ty = table.cell_value(row, 1)
            tz = table.cell_value(row, 2)
            R_gripper2base, T_gripper2base = self.gripper2base(x, y, z, tx, ty, tz)
            R_gripper2base_list.append(R_gripper2base)
            T_gripper2base_list.append(T_gripper2base)
        # Note: with gripper->base and target->camera poses as input, cv2.calibrateHandEye
        # returns the camera->gripper transform (the hand-eye matrix for an eye-in-hand setup),
        # even though the variables below are named *_camera2base.
        R_camera2base, T_camera2base = cv2.calibrateHandEye(R_gripper2base_list, T_gripper2base_list,
                                                            R_target2camera_list, T_target2camera_list)
        return R_camera2base, T_camera2base, R_gripper2base_list, T_gripper2base_list, R_target2camera_list, T_target2camera_list

    def check_result(self, R_cb, T_cb, R_gb, T_gb, R_tc, T_tc):
        for i in range(len(R_gb)):
            RT_gripper2base = np.column_stack((R_gb[i], T_gb[i]))
            RT_gripper2base = np.row_stack((RT_gripper2base, np.array([0, 0, 0, 1])))
            # print(RT_gripper2base)

            RT_camera_to_gripper = np.column_stack((R_cb, T_cb))
            RT_camera_to_gripper = np.row_stack((RT_camera_to_gripper, np.array([0, 0, 0, 1])))
            print(RT_camera_to_gripper)  # this is the hand-eye matrix

            RT_target_to_camera = np.column_stack((R_tc[i], T_tc[i]))
            RT_target_to_camera = np.row_stack((RT_target_to_camera, np.array([0, 0, 0, 1])))
            # print(RT_target_to_camera)
            RT_target_to_base = RT_gripper2base @ RT_camera_to_gripper @ RT_target_to_camera
            
            print("第{}次验证结果为:".format(i))
            print(RT_target_to_base)
            print('')


if __name__ == "__main__":
    image_path = r"I:\data_handeye\img"
    pose_path = r"I:\pose.xlsx"
    calibrator = Calibration()
    R_cb, T_cb, R_gb, T_gb, R_tc, T_tc = calibrator.process(image_path, pose_path)
    calibrator.check_result(R_cb, T_cb, R_gb, T_gb, R_tc, T_tc)

The result shows:
the three matrices below are the poses of the calibration board in the base coordinate system, computed from three different robot poses. The last column is the position in mm; since the board never moved, the spread between these positions shows that the accuracy is about 1 mm.

Verification result 1:
[[ 3.98925017e-03  9.99742174e-01 -2.23533483e-02 -3.90700113e+02]
 [ 9.99982733e-01 -4.08467234e-03 -4.22477708e-03 -1.11656116e+02]
 [-4.31499393e-03 -2.23361086e-02 -9.99741206e-01 -5.01618116e+00]
 [ 0.00000000e+00  0.00000000e+00  0.00000000e+00  1.00000000e+00]]

Verification result 2:
[[ 2.05751209e-03  9.99866123e-01 -1.62327447e-02 -3.90254641e+02]
 [ 9.99993197e-01 -2.10692535e-03 -3.02753423e-03 -1.13056000e+02]
 [-3.06133010e-03 -1.62264051e-02 -9.99863657e-01 -6.26002256e+00]
 [ 0.00000000e+00  0.00000000e+00  0.00000000e+00  1.00000000e+00]]

Verification result 3:
[[ 4.87875821e-03  9.99819129e-01 -1.83822511e-02 -3.91281959e+02]
 [ 9.99986520e-01 -4.91059054e-03 -1.68694875e-03 -1.11955299e+02]
 [-1.77691133e-03 -1.83737731e-02 -9.99829609e-01 -6.29677977e+00]
 [ 0.00000000e+00  0.00000000e+00  0.00000000e+00  1.00000000e+00]]
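
Once the hand-eye matrix is available it can be used, for example, to map a point measured in the camera frame into the robot base frame. A minimal sketch (my own addition; the point coordinates are hypothetical), reusing R_cb/T_cb and the gripper pose R_gb[0]/T_gb[0] from the script above:

import numpy as np

# camera -> gripper (hand-eye) transform from the calibration above
T_cam2gripper = np.eye(4)
T_cam2gripper[:3, :3] = R_cb
T_cam2gripper[:3, 3] = T_cb.flatten()

# gripper -> base transform at the moment the point was observed
T_gripper2base = np.eye(4)
T_gripper2base[:3, :3] = R_gb[0]
T_gripper2base[:3, 3] = T_gb[0].flatten()

p_cam = np.array([10.0, 20.0, 300.0, 1.0])        # hypothetical point in the camera frame, mm
p_base = T_gripper2base @ T_cam2gripper @ p_cam   # same point in the base frame
print(p_base[:3])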

------------------The following is the original text---------------------

1. Preliminary preparation

1.1 Principle of hand-eye calibration

There are many explanations of the hand-eye calibration principle online; a few recommended blogs:
Hand-eye calibration: comprehensive and detailed derivation process
Robot hand-eye calibration
Understanding robot hand-eye calibration in simple terms
Simply put, you need to solve a matrix equation of the form AX = XB. Solution methods can be found in:
Hand-eye calibration AX=XB solution methods (literature summary)
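
To make the AX = XB relation concrete for the eye-in-hand case used in this post, here is a small numpy sketch of my own (not from the referenced blogs): A and B are the relative motions of the gripper and of the board-in-camera pose between two stations, and X is the camera-to-gripper transform being solved for.

import numpy as np

def relative_motions(T_g2b_i, T_g2b_j, T_t2c_i, T_t2c_j):
    # Because the board is fixed in the base frame,
    #   T_g2b_i @ X @ T_t2c_i == T_g2b_j @ X @ T_t2c_j
    # which rearranges to A @ X == X @ B with:
    A = np.linalg.inv(T_g2b_j) @ T_g2b_i   # relative gripper motion
    B = T_t2c_j @ np.linalg.inv(T_t2c_i)   # corresponding camera-side motion
    return A, B

# For a candidate hand-eye matrix X (4x4), np.linalg.norm(A @ X - X @ B) should be near zero.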

1.2 The principle of Aruco returning pose

The ArUco marker was originally proposed by S. Garrido-Jurado et al. in the 2014 paper "Automatic generation and detection of highly reliable fiducial markers under occlusion". The full name of ArUco is Augmented Reality University of Cordoba. An ArUco marker is a QR-code-like fiducial; by deploying markers in the environment, it can help robots localize themselves.

Complete analysis and understanding of OpenCV Aruco parameter source code!
ArUco Marker detection principle

1.3 Generate an Aruco Marker

Generate a marker with the online ArUco generator; be sure to choose the Original dictionary, then print it out for later use.
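If you prefer to generate the marker locally instead of via the website, here is a small sketch of my own using OpenCV's aruco module (the aruco module must be available, e.g. via opencv-contrib-python; in OpenCV versions before 4.7 the function is cv2.aruco.drawMarker instead of generateImageMarker):

import cv2

# "Original" ArUco dictionary, marker id 123 (the id used later in this post)
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_ARUCO_ORIGINAL)
marker = cv2.aruco.generateImageMarker(dictionary, 123, 600)   # 600 x 600 px image
cv2.imwrite("aruco_123.png", marker)

When printing, check that the black border measures the physical size you will later pass as markerSize (100 mm in this post).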

1.4 Install the aruco_ros package

cd ~/catkin_ws/src
git clone https://github.com/pal-robotics/aruco_ros.git 
cd ..
catkin_make

1.5 Install the realsense_ros package

Follow the steps on GitHub to install it.
https://github.com/IntelRealSense/realsense-ros

2. Experimental environment

  • Robotic arm: Rokae xMate3 pro
  • Camera: RealSense D435i
  • ArUco marker: id 123, size 100 mm
  • Ubuntu 16.04 (ROS) + Windows 10 (RobotAssist)
  • Mounting: eye in hand

Experimental Equipment Schematic

3. Experimental process

3.1 Configure Aruco launch file

roscd aruco_ros
cd launch
gedit single.launch

In single.launch you need to set markerId and markerSize to the id and size of the ArUco marker you generated, remap /camera_info and /image to the topics published by realsense, and change camera_frame to /camera_link.

<launch>
 
    <arg name="markerId"        default="123"/>
    <arg name="markerSize"      default="0.1"/>    <!-- in m -->
    <arg name="eye"             default="left"/>
    <arg name="marker_frame"    default="aruco_marker_frame"/>
    <arg name="ref_frame"       default=""/>  <!-- leave empty and the pose will be published wrt param parent_name -->
    <arg name="corner_refinement" default="LINES" /> <!-- NONE, HARRIS, LINES, SUBPIX -->
 
 
    <node pkg="aruco_ros" type="single" name="aruco_single">
        <remap from="/camera_info" to="/camera/color/camera_info" />
        <remap from="/image" to="/camera/color/image_raw" />
        <param name="image_is_rectified" value="True"/>
        <param name="marker_size"        value="$(arg markerSize)"/>
        <param name="marker_id"          value="$(arg markerId)"/>
        <param name="reference_frame"    value="$(arg ref_frame)"/> 
        <param name="camera_frame"       value="/camera_link"/>
        <param name="marker_frame"       value="$(arg marker_frame)" />
        <param name="corner_refinement"  value="$(arg corner_refinement)" />
    </node>
 
</launch>
 

3.2 Get the pose of Aruco relative to the camera

Run the aruco_ros+realsense node

 roslaunch realsense2_camera rs_camera.launch 
 roslaunch aruco_ros single.launch

View the displayed image and the returned pose

rosrun image_view image_view image:=/aruco_single/result
rostopic echo /aruco_single/pose
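
If you want to log the returned pose programmatically rather than just echoing the topic, here is a small rospy sketch of my own; /aruco_single/pose carries a geometry_msgs/PoseStamped:

import rospy
from geometry_msgs.msg import PoseStamped

def callback(msg):
    p = msg.pose.position
    q = msg.pose.orientation          # quaternion in (x, y, z, w) order
    rospy.loginfo("xyz=(%.4f, %.4f, %.4f) quat xyzw=(%.4f, %.4f, %.4f, %.4f)",
                  p.x, p.y, p.z, q.x, q.y, q.z, q.w)

rospy.init_node("aruco_pose_logger")
rospy.Subscriber("/aruco_single/pose", PoseStamped, callback)
rospy.spin()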


3.3 Get the pose of the end of the robot arm:

The end pose can be read directly in Rokae's own host-computer HMI software, RobotAssist.
Here ABC are "ZYX" Euler angles, and Q1~Q4 is the quaternion in (w, x, y, z) order.

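As a sanity check on this convention, here is a small sketch of my own (assuming scipy is installed, that A, B, C correspond to the Z, Y, X rotations in that order, and using hypothetical angle values) that converts the HMI's ZYX Euler angles in degrees to a (w, x, y, z) quaternion, which should match Q1~Q4:

from scipy.spatial.transform import Rotation as R

A, B, C = 10.0, -5.0, 170.0   # hypothetical ABC values from RobotAssist, in deg
x, y, z, w = R.from_euler("ZYX", [A, B, C], degrees=True).as_quat()   # scipy returns (x, y, z, w)
print("Q1~Q4 (w, x, y, z):", w, x, y, z)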

3.4 Opencv solves the hand-eye matrix

The code is adapted from the blog "Hand-eye calibration (1): OpenCV4 implements hand-eye calibration and hand-eye system testing", with some modifications.
The pose returned by aruco is a quaternion, so the attitudeVectorToMatrix function was changed to accept either position + Euler angles -> rotation matrix (input is n x 6) or position + quaternion -> rotation matrix (input is n x 7).

#include <iostream>
#include <string>
#include <opencv2/opencv.hpp>
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/calib3d.hpp>
#include <iterator> 
#include <algorithm>  
using namespace cv;
using namespace std;

template <typename T>
std::ostream& operator<< (std::ostream& out, const std::vector<T>& v) {
	if (!v.empty()) {
		out << '[';
		std::copy(v.begin(), v.end(), std::ostream_iterator<T>(out, ", "));
		out << "\b\b]";
	}
	return out;
}
int num = 16;
//Poses of the calibration board in the camera frame: x, y, z, qw, qx, qy, qz
Mat_<double> CalPose = (cv::Mat_<double>(num, 7) <<
	0.01963879, -0.009811901, 0.205815151, -0.472094888, 0.101775888, -0.345924179, 0.804428087, 
	0.018775674, -0.008453256, 0.187205523, -0.486293263, 0.154739671, -0.358078652, 0.781891409, 
	0.012794545, -0.004475131, 0.181938171, 0.50586501, -0.228167491, 0.37478512, -0.742681831,
	0.032269519, -0.00216203, 0.173868418, 0.520070883, -0.334713601, 0.409799027, -0.670490746, 
	0.036921501, -0.00520581, 0.173751414, 0.509143837, -0.389764156, 0.430712138, -0.635093308, 
	0.028183416, -0.003799309, 0.157465994, -0.493999273, 0.434277098, -0.443106472, 0.609118031, 
	0.011437201, 0.00248707, 0.160097286, 0.505064681, -0.477579273, 0.433387075, -0.573588135,
	0.032602217, 0.021788951, 0.224987134, -0.45692465, 0.54759083, -0.46897897, 0.520982603,
	0.012125032, -0.002286719, 0.17284067, -0.469627705, 0.49778925, -0.506302662, 0.524703054,
	0.005448693, -0.004084263, 0.196215734, 0.420676917, -0.55355367, 0.57085096, -0.43673613, 
	-0.001541587, -0.000362108, 0.187365457, 0.386895866, -0.565030889, 0.607491241, -0.402499782, 
	0.0283868, 0.00129835, 0.21059823, 0.367601287, -0.614671423, 0.616489404, -0.327091961, 
	0.026122695, -0.006088101, 0.222446054, -0.360286232, 0.636446394, -0.615347513, 0.294070155,
	0.016605999, 0.002255499, 0.214847937, -0.322620688, 0.64815307, -0.642728715, 0.250426214, 
	-0.004833174, -0.00322185, 0.206448808, -0.318973207, 0.693907003, -0.618815043, 0.183894282, 
	-0.023912154, -0.006630629, 0.213570014, -0.312473203, 0.731833827, -0.595186057, 0.111952241);
//Poses of the robot-arm end effector: x, y, z, qw, qx, qy, qz
Mat_<double> ToolPose = (cv::Mat_<double>(num, 7) <<
	0.508533, -0.260803, 0.292825, -0.0002, 0.2843, 0.8712, 0.4003, 
	0.527713, -0.230407, 0.300856, 0.0012, 0.2461, 0.8979, 0.365,
	0.544016, -0.192872, 0.326135, 0.0075, 0.1894, 0.9311, 0.3117, 
	0.552381, -0.149794, 0.342366, 0.0208, 0.113, 0.9692, 0.2178,
	0.564071, -0.118316, 0.356974, 0.0196, 0.0855, 0.9829, 0.1617, 
	0.574263, -0.073508, 0.354265, 0.0123, 0.0659, 0.991, 0.1162,
	0.573574, -0.038738, 0.363898, 0.01, 0.0166, 0.9961, 0.0866, 
	0.563025, -0.000004, 0.432336, 0, 0, 1, 0, 
	0.561599, 0.018044, 0.376489, 0.0398, 0.0384, 0.9984, 0.0135, 
	0.553613, 0.117616, 0.38041, 0.0523, 0.0278, 0.993, -0.102,
	0.546303, 0.163889, 0.350932, 0.0606, 0.0419, 0.9854, -0.1537, 
	0.527343, 0.191386, 0.356761, 0.066, -0.0058, 0.9719, -0.2261, 
	0.525386, 0.218939, 0.352163, 0.0683, -0.0293, 0.9638, -0.2561,
	0.510583, 0.26087, 0.30687, 0.0742, -0.0232, 0.9467, -0.3125,
	0.494977, 0.290977, 0.257741, 0.0708, -0.089, 0.9264, -0.359, 
	0.464551, 0.322355, 0.214358, 0.0712, -0.1525, 0.8982, -0.4062);
//Combine R and T into a 4x4 RT matrix
Mat R_T2RT(Mat& R, Mat& T)
{
	Mat RT;
	Mat_<double> R1 = (cv::Mat_<double>(4, 3) << R.at<double>(0, 0), R.at<double>(0, 1), R.at<double>(0, 2),
		R.at<double>(1, 0), R.at<double>(1, 1), R.at<double>(1, 2),
		R.at<double>(2, 0), R.at<double>(2, 1), R.at<double>(2, 2),
		0.0, 0.0, 0.0);
	cv::Mat_<double> T1 = (cv::Mat_<double>(4, 1) << T.at<double>(0, 0), T.at<double>(1, 0), T.at<double>(2, 0), 1.0);

	cv::hconcat(R1, T1, RT);	//horizontal concatenation: RT = [R1 | T1]
	return RT;
}

//Split a 4x4 RT matrix into R and T
void RT2R_T(Mat& RT, Mat& R, Mat& T)
{
	cv::Rect R_rect(0, 0, 3, 3);
	cv::Rect T_rect(3, 0, 1, 3);
	R = RT(R_rect);
	T = RT(T_rect);
}

//Check whether the top-left 3x3 block is a rotation matrix
bool isRotationMatrix(const cv::Mat& R)
{
	cv::Mat tmp33 = R({ 0, 0, 3, 3 });
	cv::Mat shouldBeIdentity;

	shouldBeIdentity = tmp33.t() * tmp33;

	cv::Mat I = cv::Mat::eye(3, 3, shouldBeIdentity.type());

	return  cv::norm(I, shouldBeIdentity) < 1e-6;
}

/** @brief Euler angles -> 3x3 rotation matrix R
*	@param 	eulerAngle		angle values in degrees
*	@param 	seq				order of the Euler rotations about x/y/z, e.g. "xyz" "zyx"
*/
cv::Mat eulerAngleToRotatedMatrix(const cv::Mat& eulerAngle, const std::string& seq)
{
	CV_Assert(eulerAngle.rows == 1 && eulerAngle.cols == 3);

	cv::Mat angle = eulerAngle.clone();
	angle /= 180 / CV_PI;	// degrees -> radians (work on a copy so the caller's data is untouched)
	cv::Matx13d m(angle);
	auto rx = m(0, 0), ry = m(0, 1), rz = m(0, 2);
	auto xs = std::sin(rx), xc = std::cos(rx);
	auto ys = std::sin(ry), yc = std::cos(ry);
	auto zs = std::sin(rz), zc = std::cos(rz);

	cv::Mat rotX = (cv::Mat_<double>(3, 3) << 1, 0, 0, 0, xc, -xs, 0, xs, xc);
	cv::Mat rotY = (cv::Mat_<double>(3, 3) << yc, 0, ys, 0, 1, 0, -ys, 0, yc);
	cv::Mat rotZ = (cv::Mat_<double>(3, 3) << zc, -zs, 0, zs, zc, 0, 0, 0, 1);

	cv::Mat rotMat;

	if (seq == "zyx")		rotMat = rotX * rotY * rotZ;
	else if (seq == "yzx")	rotMat = rotX * rotZ * rotY;
	else if (seq == "zxy")	rotMat = rotY * rotX * rotZ;
	else if (seq == "xzy")	rotMat = rotY * rotZ * rotX;
	else if (seq == "yxz")	rotMat = rotZ * rotX * rotY;
	else if (seq == "xyz")	rotMat = rotZ * rotY * rotX;
	else {
		cv::error(cv::Error::StsAssert, "Euler angle sequence string is wrong.",
			__FUNCTION__, __FILE__, __LINE__);
	}

	if (!isRotationMatrix(rotMat)) {
		cv::error(cv::Error::StsAssert, "Euler angle can not convert to rotated matrix",
			__FUNCTION__, __FILE__, __LINE__);
	}

	return rotMat;
	//cout << isRotationMatrix(rotMat) << endl;
}

/** @brief Quaternion -> rotation matrix
*	@note  data type double; quaternion defined as q = w + x*i + y*j + z*k
*	@param q	quaternion input as a {w, x, y, z} vector
*	@return 3x3 rotation matrix
*/
cv::Mat quaternionToRotatedMatrix(const cv::Vec4d& q)
{
	double w = q[0], x = q[1], y = q[2], z = q[3];

	double x2 = x * x, y2 = y * y, z2 = z * z;
	double xy = x * y, xz = x * z, yz = y * z;
	double wx = w * x, wy = w * y, wz = w * z;

	cv::Matx33d res{
		1 - 2 * (y2 + z2),	2 * (xy - wz),		2 * (xz + wy),
		2 * (xy + wz),		1 - 2 * (x2 + z2),	2 * (yz - wx),
		2 * (xz - wy),		2 * (yz + wx),		1 - 2 * (x2 + y2),
	};
	return cv::Mat(res);
}

/** @brief ((quaternion || Euler angles || rotation vector) && translation vector) -> 4x4 Rt
*	@param 	m				1*6 or 1*7 matrix  -> 6: {x,y,z, rx,ry,rz}   7: {x,y,z, qw,qx,qy,qz}
*	@param 	useQuaternion	for a 1*7 matrix, whether the rotation is given as a quaternion
*	@param 	seq				if the rotation is given as Euler angles, the axis order, e.g. "xyz" "zyx"; an empty string means a rotation vector
*/
cv::Mat attitudeVectorToMatrix(cv::Mat m, bool useQuaternion, const std::string& seq)
{
	CV_Assert(m.total() == 6 || m.total() == 7);
	if (m.cols == 1)
		m = m.t();
	cv::Mat tmp = cv::Mat::eye(4, 4, CV_64FC1);
	//If a quaternion is used, read 4 values starting at the 4th element of m
	if (useQuaternion)	// normalized vector, its norm should be 1.
	{
		cv::Vec4d quaternionVec = m({ 3, 0, 4, 1 });
		quaternionToRotatedMatrix(quaternionVec).copyTo(tmp({ 0, 0, 3, 3 }));
		// cout << norm(quaternionVec) << endl; 
	}
	else
	{
		cv::Mat rotVec;
		if (m.total() == 6)
			rotVec = m({ 3, 0, 3, 1 });		//6
		else
			rotVec = m({ 7, 0, 3, 1 });		//10

		//An empty seq means a rotation vector; otherwise seq gives the Euler angle order, e.g. "xyz"
		if (0 == seq.compare(""))
			cv::Rodrigues(rotVec, tmp({ 0, 0, 3, 3 }));
		else
			eulerAngleToRotatedMatrix(rotVec, seq).copyTo(tmp({ 0, 0, 3, 3 }));
	}
	tmp({ 3, 0, 1, 3 }) = m({ 0, 0, 3, 1 }).t();	//write the translation into the last column
	//std::swap(m,tmp);
	return tmp;
}
int main()
{
	//Containers for the hand-eye calibration inputs and outputs
	std::vector<Mat> R_gripper2base;
	std::vector<Mat> t_gripper2base;
	std::vector<Mat> R_target2cam;
	std::vector<Mat> t_target2cam;
	Mat R_cam2gripper = (Mat_<double>(3, 3));
	Mat t_cam2gripper = (Mat_<double>(3, 1));

	vector<Mat> images;
	size_t num_images = num;

	// 4x4 pose matrices of the end effector and of the calibration board
	std::vector<cv::Mat> vecHb, vecHc;
	cv::Mat Hcb;	//pose of the camera relative to the gripper (the hand-eye matrix)
	Mat tempR, tempT;

	for (size_t i = 0; i < num_images; i++)	//calibration board poses
	{
		cv::Mat tmp = attitudeVectorToMatrix(CalPose.row(i), true, ""); //quaternion pose -> 4x4 matrix
		vecHc.push_back(tmp);
		RT2R_T(tmp, tempR, tempT);

		R_target2cam.push_back(tempR);
		t_target2cam.push_back(tempT);
	}
	//cout << R_target2cam << endl;
	for (size_t i = 0; i < num_images; i++)	//robot-arm end-effector poses
	{
		//cv::Mat tmp = attitudeVectorToMatrix(ToolPose.row(i), true, "zyx");
		cv::Mat tmp = attitudeVectorToMatrix(ToolPose.row(i), true, ""); //quaternion pose -> 4x4 matrix
		//tmp = tmp.inv();	//invert here if the recorded pose is base-in-gripper
		vecHb.push_back(tmp);
		RT2R_T(tmp, tempR, tempT);

		R_gripper2base.push_back(tempR);
		t_gripper2base.push_back(tempT);

	}
	//cout << t_gripper2base[0] << " " << t_gripper2base[1] << " " << t_gripper2base[2] << " " << endl;
	//cout << t_gripper2base << endl;
	//Hand-eye calibration; CALIB_HAND_EYE_TSAI takes about 11 ms and is the fastest
	calibrateHandEye(R_gripper2base, t_gripper2base, R_target2cam, t_target2cam, R_cam2gripper, t_cam2gripper, CALIB_HAND_EYE_TSAI);

	Hcb = R_T2RT(R_cam2gripper, t_cam2gripper);	//combine R and t into a 4x4 matrix

	std::cout << "Hcb matrix:" << std::endl;
	std::cout << Hcb << std::endl;
	cout << "Is it a rotation matrix: " << isRotationMatrix(Hcb) << std::endl << std::endl;

	//Tool_In_Base * Hcg * Cal_In_Cam; cross-check with the first and second data sets
	cout << "Verification with hand-eye data sets 1 and 2:" << endl;
	cout << vecHb[0] * Hcb * vecHc[0] << endl << vecHb[1] * Hcb * vecHc[1] << endl << endl;//.inv()

	cout << "Pose of the calibration board in the camera:" << endl;
	cout << vecHc[1] << endl;
	cout << "Pose inferred by the hand-eye system:" << endl;
	//Use the hand-eye result to predict the board-in-camera pose of set 2 from set 1; it should match vecHc[1]
	cout << Hcb.inv() * vecHb[1].inv() * vecHb[0] * Hcb * vecHc[0] << endl << endl;

	cout << "----Hand-eye system test----" << endl;
	cout << "Calibration board XYZ in the robot base frame:" << endl;
	for (int i = 0; i < vecHc.size(); ++i)
	{
		cv::Mat cheesePos{ 0.0, 0.0, 0.0, 1.0 };	//4x1 vector: origin of the board, used to get its XYZ in the base frame
		cv::Mat worldPos = vecHb[i] * Hcb * vecHc[i] * cheesePos;
		cout << i << ": " << worldPos.t() << endl;
	}
	getchar();
}

3.5 Experimental results

The accuracy has not been pushed to its limit yet; the current error is within 1 cm.


4. Summary of related thoughts

4.1 Ways of representing a pose

Both conversions are implemented in attitudeVectorToMatrix in the code: position + Euler angles -> rotation matrix (input is n x 6), and position + quaternion -> rotation matrix (input is n x 7).

  • The mapping from Euler angles to a rotation matrix is not unique. When computing with Euler angles, be sure you know the rotation order, and note that the input here should be in degrees. If you do not know the Euler-angle order, you can verify it with https://quaternions.online/.

  • The mapping from a quaternion to a rotation matrix is unique. However, quaternions come in two orderings, (w, x, y, z) and (x, y, z, w); attitudeVectorToMatrix expects (w, x, y, z), while the pose quaternion returned by aruco is (x, y, z, w), so it must be reordered before use (see the small snippet after this list).
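
A minimal sketch of that reordering (my own addition), using the first row of CalPose above as example values:

import numpy as np

# pose as published by aruco: position (x, y, z) plus quaternion in (x, y, z, w) order
x, y, z = 0.01963879, -0.009811901, 0.205815151
qx, qy, qz, qw = 0.101775888, -0.345924179, 0.804428087, -0.472094888

# reorder to (x, y, z, qw, qx, qy, qz) as expected by attitudeVectorToMatrix
row = np.array([x, y, z, qw, qx, qy, qz])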

4.2 The end coordinate system of the robot

  • The end coordinate system of the manipulator is established on the rotation plane of the last axis, not on the plane of the end flange; the two are at least 20 cm apart.
  • The hand-eye matrix obtained from the data in the code is the transformation of the camera relative to the end of the robotic arm. But the end of my arm carries a six-axis force/torque sensor and a gripper. If I want the hand-eye matrix relative to the gripper tip, can I simply adjust the z value? My first idea was to subtract the same value from z in all of the recorded arm poses, to simulate a TCP mounted at the end, and then rerun the program. The result: the hand-eye matrix did not change, but the z value of the calibration board in the base coordinate system decreased. On reflection, the problem is that the end is not plumb (vertical) in every posture, so with data modified this way the program effectively assumes the calibration board was lowered, not that a TCP was added at the end.
  • Returning to the essence of the hand-eye matrix: it is a rotation + translation relative to a particular coordinate system. In theory one could modify the z value of the translation part of the hand-eye matrix and convert it to the TCP coordinate system fairly easily, but that z value is hard to determine. After trying this, I decided to do another hand-eye calibration and to define the TCP coordinate system directly at the end of the robotic arm.


4.3 How to improve accuracy

AX = XB is solved here with the Tsai-Lenz two-step method. The following tips for improving accuracy circulate online; they are rather abstract, I only half understand them, and I have not seen a controlled experiment, so I hope for more discussion and exchange.

(1) However many sets of motion data are collected for calibration, make the rotation angle of each motion as large as possible.
(2) Make the angle between the rotation axes of the two motions in each pair as large as possible.
(3) The movement distance of the end of the manipulator in each group of movements should be as small as possible, which can be achieved through path planning.
(4) Minimize the distance from the center of the camera to the calibration plate as much as possible, and use an appropriately small calibration plate.
(5) Collect as many sets of data as possible for solving.
(6) Use a high-precision camera calibration method.
(7) Try to improve the absolute positioning accuracy of the manipulator. If this condition cannot be met, at least the relative motion accuracy needs to be improved.

5. References

Hand-eye calibration: comprehensive and detailed derivation process
Robot hand-eye calibration
Understanding robot hand-eye calibration in simple terms
Hand-eye calibration AX=XB solution methods (literature summary)
Complete analysis and understanding of OpenCV Aruco parameter source code
ArUco Marker detection principle
https://github.com/IntelRealSense/realsense-ros
Hand-eye calibration (2): Tsai solution method

Origin: blog.csdn.net/Thinkin9/article/details/123743924