Notes on the results of eye-to-hand calibration of a robotic arm

The eye-to-hand calibration here uses the handeye-calib ROS package by the Yuxiang ROS author, who has documented its use in detail from installation to deployment. This post only records how to work with the calibration output.

Screenshot of the package's calibration output:

The package runs several commonly used calibration algorithms and, for each one, reports the mean, variance (var), and standard deviation (std) of the results. To choose among the outputs, use the standard deviation: the smaller the std, the more stable the result.

An explanation of standard deviation, taken from Baidu Encyclopedia:

Standard deviation, a mathematical term, is the arithmetic square root of the arithmetic mean of the squared deviations from the mean (i.e., the variance), and is denoted by σ. Also called the experimental standard deviation, it is the most commonly used measure of the spread of a statistical distribution in probability and statistics. The smaller the standard deviation, the less the values deviate from the mean, and vice versa.

Because the standard deviation is the arithmetic square root of the variance, it reflects the degree of dispersion of a data set. Two data sets with the same mean do not necessarily have the same standard deviation.
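As a quick illustration of how the std picks the more stable algorithm, here is a small sketch with NumPy; the numbers below are made up for illustration and are not real calibration output:

```python
import numpy as np

# Hypothetical x-translation results (in meters) from two calibration
# algorithms over repeated runs -- illustrative numbers only.
algo_a = np.array([0.1049, 0.1051, 0.1048, 0.1050])
algo_b = np.array([0.1030, 0.1070, 0.1045, 0.1060])

for name, data in [("A", algo_a), ("B", algo_b)]:
    print(name, "mean:", data.mean(), "var:", data.var(), "std:", data.std())

# Both sets have nearly the same mean, but algorithm A's smaller std
# means its output is more stable, so it would be the one to keep.
```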

The package outputs the base_link-->camera coordinate transform: a translation [x, y, z] and a rotation [Rx, Ry, Rz], i.e., the pose of the camera coordinate system described in the base_link coordinate system. The rotation is output as Euler angles. If you want to publish base_link-->camera via TF in ROS, you need to convert the Euler angles to a quaternion; for that you can use the transforms3d library in Python, or use the online site below.
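The Euler-to-quaternion conversion can also be done by hand if you prefer not to add a dependency. The sketch below assumes the angles follow the static-xyz ("sxyz") convention used elsewhere in this post; verify that this matches your package's output before relying on it:

```python
import math

def quat_mul(a, b):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def euler_sxyz_to_quat(rx, ry, rz):
    """Euler angles (static xyz axes, radians) -> quaternion (w, x, y, z).

    Static-xyz composes as R = Rz @ Ry @ Rx, so q = qz * qy * qx.
    """
    qx = (math.cos(rx / 2), math.sin(rx / 2), 0.0, 0.0)
    qy = (math.cos(ry / 2), 0.0, math.sin(ry / 2), 0.0)
    qz = (math.cos(rz / 2), 0.0, 0.0, math.sin(rz / 2))
    return quat_mul(qz, quat_mul(qy, qx))

# The rotation [Rx, Ry, Rz] from this post's calibration example:
q = euler_sxyz_to_quat(0.0146642, -0.847495, -1.20245)
print("quaternion (w, x, y, z):", q)
# Note: ROS TF expects the quaternion in (x, y, z, w) order, so reorder
# before publishing.
```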

3D Rotation Converter:https://www.andre-gaschler.com/rotationconverter/

To use the calibration matrix on its own, it helps to first review the basics of homogeneous transformation matrices.

Building the homogeneous matrix in Python:

import math
import numpy as np
import transforms3d as tfs


def homogeneous_generate(x, y, z, rx, ry, rz, degrees=False):
    """
    Build a homogeneous transformation matrix.
    Translation in meters; rotation in radians unless degrees=True.
    """
    if degrees:
        # euler2mat expects radians, so convert degree inputs first
        rx = math.radians(rx)
        ry = math.radians(ry)
        rz = math.radians(rz)
    R = tfs.euler.euler2mat(rx, ry, rz, "sxyz")  # R = Rz * Ry * Rx (static xyz)
    T = np.asarray([x, y, z])
    # Compose translation, rotation, and unit scale into a 4x4 matrix
    homogeneous_matrix = tfs.affines.compose(T, R, [1, 1, 1])
    return homogeneous_matrix


if __name__ == "__main__":
    # Plug the calibration output into the call below to get the homogeneous
    # matrix of the camera-to-base_link transform
    T_C2B = homogeneous_generate(0.104953, -0.38583, 0.525413,
                                 0.0146642, -0.847495, -1.20245)
    print(T_C2B)

To convert a pose recognized by the camera into the base_link frame of the robot arm body, see the following example:

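A minimal sketch of the conversion with NumPy: given the calibration matrix (base_link<--camera) and an object pose detected in the camera frame (camera<--object), left-multiplying chains the transforms. The object pose below is made up for illustration, and a pure-translation calibration matrix is used to keep the numbers easy to follow; in practice use the full 4x4 matrix from the calibration output:

```python
import numpy as np

# Calibration result: pose of the camera frame in base_link (base_link<--camera).
# Pure translation here for readability; really this is the full matrix
# produced by homogeneous_generate above.
T_base_camera = np.eye(4)
T_base_camera[:3, 3] = [0.104953, -0.38583, 0.525413]

# Hypothetical detection: pose of an object in the camera frame (camera<--object)
T_camera_object = np.eye(4)
T_camera_object[:3, 3] = [0.0, 0.0, 0.30]  # 30 cm in front of the camera

# Chain from right to left: object -> camera -> base_link (left multiplication)
T_base_object = T_base_camera @ T_camera_object

print("object position in base_link:", T_base_object[:3, 3])
```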

If it is not obvious why the matrices are multiplied in this order, list the frame conversions one by one from right to left (this is where I was confused at the beginning), by analogy with an object passing through each frame in turn. The matrix multiplication here uses left multiplication: each new transform is applied on the left.

Origin blog.csdn.net/m0_46259216/article/details/126404177