Calibration series 1. Introduction to the basics of hand-eye calibration

1. The purpose of hand-eye calibration

        In actual automated industrial production, we often need robots and cameras to cooperate to perform material positioning, alignment, and other operations. The goal of hand-eye calibration is to establish the spatial mapping between the camera coordinate system and the robot coordinate system. In layman's terms, the camera is the eye and the manipulator is the hand: wherever the eye can see, the hand can go. In this series we use a common two-dimensional calibration method, the 9-point method, to achieve hand-eye calibration.

        A brief introduction to some concepts used in the calibration process:

Pixel coordinate system: The position of the pixel in the image. Generally, the coordinate origin is the upper left corner of the image, and the unit is pixels.

Camera coordinate system: The origin is the camera's optical center, which corresponds vertically to the center of the image; the unit is millimeters.

Manipulator coordinate system: The coordinates of the manipulator's end flange relative to the manipulator base. In practice, a tool coordinate system may also be established for the specific application; whichever is used, it is defined relative to the base.

Positioning process: The pixel coordinates of the feature point are mapped by the affine transformation into the robot coordinate system, so that the robot can pick up and discharge the material at the same position.

Placement process: The pose at which the manipulator picks up the material is not fixed, but the material can still be placed accurately into the target position every time. The target can be regarded as a fixed fixture: the pose during loading is uncertain, but the material always ends up accurately in the jig.

2. Introduction to spatial affine transformation

       Simply put, a spatial affine transformation maps coordinates in one space to another space, with a one-to-one correspondence between the points of the two spaces. In two dimensions it can be understood as the coordinate relationship between two planes. There are three main mapping relationships: translation, rotation, and scaling. The main pitfall is rotation: remember that the rotation is relative to the base coordinate system, not relative to the tool's own coordinate system, and in the rotation-translation matrix the rotation is applied first, then the translation. Affine transformation, also known as affine mapping, means that in geometry a vector space is mapped to another vector space by a linear transformation followed by a translation. An affine transformation preserves the "straightness" of the image and covers rotation, scaling, translation, and shear operations. Generally speaking, the affine transformation matrix is a 2×3 matrix: the elements in the third column provide the translation, the numbers on the diagonal of the first two columns provide the scaling, and the remaining elements provide rotation or shear.

The specific expression is:

$$A = B \cdot C, \qquad \begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} a & b & c \\ d & e & f \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}$$


        Among them, A is the transformed coordinate matrix, C is the original coordinate matrix, and B is the affine transformation matrix, which contains 6 unknowns. Assuming the point (x, y) is rotated clockwise by θ radians about the origin, the corresponding entries of the transformation matrix are a = cos θ, b = sin θ, d = −sin θ, e = cos θ, and c = f = 0. So the four unknowns a, b, d, e in the first two columns provide the rotation (together with any scaling and shear), while the two unknowns c and f in the third column provide the translation. Since the affine system of equations has 6 unknowns and each point pair contributes 2 equations (as the stacked system below shows), solving it requires 3 pairs of mapped points, and the 3 points must not be collinear.

       For a 3-dimensional affine transformation, the homogeneous matrix is 4×4 (with 12 unknowns in the top three rows), so solving it requires 4 corresponding point pairs. For a rigid transformation (no scaling), a 2-dimensional plane transformation requires only 2 point pairs, and a 3-dimensional rigid transformation requires only 3 point pairs.
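To make the point-count argument concrete, each 2D correspondence (xᵢ, yᵢ) → (x'ᵢ, y'ᵢ) contributes two linear equations in the six unknowns, so three correspondences give the system

$$\begin{pmatrix} x_1 & y_1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & x_1 & y_1 & 1 \\ x_2 & y_2 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & x_2 & y_2 & 1 \\ x_3 & y_3 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & x_3 & y_3 & 1 \end{pmatrix} \begin{pmatrix} a \\ b \\ c \\ d \\ e \\ f \end{pmatrix} = \begin{pmatrix} x'_1 \\ y'_1 \\ x'_2 \\ y'_2 \\ x'_3 \\ y'_3 \end{pmatrix}$$

The coefficient matrix is invertible exactly when the three points are not collinear; with more than three pairs (as in the 9-point method) the system is solved in the least-squares sense.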

A simple example:

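The original example image is not available, so here is a minimal worked substitute: rotating the point (10, 0) counterclockwise by 90° about the origin and then translating by (5, 5),

$$\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} 0 & -1 & 5 \\ 1 & 0 & 5 \end{pmatrix} \begin{pmatrix} 10 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 5 \\ 15 \end{pmatrix}$$

i.e. the rotation takes (10, 0) to (0, 10) and the translation then shifts it to (5, 15).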

Commonly used operators in HALCON:

* Generate an identity matrix for an affine transformation
hom_mat2d_identity

* Apply rotation, translation, and scaling on top of the initialized matrix
hom_mat2d_translate
hom_mat2d_scale
hom_mat2d_rotate (first rotates about the origin, then translates so that the rotation effectively takes place about the specified rotation center)

* Operators that execute the affine transformation
affine_trans_image
affine_trans_region
affine_trans_contour_xld

* Generate the corresponding matrix from point-to-point correspondences (at least three point pairs must be passed in)
vector_to_hom_mat2d

* Rigid rotation-translation transformation from points and angles
vector_angle_to_rigid
(the first point is the rotation center; the rotation is performed about it, and the rotation center is then translated to the corresponding target point)
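A minimal HDevelop sketch tying these operators together; all coordinate values below are made up for illustration:

* Build a transformation: rotate by 30 degrees about (100, 100), then translate by (50, 20)
hom_mat2d_identity (HomMat2DIdentity)
hom_mat2d_rotate (HomMat2DIdentity, rad(30), 100, 100, HomMat2DRotate)
hom_mat2d_translate (HomMat2DRotate, 50, 20, HomMat2D)

* Apply the transformation to a single point
affine_trans_point_2d (HomMat2D, 120, 80, Qx1, Qy1)

* Estimate an affine matrix from three hypothetical point correspondences
* (here the mapping is a pure translation by (10, 20))
Px := [0, 100, 0]
Py := [0, 0, 100]
Qx := [10, 110, 10]
Qy := [20, 20, 120]
vector_to_hom_mat2d (Px, Py, Qx, Qy, HomMat2DEst)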


3. Types of hand-eye calibration

Common types of hand-eye calibration can be divided into two categories according to the location of the camera:

1. Eye in hand: The camera is mounted on the robot arm, so when the arm moves the camera moves with it; there is no relative motion between the camera and the robot end. During calibration, the calibration plate is fixed and the manipulator moves together with the camera.

     (Figure: eye-in-hand configuration)


2. Eye to hand: The camera is mounted at a fixed position, and its position relative to the robot's base coordinate system remains unchanged.

(Figure: eye-to-hand configuration)

I will continue to cover 2D and 3D hand-eye calibration in more detail in subsequent articles.

According to the orientation of the camera, it can be divided into:
1. Upper camera: The camera is directly above and the target is photographed from top to bottom.

2. Lower camera: The camera is below and shoots from bottom to top, which avoids interference from the robot.

                   (Figure: upper and lower camera arrangements)


4. The purpose of hand-eye calibration

From the descriptions above, the purpose of hand-eye calibration can be summarized as:

1. Solve for the resolution in the X and Y directions;

2. Solve the affine transformation from image coordinates to the X and Y directions;

3. Realize the conversion between pixel coordinates and actual coordinates according to the transformation matrix.
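As a concrete illustration of the 9-point method, here is a hedged HDevelop sketch; all coordinate values are hypothetical, and in practice the pixel coordinates come from image processing while the robot coordinates are taught point by point:

* Pixel coordinates of the 9 mark points detected in the image (hypothetical values)
PixX := [100, 400, 700, 100, 400, 700, 100, 400, 700]
PixY := [100, 100, 100, 400, 400, 400, 700, 700, 700]

* Robot coordinates recorded when the tool touched each mark (hypothetical values, in mm)
RobX := [200.0, 230.0, 260.0, 200.0, 230.0, 260.0, 200.0, 230.0, 260.0]
RobY := [50.0, 50.0, 50.0, 80.0, 80.0, 80.0, 110.0, 110.0, 110.0]

* Least-squares estimate of the pixel-to-robot affine transformation
vector_to_hom_mat2d (PixX, PixY, RobX, RobY, HomMat2D)

* Convert a newly detected pixel position into robot coordinates
affine_trans_point_2d (HomMat2D, 350, 520, RobTargetX, RobTargetY)

With these made-up numbers the grid spacing is 300 pixels versus 30 mm, so the implied resolution is 0.1 mm/pixel in both X and Y, which is exactly the quantity referred to in point 1 above.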
