Some notes on calibration


Preface

This section summarizes several vision-calibration scenarios used in past projects and explains the general principles behind them.


1. Hand-eye calibration (eye-to-hand: the camera is not mounted on the hand)

This calibration also compensates for camera lens distortion and for the camera not being perfectly level.
In automation equipment, vision is often needed to guide a robot or motion module to an accurate position. When the camera is stationary (the field of view is fixed), there are the following two positioning situations.

1) Positioning that does not require angle correction

This kind of positioning is used for simple point positioning where the posture (angle) does not need to be corrected, for example when the end effector carries a dispensing valve, a spot-welding head, and so on.
As shown in the figure below:
(figure: fixed camera over the work area; the xy module carries the end effector to target point p)

The camera is fixed; once the target point p appears in the camera's field of view, the xy module must move to it accurately.
The specific steps are as follows:
Nine-point calibration (maps an xy offset in the pixel coordinate system to the corresponding offset in the module coordinate system):

  1. Bind the calibration workpiece to the end effector;
  2. Move the calibration workpiece within the field of view;
  3. Record the xy coordinates of the module and the pixel xy coordinates of the calibration workpiece's mark point in the image;
  4. Repeat steps 2-3 nine times (the 9 points usually form a rectangular grid) to obtain 9 pairs of module coordinates and mark-point coordinates (3 non-collinear points are enough to calibrate xy; more points improve accuracy, and 9 is a common choice);
  5. Compute the mapping between the two coordinate systems (a fitting sketch follows this list).
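A minimal sketch of how step 5 can be done, assuming the mapping is modelled as a 2D affine transform fitted by least squares. The names fit_affine and pixel_to_module are illustrative only; vision libraries offer equivalent routines (e.g. OpenCV's cv2.estimateAffine2D):

```python
import numpy as np

def fit_affine(pixel_pts, module_pts):
    """Fit a 2D affine map from pixel (u, v) to module (x, y) by least squares.

    pixel_pts, module_pts: N >= 3 corresponding points (non-collinear).
    Returns a 2x3 matrix A such that [x, y] = A @ [u, v, 1].
    """
    P = np.hstack([np.asarray(pixel_pts, dtype=float),
                   np.ones((len(pixel_pts), 1))])        # N x 3
    M = np.asarray(module_pts, dtype=float)              # N x 2
    # Least-squares solution of P @ A.T = M
    A_T, *_ = np.linalg.lstsq(P, M, rcond=None)
    return A_T.T                                          # 2 x 3

def pixel_to_module(A, uv):
    """Convert one pixel coordinate (u, v) to module coordinates (x, y)."""
    u, v = uv
    return A @ np.array([u, v, 1.0])
```

Feeding the 9 coordinate pairs from step 4 into fit_affine and then calling pixel_to_module on any detected mark point converts pixel positions into module positions; the same mapping is reused in the teaching and normal-operation steps below.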

Teaching (obtain the reference position):

  1. Place the material to be processed in the middle of the camera's field of view;
  2. Move the end effector above the material, simulate the working posture, and record the xy module coordinates at this time to the local control unit as the reference posture;
  3. The vision system converts the material's pixel coordinates through the mapping obtained by calibration to get a reference coordinate b(Xb, Yb).

Normal operation:

  1. The material is detected in the field of view and its coordinates p(X, Y) are obtained;
  2. The coordinate difference between p and b, (X - Xb, Y - Yb), is fed back to the control unit;
  3. The control unit applies this difference as a compensation to the reference posture and moves to the corrected position (a small numerical sketch follows this list).
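To make the compensation concrete, here is a purely illustrative calculation with made-up numbers; the pixel-to-module conversion is assumed to have been applied already (e.g. with pixel_to_module from the sketch above):

```python
# Reference values from the teaching step (module coordinates, hypothetical numbers)
X_b, Y_b = 45.20, 63.10        # reference coordinate b of the material
ref_x, ref_y = 120.00, 85.00   # module xy recorded as the reference posture

# Current cycle: detected material p, already converted to module coordinates
X, Y = 45.85, 62.40

dx, dy = X - X_b, Y - Y_b                    # difference fed back to the control unit
target_x, target_y = ref_x + dx, ref_y + dy  # compensated target position
print(f"move to ({target_x:.2f}, {target_y:.2f})")
```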

2) Positioning that requires angle correction

This kind of positioning is used when the posture (angle) also has to be corrected, for example in labeling, assembly, and grasping.
As shown in the figure below:
The posture and inclination with which the label is picked up may differ each time; the goal is to place the label parallel to the red area of the target.
(figure: label pickup and placement onto the red target area)
The specific steps are as follows:
Nine-point calibration xy (maps an xy offset in the pixel coordinate system to the corresponding offset in the module coordinate system):

  1. Bind the calibration workpiece to the end effector;
  2. Move the calibration workpiece within the field of view;
  3. Record the xy coordinates of the module and the pixel xy coordinates of the calibration workpiece's mark point in the image;
  4. Repeat steps 2-3 nine times (the 9 points usually form a rectangular grid) to obtain 9 pairs of module coordinates and mark-point coordinates (3 non-collinear points are enough to calibrate xy; more points improve accuracy, and 9 is a common choice);
  5. Compute the mapping between the two coordinate systems (same procedure as in section 1).

Nine-point calibration U (find the center of rotation):

  1. Bind the calibration workpiece to the end effector;
  2. Move it into the field of view and record the current xy and u coordinates (very important);
  3. Detect the mark point visually;
  4. Rotate the U axis (without moving xy), keeping the mark point within the field of view;
  5. Repeat steps 3-4 nine times to obtain 9 mark-point coordinates, which lie on a circle (3 points are enough to determine the rotation center; more points improve accuracy);
  6. In the vision software, fit a circle through the 9 points; its center is the rotation center (note that this is the rotation center for the current, stationary xy position: if the xy at which the image is taken changes, the rotation center must be recalibrated). A circle-fitting sketch follows this list.
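For step 6, the rotation center can be obtained by fitting a circle through the recorded mark points. Below is a minimal sketch using a simple algebraic least-squares (Kasa) fit; the vision software usually ships its own circle-fitting tool, so this is only illustrative:

```python
import numpy as np

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit.

    points: (N, 2) array of mark-point coordinates, N >= 3.
    Returns (cx, cy, r): the fitted center (the rotation center) and radius.
    """
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # Circle model x^2 + y^2 + a*x + b*y + c = 0, solved for a, b, c
    M = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (a, b, c), *_ = np.linalg.lstsq(M, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    r = float(np.sqrt(cx**2 + cy**2 - c))
    return cx, cy, r
```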

Normal operation:

  1. After picking up the label, wait at the standby position (the xyu position must coincide with the position used during the U calibration);
  2. The camera captures the label and obtains the label angle and the XY of its upper-left corner;
  3. The camera captures the target object and obtains the placement angle and the XY of its upper-left corner;
  4. From these, the angle difference θ to be adjusted is obtained;
  5. Rotating the label's upper-left corner about the rotation center by θ gives the rotated coordinates X', Y';
  6. The motion command is then the rotation angle θ plus the xy offset (the difference between the target's XY and X'Y'). A rotation sketch follows this list.
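A minimal sketch of steps 4-6, assuming all coordinates have already been converted to module coordinates with the calibrated mapping; every numeric value below is purely illustrative:

```python
import numpy as np

def rotate_about_center(point, center, theta_deg):
    """Rotate a point about a given rotation center by theta_deg (counter-clockwise)."""
    t = np.radians(theta_deg)
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    c = np.asarray(center, dtype=float)
    return R @ (np.asarray(point, dtype=float) - c) + c

# Hypothetical inputs
rotation_center  = (320.0, 240.0)   # from the nine-point U calibration
label_corner_xy  = (300.0, 200.0)   # upper-left corner of the label
target_corner_xy = (410.0, 260.0)   # upper-left corner of the target area
label_angle, target_angle = 5.0, 12.0

theta = target_angle - label_angle                                           # step 4: angle difference
x_rot, y_rot = rotate_about_center(label_corner_xy, rotation_center, theta)  # step 5: rotated corner
dx, dy = target_corner_xy[0] - x_rot, target_corner_xy[1] - y_rot            # step 6: xy offset
print(f"rotate {theta:.1f} deg, then offset ({dx:.2f}, {dy:.2f})")
```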

A second way of operating (not recommended):

  1. After picking up the label, wait at the standby position (the xyu position must coincide with the position used during the U calibration);
  2. The camera captures the label and obtains the label angle, the xy coordinates of its upper-left corner, the length and width of the label, and the distance from the label's upper-left corner to the rotation center;
  3. The camera captures the target object and obtains the placement angle and the xy of its upper-left corner;
  4. From the length and width of the label and the distances from the label's upper-left and lower-left corners to the rotation center, a virtual rotation center is constructed on the target object;
  5. This gives the angle difference θ to be adjusted and the xy difference between the two rotation centers;
  6. The motion command is then the rotation angle θ and the offset of the rotation center.

2. Hand-eye calibration (eye-in-hand: the camera is mounted on the hand)

This calibration also compensates for camera lens distortion and for the camera not being perfectly level.
In automation equipment, vision is often needed to guide a robot or motion module to an accurate position. Here the camera moves together with the robot hand. There are likewise two positioning situations.

I'll update this part when I've thought it through.

Original post: blog.csdn.net/qq_42504097/article/details/129481148