Robotic arm grasping process based on a fisheye camera

1. Camera calibration

Camera calibration is performed with the camera_calibration tool in ROS, which also supports fisheye cameras. The calibration board is a black-and-white checkerboard of 12x8 squares, each 20 mm on a side; note that the --size argument of the calibrator refers to the grid of inner corners.
After calibration, the camera intrinsics and distortion coefficients are obtained. In subsequent use, images from the fisheye camera must be undistorted with these parameters.

rosrun camera_calibration cameracalibrator.py --size 11x8 --square 0.020 image:=/usb_cam/image_raw camera:=/usb_cam --fisheye-fix-skew --fisheye-recompute-extrinsics --fisheye-check-conditions
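The command above assumes a camera driver is already publishing on /usb_cam/image_raw and exposing the set_camera_info service under /usb_cam. If the usb_cam package is used (an assumption, the original post does not name the driver), it can be started roughly as:

rosrun usb_cam usb_cam_node _video_device:=/dev/video0 _pixel_format:=yuyv

After clicking CALIBRATE and then COMMIT in the calibrator GUI, the intrinsics are written to the camera's camera_info file used in section 3.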

2. Tool calibration

Tool (TCP) calibration is performed using the utilities provided by the robotic arm manufacturer.

3. Hand-eye calibration

Refer to the hand-eye calibration procedure.
Note: the image_callback function receives a raw, uncorrected image, so the image must be undistorted before use.
Fisheye camera undistortion method:
After ROS calibration, a head_camera.yaml parameter file is stored in the camera_info directory under ~/.ros.

import cv2
import numpy as np
import yaml

with open(camera_info_file) as f:
    camera_params = yaml.load(f.read(), Loader=yaml.FullLoader)
    camera_width = camera_params['image_width']
    camera_height = camera_params['image_height']
    self.image_size = (camera_width, camera_height)
    # The fisheye model uses 4 distortion coefficients
    self.D = np.asarray(camera_params['distortion_coefficients']['data']).reshape((1, 4))
    self.K = np.asarray(camera_params['camera_matrix']['data']).reshape((3, 3))
    self.Re = np.asarray(camera_params['rectification_matrix']['data'], dtype=np.float64).reshape((3, 3))
    self.P = np.asarray(camera_params['projection_matrix']['data']).reshape((3, 4))
    # For a pinhole camera, a new camera matrix could be computed from an alpha factor instead:
    # alpha=1 keeps all pixels (black borders remain), alpha=0 crops the most pixels (no black borders).
    # self.K, _ = cv2.getOptimalNewCameraMatrix(self.K, self.D, self.image_size, 0, self.image_size)
    if camera_type == 0:  # fisheye camera
        # Compute a new camera matrix and the undistortion/rectification maps (identity rectification)
        self.P = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
            self.K, self.D, self.image_size, np.eye(3))
        self.map1, self.map2 = cv2.fisheye.initUndistortRectifyMap(
            self.K, self.D, np.eye(3), self.P, self.image_size, cv2.CV_32FC1)

Undistort the raw image in image_callback (here `fisheye` is the object holding the maps built above):

frame_result = cv2.remap(image, fisheye.map1, fisheye.map2, interpolation=cv2.INTER_LINEAR)
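For context, a minimal sketch of such an image_callback using cv_bridge (the node name, topic, and the `fisheye` helper object are assumptions, not part of the original post):

import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def image_callback(msg):
    # Convert the ROS image message to an OpenCV BGR image
    image = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
    # Undistort with the precomputed fisheye maps
    frame_result = cv2.remap(image, fisheye.map1, fisheye.map2,
                             interpolation=cv2.INTER_LINEAR)
    # ... detect the target in frame_result and obtain its pixel coordinates ...

rospy.init_node('fisheye_grasp')
rospy.Subscriber('/usb_cam/image_raw', Image, image_callback)
rospy.spin()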

4. Convert pixel coordinates to world coordinates

To convert pixel coordinates to world coordinates, you need both the camera intrinsics and the extrinsics (the camera pose in the robot base frame). The extrinsics are obtained by chaining the end-effector pose with the hand-eye calibration result:

import cv2
import numpy as np
from scipy.spatial.transform import Rotation

# Pose of the tool center point (end effector) in the robot base frame.
# rvecs_robot[i] is assumed to be a 3x3 rotation matrix here; convert it with
# cv2.Rodrigues first if it is stored as a rotation vector.
RT_end_to_base = np.column_stack((rvecs_robot[i], tvecs_robot[i]))
RT_end_to_base = np.row_stack((RT_end_to_base, np.array([0, 0, 0, 1])))

# Hand-eye calibration result: camera frame -> end-effector frame
RT_cam_to_end = np.column_stack((R, T))
RT_cam_to_end = np.row_stack((RT_cam_to_end, np.array([0, 0, 0, 1])))

Rmat = Rotation.from_matrix(R)
rx, ry, rz = Rmat.as_euler('xyz', degrees=True)
# print("rotation(x,y,z)", rx, ry, rz)
print("camera-to-end-effector transform", T, rx, ry, rz)

# Pose of the camera in the robot base frame
Matrix_camera2Base = np.dot(RT_end_to_base, RT_cam_to_end)
Matrix_base2Cam = np.linalg.inv(Matrix_camera2Base)
rvec_cam = cv2.Rodrigues(Matrix_base2Cam[:3, :3])[0]  # rotation vector (base -> camera)
tvec_cam = Matrix_base2Cam[:3, 3].reshape((3, 1))     # translation vector (base -> camera)

For the conversion from pixel coordinates to world coordinates, refer to https://www.yii666.com/blog/331694.html.
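As a rough sketch of that conversion (an illustration, not the linked post's exact code): assuming the target lies on a known plane, e.g. Z = 0 in the robot base frame, a pixel (u, v) can be back-projected using the intrinsics K and the extrinsics rvec_cam / tvec_cam computed above. If (u, v) is measured in the remapped image, K here should be the new camera matrix used for undistortion (self.P above).

import cv2
import numpy as np

def pixel_to_world_on_plane(u, v, K, rvec_cam, tvec_cam, plane_z=0.0):
    """Back-project pixel (u, v) onto the plane Z = plane_z in the base frame.
    Assumes (u, v) comes from an already undistorted image."""
    R_cam, _ = cv2.Rodrigues(rvec_cam)            # base -> camera rotation
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray in the camera frame
    R_inv = R_cam.T
    origin = -R_inv @ tvec_cam.reshape(3)         # camera center in the base frame
    direction = R_inv @ ray_cam                   # ray direction in the base frame
    s = (plane_z - origin[2]) / direction[2]      # scale so that Z = plane_z
    return origin + s * direction                 # 3D point in the base frame

If a depth measurement is available instead of a plane assumption, the scale s can be set directly from that depth.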

The robot arm can then be commanded to move to the resulting target position.
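For illustration only, a minimal sketch of sending the target pose with MoveIt (the original post does not specify the arm or its control interface, so moveit_commander, the group name "manipulator", and the world_point variable from the sketch above are assumptions):

import sys
import moveit_commander
from geometry_msgs.msg import Pose

moveit_commander.roscpp_initialize(sys.argv)
group = moveit_commander.MoveGroupCommander("manipulator")  # assumed planning group name

target = Pose()
target.position.x, target.position.y, target.position.z = world_point  # from the step above
target.orientation.w = 1.0  # placeholder grasp orientation

group.set_pose_target(target)
group.go(wait=True)        # plan and execute
group.stop()
group.clear_pose_targets()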

Original article: blog.csdn.net/xwb_12340/article/details/132193602