OpenCV comes with built-in functions for in-plane translation and rotation of gray-scale images:

float shift_and_rot_test_opencv(cv::Mat des, Vector2f shift, int rot)
{
  // Rotation center: the middle of the sample grid (Vector2f is Eigen's).
  float center_x = (DIM_SAMPLE_POINTS_X - 1) / 2.0f;
  float center_y = (DIM_SAMPLE_POINTS_Y - 1) / 2.0f;
  cv::Point2f center(center_x, center_y);  // Point2f keeps the fractional center

  // 2x3 affine matrix for an in-plane rotation about `center`, scale 1.0.
  // getRotationMatrix2D returns a CV_64F matrix, so access it as double.
  cv::Mat trans_mat = cv::getRotationMatrix2D(center, (double)rot, 1.0);

  // Add the translation (converted from sensor units to pixels) to the
  // matrix's third column; the image y-axis points down, so y is negated.
  trans_mat.at<double>(0, 2) += shift(0) / SENSOR_RANGE / 2.0 * DIM_SAMPLE_POINTS_X;
  trans_mat.at<double>(1, 2) += -shift(1) / SENSOR_RANGE / 2.0 * DIM_SAMPLE_POINTS_Y;

  // img4, img_shift_rot_test and img_absdiff are preallocated CV_32F globals.
  cv::warpAffine(img4, img_shift_rot_test, trans_mat, img4.size());
  cv::absdiff(img_shift_rot_test, des, img_absdiff);

  // Accumulate the absolute per-pixel difference.
  float err = 0.0f;
  for (int row = 0; row < img_absdiff.rows; row++)
    for (int col = 0; col < img_absdiff.cols; col++)
      err += img_absdiff.at<float>(row, col);

  cv::imshow("error abs", img_absdiff);
  cv::waitKey(10);

  // Normalize by the total intensity of the current image.
  return err / 2.0f / current_total;
}

The getRotationMatrix2D function returns the 2×3 affine matrix that rotates the image about a given center: https://docs.opencv.org/3.4/da/d54/group__imgproc__transform.html#gafbbc470ce83812914a70abfb604f4326
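For reference, the matrix that function returns can also be written out by hand. The sketch below (the helper name is ours, not part of OpenCV) builds the same 2×3 matrix from the angle, scale, and center described in the documentation:

#include <opencv2/imgproc.hpp>
#include <cmath>

// Hand-built equivalent of cv::getRotationMatrix2D (illustrative helper):
// alpha = scale*cos(angle), beta = scale*sin(angle), angle in degrees,
// positive angles rotate counter-clockwise about the center (cx, cy).
cv::Mat rotation_matrix_2d(double cx, double cy, double angle_deg, double scale)
{
  double a = angle_deg * CV_PI / 180.0;
  double alpha = scale * std::cos(a);
  double beta  = scale * std::sin(a);
  return (cv::Mat_<double>(2, 3) <<
           alpha, beta,  (1.0 - alpha) * cx - beta * cy,
          -beta,  alpha, beta * cx + (1.0 - alpha) * cy);
}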

The transformation is then applied with the warpAffine() function. The shift argument carries the translation offset, which is added to the matrix's third column; the sign of its y component is flipped because the image y-axis points downward, opposite to the y-axis of the world coordinate system.
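A minimal standalone sketch of that step, assuming only that the offset (dx, dy) is given in a y-up frame in pixel units:

#include <opencv2/imgproc.hpp>

// Rotate src about its center, then shift by (dx, dy) given in a
// y-up frame; the image y-axis points down, so dy is negated.
cv::Mat shift_and_rotate(const cv::Mat& src, float dx, float dy, double angle_deg)
{
  cv::Point2f center((src.cols - 1) / 2.0f, (src.rows - 1) / 2.0f);
  cv::Mat M = cv::getRotationMatrix2D(center, angle_deg, 1.0);
  M.at<double>(0, 2) += dx;   // x axes agree
  M.at<double>(1, 2) -= dy;   // flip the sign of the y offset
  cv::Mat dst;
  cv::warpAffine(src, dst, M, src.size());
  return dst;
}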

The return value is the error ratio: the summed absolute per-pixel difference divided by 2 * current_total.
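The explicit double loop in the function can also be replaced by cv::sum, which returns the per-channel sums of a matrix; for the single-channel float image used here the result is the same:

// Equivalent error accumulation with cv::sum (channel 0 of the cv::Scalar):
float err = (float)cv::sum(img_absdiff)[0];
return err / 2.0f / current_total;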

TIP: To speed up the computation, the intermediate cv::Mat buffers should be allocated once in advance rather than created on every call.
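For example, allocating the two buffers once outside the test loop (the sizes and CV_32F type here assume the sample grid used above) lets warpAffine and absdiff reuse the memory instead of reallocating on every call:

// Preallocate the intermediate buffers once, before the test loop starts.
cv::Mat img_shift_rot_test(DIM_SAMPLE_POINTS_Y, DIM_SAMPLE_POINTS_X, CV_32F);
cv::Mat img_absdiff(DIM_SAMPLE_POINTS_Y, DIM_SAMPLE_POINTS_X, CV_32F);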

Original article: https://blog.csdn.net/li4692625/article/details/109410866