Analysis of OpenCV Samples: real_time_tracking (4)

Last time we went through the overall structure of the Kalman filter class; this time we start with a brief look at how the PnP problem is initialized:

  PnPProblem pnp_detection(params_WEBCAM);
  PnPProblem pnp_detection_est(params_WEBCAM);

where params_WEBCAM refers to:

double params_WEBCAM[] = { width*f/sx,   // fx
                           height*f/sy,  // fy
                           width/2,      // cx
                           height/2};    // cy
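Here width and height are the image resolution in pixels, f is the focal length, and sx and sy are the sensor dimensions, so width*f/sx and height*f/sy convert the focal length into pixel units, while the principal point (cx, cy) is placed at the image center. A minimal sketch of how these variables might be defined (the concrete numbers below are assumptions for illustration, not necessarily the sample's values):

double width = 640, height = 480;   // assumed image resolution in pixels
double f = 55;                      // assumed focal length in mm
double sx = 22.3, sy = 14.9;        // assumed sensor width/height in mm
// => fx = width*f/sx, fy = height*f/sy, cx = width/2, cy = height/2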

So we need to look at the structure of the PnPProblem class:

class PnPProblem
{

public:
  explicit PnPProblem(const double param[]);  // custom constructor
  virtual ~PnPProblem();

  bool backproject2DPoint(const Mesh *mesh, const cv::Point2f &point2d, cv::Point3f &point3d);
  bool intersect_MollerTrumbore(Ray &R, Triangle &T, double *out);
  std::vector<cv::Point2f> verify_points(Mesh *mesh);
  cv::Point2f backproject3DPoint(const cv::Point3f &point3d);
  bool estimatePose(const std::vector<cv::Point3f> &list_points3d, const std::vector<cv::Point2f> &list_points2d, int flags);
  void estimatePoseRANSAC( const std::vector<cv::Point3f> &list_points3d, const std::vector<cv::Point2f> &list_points2d,
                           int flags, cv::Mat &inliers,
                           int iterationsCount, float reprojectionError, double confidence );

  cv::Mat get_A_matrix() const { return _A_matrix; }
  cv::Mat get_R_matrix() const { return _R_matrix; }
  cv::Mat get_t_matrix() const { return _t_matrix; }
  cv::Mat get_P_matrix() const { return _P_matrix; }

  void set_P_matrix( const cv::Mat &R_matrix, const cv::Mat &t_matrix);

private:
  /** The calibration matrix */
  cv::Mat _A_matrix;
  /** The computed rotation matrix */
  cv::Mat _R_matrix;
  /** The computed translation matrix */
  cv::Mat _t_matrix;
  /** The computed projection matrix */
  cv::Mat _P_matrix;
};

The private members of PnPProblem are the intrinsic (calibration) matrix A, the rotation matrix R, the translation vector t, and the projection matrix P.

The public interface of PnPProblem consists mainly of two parts: back-projection of image points and pose estimation.

Before going through PnPProblem in detail, let's first look at its constructor (every class has one). In the constructor's declaration you will notice the keyword explicit. A CSDN expert explains explicit as follows:

The explicit keyword in C++ is applied to a class constructor that can be called with a single argument. It marks the constructor as explicit rather than implicit; its counterpart is implicit (hidden), and class constructors are declared implicit by default.

Put plainly, with explicit the compiler will not construct a PnPProblem implicitly from a double array; you must call the constructor directly.
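A minimal illustration of the effect of explicit (the parameter values here are made up):

double cam_params[4] = { 800.0, 800.0, 320.0, 240.0 };  // made-up fx, fy, cx, cy

PnPProblem pnp(cam_params);        // OK: direct initialization uses the explicit constructor
// PnPProblem pnp2 = cam_params;   // error: copy-initialization would require an implicit conversion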

PnPProblem::PnPProblem(const double params[])
{
  _A_matrix = cv::Mat::zeros(3, 3, CV_64FC1);   // intrinsic camera parameters
  _A_matrix.at<double>(0, 0) = params[0];       //      [ fx   0  cx ]
  _A_matrix.at<double>(1, 1) = params[1];       //      [  0  fy  cy ]
  _A_matrix.at<double>(0, 2) = params[2];       //      [  0   0   1 ]
  _A_matrix.at<double>(1, 2) = params[3];
  _A_matrix.at<double>(2, 2) = 1;
  _R_matrix = cv::Mat::zeros(3, 3, CV_64FC1);   // rotation matrix
  _t_matrix = cv::Mat::zeros(3, 1, CV_64FC1);   // translation matrix
  _P_matrix = cv::Mat::zeros(3, 4, CV_64FC1);   // rotation-translation matrix

}

With the initialization of PnPProblem understood, we can move on to its interface function estimatePose:

// Estimate the pose given a list of 2D/3D correspondences and the method to use
bool PnPProblem::estimatePose( const std::vector<cv::Point3f> &list_points3d,
                               const std::vector<cv::Point2f> &list_points2d,
                               int flags)
{
  cv::Mat distCoeffs = cv::Mat::zeros(4, 1, CV_64FC1);
  cv::Mat rvec = cv::Mat::zeros(3, 1, CV_64FC1);
  cv::Mat tvec = cv::Mat::zeros(3, 1, CV_64FC1);

  bool useExtrinsicGuess = false;

  // Pose estimation
  bool correspondence = cv::solvePnP( list_points3d, list_points2d, _A_matrix, distCoeffs, rvec, tvec,
                                      useExtrinsicGuess, flags);

  // Transforms Rotation Vector to Matrix
  Rodrigues(rvec,_R_matrix);
  _t_matrix = tvec;

  // Set projection matrix
  this->set_P_matrix(_R_matrix, _t_matrix);

  return correspondence;
}

A few points in this code are worth noting:

(1) solvePnP is the core function; pay attention to its signature and the meaning of its arguments.

(2) cv::Rodrigues converts the rotation vector returned by solvePnP into a 3x3 rotation matrix.

(3) The use of the this pointer: this->set_P_matrix(...) calls a member function of the current object (see the sketch after these notes).
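The body of set_P_matrix is not shown in this post; as a sketch of what it has to do (not necessarily the sample's exact code), it assembles the 3x4 projection matrix P = [R | t] from the 3x3 rotation matrix and the 3x1 translation vector:

void PnPProblem::set_P_matrix(const cv::Mat &R_matrix, const cv::Mat &t_matrix)
{
  // Copy the rotation block into columns 0..2 and the translation into column 3
  for (int i = 0; i < 3; ++i)
  {
    for (int j = 0; j < 3; ++j)
      _P_matrix.at<double>(i, j) = R_matrix.at<double>(i, j);
    _P_matrix.at<double>(i, 3) = t_matrix.at<double>(i);
  }
}

The flags argument of estimatePose is passed straight through to cv::solvePnP and selects the solver, for example cv::SOLVEPNP_ITERATIVE or cv::SOLVEPNP_EPNP in OpenCV 3.x.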

Besides this function, let's analyze another simple example, backproject3DPoint. Despite its name, it performs a forward projection: it maps a 3D point onto the image plane using the estimated pose.

// Backproject a 3D point to 2D using the estimated pose parameters
cv::Point2f PnPProblem::backproject3DPoint(const cv::Point3f &point3d)
{
  // 3D point vector [x y z 1]'
  cv::Mat point3d_vec = cv::Mat(4, 1, CV_64FC1);
  point3d_vec.at<double>(0) = point3d.x;
  point3d_vec.at<double>(1) = point3d.y;
  point3d_vec.at<double>(2) = point3d.z;
  point3d_vec.at<double>(3) = 1;

  // 2D point vector [u v w]' (homogeneous image coordinates, 3x1)
  cv::Mat point2d_vec = _A_matrix * _P_matrix * point3d_vec;

  // Normalization of [u v]'
  cv::Point2f point2d;
  point2d.x = (float)(point2d_vec.at<double>(0) / point2d_vec.at<double>(2));
  point2d.y = (float)(point2d_vec.at<double>(1) / point2d_vec.at<double>(2));

  return point2d;
}

This code looks simple, but one thing in it is well worth learning: how to represent vectors and matrices in OpenCV, how to access individual elements, and other basic mathematical operations.

The following expressions come up frequently:

1. cv::Point3f &point3d

2. cv::Mat point3d_vec = cv::Mat(4, 1, CV_64FC1);

3. point3d_vec.at<double>(1) = point3d.y;

4. cv::Point2f point2d;

5. point2d.x = (float)(point2d_vec.at<double>(0) / point2d_vec.at<double>(2));

6. _A_matrix.inv()

7. Matrix and vector multiplication is handled by overloaded operators (e.g. operator* for cv::Mat).

In short, learn to use OpenCV's built-in Mat and Point classes flexibly to represent matrices and vectors, as in the sketch below.
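A small self-contained sketch that exercises these expressions (the numbers are arbitrary):

#include <opencv2/core.hpp>
#include <iostream>

int main()
{
  cv::Point3f point3d(1.0f, 2.0f, 3.0f);           // a 3D point

  cv::Mat point3d_vec = cv::Mat(4, 1, CV_64FC1);   // a 4x1 column vector [x y z 1]'
  point3d_vec.at<double>(0) = point3d.x;           // element access with at<double>(i)
  point3d_vec.at<double>(1) = point3d.y;
  point3d_vec.at<double>(2) = point3d.z;
  point3d_vec.at<double>(3) = 1;

  cv::Mat A = cv::Mat::eye(3, 3, CV_64FC1);        // a 3x3 matrix
  cv::Mat A_inv = A.inv();                         // matrix inverse, as in _A_matrix.inv()

  cv::Mat P = cv::Mat::zeros(3, 4, CV_64FC1);      // a 3x4 matrix
  P.at<double>(0, 0) = P.at<double>(1, 1) = P.at<double>(2, 2) = 1;

  cv::Mat uvw = A_inv * P * point3d_vec;           // operator* is overloaded for cv::Mat

  cv::Point2f point2d;                             // a 2D point
  point2d.x = (float)(uvw.at<double>(0) / uvw.at<double>(2));
  point2d.y = (float)(uvw.at<double>(1) / uvw.at<double>(2));

  std::cout << point2d << std::endl;               // prints [0.333333, 0.666667]
  return 0;
}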

Reposted from blog.csdn.net/qq_39732684/article/details/80497953