Learning OpenCV 3, Chapter 14: Contour Matching

For an introduction to contours in OpenCV, see: https://blog.csdn.net/a40850273/article/details/88063478

Moments

Moments are gross, high-level characteristics of a contour, image, or set of points. For an image I(x, y), the moment of order (p, q) is computed as

m_{pq} = \sum_{x,y} x^p \, y^q \, I(x,y)

A moment can be understood as a weighted sum over the pixels of the image. If p = 0 and q = 0, every pixel gets weight 1, so for a binary image (pixel values either 0 or 1) m_{00} is the area of the non-zero region; for a contour, m_{00} is the length of the contour. Similarly, dividing m_{10} and m_{01} by m_{00} gives the average x and y coordinates, i.e. the centroid.

cv::moments() is used to calculate the Moments of an image

cv::Moments cv::moments(              // Return structure contains moments
  cv::InputArray points,              // 2-dimensional points or an "image"
  bool           binaryImage = false  // false='interpret image values as "mass"'
);

Parameter introduction:

  • points: either a two-dimensional array (an image) or a set of points (a contour)
  • binaryImage: if true, all non-zero pixel values are treated as 1

However, the moments defined above change when the contour is translated, scaled, or rotated.

Central moments, which satisfy translation invariance

For an image or contour, m_{00} is translation invariant, but the higher-order moments are not. The central moments restore this property; they are defined as

\mu_{pq} = \sum_{x,y} (x - \bar{x})^p (y - \bar{y})^q I(x,y)

where \bar{x} = m_{10}/m_{00} and \bar{y} = m_{01}/m_{00} are the coordinates of the centroid.

By measuring every pixel relative to the centroid, translation invariance is achieved. It also follows directly that \mu_{00} = m_{00} and \mu_{10} = \mu_{01} = 0.

Normalized central moments, which satisfy scale invariance

To further obtain invariance under scaling, the normalized central moments are introduced, defined as

\eta_{pq} = \frac{\mu_{pq}}{m_{00}^{(p+q)/2 + 1}}

Hu invariant moments satisfying rotation invariance

The Hu invariant moments are linear combinations of the normalized central moments, yielding seven values h_1 through h_7 that are invariant to translation, scaling, and rotation. Under reflection the first six are unchanged, while h_7 flips its sign.

The function that computes them is given below

void cv::HuMoments(
  const cv::Moments& moments, // Input is result from cv::moments() function
  double*            hu       // Return is C-style array of 7 Hu moments
);

cv::HuMoments() calculates the seven Hu moments listed above from a cv::Moments object.

Use Hu Moments to match

cv::matchShapes() automatically computes the moments of the two provided objects and compares them according to the criterion chosen by the caller.

double cv::matchShapes(
  cv::InputArray object1,      // First array of 2D points or CV_8UC1 image
  cv::InputArray object2,      // Second array of 2D points or CV_8UC1 image
  int            method,       // Comparison method (Table 14-4)
  double         parameter = 0 // Method-specific parameter
);

Parameter Description:

  • object1, object2: the two input targets; each must be a grayscale image or a contour
  • method: one of three comparison criteria, which determine the distance that is finally returned:

    CONTOURS_MATCH_I1:  d(A,B) = \sum_{i=1}^{7} \left| \frac{1}{m_i^A} - \frac{1}{m_i^B} \right|
    CONTOURS_MATCH_I2:  d(A,B) = \sum_{i=1}^{7} \left| m_i^A - m_i^B \right|
    CONTOURS_MATCH_I3:  d(A,B) = \max_{i} \frac{\left| m_i^A - m_i^B \right|}{\left| m_i^A \right|}

    where m_i^X = sign(h_i^X) \cdot \log|h_i^X|

  • parameter: not used by any of the current methods, so the default value can simply be kept. It exists so that future methods can accept a custom parameter.

Use shape context to compare shapes

The use of moments for shape matching dates back to the 1980s, and newer algorithms have appeared since. Because the shape module is still under active development, only its high-level interface is briefly introduced here.

Structure of the Shape module

The shape module is built on the abstract base class cv::ShapeDistanceExtractor. Its computeDistance() method returns a non-negative number, which is 0 when the two shapes are identical.

class ShapeDistanceExtractor : public Algorithm {
  public:
  ...
  virtual float computeDistance( InputArray contour1, InputArray contour2 ) = 0;
};

Concrete shape distance extraction classes derive from the base class cv::ShapeDistanceExtractor. Two supporting abstract classes used by the module, cv::ShapeTransformer and cv::HistogramCostExtractor, are briefly introduced here.

class ShapeTransformer : public Algorithm {

public:
  virtual void estimateTransformation(
    cv::InputArray      transformingShape,
    cv::InputArray      targetShape,
    vector<cv::DMatch>& matches
  ) = 0;

  virtual float applyTransformation(
    cv::InputArray      input,
    cv::OutputArray     output      = noArray()
  ) = 0;

  virtual void warpImage(
    cv::InputArray      transformingImage,
    cv::OutputArray     output,
    int                 flags       = INTER_LINEAR,
    int                 borderMode  = BORDER_CONSTANT,
    const cv::Scalar&   borderValue = cv::Scalar()
  ) const = 0;
};

class HistogramCostExtractor : public Algorithm {

public:
  virtual void  buildCostMatrix(
    cv::InputArray      descriptors1,
    cv::InputArray      descriptors2,
    cv::OutputArray     costMatrix
  )                                                 = 0;

  virtual void  setNDummies( int nDummies )         = 0;
  virtual int   getNDummies() const                 = 0;

  virtual void  setDefaultCost( float defaultCost ) = 0;
  virtual float getDefaultCost() const              = 0;
};

A shape transformer represents a remapping algorithm from one set of points to another. Both affine transformations and thin plate spline warps can be expressed as shape transformations (e.g. cv::ThinPlateSplineShapeTransformer in OpenCV).

The histogram cost extractor treats histograms as piles of dirt and measures the cost of shoveling dirt from the bins of one histogram into the bins of another (the intuition behind the earth mover's distance). Commonly used derived classes include cv::NormHistogramCostExtractor, cv::EMDHistogramCostExtractor, cv::ChiHistogramCostExtractor, and cv::EMDL1HistogramCostExtractor.

For each of these extractors and transformers there is a factory method (createX()), such as cv::createChiHistogramCostExtractor().

The Shape Context distance extractor

namespace cv {

  class ShapeContextDistanceExtractor : public ShapeDistanceExtractor {

    public:
    ...
    virtual float computeDistance( 
      InputArray contour1, 
      InputArray contour2 
      ) = 0;
  };

  Ptr<ShapeContextDistanceExtractor> createShapeContextDistanceExtractor(
    int   nAngularBins                          = 12,
    int   nRadialBins                           = 4,
    float innerRadius                           = 0.2f,
    float outerRadius                           = 2,
    int   iterations                            = 3,
    const Ptr<HistogramCostExtractor> &comparer 
                                        = createChiHistogramCostExtractor(),
    const Ptr<ShapeTransformer>       &transformer
                                        = createThinPlateSplineShapeTransformer()
  );
}

In essence, the Shape Context algorithm computes a representation for each of the objects being compared. Each representation is based on a set of points sampled from the edge of the shape; for each sampled point, a histogram is constructed that reflects the shape as seen from that point in polar coordinates. All histograms have the same size, nAngularBins * nRadialBins. Corresponding points on the two objects are compared using the chi-squared distance between their histograms, and the algorithm then finds the optimal one-to-one matching of points that minimizes the total chi-squared distance. The algorithm is not fast, but it gives fairly good results.

#include "opencv2/opencv.hpp"
#include <algorithm>
#include <iostream>
#include <string>

using namespace std;
using namespace cv;

static vector<Point> sampleContour( const Mat& image, int n=300 ) {

  vector<vector<Point> > _contours;
  vector<Point> all_points;
  findContours(image, _contours, RETR_LIST, CHAIN_APPROX_NONE);
  for (size_t i=0; i<_contours.size(); i++) {
    for (size_t j=0; j<_contours[i].size(); j++)
      all_points.push_back( _contours[i][j] );
  }

  // If there are too few points, replicate them
  int dummy=0;
  for (int add=(int)all_points.size(); add<n; add++)
    all_points.push_back(all_points[dummy++]);

  // Sample uniformly
  random_shuffle(all_points.begin(), all_points.end());
  vector<Point> sampled;
  for (int i=0; i<n; i++)
    sampled.push_back(all_points[i]);
  return sampled;
}

int main(int argc, char** argv) {

  if (argc < 3) {
    cout << "Usage: " << argv[0] << " <image1> <image2>" << endl;
    return -1;
  }

  Ptr<ShapeContextDistanceExtractor> mysc = createShapeContextDistanceExtractor();

  Mat img1 = imread(argv[1], IMREAD_GRAYSCALE);
  Mat img2 = imread(argv[2], IMREAD_GRAYSCALE);
  vector<Point> c1 = sampleContour(img1);
  vector<Point> c2 = sampleContour(img2);
  float dis = mysc->computeDistance( c1, c2 );
  cout << "shape context distance between " <<
     argv[1] << " and " << argv[2] << " is: " << dis << endl;

  return 0;
}

Hausdorff distance extractor

Similar to the Shape Context distance, the Hausdorff distance provides another measure of shape dissimilarity through the cv::ShapeDistanceExtractor interface.

The directed Hausdorff distance h(A, B) finds, for each point in one set, the closest point in the other set, and takes the maximum of these nearest-neighbor distances. It is not symmetric (in general h(A,B) ≠ h(B,A)), but it can be symmetrized by taking the larger of the two directions:

H(A,B) = \max( h(A,B), h(B,A) ), where h(A,B) = \max_{a \in A} \min_{b \in B} \| a - b \|

Hausdorff distance extractor can be generated by the factory method cv::createHausdorffDistanceExtractor().

cv::Ptr<cv::HausdorffDistanceExtractor> cv::createHausdorffDistanceExtractor(
  int   distanceFlag = cv::NORM_L2,
  float rankProp     = 0.6
);

Because cv::HausdorffDistanceExtractor shares the cv::ShapeDistanceExtractor interface with the Shape Context distance extractor, its computeDistance() method is used in the same way to compute the distance between two targets.

Origin blog.csdn.net/a40850273/article/details/107391302