Features2D class introduction




Here are the feature detection, description and matching classes in OpenCV built around cv::Feature2D:

  1. cv::AgastFeatureDetector: Adaptive and Generic Accelerated Segment Test (AGAST) feature detector.

  2. cv::AKAZE: Accelerated-KAZE (AKAZE) Feature Detector and Descriptor Extractor.

  3. cv::BRISK: Binary Robust Invariant Scalable Keypoints (BRISK) Feature Detector and Descriptor Extractor.

  4. cv::FastFeatureDetector: FAST feature detector.

  5. cv::GFTTDetector: Good Features to Track (GFTT) feature detector.

  6. cv::KAZE: KAZE feature detector and descriptor extractor.

  7. cv::MSER: Maximally Stable Extremal Regions (MSER) feature detector and descriptor extractor.

  8. cv::ORB: Oriented FAST and Rotated BRIEF (ORB) Feature Detector and Descriptor Extractor.

  9. cv::SimpleBlobDetector: Simple binarized blob detector.

  10. cv::BRISK_Impl: Implementation of BRISK feature detector and descriptor extractor.

  11. cv::DescriptorMatcher: feature descriptor matcher.

  12. cv::FastFeatureDetector_Impl: Implementation of the FAST feature detector.

  13. cv::FlannBasedMatcher: A feature descriptor matcher based on the FLANN library.

  14. cv::GFTTDetector_Impl: Implementation of the GFTT feature detector.

  15. cv::KAZE_Impl: Implementation of the KAZE feature detector and descriptor extractor.

  16. cv::DescriptorExtractor: feature descriptor extractor.

  17. cv::MSER_Impl: Implementation of MSER feature detector and descriptor extractor.

  18. cv::ORB_Impl: Implementation of the ORB feature detector and descriptor extractor.

  19. cv::SimpleBlobDetector_Impl: Simple implementation of a binarized blob detector.

  20. cv::BFMatcher: Brute-force descriptor matcher.

  21. cv::FlannIndexParams: FLANN index parameter.

  22. cv::FlannMatcher: A descriptor matcher based on the FLANN library.

  23. cv::FlannSearchParams: FLANN search parameters.

  24. cv::GenericDescriptorMatcher: Generic feature descriptor matcher.

  25. cv::VGG: VGG feature detector and descriptor extractor.

1. cv::AgastFeatureDetector

cv::AgastFeatureDetector::create() is a static function in OpenCV used to create the AGAST feature detector. AGAST (Adaptive and Generic Accelerated Segment Test) is a fast corner-detection algorithm for finding salient corner features in images.

The function prototype is as follows:

Ptr<AgastFeatureDetector> cv::AgastFeatureDetector::create(
    int threshold = 10,
    bool nonmaxSuppression = true,
    AgastFeatureDetector::DetectorType type = AgastFeatureDetector::OAST_9_16
)

Parameter explanation:

  • threshold: Threshold, used to control the sensitivity of corner detection. A higher threshold will result in stronger corners being detected, while a lower threshold will result in more corners being detected. The default value is 10.
  • nonmaxSuppression: Whether to perform non-maximum suppression. If set to true, non-maximum suppression will be performed on the detected corners, and only the most significant corners will be kept. The default value is true.
  • type: AGAST detector type, used to specify the algorithm variant used. One of the following types can be selected:
    • AgastFeatureDetector::AGAST_5_8: AGAST test with an 8-pixel mask (5 contiguous pixels required).
    • AgastFeatureDetector::AGAST_7_12d: AGAST test with a 12-pixel diamond mask.
    • AgastFeatureDetector::AGAST_7_12s: AGAST test with a 12-pixel square mask.
    • AgastFeatureDetector::OAST_9_16: OAST (Optimal Accelerated Segment Test) with a 16-pixel mask (the default).
  • The return value is a smart pointer (Ptr<AgastFeatureDetector>) to the detector object, which you can use to perform AGAST feature detection.

cv::AgastFeatureDetectorIt is a fast and robust feature detector that can detect extreme points (such as corner points, edge points, etc.) in the image. It implements the Adaptive and Generic Accelerated Segment Test (AGAST) algorithm, which can perform feature detection at different image scales. It is used as follows:

cv::Ptr<cv::AgastFeatureDetector> detector = cv::AgastFeatureDetector::create();
std::vector<cv::KeyPoint> keypoints;
detector->detect(image, keypoints);

2. cv::AKAZE

The function cv::AKAZE::create() is used in OpenCV to create the AKAZE (Accelerated-KAZE) feature detector and descriptor extractor. AKAZE is a feature detection algorithm based on nonlinear scale space, which detects and describes local features in images.

The function prototype is as follows:

Ptr<AKAZE> cv::AKAZE::create(
    int descriptor_type = AKAZE::DESCRIPTOR_MLDB,
    int descriptor_size = 0,
    int descriptor_channels = 3,
    float threshold = 0.001f,
    int nOctaves = 4,
    int nOctaveLayers = 4,
    int diffusivity = KAZE::DIFF_PM_G2
);

Parameter Description:

  • descriptor_type: Descriptor type, with two main options: AKAZE::DESCRIPTOR_KAZE and AKAZE::DESCRIPTOR_MLDB (upright variants of each also exist). The default value is AKAZE::DESCRIPTOR_MLDB.
  • descriptor_size: Descriptor size in bits; 0 means the full descriptor size is used. The default value is 0.
  • descriptor_channels: The number of descriptor channels (1, 2 or 3). The default value is 3.
  • threshold: The threshold for feature point detection, used to decide which feature points should be retained. Larger values result in fewer detected feature points. The default value is 0.001f.
  • nOctaves: number of octaves in scale space. The default value is 4.
  • nOctaveLayers: The number of layers per octave. The default value is 4.
  • diffusivity: Type of the scale-space diffusivity function. There are four options: KAZE::DIFF_PM_G1, KAZE::DIFF_PM_G2, KAZE::DIFF_WEICKERT and KAZE::DIFF_CHARBONNIER. The default value is KAZE::DIFF_PM_G2.

The function returns a smart pointer (Ptr<AKAZE>) to the AKAZE object, which you can use for feature detection and descriptor extraction.

cv::AKAZEis an accelerated feature detector and descriptor extractor. It is based on the KAZE algorithm and uses nonlinear scale space techniques to detect stable keypoints and extract descriptors. It is robust to illumination changes and viewing angle changes, and can perform feature detection and descriptor extraction at different image scales. It is used as follows:

cv::Ptr<cv::AKAZE> detector = cv::AKAZE::create();
std::vector<cv::KeyPoint> keypoints;
cv::Mat descriptors;
detector->detectAndCompute(image, cv::noArray(), keypoints, descriptors);

3. cv::BRISK

cv::BRISK::create() is a function in the OpenCV library for creating a BRISK feature detector object. It can be used to detect stable local features in images for applications such as image matching, object recognition and 3D reconstruction.

The syntax of this function is as follows:

cv::Ptr<BRISK> cv::BRISK::create(
    int thresh = 30,          // feature point detection threshold
    int octaves = 3,          // number of pyramid octaves
    float patternScale = 1.0f // sampling pattern scale factor
);

The following is a detailed description of the parameters:

  • thresh: Feature point detection threshold; the higher the value, the fewer feature points are extracted.
  • octaves: Number of pyramid octaves, used to generate multi-scale feature points.
  • patternScale: Scale factor applied to the sampling pattern used when building descriptors.

The function returns a smart pointer of type cv::Ptr<BRISK> that can be used to access the other methods and properties of the BRISK feature detector.

cv::BRISK is a binary feature detector and descriptor extractor. It builds an image scale-space pyramid and computes binary descriptors from a circular sampling pattern around each keypoint. It is robust to rotation, lighting and scale changes. It is used as follows:

cv::Ptr<cv::BRISK> detector = cv::BRISK::create();
std::vector<cv::KeyPoint> keypoints;
cv::Mat descriptors;
detector->detectAndCompute(image, cv::noArray(), keypoints, descriptors);

4. cv::FastFeatureDetector

The cv::FastFeatureDetector::create() function is a function in OpenCV for creating a FastFeatureDetector object.

FastFeatureDetector is a class used in OpenCV to quickly detect image feature points. It is based on the FAST (Features from Accelerated Segment Test) algorithm, which is a fast and robust feature point detection method.

The main job of the create() function is to build a FastFeatureDetector object from the given parameters: the detection threshold, whether to apply non-maximum suppression, and the algorithm variant. If no arguments are given, the defaults are used.

The following is the general usage of the create() function:

Ptr<FastFeatureDetector> cv::FastFeatureDetector::create(
    int threshold = 10,                        // intensity-difference threshold for the corner test
    bool nonmaxSuppression = true,             // whether to apply non-maximum suppression
    int type = FastFeatureDetector::TYPE_9_16  // algorithm variant, default TYPE_9_16
);

Parameter Description:

  • threshold: The threshold used to determine whether a pixel is a feature point intensity difference. A higher threshold will result in fewer but higher quality feature points being detected.
  • nonmaxSuppression: Whether to perform non-maximum suppression. By default, it performs non-maximum suppression on detected feature points, keeping only local maxima.
  • type: Algorithm variant; there are three possible values: TYPE_5_8, TYPE_7_12 and TYPE_9_16. They correspond to test circles of 8, 12 and 16 pixels, in which 5, 7 and 9 contiguous pixels, respectively, must differ from the center by more than the threshold.

The function returns a Ptr pointer to the newly created FastFeatureDetector object. This pointer can be used to perform feature point detection on the image.

cv::FastFeatureDetectoris a fast feature detector that detects corners in an image. It is based on a simple and fast algorithm for feature detection at different image scales. It is used as follows:

cv::Ptr<cv::FastFeatureDetector> detector = cv::FastFeatureDetector::create();
std::vector<cv::KeyPoint> keypoints;
detector->detect(image, keypoints);

5. cv::GFTTDetector

cv::GFTTDetector::create() is a function in the OpenCV library used to create a GFTT (Good Features to Track) detector object.

GFTT is a feature point detection algorithm for finding feature points with good tracking performance in an image. It determines which points can be considered as good feature points by calculating the corner response value of each pixel point in the image.

The function prototype is as follows:

Ptr<GFTTDetector> cv::GFTTDetector::create(
    int maxCorners = 1000,
    double qualityLevel = 0.01,
    double minDistance = 1,
    int blockSize = 3,
    bool useHarrisDetector = false,
    double k = 0.04
)

Parameter Description:

  • maxCorners: Specify the maximum number of feature points, that is, the maximum number of detected feature points.
  • qualityLevel: Specifies the minimum quality level for corner response values. Only corners with response values above this threshold are kept.
  • minDistance: Specify the minimum distance between two feature points. If the distance between two feature points is less than this value, one of them will be deleted.
  • blockSize: Specifies the size of the neighborhood block used when computing the corner response of each pixel. The default value is 3.
  • useHarrisDetector: A boolean that specifies whether to use the Harris corner measure. If true, the Harris measure is used to compute the corner response; if false, the Shi-Tomasi measure is used. The default value is false.
  • k: Constant used in the Harris corner response. Valid only when useHarrisDetector is set to true. The default value is 0.04.

return value:

  • Returns a smart pointer (Ptr) to the cv::GFTTDetector object. You can use this pointer to call the other methods and functions of the GFTT detector.

cv::GFTTDetector is a feature detector based on the Shi-Tomasi corner measure (optionally the Harris measure). It can detect corners in images and is robust to illumination changes. It is used as follows:

cv::Ptr<cv::GFTTDetector> detector = cv::GFTTDetector::create();
std::vector<cv::KeyPoint> keypoints;
detector->detect(image, keypoints);

6. cv::KAZE

cv::KAZE::create() is the function in the OpenCV library used to create the KAZE feature detector and descriptor extractor. KAZE is a feature detection and description algorithm based on nonlinear scale space, suitable for computer vision tasks such as image matching, object tracking and image stitching.

The full declaration of the function is as follows:

Ptr<KAZE> cv::KAZE::create(
    bool extended = false,
    bool upright = false,
    float threshold = 0.001f,
    int nOctaves = 4,
    int nOctaveLayers = 4,
    int diffusivity = KAZE::DIFF_PM_G2
);

The following is a detailed description of each parameter:

  • extended: A boolean indicating whether to use extended descriptors. The default is false, indicating that the default descriptor length (64 dimensions) is used. If set to true, the extended descriptor length (128 dimensions) is used.
  • upright: A boolean indicating whether to use upright features. The default is false, indicating that the detector will compute the orientation of the feature. If set to true, the detector will not calculate the orientation of the feature, only the position of the feature.
  • threshold: A float representing the threshold of the feature response. The default is 0.001f. During feature detection, features below this threshold are discarded.
  • nOctaves: An integer representing the number of octaves in the scale space. The default is 4. Higher values produce a larger scale space and better scale invariance, at the cost of more computation.
  • nOctaveLayers: An integer denoting the number of sublayers per octave. The default is 4. Higher values provide better feature detail, but also increase computation.
  • diffusivity: An integer indicating the diffusivity type. The default is KAZE::DIFF_PM_G2. Other options are KAZE::DIFF_PM_G1, KAZE::DIFF_WEICKERT and KAZE::DIFF_CHARBONNIER.

This function returns a smart pointer (Ptr<KAZE>) to the cv::KAZE object, which can be used for feature extraction and matching on the image.

cv::KAZE is an accelerated feature detector and descriptor extractor, which can detect stable keypoints at different image scales and extract descriptors. It is robust to illumination and viewing angle changes, and can detect different types of features such as edges and corners. It is used as follows:

cv::Ptr<cv::KAZE> detector = cv::KAZE::create();
std::vector<cv::KeyPoint> keypoints;
cv::Mat descriptors;
detector->detectAndCompute(image, cv::noArray(), keypoints, descriptors);

7. cv::MSER

cv::MSER::create() is a function in the OpenCV library for creating an instance of the MSER (Maximally Stable Extremal Regions) detector. MSER is an algorithm for detecting stable extremal regions in an image.

The following is a detailed explanation of the cv::MSER::create() function:

static Ptr<MSER> cv::MSER::create(
    int _delta = 5,
    int _min_area = 60,
    int _max_area = 14400,
    double _max_variation = 0.25,
    double _min_diversity = 0.2,
    int _max_evolution = 200,
    double _area_threshold = 1.01,
    double _min_margin = 0.003,
    int _edge_blur_size = 5
)

Parameter explanation:

  • _delta: The threshold used to control the difference in gray value between regions. Larger values cause more regions to be generated; the default is 5.
  • _min_area: Minimum area. Smaller regions will be discarded, the default is 60.
  • _max_area: The maximum area. Larger regions will be discarded, the default is 14400.
  • _max_variation: Maximum rate of change. It is used to control the change of the area at different scales, and the default value is 0.25.
  • _min_diversity: minimum diversity. Controls the degree of similarity between regions, with a default value of 0.2.
  • _max_evolution: Maximum number of evolutions. The maximum number of evolutions in the control area, the default value is 200.
  • _area_threshold: Area threshold. Used to control the growth of the region, the default value is 1.01.
  • _min_margin: Minimum edge distance. Used to control the minimum edge distance between regions, the default value is 0.003.
  • _edge_blur_size: Edge blur size. The convolution kernel size used to control edge blurring, the default value is 5.

Return Value:
The function returns a smart pointer (Ptr<MSER>) pointing to the MSER object.

After creating an MSER object with cv::MSER::create(), you can use its other member functions to run the MSER algorithm; for example, the detectRegions function detects stable extremal regions in the image.

cv::MSERis a feature detector that detects extremal regions (such as connected domains) in an image. It is robust to illumination and viewing angle changes, and can perform feature detection at different image scales. It is used as follows:

cv::Ptr<cv::MSER> detector = cv::MSER::create();
std::vector<std::vector<cv::Point>> regions;
std::vector<cv::Rect> bboxes;
detector->detectRegions(image, regions, bboxes);

8. cv::SimpleBlobDetector

cv::SimpleBlobDetector::create() is a static member function in the OpenCV library for creating a simple blob detector, i.e. an instance of the cv::SimpleBlobDetector class used to detect blobs in images.

The following is a detailed explanation of the cv::SimpleBlobDetector::create() function:

Ptr<SimpleBlobDetector> cv::SimpleBlobDetector::create(const SimpleBlobDetector::Params &parameters = SimpleBlobDetector::Params())

parameter:

  • parameters: The blob detector parameters, an object of type SimpleBlobDetector::Params that can be configured as needed. If no argument is provided, default parameters are used.

return value:

  • Ptr<SimpleBlobDetector>: A smart pointer to an instance of cv::SimpleBlobDetector.

Function:

  • This function creates an instance of the cv::SimpleBlobDetector class for blob detection.

cv::SimpleBlobDetectoris a feature detector based on a blob detection algorithm in binarized images. It can detect circular blobs in images and perform feature detection at different image scales. It is used as follows:

cv::Ptr<cv::SimpleBlobDetector> detector = cv::SimpleBlobDetector::create();
std::vector<cv::KeyPoint> keypoints;
detector->detect(image, keypoints);

9. cv::StarDetector

cv::xfeatures2d::StarDetector::create() is a function in OpenCV (in the opencv_contrib module xfeatures2d) used to create an instance of the Star feature detector. The following is a detailed explanation of the function:

cv::Ptr<cv::xfeatures2d::StarDetector> cv::xfeatures2d::StarDetector::create(
    int maxSize = 45,
    int responseThreshold = 30,
    int lineThresholdProjected = 10,
    int lineThresholdBinarized = 8,
    int suppressNonmaxSize = 5
)

Parameter Description:

  • maxSize: The maximum feature size. The default value is 45. This specifies the largest scale at which features are detected.
  • responseThreshold: Response threshold. The default value is 30. This parameter defines the threshold of the feature point response value. Only feature points whose response value is greater than the threshold will be detected.
  • lineThresholdProjected: Projection line threshold. The default value is 10. Threshold for finding line segments in a binary image. These line segments can be used to detect corners.
  • lineThresholdBinarized: Binarization line threshold. The default value is 8. Threshold for finding line segments in a binarized image. Again, these line segments can be used to detect corners.
  • suppressNonmaxSize: Non-maximum suppression window size. The default value is 5. Feature points closer than this distance to a stronger feature are suppressed, avoiding too many detections.

return value:

  • Returns a cv::Ptr<cv::xfeatures2d::StarDetector> object representing the created Star feature detector instance.

Note that the Star detector lives in the opencv_contrib module xfeatures2d; more modern feature detectors such as SIFT, SURF or ORB are often preferred.

cv::xfeatures2d::StarDetector is based on the CenSurE (Center Surround Extrema) detector and detects corner-like features in the image. It is robust to illumination and viewing angle changes, and can perform feature detection at different image scales. It is used as follows:

cv::Ptr<cv::xfeatures2d::StarDetector> detector = cv::xfeatures2d::StarDetector::create();
std::vector<cv::KeyPoint> keypoints;
detector->detect(image, keypoints);

10. cv::SIFT

cv::SIFT::create() is the creation function for the SIFT (Scale-Invariant Feature Transform) algorithm in the OpenCV library. It returns a smart pointer to a cv::SIFT object that can be used to detect SIFT feature points in an image.

The following is a detailed explanation of the cv::SIFT::create() function:

cv::Ptr<cv::SIFT> cv::SIFT::create(
    int nfeatures = 0,
    int nOctaveLayers = 3,
    double contrastThreshold = 0.04,
    double edgeThreshold = 10,
    double sigma = 1.6
)

Parameter explanation:

  • nfeatures: The maximum number of feature points to retain. The default value is 0, which means all detected feature points are retained.
  • nOctaveLayers: The number of layers in each group of the pyramid. The default value is 3.
  • contrastThreshold: Threshold for excluding low-contrast features. The default value is 0.04.
  • edgeThreshold: Threshold for excluding marginal responses. The default value is 10.
  • sigma: Gaussian smoothing coefficient. The default value is 1.6.

return value:

  • cv::Ptr<cv::SIFT>: A smart pointer to a cv::SIFT object.

After creating a SIFT object with cv::SIFT::create(), you can call its other methods to extract and process SIFT feature points in the image. Commonly used methods include detectAndCompute(), detect() and compute().

cv::SIFT is a scale-space-based feature detector and descriptor extractor, which can detect stable keypoints and extract descriptors at different image scales. It is robust to illumination and viewing angle changes, and can detect different types of features such as edges and corners. Note that SIFT was long covered by a patent (it expired in March 2020, after which SIFT moved back into the main OpenCV module in version 4.4), so licensing may need attention on older OpenCV versions. It is used as follows:

cv::Ptr<cv::SIFT> detector = cv::SIFT::create();
std::vector<cv::KeyPoint> keypoints;
cv::Mat descriptors;
detector->detectAndCompute(image, cv::noArray(), keypoints, descriptors);

11. cv::SURF

cv::xfeatures2d::SURF::create() is a function in the OpenCV library (in the opencv_contrib module xfeatures2d) used to create a SURF (Speeded-Up Robust Features) object for detecting and describing feature points in an image.

The following is a detailed explanation of the SURF::create() function:

Function prototype:

static Ptr<SURF> cv::SURF::create(
    double hessianThreshold = 100,
    int nOctaves = 4,
    int nOctaveLayers = 3,
    bool extended = false,
    bool upright = false
)

Parameter explanation:

  • hessianThreshold: Hessian threshold, used to filter the detected feature points, the default value is 100. A larger value will filter out weaker feature points, and a smaller value will keep more feature points.
  • nOctaves: The number of pyramid octaves, the default value is 4. It determines the extent of the image scale space: more octaves cover a wider range of feature sizes, but the computational overhead also increases.
  • nOctaveLayers: The number of inner layers of each pyramid layer, the default value is 3. The number of internal layers of each pyramid layer determines the quantity and quality of feature points, and a larger value can detect more feature points, but it will also increase computational overhead.
  • extended: Extended SURF descriptor flag, default value is false. If set to true, an extended SURF descriptor will be generated with a length of 128 dimensions; if set to false, a 64-dimensional SURF descriptor will be generated.
  • upright: Upright (not rotated) SURF flag, default is false. If set to true, the detected feature points will not consider the rotation invariance, which can improve the calculation efficiency.

Return value:
This function returns a Ptr<SURF> pointer to the created SURF object. You can use this object to perform feature point detection and description on an image.

cv::SURF is a scale-space-based feature detector and descriptor extractor, which can detect stable keypoints and extract descriptors at different image scales. It is robust to illumination and viewing angle changes, and can detect different types of features such as edges and corners. However, since SURF is a patented algorithm, attention needs to be paid to its licensing; in OpenCV it is only available through the opencv_contrib module xfeatures2d. It is used as follows:

cv::Ptr<cv::xfeatures2d::SURF> detector = cv::xfeatures2d::SURF::create();
std::vector<cv::KeyPoint> keypoints;
cv::Mat descriptors;
detector->detectAndCompute(image, cv::noArray(), keypoints, descriptors);

12. cv::FastFeatureDetector

cv::FastFeatureDetector::create() is the function in the OpenCV library used to create FAST feature detector objects. The FAST feature detector is a high-speed corner detection algorithm commonly used in computer vision and image processing tasks.

The declaration of the function is as follows:

Ptr<FastFeatureDetector> cv::FastFeatureDetector::create(
    int threshold = 10,
    bool nonmaxSuppression = true,
    int type = FastFeatureDetector::TYPE_9_16
);

Parameter explanation:

  • threshold: Threshold for determining corner points. Larger thresholds result in fewer detected corners, while smaller thresholds result in more detected corners. The default value is 10.
  • nonmaxSuppression: A boolean indicating whether to apply non-maximum suppression. If set to true, non-maximum suppression will be applied to detected corners to eliminate duplicate corners. The default value is true.
  • type: Type of Fast feature detector. It defines the pixel comparison mode used by the detector. Available types are:
    • FastFeatureDetector::TYPE_5_8: test circle of 8 pixels (5 contiguous pixels required).
    • FastFeatureDetector::TYPE_7_12: test circle of 12 pixels (7 contiguous pixels required).
    • FastFeatureDetector::TYPE_9_16: test circle of 16 pixels (9 contiguous pixels required; default).

The function returns a smart pointer (Ptr<FastFeatureDetector>) to the FastFeatureDetector object, through which the methods and properties of the FAST feature detector can be accessed.

cv::FastFeatureDetectoris a feature detector based on image brightness changes, which can quickly detect corners in an image. It is robust to illumination and viewing angle changes, and can perform feature detection at different image scales. It is used as follows:

cv::Ptr<cv::FastFeatureDetector> detector = cv::FastFeatureDetector::create();
std::vector<cv::KeyPoint> keypoints;
detector->detect(image, keypoints);

13. cv::AgastFeatureDetector

cv::AgastFeatureDetector::create() is the function used in OpenCV to create an AGAST feature detector object. AGAST (Adaptive and Generic Accelerated Segment Test) is a fast corner detection algorithm that can be used to find corners or key points in images.

The detailed explanation of this function is as follows:

Function prototype:

Ptr<AgastFeatureDetector> cv::AgastFeatureDetector::create(
    int threshold=10,
    bool nonmaxSuppression=true,
    int type=AgastFeatureDetector::OAST_9_16
)

parameter:

  • threshold: Set the threshold to determine whether the pixel is a corner point. The default value is 10.
  • nonmaxSuppression: Whether to apply non-maximum suppression to eliminate redundant corners. The default value is true.
  • type: Specifies the type of AGAST detector. The default is AgastFeatureDetector::OAST_9_16, i.e. the 9/16 OAST corner test. Other options include AgastFeatureDetector::AGAST_7_12d and AgastFeatureDetector::AGAST_5_8.

Return Value:
Returns a smart pointer (Ptr) pointing to the AgastFeatureDetector object, which can be used to detect corners in the image.

cv::AgastFeatureDetectoris a feature detector based on image brightness changes, which can quickly detect corners in an image. It is robust to illumination and viewing angle changes, and can perform feature detection at different image scales. It is used as follows:

cv::Ptr<cv::AgastFeatureDetector> detector = cv::AgastFeatureDetector::create();
std::vector<cv::KeyPoint> keypoints;
detector->detect(image, keypoints);

14. cv::BRISK

cv::BRISK::create() is a function in OpenCV for creating instances of the BRISK (Binary Robust Invariant Scalable Keypoints) keypoint detector and descriptor extractor. BRISK is an algorithm for feature point detection and description in computer vision.

The following is a detailed explanation of the cv::BRISK::create() function:

Ptr<BRISK> cv::BRISK::create(
  int thresh = 30,
  int octaves = 3,
  float patternScale = 1.0f
)

Parameters:

  • thresh: Controls the threshold for feature point detection. Smaller thresholds result in more feature points, but may include more noise. A larger threshold will filter out some weaker feature points, but may miss some details.
  • octaves: The number of layers of the image pyramid. A larger number of layers can detect larger-scale features, but also increases the amount of computation.
  • patternScale: The sampling scale used in BRISK feature descriptor computation.

return value :

  • Ptr<BRISK>: Returns the instance pointer of the BRISK keypoint detector and descriptor extractor.

cv::BRISKis a binary feature detector and descriptor extractor that can detect stable keypoints at different image scales and extract descriptors. It is robust to illumination and viewing angle changes, and can detect different types of features such as corners and edges. It is used as follows:

cv::Ptr<cv::BRISK> detector = cv::BRISK::create();
std::vector<cv::KeyPoint> keypoints;
cv::Mat descriptors;
detector->detectAndCompute(image, cv::noArray(), keypoints, descriptors);

15. cv::ORB

cv::ORB::create() is a function in the OpenCV library for creating instances of the ORB (Oriented FAST and Rotated BRIEF) feature detector and descriptor generator. ORB combines the FAST feature detector with the BRIEF descriptor generator and improves on both to provide better speed and performance.

The following is a detailed explanation of the cv::ORB::create() function:

Function prototype:

Ptr<ORB> cv::ORB::create(
    int nfeatures = 500,
    float scaleFactor = 1.2f,
    int nlevels = 8,
    int edgeThreshold = 31,
    int firstLevel = 0,
    int WTA_K = 2,
    int scoreType = ORB::HARRIS_SCORE,
    int patchSize = 31,
    int fastThreshold = 20
)

Parameter Description:

  • nfeatures: Indicates the number of feature points expected to be detected, and the default is 500. The actual number of detected feature points may be less than this value.
  • scaleFactor: Indicates the scale factor between pyramid images, the default is 1.2f. Used to generate multi-scale image pyramids.
  • nlevels: Indicates the number of layers of the pyramid image, the default is 8. Each layer is obtained by downsampling the previous layer image.
  • edgeThreshold: Indicates the edge threshold, the default is 31. Used to filter out pixels with weak edge response when computing corners in an image.
  • firstLevel: Indicates the starting layer index of the pyramid, the default is 0. Usually set to 0, which means to detect features from the bottom layer.
  • WTA_K: The number of points used to produce each element of the oriented BRIEF descriptor; the default is 2. It can also be set to 3 or 4, in which case matching must use NORM_HAMMING2 instead of NORM_HAMMING.
  • scoreType: The score used to rank keypoints. The default is ORB::HARRIS_SCORE, which ranks keypoints by the Harris corner measure; ORB::FAST_SCORE ranks by the FAST response instead, which is slightly faster but can give less stable keypoints.
  • patchSize: Indicates the size of the pixel neighborhood for calculating the BRIEF descriptor, which is 31 by default. Usually coincides with the neighborhood size of the FAST feature detector.
  • fastThreshold: Indicates the response threshold of the FAST feature detector, which is 20 by default. Used to detect feature points.

return value:

  • Ptr<ORB>: Returns the instance pointer of the ORB feature detector and descriptor generator.

cv::ORB is a binary feature detector and descriptor extractor that can detect stable keypoints at different image scales and extract binary descriptors. It is robust to illumination and viewpoint changes and can detect different types of features such as corners and edges. It is used as follows:

cv::Ptr<cv::ORB> detector = cv::ORB::create();
std::vector<cv::KeyPoint> keypoints;
cv::Mat descriptors;
detector->detectAndCompute(image, cv::noArray(), keypoints, descriptors);

16. cv::MSER

cv::MSER::create() is a function in the OpenCV library for creating a detector based on Maximally Stable Extremal Regions (MSER). MSER is a commonly used region detection algorithm for finding stable connected regions in an image.

The following is a detailed explanation of cv::MSER::create():

cv::Ptr<cv::MSER> cv::MSER::create(
    int delta = 5,
    int min_area = 60,
    int max_area = 14400,
    double max_variation = 0.25,
    double min_diversity = 0.2,
    int max_evolution = 200,
    double area_threshold = 1.01,
    double min_margin = 0.003,
    int edge_blur_size = 5
)

This function returns a cv::Ptr<cv::MSER> smart pointer to the created MSER region detector.
Parameter Description:

  • delta: the step between gray-level thresholds compared when measuring region stability. The default value is 5.
  • min_area: the minimum region area. The default value is 60; regions smaller than 60 pixels are ignored.
  • max_area: the maximum region area. The default value is 14400; regions larger than 14400 pixels are ignored.
  • max_variation: the maximum relative area variation of a region across threshold levels. The default value is 0.25; less stable regions are discarded.
  • min_diversity: for color images, the minimum diversity required to keep a nested region. The default value is 0.2.
  • max_evolution: for color images, the maximum number of evolution steps. The default value is 200.
  • area_threshold: for color images, the area threshold at which regions are re-initialized. The default value is 1.01.
  • min_margin: for color images, the minimum margin; less significant regions are pruned. The default value is 0.003.
  • edge_blur_size: for color images, the aperture size of the edge blur. The default value is 5.

These parameters can be adjusted according to the needs of specific applications to obtain the best results of region detection.

cv::MSER is a region-based feature detector that can quickly find stable regions in images. It is robust to illumination and viewpoint changes and can detect regions at different image scales. It is used as follows:

cv::Ptr<cv::MSER> detector = cv::MSER::create();
std::vector<std::vector<cv::Point>> regions;
std::vector<cv::Rect> bboxes;
detector->detectRegions(image, regions, bboxes);

17. cv::GFTTDetector

cv::GFTTDetector::create() is a function in the OpenCV library used to create an instance of the GFTT (Good Features to Track) corner detector. The following is a detailed explanation of the function:

cv::Ptr<cv::GFTTDetector> cv::GFTTDetector::create(
    int maxCorners, // maximum number of corners to detect
    double qualityLevel, // minimum acceptable quality level for corners
    double minDistance, // minimum Euclidean distance between corners
    int blockSize, // size of the neighborhood block used in corner detection
    bool useHarrisDetector = false, // whether to use the Harris corner detector (default false)
    double k = 0.04 // free parameter k of the Harris corner detector
);

Parameter explanation:

  • maxCorners: the maximum number of corners to detect, i.e. the upper limit on the number of strongest corners returned.
  • qualityLevel: the minimum acceptable quality level for corners, a value between 0 and 1. A corner whose quality measure is less than qualityLevel times the best corner's quality is rejected.
  • minDistance: the minimum Euclidean distance between corners. If two corners are closer than this value, one of them is suppressed. This helps ensure that only the strongest corners are kept in dense corner regions.
  • blockSize: the size of the neighborhood block used in corner detection. When computing the corner response function, this parameter specifies the size of the averaging window around each pixel.
  • useHarrisDetector: a boolean specifying whether to use the Harris corner detector. If true, the Harris detector is used; otherwise the Shi-Tomasi detector is used. The default value is false.
  • k: the free parameter k of the Harris corner detector's response function; its value is generally between 0.04 and 0.06.

return value:

  • A smart pointer (cv::Ptr<cv::GFTTDetector>) to the created instance.

cv::GFTTDetector is an image-gradient-based feature detector that can quickly detect corners in an image. It is robust to illumination and viewpoint changes and can detect features at different image scales. It is used as follows:

cv::Ptr<cv::GFTTDetector> detector = cv::GFTTDetector::create();
std::vector<cv::KeyPoint> keypoints;
detector->detect(image, keypoints);

18. cv::SimpleBlobDetector

The cv::SimpleBlobDetector::create() function is a static method in the OpenCV library for creating a simple blob detector. It returns a cv::Ptr<cv::SimpleBlobDetector> smart pointer to the created detector object, through which you can perform blob detection on an image and extract blob information.

The following is a detailed explanation of cv::SimpleBlobDetector::create():

static Ptr<SimpleBlobDetector> create(const SimpleBlobDetector::Params &parameters = SimpleBlobDetector::Params())

parameter:

  • parameters: an optional argument that sets the parameters of the blob detector. It is an object of type SimpleBlobDetector::Params; a default-constructed Params object is used if omitted.

return value:

A cv::Ptr<cv::SimpleBlobDetector> smart pointer to the created simple blob detector object.

cv::SimpleBlobDetector is a feature detector based on image binarization that can detect blobs in an image. It is robust to illumination and viewpoint changes and can detect features at different image scales. It is used as follows:

cv::SimpleBlobDetector::Params params;
cv::Ptr<cv::SimpleBlobDetector> detector = cv::SimpleBlobDetector::create(params);
std::vector<cv::KeyPoint> keypoints;
detector->detect(image, keypoints);

19. cv::KAZE

cv::KAZE::create() is a function in OpenCV to create a KAZE feature detector object. KAZE is a multiscale feature detector and descriptor that, unlike SIFT's Gaussian scale space, builds a nonlinear scale space using nonlinear diffusion filtering, which preserves object boundaries better. Here is a detailed description of cv::KAZE::create():

cv::Ptr<cv::KAZE> cv::KAZE::create(
    bool extended = false,
    bool upright = false,
    float threshold = 0.001f,
    int nOctaves = 4,
    int nOctaveLayers = 4,
    int diffusivity = KAZE::DIFF_PM_G2
)

Parameter explanation:

  • extended (optional): whether to use the extended KAZE descriptor; the default value is false. If set to true, the dimensionality of the descriptor increases, providing more feature information.
  • upright (optional): whether to use the upright (non-rotation-invariant) KAZE descriptor; the default value is false. If set to true, the descriptor does not encode orientation.
  • threshold (optional): the detector response threshold for accepting a feature point; the default value is 0.001f. A lower threshold produces more feature points.
  • nOctaves (optional): the number of octaves in the image pyramid; the default value is 4. Larger values cover a wider scale range but increase computation.
  • nOctaveLayers (optional): the number of sublevels per octave; the default value is 4. Increasing it raises the density of feature points but also the amount of computation.
  • diffusivity (optional): the type of diffusivity function, which affects how the nonlinear scale space is built. The default value is KAZE::DIFF_PM_G2. Other options are KAZE::DIFF_PM_G1, KAZE::DIFF_WEICKERT, and KAZE::DIFF_CHARBONNIER.

Return value:
The function returns a smart pointer (cv::Ptr<cv::KAZE>) to the created object.

The cv::KAZE::create() function makes it easy to create a KAZE feature detector object and set the related parameters as required. The detector object can then be used to extract feature points and their descriptors from images for computer vision tasks such as feature matching and image stitching.

cv::KAZE is a scale-space-based feature detector and descriptor extractor that can detect stable keypoints and extract descriptors at different image scales. It is robust to illumination and viewpoint changes and can detect different types of features such as corners and edges. It is used as follows:

cv::Ptr<cv::KAZE> detector = cv::KAZE::create();
std::vector<cv::KeyPoint> keypoints;
cv::Mat descriptors;
detector->detectAndCompute(image, cv::noArray(), keypoints, descriptors);

20. cv::AKAZE

The cv::AKAZE::create() function is a function in the OpenCV library that creates an instance of the AKAZE (Accelerated-KAZE) feature detector and descriptor extractor. AKAZE is an algorithm for image feature detection and descriptor extraction that improves and accelerates KAZE, most notably by building the nonlinear scale space with Fast Explicit Diffusion (FED) and using a binary MLDB descriptor.

The function prototype is as follows:

Ptr<AKAZE> cv::AKAZE::create(
    int descriptor_type = AKAZE::DESCRIPTOR_MLDB,
    int descriptor_size = 0,
    int descriptor_channels = 3,
    float threshold = 0.001f,
    int nOctaves = 4,
    int nOctaveLayers = 4,
    int diffusivity = KAZE::DIFF_PM_G2
)

The following is a detailed description of each parameter:

  • descriptor_type: the descriptor type, which can be one of the following constants:
    • AKAZE::DESCRIPTOR_KAZE_UPRIGHT: upright (non-rotation-invariant) KAZE descriptor, floating-point.
    • AKAZE::DESCRIPTOR_KAZE: KAZE descriptor, floating-point.
    • AKAZE::DESCRIPTOR_MLDB_UPRIGHT: upright MLDB descriptor, binary.
    • AKAZE::DESCRIPTOR_MLDB: MLDB descriptor, binary (the default).
  • descriptor_size: the descriptor size in bits; the default is 0, which means the full descriptor size is used.
  • descriptor_channels: the number of channels used by the MLDB descriptor; the default is 3. It can be 1, 2, or 3; the channels encode intensity and gradient information, not image color channels.
  • threshold: the feature point detection threshold; the default is 0.001f. It controls the number of extracted feature points: the smaller the value, the more points are extracted.
  • nOctaves: the number of octaves in the image pyramid; the default is 4. More octaves cover a wider scale range, so features at more scales can be detected.
  • nOctaveLayers: the number of sublevels per octave; the default is 4. Each sublevel is one step in scale space.
  • diffusivity: the diffusivity function; the default is KAZE::DIFF_PM_G2. It controls the nonlinear diffusion process and can also be KAZE::DIFF_PM_G1, KAZE::DIFF_WEICKERT, or KAZE::DIFF_CHARBONNIER.

The function returns a smart pointer of type Ptr<AKAZE>, which can be used to call other member functions of the AKAZE class, such as detectAndCompute to detect feature points in an image and compute their descriptors.

cv::AKAZE is a scale-space-based feature detector and descriptor extractor that can detect stable keypoints and extract descriptors at different image scales. It is robust to illumination and viewpoint changes and can detect different types of features such as corners and edges. It is substantially faster than cv::KAZE while offering comparable robustness and rotation invariance. It is used as follows:

cv::Ptr<cv::AKAZE> detector = cv::AKAZE::create();
std::vector<cv::KeyPoint> keypoints;
cv::Mat descriptors;
detector->detectAndCompute(image, cv::noArray(), keypoints, descriptors);

21. cv::AgastFeatureDetector

cv::AgastFeatureDetector::create() is a function in the OpenCV library used to create an instance of the AGAST feature detector. AGAST (Adaptive and Generic Accelerated Segment Test) is a fast corner detection algorithm.

The function prototype is as follows:

Ptr<AgastFeatureDetector> cv::AgastFeatureDetector::create(
    int threshold = 10,
    bool nonmaxSuppression = true,
    int type = AgastFeatureDetector::OAST_9_16
)

This function returns a Ptr<AgastFeatureDetector> object, a smart pointer to an instance of the AGAST feature detector.

Explanation of function parameters:

  • threshold: the threshold for feature point detection. A pixel is considered a candidate corner only when the intensity difference between it and enough pixels on the surrounding circle exceeds this threshold. The default value is 10.
  • nonmaxSuppression: whether to perform non-maximum suppression. If true, only the feature point with the largest response in each neighborhood is kept after detection. The default is true.
  • type: the variant of the AGAST algorithm. Options include AGAST_5_8, AGAST_7_12d, AGAST_7_12s, and OAST_9_16. The default is OAST_9_16.

cv::AgastFeatureDetectoris a fast corner detector that can detect stable corners at different image scales. It is robust to illumination and viewing angle changes, and can detect different types of features such as corners and edges. It is used as follows:

cv::Ptr<cv::AgastFeatureDetector> detector = cv::AgastFeatureDetector::create();
std::vector<cv::KeyPoint> keypoints;
detector->detect(image, keypoints);

22. cv::FastFeatureDetector

cv::FastFeatureDetector::create() is a static function in OpenCV used to create a FastFeatureDetector instance. FastFeatureDetector is a fast feature detector for finding keypoints (feature points) in images. The following is a detailed explanation of the function:

Ptr<FastFeatureDetector> cv::FastFeatureDetector::create(
    int threshold = 10,        // threshold used to determine keypoint strength
    bool nonmaxSuppression = true,   // whether to apply non-maximum suppression
    int type = FastFeatureDetector::TYPE_9_16  // segment-test template type to use
)

The function parameters are explained as follows:

  • threshold: Threshold is a parameter used to determine the intensity of feature points. If the grayscale difference between a pixel and its surrounding pixels exceeds a threshold, the pixel is considered a feature point.
  • nonmaxSuppression: non-maximum suppression is a boolean indicating whether non-maximum suppression is applied. If set to true, after feature points are detected, non-maximum suppression will be performed based on pixel intensity to eliminate redundant feature points.
  • type: the type of segment-test template used to evaluate candidate corners. OpenCV provides several template types, such as FastFeatureDetector::TYPE_5_8, FastFeatureDetector::TYPE_7_12, and FastFeatureDetector::TYPE_9_16.

The return value is a smart pointer (Ptr<FastFeatureDetector>) to the object, which can be used to call the other member functions of the FastFeatureDetector class.

cv::FastFeatureDetectoris a fast corner detector that can detect stable corners at different image scales. It is robust to illumination and viewing angle changes, and can detect different types of features such as corners and edges. It is used as follows:

cv::Ptr<cv::FastFeatureDetector> detector = cv::FastFeatureDetector::create();
std::vector<cv::KeyPoint> keypoints;
detector->detect(image, keypoints);

23. cv::StarDetector

cv::StarDetector::create() is the function in the OpenCV library used to create the STAR feature detector. STAR is a keypoint detector derived from CenSurE (Center Surround Extremas), designed for fast detection of scale-invariant keypoints. (In OpenCV 3 and later it is provided by the opencv_contrib module as cv::xfeatures2d::StarDetector.)

The full declaration of the function is as follows:

cv::Ptr<cv::StarDetector> cv::StarDetector::create(
    int maxSize = 45,
    int responseThreshold = 30,
    int lineThresholdProjected = 10,
    int lineThresholdBinarized = 8,
    int suppressNonmaxSize = 5
)

Parameter Description:

  • maxSize: The maximum scale of keypoints. The default value is 45.
  • responseThreshold: Response threshold, used to filter keypoints. The default value is 30.
  • lineThresholdProjected: Projection threshold for detecting line features. The default value is 10.
  • lineThresholdBinarized: Binarization threshold for detecting line features. The default value is 8.
  • suppressNonmaxSize: Neighborhood size for non-maximum suppression. The default value is 5.

The function returns a smart pointer to the cv::StarDetector class, through which the STAR feature detector can be used.

cv::StarDetector is a feature detector based on grayscale images that can quickly detect stable feature points. It is robust to illumination and viewpoint changes and can detect features at different image scales. It is used as follows:

cv::Ptr<cv::StarDetector> detector = cv::StarDetector::create();
std::vector<cv::KeyPoint> keypoints;
detector->detect(image, keypoints);

Descriptor matching classes


FLANN (Fast Library for Approximate Nearest Neighbors) is a fast matching method and is quicker for batch feature matching. However, FLANN finds approximate nearest neighbors, so its accuracy is lower.
If we want exact matching between images, we use the BF (brute-force) matcher; if we want speed, we use the FLANN matcher.

24. cv::FlannBasedMatcher

cv::FlannBasedMatcher::create() is the function used in OpenCV to create a feature matcher object based on the FLANN algorithm. FLANN is a fast approximate nearest-neighbor search algorithm that can be used to match feature points efficiently.

The following is a detailed explanation of the function:

Function prototype:

static Ptr<FlannBasedMatcher> cv::FlannBasedMatcher::create()

Note that create() itself takes no arguments and returns a matcher with default parameters. To customize the index and search behavior, use the constructor instead:

cv::FlannBasedMatcher(
    const Ptr<cv::flann::IndexParams>& indexParams = makePtr<cv::flann::KDTreeIndexParams>(),
    const Ptr<cv::flann::SearchParams>& searchParams = makePtr<cv::flann::SearchParams>())

parameter:

  • indexParams: a parameter of type cv::flann::IndexParams specifying how the FLANN index is built. The default KDTreeIndexParams suits floating-point descriptors; the parameters can be customized for specific needs.
  • searchParams: a parameter of type cv::flann::SearchParams specifying how the FLANN search is performed (for example, the number of checks).

return value:

  • A smart pointer (Ptr<FlannBasedMatcher>) to the object.

A FLANN-based feature matcher object can be created with cv::FlannBasedMatcher::create() as follows:

Ptr<FlannBasedMatcher> matcher = FlannBasedMatcher::create();

Then, you can use this feature matcher object to perform feature point matching, for example:

std::vector<std::vector<DMatch>> matches;
matcher->knnMatch(descriptor1, descriptor2, matches, k);

Here, descriptor1 and descriptor2 are the feature descriptors to be matched, matches is a two-dimensional vector storing the matching results, and k is the number of nearest neighbors to retrieve.

Summary: cv::FlannBasedMatcher::create() creates a feature matcher based on the FLANN algorithm, and the index and search parameters can be customized as needed. The created matcher object can then be used for feature point matching.

cv::FlannBasedMatcher is a feature matcher based on the FLANN (Fast Library for Approximate Nearest Neighbors) algorithm, which can match feature points between two images. It is used as follows:

cv::Ptr<cv::FlannBasedMatcher> matcher = cv::FlannBasedMatcher::create();
std::vector<cv::DMatch> matches;
matcher->match(descriptors1, descriptors2, matches);

25. cv::BFMatcher

cv::BFMatcher::create() is a function in the OpenCV library used to create a brute-force matcher (BFMatcher). BFMatcher is a feature matching method based on exhaustive search, which finds nearest neighbors in a given set of feature descriptors.

The function prototype is as follows:

Ptr<BFMatcher> cv::BFMatcher::create(int normType = NORM_L2, bool crossCheck = false)

Parameter Description:

  • normType: the norm used to compute the distance between feature descriptors. The default value is NORM_L2, i.e. Euclidean distance; NORM_L1 (Manhattan distance) and NORM_HAMMING (Hamming distance, for binary descriptors) are also options.
  • crossCheck: a boolean specifying whether to use cross-checking. If true, only pairs that are each other's best match are returned.

return value:

  • Returns a smart pointer (Ptr<BFMatcher>) to the object.

The general steps for feature matching with BFMatcher are as follows:

  1. Extract keypoints and compute feature descriptors (e.g., using algorithms such as cv::ORB, cv::SIFT, or cv::SURF).
  2. Create a BFMatcher object, which can be done with cv::BFMatcher::create().
  3. Use BFMatcher's match function to match the two sets of feature descriptors and obtain matching pairs.
  4. Perform further processing or analysis based on the matched pairs.

cv::BFMatcher is a feature matcher based on a brute-force matching algorithm, which can match feature points between two images. Compared with cv::FlannBasedMatcher, cv::BFMatcher is slower but may produce more accurate matches in some cases. It is used as follows:

cv::Ptr<cv::BFMatcher> matcher = cv::BFMatcher::create();
std::vector<cv::DMatch> matches;
matcher->match(descriptors1, descriptors2, matches);

In OpenCV, NormTypes are not directly associated with feature point detection algorithms. NormTypes is an enumeration used to specify the type of distance metric, which is mainly used for feature matching and distance calculation between descriptors.
The feature point detection algorithm itself usually does not depend on the distance type in NormTypes. They mainly focus on detecting key points or features in an image and generating corresponding descriptors. These descriptors can be matched with other feature descriptors, and the distance metric for matching can use the distance type in NormTypes.

  1. SIFT (Scale-Invariant Feature Transform):
    • descriptor distance metric types: cv::NORM_L1 and cv::NORM_L2
  2. SURF (Speeded-Up Robust Features):
    • descriptor distance metric types: cv::NORM_L1 and cv::NORM_L2
  3. ORB (Oriented FAST and Rotated BRIEF):
    • descriptor distance metric types: cv::NORM_HAMMING, cv::NORM_HAMMING2
  4. BRISK (Binary Robust Invariant Scalable Keypoints):
    • descriptor distance metric types: cv::NORM_HAMMING, cv::NORM_HAMMING2
  5. AKAZE (Accelerated-KAZE):
    • descriptor distance metric types: cv::NORM_HAMMING, cv::NORM_HAMMING2
  6. FREAK (Fast Retina Keypoint):
    • descriptor distance metric types: cv::NORM_HAMMING, cv::NORM_HAMMING2

These are the two main feature matchers provided in OpenCV. To perform feature matching, first extract keypoints and descriptors with a feature detector and descriptor extractor, then match them with a feature matcher. The matching results can be filtered and refined by post-processing methods such as the RANSAC algorithm and fundamental-matrix estimation.

In general, feature detectors and descriptor extractors are very important tools in computer vision. They help us quickly extract key information from images for image registration, target tracking, 3D reconstruction, image retrieval, and many other applications. In practice, appropriate detectors and extractors should be selected according to the application and scene, combined with parameter tuning and performance evaluation.

26. drawMatches: drawing matched feature points

The drawMatches function is a function in OpenCV for drawing the matched feature points between two images. It connects the matched feature points in the two images and draws these connecting lines in the resulting image.

The function prototype is as follows:

void drawMatches(
    InputArray img1, // first image
    const std::vector<KeyPoint>& keypoints1, // keypoints of the first image
    InputArray img2, // second image
    const std::vector<KeyPoint>& keypoints2, // keypoints of the second image
    const std::vector<DMatch>& matches, // feature point matching results
    OutputArray outImg, // output image with the drawn matches
    const Scalar& matchColor = Scalar::all(-1), // color of the connecting lines
    const Scalar& singlePointColor = Scalar::all(-1), // color of the keypoints
    const std::vector<char>& matchesMask = std::vector<char>(), // mask of valid matches
    int flags = DrawMatchesFlags::DEFAULT // flags controlling how matches are drawn
);

Parameter Description:

  • img1: the first image (can be grayscale or color).
  • keypoints1: the feature points of the first image, of type std::vector<KeyPoint>; each feature point contains its coordinates and other attributes.
  • img2: the second image (same type as img1).
  • keypoints2: the feature points of the second image, same type as keypoints1.
  • matches: the feature point matching results, of type std::vector<DMatch>, describing the matches between the two images.
  • outImg: the output image with the drawn matches, of type OutputArray.
  • matchColor: the color of the connecting lines, a Scalar value; the default Scalar::all(-1) means random colors.
  • singlePointColor: the color of unmatched feature points, a Scalar value; the default Scalar::all(-1) means random colors.
  • matchesMask: a mask vector identifying which matches are drawn. Its length should equal that of matches; it is empty by default (all matches are drawn).
  • flags: flags controlling how matches are drawn; can be a combination of the following values:
    • DrawMatchesFlags::DEFAULT: default flag; the output image is created and all matches and keypoints are drawn.
    • DrawMatchesFlags::DRAW_OVER_OUTIMG: draw on the existing content of outImg instead of creating a new output image.
    • DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS: do not draw unmatched single feature points.
    • DrawMatchesFlags::DRAW_RICH_KEYPOINTS: draw keypoints with their size and orientation.

Using the drawMatches function makes it easy to visualize feature point matching results, helping us analyze and understand the quality of the matches.

Origin blog.csdn.net/weixin_43763292/article/details/131256727