Feature point detection, descriptor computation, and feature matching classes in OpenCV

An introduction to some of the feature extraction classes in OpenCV

    1. FeatureDetector (feature point extraction)

FeatureDetector is an abstract base class in OpenCV 2.x; its definition is as follows:

class CV_EXPORTS FeatureDetector  
{  
public:  
    virtual ~FeatureDetector();  
    void detect( const Mat& image, vector<KeyPoint>& keypoints,  
        const Mat& mask=Mat() ) const;  
    void detect( const vector<Mat>& images,  
        vector<vector<KeyPoint> >& keypoints,  
        const vector<Mat>& masks=vector<Mat>() ) const;  
    virtual void read(const FileNode&);  
    virtual void write(FileStorage&) const;  
    static Ptr<FeatureDetector> create( const string& detectorType );  
protected:  
    ...  
}; 
A FeatureDetector object gives access to many feature detection methods. It is created through the static create() function:
Ptr<FeatureDetector> FeatureDetector::create(const string& detectorType);
/*
The supported detector types are: "FAST", "STAR", "SIFT", "SURF", "ORB", "BRISK", "MSER", "GFTT", "HARRIS", "Dense", "SimpleBlob"
*/
A detector can also be created from the individual class name:
Ptr<FeatureDetector> detector = ClassName::create();
/*
The individual class names are: FastFeatureDetector, StarFeatureDetector, SIFT (nonfree module), SURF (nonfree module), ORB, BRISK, MSER, GoodFeaturesToTrackDetector, GoodFeaturesToTrackDetector with the Harris detector enabled, DenseFeatureDetector, SimpleBlobDetector
*/

The KeyPoint data structure produced by detection has the following fields:
angle: the orientation of the keypoint. According to Lowe's thesis, to keep the descriptor rotation-invariant, the SIFT algorithm computes this orientation from gradients in the neighborhood around the keypoint. The initial value is -1.
class_id: can be used to distinguish feature points when classifying images; it is -1 if not set.
octave: the pyramid layer (octave) from which the keypoint was extracted; it is set by the detector.
pt: the coordinates of the keypoint.
response: the response strength of the keypoint, i.e. how strong (for corner detectors, how corner-like) the point is; stronger responses indicate better keypoints (see http://stackoverflow.com/questions/24699495/opencv-keypoints-response-greater-or-less?lq=1).
size: the diameter of the meaningful keypoint neighborhood.
Note one thing: KeyPoint only stores the basic information of the feature points detected by OpenCV's SIFT library, i.e. the fields above. The actual feature vector extracted by SIFT is not stored here; it is computed by SiftDescriptorExtractor and placed in a Mat, where each row holds the feature vector of the corresponding keypoint. See the later explanation of the objects produced by SiftDescriptorExtractor for details.



    2. DescriptorExtractor (descriptor computation)

    Like the previous class, it is created as follows:

 Ptr<DescriptorExtractor> extractor = DescriptorExtractor::create("ExtractorType");
 /*
 ExtractorType can be "SIFT", "SURF", "BRIEF", "BRISK", "ORB", "FREAK"
 */
As with the previous class, each type's class name can also be used directly:
Ptr<DescriptorExtractor> descriptor = ClassName::create();
/*
The class names are: FREAK, ORB, BRISK, SURF, BriefDescriptorExtractor, SIFT
*/
    3. DescriptorMatcher

    DescriptorMatcher is an abstract class for matching feature vectors; the feature matching methods in OpenCV 2.x all inherit from it (for example BFMatcher and FlannBasedMatcher). The class contains two sets of matching methods: matching between an image pair, and matching a query image against an image set.

    (1) Declarations of the methods used for matching between an image pair

// Find one best match for each query descriptor (if mask is empty).
    CV_WRAP void match( const Mat& queryDescriptors, const Mat& trainDescriptors,
    CV_OUT vector<DMatch>& matches, const Mat& mask=Mat() ) const;
// Find k best matches for each query descriptor (in increasing order of distances).
// compactResult is used when mask is not empty. If compactResult is false matches
// vector will have the same size as queryDescriptors rows. If compactResult is true
// matches vector will not contain matches for fully masked out query descriptors.
    CV_WRAP void knnMatch( const Mat& queryDescriptors, const Mat& trainDescriptors,
    CV_OUT vector<vector<DMatch> >& matches, int k,const Mat& mask=Mat(), bool compactResult=false ) const;
// Find best matches for each query descriptor which have distance less than
// maxDistance (in increasing order of distances).
    void radiusMatch( const Mat& queryDescriptors, const Mat& trainDescriptors,
    vector<vector<DMatch> >& matches, float maxDistance,const Mat& mask=Mat(), bool compactResult=false ) const;
 (2) Overloads of the same methods, used for matching a query image against an image set
CV_WRAP void match( const Mat& queryDescriptors, CV_OUT vector<DMatch>& matches,const vector<Mat>& masks=vector<Mat>() );
CV_WRAP void knnMatch( const Mat& queryDescriptors, CV_OUT vector<vector<DMatch> >& matches, int k,const vector<Mat>& masks=vector<Mat>(), bool compactResult=false );
void radiusMatch( const Mat& queryDescriptors, vector<vector<DMatch> >& matches, float maxDistance,const vector<Mat>& masks=vector<Mat>(), bool compactResult=false );
A matcher is created as follows:
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("matchType");
/*
matchType can be one of:
BruteForce (uses the L2 distance)
BruteForce-L1
BruteForce-Hamming
BruteForce-Hamming(2)
FlannBased
*/
    4. DMatch
      DMatch is a struct used in OpenCV to store matching information. The following is its definition:
struct CV_EXPORTS_W_SIMPLE DMatch  
{  
// Default constructor; FLT_MAX is the largest representable float  
//#define FLT_MAX         3.402823466e+38F        
/* max value */  
    CV_WRAP DMatch() : queryIdx(-1), trainIdx(-1), imgIdx(-1), distance(FLT_MAX) {}  
// DMatch constructor  
    CV_WRAP DMatch( int _queryIdx, int _trainIdx, float _distance ) : queryIdx(_queryIdx), trainIdx(_trainIdx), imgIdx(-1), distance(_distance) {}  
// DMatch constructor  
    CV_WRAP DMatch( int _queryIdx, int _trainIdx, int _imgIdx, float _distance ) :queryIdx(_queryIdx), trainIdx(_trainIdx), imgIdx(_imgIdx), distance(_distance) {}  
// queryIdx: index of the query descriptor (the first descriptor set passed to match())  
    CV_PROP_RW int queryIdx;  
// trainIdx: index of the train descriptor (the second descriptor set passed to match())  
    CV_PROP_RW int trainIdx;   
// imgIdx: index of the matched training image. For example, when the SIFT descriptors of one image are matched against the descriptors of ten other images to find the most similar image, imgIdx identifies which of those images each match comes from.  
    CV_PROP_RW int imgIdx; 
// distance: the distance between the two descriptors  
    CV_PROP_RW float distance;  
// operator< compares two DMatch objects by their distance: true if this distance is smaller, false otherwise  
    bool operator<( const DMatch &m ) const  
    {  
        return distance < m.distance;  
    }  
};  

    5. drawKeypoints

The declaration of drawKeypoints() in the feature detection module (features2d) is as follows:
void drawKeypoints(const Mat& image, const vector<KeyPoint>& keypoints, Mat& outImg, const Scalar& color=Scalar::all(-1), int flags=DrawMatchesFlags::DEFAULT )
// e.g.: drawKeypoints( img_1, keypoints_1, outimg1, Scalar::all(-1), DrawMatchesFlags::DEFAULT )
The parameters are: image – the source image; keypoints – the keypoints of the source image; outImg – the output image; color – the color of the keypoints; flags – the drawing options.
    6. drawMatches
void drawMatches(const Mat& img1, const vector<KeyPoint>& keypoints1, const Mat& img2, const vector<KeyPoint>& keypoints2, const vector<DMatch>& matches1to2, Mat& outImg, const Scalar& matchColor=Scalar::all(-1), const Scalar& singlePointColor=Scalar::all(-1), const vector<char>& matchesMask=vector<char>(), int flags=DrawMatchesFlags::DEFAULT )

// e.g.: drawMatches ( img_1, keypoints_1, img_2, keypoints_2, matches, img_match );
The parameters are as follows:
img1 – the first image
keypoints1 – the keypoints of the first image
img2 – the second image
keypoints2 – the keypoints of the second image
matches1to2 – the matches to draw
outImg – the output image
matchColor – the color of the matches (connecting lines); if matchColor==Scalar::all(-1), colors are random
singlePointColor – the color of unmatched keypoints (circles); if singlePointColor==Scalar::all(-1), colors are random
matchesMask – mask determining which matches are drawn; if empty, all matches are drawn
flags – drawing flags

Origin blog.csdn.net/Lolita_han/article/details/70169702